text
stringlengths
20
1.01M
url
stringlengths
14
1.25k
dump
stringlengths
9
15
lang
stringclasses
4 values
source
stringclasses
4 values
The Dequeue() method is used to returns the object at the beginning of the Queue. This method is similar to the Peek() Method. The only difference between Dequeue and Peek method is that Peek() method will not modify the Queue but Dequeue will modify. This method is an O(1) operation and comes under System.Collections.Generic namespace. Syntax: public T Dequeue (); Return value: It returns the object which is removed from the beginning of the Queue. Exception: The method throws InvalidOperationException on calling empty queue, therefore always check that the total count of a queue is greater than zero before calling the Dequeue() method. Below programs illustrate the use of the above-discussed method: Number of elements in the Queue: 4 Top element of queue is: 3 Number of elements in the Queue: 3 Example 2: Number of elements in the Queue: 2 Top element of queue is: 2 Number of elements in the Queue: 1 Reference: -
https://www.geeksforgeeks.org/getting-an-object-at-the-beginning-of-the-queue-in-c-sharp/
CC-MAIN-2021-10
en
refinedweb
In doing Flash Development for a multi-swf site, we ran into the “first class wins” problem. Basically, an ActionScript class’ bytecode is stored with the SWF that uses it. If the compiler detects a SWF being used, it’ll compile it in. This means, in a website that utilizes multiple SWF’s, 2 SWF’s that utilize the same class will both have their own copy. This is a waste of filesize which costs the user bandwidth. All classes in Flash Player 8 and below are stored on the _global namespace. This is a special object added in Flash Player 6 that allows the storage of data common to all SWF’s that are loaded in dynamically at runtime, either via loadMovie or loadMovieNum. Before _global, classes were stored on _level0 …usually. Flash Player 8 and below have a concept of levels. These allow SWF’s to be stacked on top of one another. You could do the same thing with MovieClips stacked on top of one another, but they had to exist in _root. Levels on the other hand each had their own _root. Now, there was a safe way to ensure every SWF was looking at the same data, and a great place to put classes. No class could be overwritten, however. Classes are basically written in an #ifdef fashion behind the scenes when using ActionScript 2, like so: // if the class isn't defined" if ( _global.Ball == undefined) { // define it _global.Ball = function(){}; } Using that code above as a base, you can see how all subsequent loading of SWF’s have their classes of the same name / package ignored. The Flash Player assumes it’s already loaded, and thus does not use the class bytecode in the loaded SWF. On a job I’m on currently, we ran into this problem. We put a trace in our code, and it never ran when we deployed our SWF to the website. That’s because the website was already loading a SWF that had the same class, and thus was ignoring our new one since our SWF loaded later. Again, first class in wins. This same problem also exhibited itself with loading applications into Central, Macromedia’s early foray in getting SWF applications onto the desktop (pre Apollo). Since Central was a SWF loader of sorts, you’d have classes left defined on _global, like they should be. This caused problems however, when you would load in a new SWF to test and not see your changes. If you didn’t reload Central, or delete your class from _global, it’d stick around, and your newly loaded SWFs would use the old classes. The way class definitions work with loaded SWFs is not a bug and is by design for a few reasons. Additionally, it has been significantly improved in Flash Player 9. Paraphrasing Roger’s entry about flash.system.ApplicationDomain, the justifications are basically: - A Loader should control what it’s loading - Classes not being replaced by loaded SWFs eases security concerns - Makes static / Singleton implementations predictable, and work with child content accessing the same classes There are others, but those are the ones relevant to this discussion concerning Flash Player 8 and below. You can additionally use this implementation to your advantage. Roger discusses some ways with the pro’s can con’s to each with regards to Flex development. For Flash development, some additional points are a little different. For example, using Interfaces to allow 2 or more SWFs to talk to each other prevents the same class being included in both SWFs, and thus saving space. This includes all class dependencies, so you can see how this is a great way to encourage the good practice of coding by contracts (using interfaces) with strong-typing, both in ActionScript 2 and 3. This also prevents you from having to go through the laborious process of using Remote Shared Libraries. While RSL’s are great, there are currently no good tools (aside from ANT in Eclipse which Flash doesn’t have) to help management of RSL’s, thus they are a card house. When you get them working, they rock and look great. They are not change friendly; one breeze of scope creep or refactoring, and the whole thing comes crashing down. Hell, if it makes Darron Schall sweat, you know they are hard. Roger mentions that using implied dependencies via external-library-path isn’t such a good idea because the reverse, having the shell exclude loaded classes, doesn’t work as a use case and is backwards. With context to Flash, I disagree with the first part. I do agree that a shell excluding child classes is silly. Now, interfaces imply you have a good API designed. Our API and designs fluctuate daily. While I agree with Roger that usually your implementation changes, yes, in application development, I’d agree. However, in the Flash world, the shiz is chaos. It may sound like a cop-out, and it is. I refuse to write good code knowing it’ll get torched tomorrow. I’d rather write working code that is maintainable and flexible enough to adjust to change requests. If 20% of what I’ve been writing dies, so be it. In the design world, things change down to the deadline. We also don’t have the kind of time to “flesh out” out our API’s. We can make a great first pass, yes, but when your design can change at a moment’s notice, what’s the point? “Man, this ComboBox is phat! It extends our base TweenComponent, and moves really well in that list. Huh? What do you mean they want the list items to fade in now vs. the list zooming up!? I thought they liked it and signed off on it last week? Son of a LKEJRLKJgfdsjglkdfjg””. That’s not sarcasm; it does happen, a lot. Imagine re-writing an entire list component and all sub-classes… it sucks. Will you now spend the same amount of time hammering out an API… or just make it “good enough”? What if you suddenly have no need for a List in the first place? Using the exclude.xml functionality built into Flash, having loaded SWF’s exclude classes that the shell will contain for them can work quite well. The trade off is, you need to remember to compile the “framework.swf” every time you make a change to a shared class. That way, all of your child SWF’s that are loaded into the main one use the same class, and are smaller since they don’t have to keep a copy of it. How do you create & mange this framework FLA without tools like ANT? JSFL – JavaScript for Flash. There are four things you need to do. First, you need to identify commonly used classes. These are classes, visual components and/or utility classes, that many FLA’s use. You then put these, like so, into a framework.fla on frame 1: com.company.project.Class; Notice the lack of import. Flash will recognize that as a usage and compile the class into the SWF. Keep in mind you do not need the MovieClip assets that go with visual components. With Flash Player 8 and below, these cannot be shared in a simplified fashion. Thus, you’re best bet is to externalize your bigger assets like images and sounds so they’ll be downloaded to the internet cache, and other SWF’s can load the same images and sounds. For frameworks like the mx one, it’s kind of a big deal, because the base component framework has a lot of built-in graphic assets that now have to be duplicated in many SWF’s. However, most design frameworks are made to be skinned, and thus are usually just lightweight MovieClips; containing a bounding box, a stop, and maybe more lightweight base classes on their assets layer, so this really isn’t that bad. The second thing to do is to write a JSFL script that can auto-generate exclude.xml files. These text files that must reside next to your FLA file must also have the name yourfile_exclude.xml where the yourfile is replaced with the name of your FLA file. When you compile your FLA, Flash will look for this file, and if it finds it, it’ll make sure NOT to include any classes listed in the exclude.xml file in the SWF. What your JSFL file will do is loop through all library items, and if it finds a class that is used the base framework, it’ll make sure to add that class to the list in the exclude.xml. The downside? You won’t be able to test this SWF locally. That is unacceptable. Therefore, the third thing you need to do is to write another JSFL that deletes the exclude.xml file if there, and then performs a test movie. Both of the above JSFL files can run in Flash MX 2004 7.2 and Flash 8 which both include the FLfile.dll. This allows JSFL to read and write text fields. With Flash Player 9, it’s different. It also depends on if you are using the Flex 2 framework or not. Without repeating Roger’s good entry on it, Flash Player 9 has made a new place to manage the different sections where these classes are. The reason for this is because SWFs are no longer always tied to the display. Now, with the DisplayList, classes that represent display objects can exist independently, and need to be managed someplace with relation to, but not being tied to, the DisplayList. There is also a little more control over how these loaded classes are handled, and where they are stored. When loading a SWF, you can determine which AppDom (pronounced like Kingdom – Roger’s slang for them) the Loader will use when loading the SWF. When the SWF is loaded, the classes are held in that context. Someone, sometime, is going to have to handle this for ActionScript 3 in an easier way than is done currently with regards to Flash. Not all of us are building Enterprise Applications with SWF. Some of us don’t need the large Flex 2 component framework for our work. We need something lighter weight, like the set that Grant n’ Friends are working on for Flash 9 (Blaze), with less dependencies so it’s easier to modularize classes without the need for interfaces and Remote Shared Libraries. Again, I agree with Interfaces with regards to Flex 2 apps, but in the design world, we don’t have that kind of time, nor can we garner that kind of commitment to API’s. Until I get a better IDE, I’ll never agree with RSL’s. To be fair, I haven’t given them a hardcore run through in a sizable Flex 2 project yet. Anyway, I haven’t had time to peruse Roger’s Module source yet, but the “on the fly class loader” sounds pretty hot. It’d certainly make that skins on the fly tons easier via DLL-like SWFs, yes? I hope to write more about this subject in the future when I have some scripts to show as well as example file size savings. Additionally, while some bitch about how it’ll be hard to do this kind of stuff with the Flex 2 framework, SOME of it can be done, enough to make a positive impact I’m sure. When I’m done doing Flash consulting, hopefully I can jump back to some Flex 2! Till then, attach this. 10 Replies to “Modular ActionScript Development” Did you try intrinsic classes? I think it’s better to use intrinsics in SWF which classes are not used anyway because the SWF is loaded later and first class wins. The other way round would be you compile the main SWF with intrinsics and when the first SWF with the ‘real’ class gets loaded this is the first to win. Another advantage of intrinsics is that you save filesize. When only one version of a class is used why load the bytecode multiple times. Also the compile time of SWFs with intrinsic is fast because there is only a typecheck and no ‘real’ compilation I guess. An interesting post Jesse. I am really pleased that you have brought up the subject of Flash-based-Framworks. As I too have been having a lot of thoughts latley on creating an acceptable and flexible framework for use with Flash. For now I am still basing my sites loosly on ARP, as you pointed out in an earlier post, it is not flexible enough for Flash websites. Having to recompile the base.swf each time a Command is added is not really ideal. But I do like the idea of ARP’s Commands, they make the Model side of things much easier to maintain, so this is something I would like to try to keep. ModelLocator is also something that needs to be addressed. The thoughts I have had regarding a better approach have also been leaning towards a ‘classloader.swf’, possibly xml driven and loading in swf’s containing a particular class. The swf’s could then be organised in a similar manner as the actual packages (as in directories). Again, thanks At S You can create an intrinsic class out of every class. They simply start with ‘intrinsic class’ and you have to define the properties and methods like in an interface. Then you can compile against the intrinsic classes. for example FDT supports generating intrinsic classes. The class will be functional when the intrinsic class is ‘overwritten’ by a concrete class. Cheers, S Few problems with intrinsic with regards to Flash Development. There is no workflow to support this. You have to have a coder utilize a class with the intrinsic keyword knowing full well what he is doing. This means you now have to have 2 code-bases; one with the intrinsics and one with the concrete. With the above, you can at least install 2 JSFL scripts so even designers can easily work side-by-side with developers. If you are a Flash Developer all by yourself, no problem. Again, you cannot test locally. If I do a test movie, and it doesn’t work because of some dependency, that sucks. Finally, intrinsic isn’t supported in ActionScript 3. Although backwards compatibility is a moot point since ActionScript 3 won’t work in 8 and below, the workflow should be somewhat the same. The native keyword doesn’t do the same thing as intrinsic. Hopefully Flash 9 can somehow do something similiar to Flex’ mxmlc compiler to exclude classes. We’ll see. Yes, but the intrinsic codebase can be generated automatically. Hamtasc supports this and so it could be used via Ant. Another scenario where I used intrinsics. I had an AS2 classlib which was not compatible to MTASC. So I generated intrinsics with which I compiled my application using MTASC. Then the main movie loads a SWF which contains the comiled classlib and all is working … Anyway you have to know what you are doing when using intrinsics and it’s perhaps not the best workflow but in some situations I think it’s a nice solution. Again, from a Flash Developer perspective, I agree. However, things like mtasc, ANT, and FDT do not work well when doing design based projects. If the designer cannot compile locally on his/her machine with the FLA, that’s a failure in the workflow. My favorite workflow is when a desginer hands off a FLA, and I rebuild the FLA and integrate the assets into my codebase. At more creative shops and marketing agencies, that workflow doesn’t work. You’ll get drastic design changes down to the wire, and there is no way to shield your code base from such changes. The best bet is to keep the design in the FLA so your designers can tweak things if needed. As such, the developer(s) need to be able to work in tandem, peacefully, with the designers. Setting them up with Eclipse, or getting an existing code-base to be mtasc compatible isn’t relistic in some of the short time frames introduced. Granted, I’ve heard of a positive workflow in another company where the designer utilizes Flex Builder 2 to mock up designs in MXML and the programmer takes over from there. That is ideal, but a lot of these places still use, and will continue to use, Flash. Eclipse is an alien thing to inject into the workflow, but maybe it could work. If you look at this from a developer / designer workflow, you can see where the sacrifices are and why some things just don’t work in from that perspective. I can totally relate to what Jesse is saying here. In the wild west of online advertising, the one constant is change. The developer must remember that they are working for the designer, not the other way around. They want what they want regardless if it conforms to your framework or not. In this kind of business, one must sacrifice a certain amount of the benefits yielded by a framework by keeping that framework very open. Probably the most you can hope for is; code external, a loose interpretation of mvc, some managers, and a whole lotta jsfl. Great post. I agree that there is no easy way to develop and deploy a large Flash/Flex app without worrying about which classes are going to get clobbered. I don’t neccessarily agree that using RSL’s is a bad thing though. In Flex 2 building and deploying RSL’s is a piece of cake. If you use an RSL to deploy shared classes ie (a ModelLocator, Commands, utility classes) then it works great, all movies share the exact same classes, nice and easy to manage. The only big issue I have with RSL’s in Flex 2 is that anything you build that leverages the Flex framework means you pull the entire (almost 400K) framework into your RSL. So a simple class with a TextInput or Button and you take a big hit. Once Adobe gives developers some better tools for controlling what goes into an RSL from the framework I think that path will provide most people with a really decent solution. enable/disable exclude files
https://jessewarden.com/2006/09/modular-actionscript-development.html
CC-MAIN-2021-10
en
refinedweb
Having some issues getting my repository to retrieve information - keeps coming back null. Any Thoughts would be appreciated - new to this and teaching myself. Repository: public class CustomerRepository : ICustomerRepository { private masterContext context; public CustomerRepository(masterContext context) { this.context = context; } public IEnumerable<Customer> GetCustomers() { return context.Customer.ToList(); } public Customer GetCustomerById(int customerId) { var result = (from c in context.Customer where c.CustomerId == customerId select c).FirstOrDefault(); return result; } public void Save() { context.SaveChanges(); } Controller: public class CustomerController : Controller { private readonly ICustomerRepository _repository = null; public ActionResult Index() { var model = (List<Customer>)_repository.GetCustomers(); return View(model); } public ActionResult New() { return View(); } } MasterContext which i had efc make: public partial class masterContext : DbContext { public masterContext(DbContextOptions<masterContext> options) : base(options) { } protected override void OnModelCreating(ModelBuilder modelBuilder) { modelBuilder.Entity<Customer>(entity => { entity.Property(e => e.CustomerName).IsRequired(); }); } public virtual DbSet<Customer> Customer { get; set; } public virtual DbSet<Order> Order { get; set; } } I think you need to create instances of you Context and your Repository. So in your Controller you need to something like this: private masterContext context = new masterContext(); private ICustomerRepository repository = new CustomerRepository(context); I assume that you're not using Dependency injection ... if so you just need to create a Constructor for your Controller that takes CustomerRepository as argument: public CustomerController(ICustomerRepository _repository) { repository = _repository; } If you did not configure your database context, look here: This will than enable you the dependency injection. Everything you than need to do for the Repository is to use services.AddScoped<ICustomerRepository, CustomerRepository>(); And I think it could be good to remove the ToList() in the Repository class and remove the Cast List<Customer> in your Controller and use ToList() instead, if it's really needed. Because if you're using it in the View the ienumerable could also work.
https://entityframeworkcore.com/knowledge-base/39022090/csharp-entity-framework-core---repository
CC-MAIN-2021-10
en
refinedweb
Talk:Main Page - 1 News discussion page? - 2 Multibyte character(especially Japanese & Chinese) contents corruption by the Wiki update - 3 Cleaning Main Page Talk - 4 Tutorial link - 5 Nothing in Source sdk is working - 6 Counter-Strike Mod? - 7 January 5th update now causing Hammer to crash - 8 Anti-Spam Brigade - 9 It's free. - 10 Source SDK not ready to compile with Visual Studio 2005 - 11 No installation instructions anywhere to be found - 12 *checks watch* - 13 "MOD" - 14 Uploading Sample Maps - 15 Converting a dwg/3ds file to use in Hammer - 16 Having problems starting the tool - 17 Issue running maps. - 18 Link the obvious - 19 HL2MP SDK - Unbounded use of an array in bone_setup.cpp - 20 Visual Studio .NET 2003 - 21 win x64, after installing Hammer, CS acts odd (slow motion) - 22 Wiki Policy suggestion - 23 64-bit windows: no SSDK?) Tutorial link The link to tutorials for modders should lead to Category:Tutorials, not Special:Categories. Nothing in Source sdk is working Every time i try to open something in source sdk for example hammer it says error The system cannot find the file specified. Plz some 1 help i dont know what to do. Counter-Strike Mod? I am trying to make a mod for counter-strike source but I keep hitting some errors. I can load the counter-strike source screen as if i were going to play counter-strike normally, (i copied the files and fooled around with them to get this to work) but I still get the following error and I don't know how to fix it. "client.dll init() in library client failed" then I would get another box that said this: hl2.exe - instruction at 0x242aa924 referenced memory at 0x0d48808c. the memory could not be "read". Can you guys help me? I would greatly appreciate it. ~Will January 2, 2006 As far as i have seen, you are only able to "mod" HL2, the stripped SDK, and the stripped DM versions into a game of your own. CS is not openly mod-able. You can make smaller, less capable server plugins, but they are only really useful for manipulating the server via what has been allowed for output/listening functions. In other words, the games (HL2, HL2DM, DOD, CS, ETC) that valve release are not going to be able to be modded, they have released the 3 coding options. As for the error, did you build your own code or rip the dll's out of the gcf?—ImaTard 03:18, 2 Jan 2006 (PST) Hey Thanks for getting back to me so soon. Yeah, I just ripped the dll's out of the gcf file. It works fine until a couple seconds of the "loading" in the bottom right hand corner and then crashes. I appreciate your help. ~Will There doesn't happen to be a work-around? I'm just curious. You are unable to mod CS. Ripping the CS DLL's out of the GCF will only allow you to play CS. You may as well just jump in via steam and not bother. I dont know why it would be giving you errors, but it may be tied to how cs is loaded via steam, to force the players not to make their dlls work in their game, although i dont know why or how that would be possible. As many posters/web designers/Mappers have realized. There isnt much in the way of modding that can be done without C++, ie changing the DLL's. Either way, fix or no fix, you are really up a creek without a paddle. I would suggest making a server plugin if you want to truely effect CSS, but it wont be anythign significant enough to call a mod.—ImaTard 03:18, 2 Jan 2006 (PST) Please don't forget to sign your comments, using either three tildes (—ts2do (Talk | @)) for your name, or four tildes (—ts2do (Talk | @) 06:36, 3 Jan 2006 (PST)) for your name and a timestamp...ALSO: STOP WITH THE HEADERS!! jeez...it's so hard to follow with them—ts2do (Talk | @) 06:36, 3 Jan 2006 (PST) Thanks again for getting back to me. Sorry for the headers. Just wanted to let you know that I wasn't trying to somehow cheat the system, I'm just a CS junkie and wanted to add some of my ideas to it. Thanks again. --Ninjawillis 09:58, 3 Jan 2006 (PST) January 5th update now causing Hammer to crash Ever since the January 5th update for the SDK, when I try to open an existing map project in Hammer it crashes. I get the standard Windows error that asks you send a report to Microsoft. I can open a new project, but no existing projects. I also tried decompiling a map and opening the .vmf, and it still crashes. I have seen some posts on various mapping forums where others are experiencing this. The only thing people can offer is a polite recommendation to just be patient and wait for Valve to remedy the problem. Is this a known issue, and if so, is Valve working on a resolution? When can we expect to get an update that resolves this? dd - Refresh the SDK content. --TomEdwards 09:54, 10 Jan 2006 (PST) Anti-Spam Brigade I've made somewhat of a. It seems to work, so far. --AndrewNeo 09:47, 10 Jan 2006 (PST) It's free. Ok, I just got to say this. But please, remove that "Sign up - It's Free". Of course it's free, it's a wiki and even though it may be peoples first encoutner of a wiki, they still dosen't need to be told it's free. You make it sound like some kind of commercial where you get some stuff for free but later the bills will kick in. Its almost like saying "Download the demo of Half-Life 2 today, its free!". Can't you just change it to something like "Want to be a part? Create a login - Help buidling the center of Source development" or something. Just me who thinks this? And please don't remove this because it's critisism against the wiki. --Hipshot 10:08, 10 Jan 2006 (PST) Source SDK not ready to compile with Visual Studio 2005 The new Visual Studio 2005 gets new security features and so on many warnings and build errors if u try to compile the sources with Visual Studio 2005 (and or Visual C++ 2005 express edition) - Thank you, person who didn't sign their comment, but we already know this. Please use VS2003 to compile the Source SDK. --AndrewNeo 08:18, 11 Jan 2006 (PST) sorry i forgot my sign should i i have installed both ides on my pc at the same time? so i cant re-convert my nebula2 engine code back to vs2k3. took to mutch time... how long vs2k5 users need to wait for vs2k5 ready source sdk ? [ fibric 15:25, 11 Jan 2006 (PST) ] - If you want to code for Source, then yes, you should have both installed.. and we don't know. That's up to Valve to support it, they might decide not to support VS2005 express if the non-express version works. --AndrewNeo 16:25, 11 Jan 2006 (PST) This is ridiculous. Come on! What's keeping them?! It's can't be that hard to do! People are getting impatient. I for one can't start doing anything because 2003 is no more and 2005 is not supported. What should I do? Illegally download vs 2003 or keep waiting until the sdk upgrade project is scrapped after 6 months because it doesn't bring any money to Valve? Besides, how come a big and 'serious' company like Valve doesn't keep up with what they are releasing? It's nice they released the SDK but not looking into it and not supporting it has brought this whole situation on. They should have been ready for vs 2003 being discontinued. Somebody in Valve is not doing their job properly if you ask me... And they're not doing it AGAIN. --Eyeofaraven 04:16, 12 Feb 2006 (PST) No installation instructions anywhere to be found As simple as it may look... will someone please write *anything* about installation? I know I'm a dumbass but it took me a while to figure out on my own that the SDK was not a direct download but was in fact under the "Tools" tab under "My games" in steam, and this whole wiki didn't help at all. Yeah I'm an idiot... but still us idiots come in droves. --Wisgary 19:22, 11 Jan 2006 (PST) - I can't seem to find the SDK under either the "My Games" or "Tools" tab in steam, and yes, I do own and have two source games installed. Any other places I should be looking? --UrbanPredator 22:00, 13 Jan 2006 (PST) - Uhm. It's greyed out by default, I believe, and says something to the effect of "Source SDK", in the Tools submenu. You should be able to double click it to install it... I might be mistaken on this, but I think it's only visible if you have Half-Life 2 installed. If someone else has only Counter-Strike:Source or Day of Defeat:Source installed, and can verify this, I'd be much obliged. --Spektre1 23:26, 13 Jan 2006 (PST) - Currently, the HL2 is the only way to get the SDK. I'll see what I can do for you, Wisgary. --TomEdwards 01:37, 14 Jan 2006 (PST) *checks watch* Err, today's Tuesday, the 17th. Might wanna fix that. --Charron 04:33, 17 Jan 2006 (PST) - Ah! This finally explains Valve's slightly eccentric approach to release dates - they're obviously using a different calendar to the rest of us... ;-) —Cargo Cult (info, talk) 06:01, Fri 17 Jan 2006 (PST) what what?—ts2do 07:57, 17 Jan 2006 (PST) "MOD" Why is the text MOD displayed in capital letters as if it were an acronym? Its an abreviation for modification, so that makes no sense... EAi 07:44, 18 Jan 2006 (PST) Uploading Sample Maps Having worked on two tutorials here, I've noticed that the system disallows the uploading of .zip or .vmf files. Is this a functional limitation of the system, or can an administrator allow .vmf files to be uploaded? What does everyone else think about this? `zozart .chat @ 04:35, 21 Jan 2006 (PST) - As long as the server was configured right and didn't send the VMFs as the text mimetype, that would be a very good idea. --TomEdwards 04:38, 21 Jan 2006 (PST) Converting a dwg/3ds file to use in Hammer I am new to game developing, actually I am not a game developer I am just interested in finding someone who could convert either 3DS files or 3D DWG files to use them as a level in Half-life. I can supply you with a 3D file and if you could make a small level that would be playable that would be great. If you can do this email me at [email protected] Thank you for your help. Having problems starting the tool So I've installed the Source SDK... three times... The behavior I get is a slight pause and windows hourglass, then nothing. Absolutely nothing happens. No error, no warning, no nothing. I am using the 64 bit version of windows, so I'm hoping it just has something to do with paths, maybe? Steam is currently set to "D:\program files(x86)\steam" etc, rather than the default C:\program files\(etc). SupaSaru 00:28, 24 Jan 2006 (PST) - It's a Known issue. --TomEdwards 00:56, 24 Jan 2006 (PST) Issue running maps. I can't create a multiplayer server, to test a map or otherwise. It tells me that my STEAM authentication has failed. I can join servers just fine, and start offline games. Any clues?--TheRat 15:25, 25 Jan 2006 (PST) Link the obvious Valve Developer Community should be linked to Valve Developer Community:About. - Roy 15:53, 25 Jan 2006 (PST) HL2MP SDK - Unbounded use of an array in bone_setup.cpp Models are crashing HL2MP Clients with the message "Bad sequence (661 out of 601 max) in GetSequenceLinearMotion() for model 'Eli.mdl'!". The problem was debuged (by Paul 'zero' Peloski, HL2CTF Lead) who wrote me this: - The code that crashes in the bone_setup.cpp (function SlerpBones) when compiled on the client side. The problem is the unbounded use of an array offset used to get the global-to-local bone mapping array. I have fixed it in HL2CTF with a simple IsValidIndex() check… Mrmagu 16:17, 25 Jan 2006 (PST) Visual Studio .NET 2003 Is there a way to obtain free or trial version of the .NET 2003, i have the xpress 2005 version and have just read that it doesn't work, and have been trying for 2 days, (duh!). Any help much appreciated. No, there are however, cheap versions found at schools, normaly around 40 bucks each. win x64, after installing Hammer, CS acts odd (slow motion) First of all I found no troubleshooting pages so far. Second: I had no problems (except getting headshoots) with CS or CS:Source on x64 platform. Runs smooth and it is fun. After I installed and set up Hammer and loaded the sdk_de_cbble.vmf I tried to test it by Run the map (F9). It takes a long time to compile, but it is ok. I shut down a machine and another day dawns. Now I experienced that when I play CS or CS:CZ the game just slows down sometimes even if I just run straight. Happens with CS:CZ Deleted Scenes, online game, no matter what I do... My guess is that some debug setting is set and CS detects that but won't warn me about thats a debug mode or something. It is also strange, that numlock just stops the client which is another hint to think it is some kind of debug mode. The SDK has a menuitem which entitled as 'Reset Game Configurtions' but it didn't fix this issue... Any idea how can I get back my game? Any idea will be greatly appreciated (beside I will be driven mad with the current situation). I could delete all local content but due I have limited internet connection (and I have to pay for EVERY BIT I download or upload) reinstalling is not an option (and I'm just not sure it will help at all :((( Is someone going to post news on the Friends Beta? Regular K Wiki Policy suggestion To cut down on the amount of useless unlinked articles lost in the main namespace, I recommend a general policy of no-support, i.e. not allowing people to ask questions pertinent to individual projects of theirs directly on the wiki. We still have the Chatbear forums, and forums are a much better medium for getting help with things. On the same note, I do suggest we post solutions to common problems, but make sure they're linked on a page. We already have a lot of dead-end pages that should be dealt with, as well as Unused images. Personally, I believe that Valve should look into having other sysops, considering they have non-employee moderators on the forums. This would give the community a better standing against trolls and other problems we face every so often, as obviously the current admins can't always be here to watch over it (they, unlike us, have jobs!) ;) --AndrewNeo 17:14, 8 Feb 2006 (PST) 64-bit windows: no SSDK? Hello! I have a strange problem, maybe you can help me. I can't find anything similar in your support FAQ, and noone answered me on the email. About a year ago I traveled to London, there was free broadband internet, and I downloaded full HL2, CS:S, HL2:DM, Codename: Gordon, Source SDK and Source Dedicated Server. I saved all in backup file, bring the file to my home and successfully backup(ed) steam. After that, steam downloaded about 600 mb (update) (it's about 20$ for me... :/). But it doesen't matter. I have WinXP 64-bit edition. In HL2, i saw in the top of screen sign: "64-bit mode enabled" (it's good :))... But problem is... SSDK won't work anyway! If I'm trying to run installed separetely Hammer and then configure it, Windows catching an error and Hammer closing. From steam when I'm pressing "Source SDK" I see sign "loading..." for about 5 secs, then mouse cursor with "sand clocks", and then nothing... Is there special Hammer or SDK tools for 64-bit systems? You developing them or SSDK 32-bit vers. should work on 64-bit system? Why SSDK from steam won't run? Can you help me? Computer is: - AMD Athlon 64 FX-55 - ATI (Sapphire) Radeon X850XT PCI-X - nVidia nForce4 motherboard/chipset (model A8N-SLI Deluxe) - Enought hard disk space and 2GB RAM. I think that's all reasonable information. Please, answer me. Better on [email protected] THANKS!!!!!!!!!!!!!!! +++ SOLUTION FOUND. Thanks goes to Tyler :) To run SDK on 64-bit systems, run HL2 with console parameter -32bit. Then exit HL2. Now SDK should work! --- BUT STILL WAITING for 64-bit SDK support from VALVE!
https://developer.valvesoftware.com/w/index.php?title=Talk:Main_Page&oldid=25786
CC-MAIN-2021-10
en
refinedweb
Creating Custom Html Helper Methods Part 2 In a previous post we talked about creating custom html helpers. The implementation there involved customizing helpers that already existed in the ASP.NET MVC framework. In this post we will talk about creating our own helpers from scratch. This will allow us to create helpers for any kind of html element and will also provide flexibility on the element building process. The Situation When working with the Twitter Bootstrap framework, it is common for each property in the model to correspond to a label and an input. For example: <div class="form-group"> @Html.LabelFor(m => m.Name, new { @class = "control-label" }) @Html.TextBoxFor(m => m.Name, new { @class = "form-control" }) </div> Our goal is to create a new helper that wraps the form-group div, so that the end result will be this: @Html.FormGroupFor(m => m.Name) HtmlTags Before we get started, let's install a NuGet package called HtmlTags. We will use this to construct our html elements. HtmlTags can be installed through the NuGet package manager or through the Package Manager Console: Install-Package HtmlTags Building the Helper Class Now let's create a new class that will hold our FormGroupFor extension method. using HtmlTags; using System; using System.Linq.Expressions; using System.Web.Mvc; public static class HtmlHelpers { public static HtmlTag FormGroupFor<TModel, TValue>(this HtmlHelper<TModel> html, Expression<Func<TModel, TValue>> expression) { } } It is going to be an extension method so the class and the method both have to be static. The expression parameter will provide us with the information we need to build the html elements. The code won't compile yet, so let's continue working on our method. ModelMetadata and ExpressionHelper The ASP.NET MVC framework provides us with two classes that will be of great help to us. These are the ModelMetadata and ExpressionHelper classes. The ModelMetadata class is particularly useful because it lets us access the DataAnnotations we used to decorate the model. Let's look at how we can use ModelMetadata and ExpressionHelper. Inside the FormGroupFor method, let's put the following code: ModelMetadata metadata = ModelMetadata.FromLambdaExpression(expression, html.ViewData); string modelName = ExpressionHelper.GetExpressionText(expression); string labelText = metadata.DisplayName ?? modelName; First, we extract the property metadata using the ModelMetadata.FromLambdaExpression method. Then, we get the model name using the ExpressionHelper.GetExpressionText method. For simple models, the model name is equivalent to the property name. We then determine what label text to use based on the metadata and the model name. The DisplayName property of the metadata gets information from the DisplayAttribute. For example, if the property on the model was decorated with the [Display(Name = "Full Name")] attribute, then the DisplayName property will be populated. If it is populated, we use that as the label text. Otherwise, we just use the model name. Building the Elements Now let's finish the implementation by putting in the code that constructs the html elements:); This is where we use the HtmlTags library. Notice how it lets us construct html elements using a fluent syntax. The method names are also similar to those used in JQuery. We are constructing three elements: a div, a label, and an input. For each element, we add the appropriate Bootstrap class. We also add attributes using the modelName and labelText variables we populated earlier. At the end, we append the label and input to the form group and return the form group. The final method implementation should look like this: public static HtmlTag FormGroupFor<TModel, TValue>(this HtmlHelper<TModel> html, Expression<Func<TModel, TValue>> expression) { ModelMetadata metadata = ModelMetadata.FromLambdaExpression(expression, html.ViewData); string modelName = ExpressionHelper.GetExpressionText(expression); string labelText = metadata.DisplayName ?? modelName;); } And that's it! Once we include the appropriate using statement (if any) in our view, we should be able to call our method like so: @Html.FormGroupFor(m => m.Name) And it should produce a form-group div with a label and an input inside with the appropriate attributes. Extension Points Using this method will save us from a lot of typing and repetition in our views. But we can still take this further. By exploring the ModelMetadata class and / or using reflection to get attribute information from our models, we can place logic in our helper that will allow us to create customized elements. We can also create overloads that accept more parameters to support further customization. Following are some examples of what can be done: - Supporting html validation attributes, such as required, minlength, and more. - Supporting different input types, such as inputor a textarea. - Supporting angular elements and attributes. - Creating a method that will create an entire form using a single method call. - And more! Conclusion In this post we created a custom html helper extension method that creates a form group for us, which includes the appropriate label and input elements. We also took a quick look at how we can extract information from the model using the ModelMetadata class. Finally, we looked at a few ideas on how to extend our method to support a wider range of elements and attributes.
https://www.ojdevelops.com/2015/12/creating-custom-html-helper-methods.html
CC-MAIN-2021-10
en
refinedweb
The Laravel team released 8.13 this week and updated the changelog detailing all the new features in last week’s 8.12 release. The new features added to Laravel over the previous few weeks are packed with exciting framework updates, so let’s look at what’s new! 8.12: Create Observers With a Custom Path @StefanoDucciConvenia contributed the ability to use stubs with the make:observer command (#34911). 8.12: Lazy Method in 8.x Eloquent Factory Mathieu TUDISCO contributed the ability to create a callback that persists a model in the database when invoked. In previous versions of Laravel, the FactoryBuilder had a lazy() method that only creates a record if called. Now, 8.x factories can do the same: $factory = User::factory()->lazy(); $factory = User::factory()->lazy(['name' => 'Example User']); $factory(); 8.12: Encrypted String Eloquent Cast Jason McCreary contributed an eloquent cast that will handle encryption and decryption of a simple string: public $casts = [ 'access_token' => 'encrypted', ]; 8.12: New DatabaseRefreshed Event Adam Campbell contributed a new DatabaseRefreshed event that fires right after both a migrate:fresh and migrate:refresh command. The new event allows developers to perform secondary actions after refreshing the database. You may use the event class from the following namespace: \Illuminate\Database\Events\DatabaseRefreshed::class 8.12: New withColumn() to support Aggregate Functions Khalil Laleh contributed a withColumn method to support more SQL aggregation functions like min, max, sum, avg, etc. over relationships: Post::withCount('comments'); Post::withMax('comments', 'created_at'); Post::withMin('comments', 'created_at'); Post::withSum('comments', 'foo'); Post::withAvg('comments', 'foo'); You might want to check out Pull Request #34965 for more details. 8.12: Add explain() to Eloquent/Query Builder Illia Sakovich contributed an explain() method to the query builder/eloquent builder, which allows you to receive the explanation query from the builder: Webhook::where('event', 'users.registered')->explain() Webhook::where('event', 'users.registered')->explain()->dd() Now you can call explain() to return the explanation or chain a dd() call to die and dump the explanation. 8.12: Full PHP 8 Support Dries Vints has been working on adding PHP 8 support to the Laravel ecosystem, which involves various libraries (both first- and third-party libraries) and coordination of many efforts. A HUGE thanks to Dries and all those involved in getting Laravel ready for the next major PHP version! 8.12: Route Registration Methods Gregori Piñeres contributed some route regex registration methods to easily define route params for repetitive, regular expressions you might add to route params: //'); 8.12: Don’t Release Option for Job Rate Limiting Paras Malhotra contributed a dontRelease() option for RateLimited and RateLimitedWithRedis job middleware: public function middleware() { return [(new RateLimited('backups'))->dontRelease()]; } When called, the dontRelease() method will not release the job back to the queue when the job is rate limited. 8.13: Aggregate Load Methods In 8.12, Khalil Laleh contributed the withColumn() method to support aggregate functions. In 8.13, he contributed load* methods for aggregate functions: public function loadAggregate($relations, $column, $function = null) public function loadCount($relations) public function loadMax($relations, $column) public function loadMin($relations, $column) public function loadSum($relations, $column) public function loadAvg($relations, $column) 8.13: Add chunk() to Fluent Strings Chris Kankiewicz contributed a chunk() method to fluent strings which allows a string to be chunked by a specific length: // returns a collection Str::of('foobarbaz')->chunk(3); // Returns 'foo-bar-baz' Str::of('FooBarBaz')->lower()->chunk(3)->implode('-'); Release Notes You can see the full list of new features and updates below and the diff between 8.11.0 and 8.12.0 and 8.12.0 and 8.13.0 on GitHub. The following release notes are directly from the changelog: * v8.13.0 Added - Added loadMax()| loadMin()| loadSum()| loadAvg()methods to Illuminate\Database\Eloquent\Collection. Added loadMax()| loadMin()| loadSum()| loadAvg()| loadMorphMax()| loadMorphMin()| loadMorphSum()| loadMorphAvg()methods to Illuminate\Database\Eloquent\Model(#35029) - Modify Illuminate\Database\Eloquent\Concerns\QueriesRelationships::has()method to support MorphTo relations (#35050) - Added Illuminate\Support\Stringable::chunk()(#35038) Fixed - Fixed a few issues in Illuminate\Database\Eloquent\Concerns\QueriesRelationships::withAggregate()(#35061, #35063) Changed Refactoring v8.12.0 Added - Added ability to create observers with custom path via make:observercommand (#34911) - Added Illuminate\Database\Eloquent\Factories\Factory::lazy()(#34923) - Added ability to make cast with custom stub file via make:castcommand (#34930) - ADDED: Custom casts can implement increment/decrement logic (#34964) - Added encrypted Eloquent cast (#34937, #34948) - Added DatabaseRefreshedevent to be emitted after database refreshed (#34952, f31bfe2) - Added withMax()| withMin()| withSum()| withAvg()methods to Illuminate/Database/Eloquent/Concerns/QueriesRelationships(#34965, f4e4d95, #35004) - Added explain()to Query\Builderand Eloquent\Builder(#34969) - Make multiple_ofvalidation rule handle non-integer values (#34971) - Added setKeysForSelectQuerymethod and use it when refreshing model data in Models (#34974) - Full PHP 8.0 Support (#33388) - Added Illuminate\Support\Reflector::isCallable()(#34994, 8c16891, 31917ab, 11cfa4d, #34999) - Added route regex registration methods (#34997, 3d405cc, c2df0d5) - Added dontRelease option to RateLimited and RateLimitedWithRedis job middleware (#35010) Fixed - Fixed check of file path in Illuminate\Database\Schema\PostgresSchemaState::load()(268237f) - Fixed: PhpRedis (v5.3.2)cluster – set default connection context to null(#34935) - Fixed Eloquent Model loadMorphand loadMorphCountmethods (#34972) - Fixed ambigious column on many to many with select load (5007986) - Fixed Postgres Dump (#35018)
https://laravel-news.com/laravel-8-13-0
CC-MAIN-2021-10
en
refinedweb
, Cdf, Suite import thinkbayes2 import thinkplot import numpy as np from scipy.special import gamma import pymc3 as pm (as, in fact, they did)? Let's assume that Germany has some hypothetical goal-scoring rate, λ, in goals per game. To represent the prior distribution of λ, I'll use a Gamma distribution with mean 1.3, which is the average number of goals per team per game in World Cup play. Here's what the prior looks like. from thinkbayes2 import MakeGammaPmf xs = np.linspace(0, 12, 101) pmf_gamma = MakeGammaPmf(xs, 1.3) thinkplot.Pdf(pmf_gamma) thinkplot.decorate(title='Gamma PDF', xlabel='Goals per game', ylabel='PDF') pmf_gamma.Mean() class Soccer(Suite): """Represents hypotheses about goal-scoring rates.""" def Likelihood(self, data, hypo): """Computes the likelihood of the data under the hypothesis. hypo: scoring rate in goals per game data: interarrival time in minutes """ x = data / 90 lam = hypo like = lam * np.exp(-lam * x) return like Now we can create a Soccer object and initialize it with the prior Pmf: prior = Soccer(pmf_gamma) thinkplot.Pdf(prior) thinkplot.decorate(title='Gamma prior', xlabel='Goals per game', ylabel='PDF') prior.Mean() Here's the update after the first goal at 11 minutes. posterior1 = prior.Copy() posterior1.Update(11) thinkplot.Pdf(prior, color='0.7') thinkplot.Pdf(posterior1) thinkplot.decorate(title='Posterior after 1 goal', xlabel='Goals per game', ylabel='PDF') posterior1.Mean() Here's the update after the second goal at 23 minutes (the time between first and second goals is 12 minutes). posterior2 = posterior1.Copy() posterior2.Update(12) thinkplot.Pdf(prior, color='0.7') thinkplot.Pdf(posterior1, color='0.7') thinkplot.Pdf(posterior2) thinkplot.decorate(title='Posterior after 2 goals', xlabel='Goals per game', ylabel='PDF') posterior2.Mean() from thinkbayes2 import MakePoissonPmf We can compute the mixture of these distributions by making a Meta-Pmf that maps from each Poisson Pmf to its probability. rem_time = 90 - 23 metapmf = Pmf() for lam, prob in posterior2.Items(): lt = lam * rem_time / 90 pred = MakePoissonPmf(lt, 15) metapmf[pred] = prob MakeMixture takes a Meta-Pmf (a Pmf that contains Pmfs) and returns a single Pmf that represents the weighted mixture of distributions:[x] += p1 * p2 return mix Here's the result for the World Cup problem. mix = MakeMixture(metapmf) mix.Print() And here's what the mixture looks like. thinkplot.Hist(mix) thinkplot.decorate(title='Posterior predictive distribution', xlabel='Goals scored', ylabel='PMF') Exercise: Compute the predictive mean and the probability of scoring 5 or more additional goals. # Solution goes here cdf_gamma = pmf_gamma.MakeCdf(); mean_rate = 1.3 with pm.Model() as model: lam = pm.Gamma('lam', alpha=mean_rate, beta=1) trace = pm.sample_prior_predictive(1000) lam_sample = trace['lam'] print(lam_sample.mean()) cdf_lam = Cdf(lam_sample) thinkplot.Cdf(cdf_gamma, label='Prior grid') thinkplot.Cdf(cdf_lam, label='Prior MCMC') thinkplot.decorate(xlabel='Goal scoring rate', ylabel='Cdf') Let's look at the prior predictive distribution for the time between goals (in games). with pm.Model() as model: lam = pm.Gamma('lam', alpha=mean_rate, beta=1) gap = pm.Exponential('gap', lam) trace = pm.sample_prior_predictive(1000) gap_sample = trace['gap'] print(gap_sample.mean()) cdf_lam = Cdf(gap_sample) thinkplot.Cdf(cdf_lam) thinkplot.decorate(xlabel='Time between goals (games)', ylabel='Cdf') Now we're ready for the inverse problem, estimating lam based on the first observed gap. first_gap = 11/90 with pm.Model() as model: lam = pm.Gamma('lam', alpha=mean_rate, beta=1) gap = pm.Exponential('gap', lam, observed=first_gap) trace = pm.sample(1000, tune=3000) pm.traceplot(trace); lam_sample = trace['lam'] print(lam_sample.mean()) print(posterior1.Mean()) cdf_lam = Cdf(lam_sample) thinkplot.Cdf(posterior1.MakeCdf(), label='Posterior analytic') thinkplot.Cdf(cdf_lam, label='Posterior MCMC') thinkplot.decorate(xlabel='Goal scoring rate', ylabel='Cdf') And here's the inverse problem with both observed gaps. second_gap = 12/90 with pm.Model() as model: lam = pm.Gamma('lam', alpha=mean_rate, beta=1) gap = pm.Exponential('gap', lam, observed=[first_gap, second_gap]) trace = pm.sample(1000, tune=2000) pm.traceplot(trace); lam_sample = trace['lam'] print(lam_sample.mean()) print(posterior2.Mean()) cdf_lam = Cdf(lam_sample) thinkplot.Cdf(posterior2.MakeCdf(), label='Posterior analytic')=1000) gap_sample = post_pred['gap'].flatten() print(gap_sample.mean()) cdf_gap = Cdf(gap_sample) thinkplot.Cdf(cdf_gap) thinkplot.decorate(xlabel='Time between goals (games)', ylabel='Cdf') Exercise: Use PyMC to write a solution to the second World Cup problem: In the final match of the 2014 FIFA World Cup, Germany defeated Argentina 1-0. How much evidence does this victory provide that Germany had the better team? What is the probability that Germany would win a rematch? with pm.Model() as model: lam = pm.Gamma('lam', alpha=mean_rate, beta=1) goals = pm.Poisson('goals', lam, observed=1) trace = pm.sample(3000, tune=3000) pm.traceplot(trace); lam_sample = trace['lam'] print(lam_sample.mean()) cdf_lam = Cdf(lam_sample)=3000) goal_sample = post_pred['goals'].flatten() print(goal_sample.mean()) pmf_goals = Pmf(goal_sample) thinkplot.Hist(pmf_goals) thinkplot.decorate(xlabel='Number of goals', ylabel='Cdf') from scipy.stats import poisson class Soccer2(thinkbayes2.Suite): """Represents hypotheses about goal-scoring rates.""" def Likelihood(self, data, hypo): """Computes the likelihood of the data under the hypothesis. hypo: goal rate in goals per game data: goals scored in a game """ return poisson.pmf(data, hypo) from thinkbayes2 import MakeGammaPmf xs = np.linspace(0, 8, 101) pmf = MakeGammaPmf(xs, 1.3) thinkplot.Pdf(pmf) thinkplot.decorate(xlabel='Goal-scoring rate (λ)', ylabel='PMF') pmf.Mean() germany = Soccer2(pmf); germany.Update(1), 10) metapmf[pred] = prob mix = thinkbayes2.MakeMixture(metapmf, label=label) return mix germany_pred = PredictiveDist(germany, label='germany') thinkplot.Hist(germany_pred, width=0.45, align='right') thinkplot.Hist(pmf_goals, width=0.45, align='left') thinkplot.decorate(xlabel='Predicted # goals', ylabel='Pmf') thinkplot.Cdf(germany_pred.MakeCdf(), label='Grid') thinkplot.Cdf(Cdf(goal_sample), label='MCMC') thinkplot.decorate(xlabel='Predicted # goals', ylabel='Pmf')
https://nbviewer.jupyter.org/github/AllenDowney/ThinkBayes2/blob/master/examples/world_cup_mcmc.ipynb
CC-MAIN-2021-10
en
refinedweb
.util.mime;50 51 import java.util.ArrayList ;52 import java.util.List ;53 import java.util.StringTokenizer ;54 55 /**56 * Class which represents a MimeType including the suffixes which are mapped to the mime type.57 *58 * @author Anthony Eden59 */60 61 public class MimeType {62 63 /**64 * Delimiter for separating suffixes in a suffix string (,).65 */66 public static final String DELIMITER = ",";67 68 private List suffixes;69 70 /**71 * Construct a MimeType object.72 */73 74 public MimeType() {75 suffixes = new ArrayList ();76 }77 78 /**79 * Create a MimeType with the suffixes specified in the given String. Suffixes are delimited by one of the80 * characters in the DELIMITER constant.81 *82 * @param suffixString The suffix String83 */84 85 public MimeType(String suffixString) {86 suffixes = new ArrayList ();87 addSuffixes(suffixString);88 }89 90 /**91 * Return a list of suffixes for this mime type. For example, the text/html mime type could have the suffixes92 * <code>html</code> and <code>htm</code> associated with it.93 *94 * @return A List of suffixes95 */96 97 public List getSuffixes() {98 return suffixes;99 }100 101 /**102 * Add suffixes defined in the suffix String to the MimeType's current suffix list. Each suffix will be trimmed to103 * remove leading and trailing spaces.104 *105 * @param suffixString The suffix String106 */107 108 public void addSuffixes(String suffixString) {109 StringTokenizer tk = new StringTokenizer (suffixString, DELIMITER);110 while (tk.hasMoreTokens()) {111 suffixes.add(tk.nextToken().trim());112 }113 }114 115 }116 Java API By Example, From Geeks To Geeks. | Our Blog | Conditions of Use | About Us_ |
http://kickjava.com/src/org/jpublish/util/mime/MimeType.java.htm
CC-MAIN-2018-26
en
refinedweb
Details You can use these API anywhere in the application. Here are several usage examples: - A game keeps a score for a player. Suppose players get credits and promotions based on their score. You can maintain the score as secure data while the game is being played and save it as secure storage when the game is closed. - A personal note application stores all data on the web. To improve performance, while maintaining confidentiality, you can download the data (requested by the user) from the web and cache it using secure storage on the device. The next time it is used, the application can read it from the device and does not need to access the web. - An application uses access and authentication to a cloud backend. To keep the communication alive, it uses a token that was created by the backend. The application can cache the token using secure storage, and send the token securely to the backend using secure transport. The API are grouped in ‘mega-functions’ (namespace); each mega-function includes a collection of API within the scope of functionality. This version of the Intel App Security API is primarily targeting the introduction of the new API; you can take advantage of capabilities that are on par with the capabilities of the platform and operating system. The API is built in a way that can be extended while maintaining a solid API layer. Future extensions may include: - Improved security of the middleware implementation using hardware and advanced software techniques. - Additional API scope, such as secure identity, input, and output. API Scope (Mega Functions) - Secure Data Collection of API that provide data-in-use protection and data sealing support. Enables creating, managing, and using a data stream object in memory. Access to this object is done via an instance ID. Sensitive object properties and sensitive content are hidden. - Secure Storage Collection of API that provide data-at-rest protection. Enables storing and retrieving data objects using non-volatile storage. - Secure Transport Collection of API that provide client-server HTTPS communication enhanced protection. Enables sending secure data elements within regular data to trusted remote servers. Data Structures - Common Data Structures Common data structures used within the different API.
https://software.intel.com/en-us/app-security-api/details
CC-MAIN-2018-26
en
refinedweb
GC_malloc, GC_malloc_atomic, GC_free, GC_realloc, GC_enable_incremental, GC_register_finalizer, GC_malloc_ignore_off_page, GC_malloc_atomic_ignore_off_page, GC_set_warn_proc - Garbage collecting malloc replacement #include <gc.h> void * GC_malloc(size_t size); void GC_free(void *ptr); void * GC_realloc(void *ptr, size_t size); + cc ... -I/usr/include/gc -lgc GC_malloc and GC_free are plug-in replacements for standard malloc and free. However,_DEBUG before including gc.h. See the documentation in the include file gc_cpp.h for an alternate, C++ specific interface to the garbage registration of functions that are invoked when an object becomes inaccessible. The garbage collector tries to avoid allocating memory at locations that already appear to be referenced before allocation. (Such apparent ‘‘pointers’’ recommended for large object allocation. (Objects expected to be larger than about 100KBytes should be allocated this way.) It is also possible to use the collector to find storage leaks in programs support incremental collection on machines without appropriate VM support, provisions for providing more explicit object layout information to the garbage collector, more direct support for ‘‘weak’’ pointers, support for ‘‘abortable’’ garbage collections during idle time, etc. The README and gc.h files in the distribution. More detailed definitions of the functions exported by the collector are given there. (The above list is not complete.) The web site at . Boehm, H., and M. Weiser, "Garbage Collection in an Uncooperative Environment", Software Practice & Experience, September 1988, pp. 807-820. The malloc(3) man page. Hans-J. Boehm (Hans.Boehm@hp.com). Some of the code was written by others, most notably Alan Demers. 2 October 2003 GC_MALLOC(3)
http://huge-man-linux.net/man3/gc.html
CC-MAIN-2018-26
en
refinedweb
This will fail to compile on SGI Irix 6.5 systems (at least using the SGI C compiler). The problem is that HAVE_MBRTOWC is not defined so lib/quotearg.c, at line 70, has: # define mbstate_t int But, <wctype.h> is then included, and this include <wchar.h>, which comes across: typedef char mbstate_t; The fix is to define _MBSTATE_T for SGI systems. Patch to fix it is: --- quotearg.c.orig Mon Oct 9 23:54:51 2000 +++ quotearg.c Thu Mar 28 14:49:33 2002 @@ -68,6 +68,9 @@ # define mbrtowc(pwc, s, n, ps) 1 # define mbsinit(ps) 1 # define mbstate_t int +#ifdef __sgi +# define _MBSTATE_T +#endif #endif #if HAVE_WCTYPE_H
http://lists.gnu.org/archive/html/bug-findutils/2002-04/msg00001.html
CC-MAIN-2018-26
en
refinedweb
Provides a splitter component for Angular Dart @Component( selector: 'my-app', styles: const [ ''' :host { display: block; width: 100%; height: 100%; } .container { width: 50%; height: 50%; display: flex; flex-direction: row; background-color: black; } .panel { height: 100%; width: calc((100% - 14px) / 3); } ''' ], template: r''' <div class="container"> <div class="panel first" style="background-color: red;"></div> <splitter></splitter> <div class="panel second" style="background-color: blue;"></div> <splitter></splitter> <div class="panel third" style="background-color: green;"></div> </div> ''', directives: const [materialDirectives, Splitter], providers: const [materialProviders], ) class AppComponent {} More examples: Parent must use flex layout. The direction of flex layout must be chosen depending on the orientation of the panel layout. For vertical layout, .container { display: flex; flex-direction: row; } For horizontal layout, .container { display: flex; flex-direction: column; } Parent must have defined size. .container { width: 50%; height: 100%; } Children's tran-section size must fill the parent. Care must be taken that splitter's transaction size is deducted. Children's cross-section size must fill the parent. For vertical layout, .panel { height: 100%; width: calc((100% - 14px) / 3); } For horizontal layout, .panel { width: 100%; height: calc((100% - 14px) / 3); } Splitterimplementation Add this to your package's pubspec.yaml file: dependencies: webide_splitter: "^1.5.5" You can install packages from the command line: with pub: $ pub get Alternatively, your editor might support pub get. Check the docs for your editor to learn more. Now in your Dart code, you can use: import 'package:webide_splitter/webide_splitter.dart'; We analyzed this package on Jun 12, 2018, and provided a score, details, and suggestions below. Analysis was completed with status completed using: Detected platforms: web Primary library: package:webide_splitter/webide_splitter.dartwith components: html. webide_splitter.dart.
https://pub.dartlang.org/packages/webide_splitter
CC-MAIN-2018-26
en
refinedweb
Many hyperlinks are disabled. Use anonymous login to enable hyperlinks. Tcl D-Bus Interface dbif - Application layer around the Tcl D-Bus library Synopsis package require Tcl 8.5 package require dbus 2.1 package require dbif 1.3 dbif connect ?-bus bustype? ?-noqueue? ?-replace? ?-yield? ?name ...? dbif default ?-bus bustype? ?-interface interface? dbif delete ?-bus bustype? ?-interface interface? ?-single? path dbif error messageID errormessage ?errorname? dbif generate signalID ?arg ...? dbif get messageID name dbif listen ?-bus bustype? ?-interface interface? path name ?arglist? ?interp? body dbif method ?-attributes attributes? ?-bus bustype? ?-interface interface? path name ?inputargs ?outputargs?? ?interp? body dbif pave ?-bus bustype? ?-interface interface? path dbif property ?-access mode? ?-attributes attributes? ?-bus bustype? ?-interface interface? path name?:signature? variable ??interp? body? dbif return messageID returnvalue dbif signal ?-attributes attributes? ?-bus bustype? ?-id signalID? ?-interface interface? path name ?arglist ??interp? args body?? Description The dbif package provides a higher level wrapper around the low-level D-Bus commands provided by the dbus package. The package also provides an implementation of a number of standard D-Bus interfaces. See STANDARD INTERFACES for more information. Access to all functions of the dbif package from within a Tcl program is done using the dbif command. The command supports several subcommands that determine what action is carried out. - dbif connect ?-bus bustype? ?-noqueue? ?-replace? ?-yield? ?name ...? - Connect to a message bus and optionally request the D-Bus server to assign one or more names to the current application. The -yield option specifies that the application will release the requested name when some other application requests the same name and has indicated that it wants to take over ownership of the name. The application will be informed by a NameLost signal when it loses ownership of the name. The -replace option indicates that the application wants to take over the ownership of the name from the application that is currently the primary owner, if any. This request will only be honoured if the current owner has indicated that it will release the name on request. See also the -yield option. If the requested name is currently in use and the -replace option has not been specified, or the -replace -noqueue option may be specified to indicate that the name request should not be queued. The command returns a list of names that have successfully been acquired. If the dbus connection handle is needed, it can be obtained from the -bus return option. The following code can be used to allow a new instance of a program to replace the current one. This can be useful during program development: dbif connect -yield -replace $dbusname dbif listen -interface [dbus info service] \ [dbus info path] NameLost name {if {$name eq $::dbusname} exit} - dbif default ?-bus bustype? ?-interface interface? - Generally an application will perform several dbif commands related to the same message bus and interface. To avoid having to pass the same values for the -bus and -interface options with all those commands, their defaults can be setup with the dbif default subcommand. An interface name must consist of at least two elements separated by a period ('.') character. Each element may only contain the characters "[A-Z][a-z][0-9]_" and must not begin with a digit. The initial value for -bus is session. The initial value for -interface is taken from the first name requested for the application in a dbif connect command. If no name was ever requested with the connect subcommand, it defaults to "com.tclcode.default". - dbif delete ?-bus bustype? ?-interface interface? ?-single? path - While there currently is no way to remove individual signals, methods, or properties from the published interface, this subcommand allows the removal of a complete node. Unless the -single option is specified, the command will also recursively delete nodes on all underlying object paths. - dbif error messageID errormessage ?errorname? - Send a D-Bus error message in response to a D-Bus method call. If the errorname argument is not specified, it defaults to "org.freedesktop.DBus.Error.Failed". - dbif generate signalID ?arg ...? - Generate a signal as defined by a previous dbif signal command. If a body was specified with the signal definition, the provided arguments must match the args definition for the body. Otherwise they must match the arglist specified during the definition of the signal. - dbif get messageID name - Access additional information about a D-Bus message. Recognized names are bus, member, interface, path, sender, destination, messagetype, signature, serial, replyserial, noreply, autostart, and errorname. - dbif listen ?-bus bustype? ?-interface interface? path name ?arglist? ?interp? body - Start listening for the specified signal and execute body when such a signal appears on the D-Bus. The code in body will be executed in the namespace the dbif listen command was issued from. The arglist argument follows the special rules for dbif argument lists. See ARGUMENT LISTS below for more information. - dbif method ?-attributes attributes? ?-bus bustype? ?-interface interface? path name ?inputargs ?outputargs?? ?interp? body - Define a method that may be accessed through the D-Bus and execute body when the method is invoked. In addition to valid dbus paths, an empty string may be specified for the path argument. This makes the method available on all paths. The inputargs argument specifies which arguments must be provided by the caller. The outputargs argument indicates the type of result the method returns. Attributes may be specified via the -attributes option to provide hints to users of your API. See ATTRIBUTES below for more information. The return value resulting from executing the body will normally be returned to the caller in a D-Bus return message. If an uncaught error occurs or the result of body doesn't match outputargs, an error message will be returned to the caller instead. The body code recognizes an additional -async option for the Tcl return command. When that option is specified with a true boolean value (true, yes, 1), the return value from the body will not automatically be returned to the caller. A response message should then be generated using the dbif return or dbif error subcommands. An additional variable msgid will be passed to the method body. This variable contains a messageID that may be used in combination with the get, return, or error subcommands. The messageID remains valid for a period of time (default 25 seconds), or until a response has been returned to the caller, whichever happens first. The code in body will be executed in the namespace the dbif method command was issued from. The inputargs and outputargs arguments follow the special rules for dbif argument lists. See ARGUMENT LISTS below for more information. - dbif pave ?-bus bustype? ?-interface interface? path -). - dbif property ?-access mode? ?-attributes attributes? ?-bus bustype? ?-interface interface? path name?:signature? variable ??interp? body? - Define a property that may be accessed through the D-Bus using methods defined by the org.freedesktop.DBus.Properties standard interface. The variable argument defines the global variable holding the value of the property. The signature of a property must be a single complete type. The -access option specifies whether the property can be viewed and/or modified through the D-Bus. Valid access modes are read, write, and readwrite. If no access mode is specified, it defaults to readwrite. Attributes may be specified via the -attributes option to provide hints to users of your API. See ATTRIBUTES below for more information. The code in the optional body argument will be executed when the property is modified through the D-Bus. During the execution of body the global variable will still have its original value, if any. The new value for the property is passed to the script as an argument with the same name as the property. If execution of body results in an error, the global variable will not be modified. This allows restrictions to be imposed on the value for the property. The code in body will be executed in the namespace the dbif property command was issued from or, if a slave interpreter was specified, in the current namespace of that slave interpreter at definition time. Generating the property value only when needed can be implemented by putting a read trace on the global variable. Example: dbif property -attributes {Property.EmitsChangedSignal false} / clock sec trace add variable sec read {apply {args {set ::sec [clock seconds]}}} In this example the Property.EmitsChangedSignal attribute is used to prevent the PropertiesChanged signal being generated, which would involve a second read of the variable. - dbif return messageID returnvalue - Send a D-Bus return message in response to a D-Bus method call. The provided returnvalue must match the signature specified earlier in the dbif method command for the method. - dbif signal ?-attributes attributes? ?-bus bustype? ?-id signalID? ?-interface interface? path name ?arglist ??interp? args body?? - Define a signal that the application may emit using the dbif generate subcommand. Signals are referred to by their SignalID. If -id is specified, it is used as the SignalID. Otherwise a new unique identifier is generated. Specifying an existing SignalID replaces the previously defined signal. Attributes may be specified via the -attributes option to provide hints to users of your API. See ATTRIBUTES below for more information. The command returns the SignalID of the newly created signal. If the optional args and body arguments are specified, body will be executed when the signal is transmitted on the D-Bus as a result of the dbif generate subcommand. It is the responsibility of the body code to produce a return value that matches the specified arglist. The code in body will be executed in the namespace the dbif signal command was issued from. If any uncaught error happens during the execution of the body code, the dbif generate command will also throw an error with the same error message. When the body code comes to the conclusion that the signal doesn't need to be sent after all, it may abort the operation by returning using [return -code return]. The arglist argument follows the special rules for dbif argument lists. See ARGUMENT LISTS below for more information. In addition to valid dbus paths, an empty string may be specified for the path argument. This makes the signal available on all paths. In this case a body must be provided and the body code must provide a path in the -path option to the return command. For example: The following helper proc could be used to allow providing a path to the dbif generate command in front of the signal arguments: proc stdsignal {path args} { # Single argument signal bodies are not expected to produce a list if {[llength $args] == 1} {set args [lindex $args 0]} return -path $path $args } BUS TYPES The -bus option of the various subcommands takes a bustype value that can take several forms: One of the well-known bus names: 'session', 'system', or 'startup'. - A bus address, consisting of a transport name followed by a colon, and then an optional, comma-separated list of keys and values in the form key=value. - A handle as returned by the dbus connect subcommand. VALID NAMES The dbif package enforces some limitations on names used with the dbif subcommands. All names must only use the characters "[A-Z][a-z][0-9]_". This limitation applies to method names, property names, signal names, and argument names. Out of this group, only argument names may begin with a digit. Interface names and error names must consist of at least two elements separated by a period ('.') character. Each element must only contain the characters "[A-Z][a-z][0-9]_" and must not begin with a digit. D-Bus names for applications must follow the same rules as interface names, except that also dash ('-') characters are allowed. Unique D-Bus names begin with a colon (':'). The elements of unique D-Bus names are allowed to begin with a digit. Paths must start with a slash ('/') and must consist of elements separated by slash characters. Each element must only contain the characters "[A-Z][a-z][0-9]_". Empty elements are not allowed. ARGUMENT LISTS Due to the fact that the D-Bus specification works with typed arguments, a slightly modified method for specifying argument lists has been adopted for the dbif package. The normal Tcl argument list as used with the proc and apply. The following argument types are available: - s - A UTF-8 encoded, nul-terminated Unicode string. - b - A boolean, FALSE (0), or TRUE (1). - y - A byte (8-bit unsigned integer). - n - A 16-bit signed integer. - q - A 16-bit unsigned integer. - i - A 32-bit signed integer. - u - A 32-bit unsigned integer. - x - A 64-bit signed integer. - t - A 64-bit unsigned integer. - d - An 8-byte double in IEEE 754 format. - g - A type signature. - o - An object path. - a# - A D-Bus array type, which is similar to a Tcl list. The # specifies the type of the array elements. This can be any type, including another array, a struct or a dict entry. - v - A D-Bus variant type. Specifying this type will cause the code to automatically determine the type of the provided value (by looking at the internal representation). - (...) - A struct. The string inside the parentheses defines the types of the arguments within the struct, which may consist of a combination of any of the existing types. - {##} -. Argument lists may contain optional arguments. The use of optional arguments will result in multiple prototypes being reported for the object when introspected. The special meaning of the args argument does not translate well in the D-Bus concept. For that reason using args as the last argument of an argument list should be avoided. STANDARD INTERFACES A number of standard interfaces have been defined in the D-Bus specification that may be useful across various D-Bus applications. org.freedesktop.DBus.Peer - org.freedesktop.DBus.Peer.Ping - Returns an empty response. - org.freedesktop.DBus.Peer.GetMachineId - Returns a hex-encoded UUID representing the identity of the machine the application is running on. org.freedesktop.DBus.Introspectable - org.freedesktop.DBus.Introspectable.Introspect - Returns an XML description of the D-Bus structure, including its interfaces (with signals and methods), objects below it in the object path tree, and its properties. org.freedesktop.DBus.Properties - org.freedesktop.DBus.Properties.Get - Returns the value of the specified property. Only valid for properties with read or readwrite access. - org.freedesktop.DBus.Properties.Set - Changes the value of the specified property. Only valid for properties with write or readwrite access. - org.freedesktop.DBus.Properties.GetAll - Returns a dict of all properties with read or readwrite access. - org.freedesktop.DBus.Properties.PropertiesChanged - This signal is emitted when one or more properties change. The behavior for individual properties may be influenced by their Property.EmitsChangedSignal attribute. See ATTRIBUTES below. All applicable property changes are collected and reported via a single PropertiesChanged signal per path/interface/bus combination when the application enters the idle loop. The signal may also be generated on demand via the command: - dbif generate PropertiesChanged path ?interface? ?bus? -: package require dbif dbif signal -id PropertiesChanged / foobar dbif delete / ATTRIBUTES Attributes may be specified as a list of key/value pairs for methods, signals, and properties. These attributes are reported via annotations in the XML description obtained via an Introspect method call. Annotations may be used to provide hints to users of your API. Some well-know attributes are (default, if any, shown in italics): - Description - Provide a short 1-line description of the method, signal or property. - Deprecated - Indicate that this method is deprecated (true, false). - Method.NoReply - This method may not produce a reply (true, false). For example if you provide a method to exit your application. - Method.Error - This method may throw the indicated Exception in addition to the standard ones. - Property.EmitsChangedSignal - Indicates whether a change to the property is reported via the PropertiesChanged signal (true, false, invalidates, const). The value of this attribute, if specified, is also used internally to influence the automatic generation of the PropertiesChanged signal. - true - The signal is emitted with the value included. This is the default. - false - The signal is not automatically emitted on a change. Parties interested in the property should obtain it every time they need it. The application code may still emit a PropertiesChanged signal whenever desired. This may be used for properties that are implemented with a read trace on the global variable. - invalidates - The signal is emitted but the value is not included in the signal. This may be useful for properties that change much more frequently than they are expected to be queried, and/or have large values. - const - The property never changes its value during the lifetime of the object it belongs to, and hence the signal is never emitted for it.
https://chiselapp.com/user/schelte/repository/dbif/info/3ad654f7e4bc6b0e67d7eac488e6d36d8db84329736012a62d480d9dc4e7b341
CC-MAIN-2018-26
en
refinedweb
Are you sure? This action might not be possible to undo. Are you sure you want to continue? 8865 Return of U.S. Persons With Respect to Certain Foreign Partnerships Attach to your tax return. See separate instructions. Information furnished for the foreign partnership’s tax year (see instructions) beginning , 2002, and ending , 20 OMB No. 1545-1668 2002 Attachment Sequence No. Department of the Treasury Internal Revenue Service 118 Important: All information must be in English. All amounts must be in U.S. dollars unless otherwise indicated. Name of person filing this return Filer’s identifying number Filer’s address (if you are not filing this form with your tax return) A Category of filer (see Categories of Filers in the instructions and check applicable box(es)): 1 2 3 4 , 20 , and ending , 20 B Filer’s tax year beginning C D Filer’s share of liabilities: Nonrecourse $ Name Address Qualified nonrecourse financing $ EIN Other $ If filer is a member of a consolidated group but not the parent, enter the following information about the parent: E Information about certain other partners (see instructions) (1) Name (2) Address (3) Identifying number (4) Check applicable box(es) Category 1 Category 2 Constructive owner F1 Name and address of foreign partnership 2 EIN (if any) 3 Country under whose laws organized 4 Date of organization 5 Principal place of business 6 Principal business activity code number 7 Principal business activity 8a Functional currency 8b Exchange rate (see instr.) G 1 Provide the following information for the foreign partnership’s tax year: 2 Check if the foreign partnership must file: Name, address, and identifying number of agent (if any) in the United States Form 1042 Form 8804 Form 1065 or 1065-B Service Center where Form 1065 or 1065-B is filed: Name and address of foreign partnership’s agent in country of organization, if any 4 Name and address of person(s) with custody of the books and records of the foreign partnership, and the location of such books and records, if different 3 5 Were any special allocations made by the foreign partnership? 6 Number of foreign disregarded entities owned by the partnership (attach list) 7 How is this partnership classified under the law of the country in which it is organized? 8 Did the partnership own any separate units within the meaning of Regulations section 1.1503-2(c)(3) or (4)? 9 Does this partnership meet both of the following requirements? ● The partnership’s total receipts for the tax year were less than $250,000 and ● The value of the partnership’s total assets at the end of the tax year was less than $600,000. If “Yes,” do not complete Schedules L, M-1, and M-2. Sign Here Only If You Are Filing This Form Separately and Not With Your Tax Return Paid Preparer Sign and Complete Only If Form is Filed Separately. Yes No Yes No Yes No Under penalties of perjury, I declare that I have examined this return, including accompanying schedules and statements, and to the best of my knowledge and belief, it is true, correct, and complete. Declaration of preparer (other than general partner or limited liability company member) is based on all information of which preparer has any knowledge. Signature of general partner or limited liability company member Preparer’s signature Firm’s name (or yours if self-employed), address, and ZIP code Date Date Check if self-employed EIN Phone no. Cat. No. 25852A ( ) Form Preparer’s SSN or PTIN For Paperwork Reduction Act Notice, see the separate instructions. 8865 (2002) Form 8865 (2002) Page 2 Schedule A Constructive Ownership of Partnership Interest. Check the boxes that apply to the filer. If you check box b, enter the name, address, and U.S. taxpayer identifying number (if any) of the person(s) whose interest you constructively own. See instructions. a Name Owns a direct interest Address b Owns a constructive interest Identifying number (if any) Check if foreign person Check if direct partner Schedule A-1 Certain Partners of Foreign Partnership (see instructions) Name Address Identifying number (if any) Check if foreign person Does the partnership have any other foreign person as a direct partner? Yes No Schedule A-2 Affiliation Schedule. List all partnerships (foreign or domestic) in which the foreign partnership owns a direct interest or indirectly owns a 10% interest. Name Address EIN (if any) Total ordinary income or loss Check if foreign partnership Schedule B Income Statement—Trade or Business Income Caution: Include only trade or business income and expenses on lines 1a through 22 below. See the instructions for more information. 1a b 2 3 4 5 6 7 8 1a Gross receipts or sales 1b Less returns and allowances Cost of goods sold Gross profit. Subtract line 2 from line 1c Ordinary income (loss) from other partnerships, estates, and trusts (attach schedule) Net farm profit (loss) (attach Schedule F (Form 1040)) Net gain (loss) from Form 4797, Part II, line 18 Other income (loss) (attach schedule) Total income (loss). Combine lines 3 through 7 Salaries and wages (other than to partners) (less employment credits) Guaranteed payments to partners Repairs and maintenance Bad debts Rent Taxes and licenses Interest 16a Depreciation (if required, attach Form 4562) 16b Less depreciation reported on Schedule A and elsewhere on return Depletion (Do not deduct oil and gas depletion.) Retirement plans, etc. Employee benefit programs Other deductions (attach schedule) Total deductions. Add the amounts shown in the far right column for lines 9 through 20 Ordinary income (loss) from trade or business activities. Subtract line 21 from line 8 1c 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16c 17 18 19 20 21 22 Form (see page 8 of the instructions for limitations) Income Deductions 9 10 11 12 13 14 15 16a b 17 18 19 20 21 22 8865 (2002) Form 8865 (2002) Page 3 Schedule D Part I Capital Gains and Losses Short-Term Capital Gains and Losses—Assets Held One Year or Less (b) Date acquired (month, day, year) (c) Date sold (month, day, year) (d) Sales price (see instructions) (e) Cost or other basis (see instructions) (f) Gain or (loss) ((d) minus (e)) (a) Description of property (e.g., 100 shares of “Z” Co.) 1 2 3 4 Short-term capital gain from installment sales from Form 6252, line 26 or 37 Short-term capital gain (loss) from like-kind exchanges from Form 8824 Partnership’s share of net short-term capital gain (loss), including specially allocated short-term capital gains (losses), from other partnerships, estates, and trusts Net short-term capital gain or (loss). Combine lines 1 through 4 in column (f). Enter here and on Form 8865, Schedule K, line 4d or 7 2 3 4 5 5 Part II Long-Term Capital Gains and Losses—Assets Held More Than One Year (b) Date acquired (month, day, year) (c) Date sold (month, day, year) (d) Sales price (see instructions) (e) Cost or other basis (see instructions) (f) Gain or (loss) ((d) minus (e)) (g) 28% rate gain or (loss) *(see instr. below) (a) Description of property (e.g., 100 shares of “Z” Co.) 6 7 8 9 Long-term capital gain from installment sales from Form 6252, line 26 or 37 Long-term capital gain (loss) from like-kind exchanges from Form 8824 Partnership’s share of net long-term capital gain (loss), including specially allocated long-term capital gains (losses), from other partnerships, estates, and trusts Capital gain distributions Combine lines 6 through 10 in column (g). Enter here and on Schedule K, line 4e(2) or 7 Net long-term capital gain or (loss). Combine lines 6 through 10 in column (f). Enter here and on Form 8865, Schedule K, line 4e(1) or 7 7 8 9 10 10 11 11 12 12 *28% rate gain or (loss) includes all “collectibles gains and losses” (as defined in the instructions). Form 8865 (2002) Form 8865 (2002) Page 4 Schedule K Partners’ Shares of Income, Credits, Deductions, etc. (a) Distributive share items (b) Total amount 1 2 3a b c 4 a b c d e Ordinary income (loss) from trade or business activities (enter from Schedule B, line 22) Net income (loss) from rental real estate activities (attach Form 8825) 3a Gross income from other rental activities 3b Expenses from other rental activities (attach schedule) 1 2 Income (Loss) Net income (loss) from other rental activities. Subtract line 3b from line 3a Portfolio income (loss): Interest income Ordinary dividends Royalty income Net short-term capital gain (loss) (1) Net long-term capital gain (loss) (2) 28% rate gain (loss) (3) Qualified 5-year gain f Other portfolio income (loss) (attach schedule) 5 Guaranteed payments to partners 6 Net section 1231 gain (loss) (other than due to casualty or theft) (attach Form 4797) 7 Other income (loss) (attach schedule) Charitable contributions (attach schedule) Section 179 expense deduction Deductions related to portfolio income (itemize) Other deductions (attach schedule) 3c 4a 4b 4c 4d 4e(1) 4f 5 6 7 8 9 10 11 12a(1) 12a(2) 12b 12c 12d 13 14a 14b(1) 14b(2) 15a 15b 15c 16a 16b 16c 16d(1) 16d(2) 16e Form 8 9 10 11 Deductions Adjustments and SelfInvestTax Preference Employ- ment ment Interest Items Credits 12a Low-income housing credit: (1) From partnerships to which section 42(j)(5) applies (2) Other than on line 12a(1) b Qualified rehabilitation expenditures related to rental real estate activities (attach Form 3468) c Credits (other than credits shown on lines 12a and 12b) related to rental real estate activities d Credits related to other rental activities 13 Other credits 14a Interest expense on investment debts b (1) Investment income included on lines 4a, 4b, 4c, and 4f above (2) Investment expenses included on line 10 above 15a Net earnings (loss) from self-employment b Gross farming or fishing income c Gross nonfarm income 16a b c d Depreciation adjustment on property placed in service after 1986 Adjusted gain or loss Depletion (other than oil and gas) (1) Gross income from oil, gas, and geothermal properties (2) Deductions allocable to oil, gas, and geothermal properties e Other adjustments and tax preference items (attach schedule) 8865 (2002) Form 8865 (2002) Page 5 Schedule K (continued) (a) Distributive share items (b) Total amount 17a b c d e f g h 18 19 20 21 22 23 24 Name of foreign country or U.S. possession Gross income from all sources Gross income sourced at partner level Foreign gross income sourced at partnership level: (1) Passive (2) Listed categories (attach schedule) (3) General limitation Deductions allocated and apportioned at partner level: (1) Interest expense (2) Other Deductions allocated and apportioned at partnership level to foreign source income: (1) Passive (2) Listed categories (attach schedule) (3) General limitation Total foreign taxes (check one): Paid Accrued Reduction in taxes available for credit (attach schedule) b Amount Section 59(e)(2) expenditures: a Type Tax-exempt interest income Other tax-exempt income Nondeductible expenses Distributions of money (cash and marketable securities) Distributions of property other than money Other items and amounts required to be reported separately to partners (attach schedule) Beginning of tax year (a) (b) 17b 17c 17d(1) 17d(2) 17d(3) 17e(1) 17e(2) 17f(1) 17f(2) 17f(3) 17g 17h 18b 19 20 21 22 23 Schedule L Other Foreign Taxes Balance Sheets per Books (Not required if Question G9, page 1, is answered “Yes.”) Assets End of tax year (c) (d) 1 2a b 3 4 5 6 7 8 9a b 10a b 11 12a b 13 14 15 16 17 18 19 20 21 22 Cash Trade notes and accounts receivable Less allowance for bad debts Inventories U.S. government obligations Tax-exempt securities Other current assets (attach schedule) Mortgage and real estate loans Other investments (attach schedule) Buildings and other depreciable assets Less accumulated depreciation Depletable assets Less accumulated depletion Land (net of any amortization) Intangible assets (amortizable only) Less accumulated amortization Other assets (attach schedule) Total assets Liabilities and Capital Accounts payable Mortgages, notes, bonds payable in less than 1 year Other current liabilities (attach schedule) All nonrecourse loans Mortgages, notes, bonds payable in 1 year or more Other liabilities (attach schedule) Partners’ capital accounts Total liabilities and capital Form 8865 (2002) Form 8865 (2002) Page 6 Schedule M Balance Sheets for Interest Allocation (a) Beginning of tax year (b) End of tax year 1 2 Total U.S. assets Total foreign assets: a Passive income category b Listed categories (attach schedule) c General limitation income category Schedule M-1 Reconciliation of Income (Loss) per Books With Income (Loss) per Return (Not required if Question G9, page 1, is answered “Yes.”) 6 Income recorded on books this year not included on Schedule K, lines 1 through 7 (itemize): a Tax-exempt interest $ Deductions included on Schedule K, lines 1 through 11, 14a, 17g, and 18b, not charged against book income this year (itemize): a Depreciation $ 1 2 Net income (loss) per books Income included on Schedule K, lines 1 through 4, 6, and 7, not recorded on books this year (itemize): Guaranteed payments (other than health insurance) Expenses recorded on books this year not included on Schedule K, lines 1 through 11, 14a, 17g, and 18b (itemize): a Depreciation $ b Travel and entertainment $ Add lines 1 through 4 Balance at beginning of year Capital contributed: a Cash b Property Net income (loss) per books Other increases (itemize): 6 7 7 3 4 8 9 5 1 2 Add lines 6 and 7 Income (loss). Subtract line 8 from line 5 Distributions: a Cash b Property Other decreases (itemize): Schedule M-2 Analysis of Partners’ Capital Accounts (Not required if Question G9, page 1, is answered “Yes.”) 3 4 8 9 5 Add lines 1 through 4 Add lines 6 and 7 Balance at end of year. Subtract line 8 from line 5 Form 8865 (2002) Form 8865 (2002) Schedule N 7 Transactions Between Controlled Foreign Partnership and Partners or Other Related Entities Page Important: Complete a separate Form 8865 and Schedule N for each controlled foreign partnership. Enter the totals for each type of transaction that occurred between the foreign partnership and the persons listed in columns (a) through (d). Transactions of foreign partnership (a) U.S. person filing this return (b) Any domestic corporation or partnership controlling or controlled by the U.S. person filing this return (c) Any other foreign corporation or partnership controlling or controlled by the U.S. person filing this return (d) Any U.S. person with a 10% or more direct interest in the controlled foreign partnership (other than the U.S. person filing this return) 1 Sales of inventory 2 Sales of property rights (patents, trademarks, etc.) 3 Compensation received for technical, managerial, engineering, construction, or like services 4 Commissions received 5 Rents, royalties, and license fees received 6 Distributions received 7 Interest received 8 Other 9 Add lines 1 through 8 10 Purchases of inventory 11 Purchases of tangible property other than inventory 12 Purchases of property rights (patents, trademarks, etc.) 13 Compensation paid for technical, managerial, engineering, construction, or like services 14 Commissions paid 15 Rents, royalties, and license fees paid 16 Distributions paid 17 Interest paid 18 Other 19 Add lines 10 through 18 20 Amounts borrowed (enter the maximum loan balance during the year) — see instructions 21 Amounts loaned (enter the maximum loan balance during the year) — see instructions Form 8865 (2002)
https://www.scribd.com/document/543013/US-Internal-Revenue-Service-f8865-2002
CC-MAIN-2018-26
en
refinedweb
render() { const {isLoading, products} = this.props.products; if (isLoading) { return <Loader isVisible={true}/>; } return (<View style={styles.wrapper}> <Header/> <ScrollView style={styles.scrollView}> <ProductsContainer data={{productsList: { results: products }}}/> </ScrollView> <SearchBar style={styles.searchBar}/> <Footer/> </View>); } search-bar-zindex.pngsearch-bar-zindex.png var styles = StyleSheet.create({ wrapper: { flex: 1, backgroundColor: '#000', position: 'relative' }, searchBar: { position: 'absolute', top: 0 }, scrollView: { position: 'relative' } }); Are you are experiencing a similar issue? Get a personalized answer when you ask a related question. Have a better answer? Share it in a comment. From novice to tech pro — start learning today. I have looked at the image you posted again - where is the gap you are seeing - can you mark it on the image? search-bar.png
https://www.experts-exchange.com/questions/29094954/Why-is-top-0-for-absolute-component-not-working-in-react-native.html
CC-MAIN-2018-26
en
refinedweb
can be included within parent XML documents using the XML namespace facilities described in Namespaces in XML. An SVG document that is included within a parent XML document is a Conforming Included SVG Document if the SVG document, documents. A Conforming SVG Interpreters must be able to: An SVG viewer is a program which can parse and process an SVG document and render the contents of the document onto some sort of output medium such as a display or printer. Usually, an SVG Viewer is also an SVG Interpreter. An SVG Viewer is a Conforming SVG Viewer if: Although anti-aliasing support isn't a strict requirement for a Conforming SVG Viewer, it is highly recommended. Lack of anti-aliasing support will generally result in poor results on display devices. A higher class concept is that of a Conforming High-Quality SVG Viewer which must support the following additional features:: In general, forward references should be avoided. All references should be to objects which are either defined in a separate document or defined earlier in same document. At the time a reference is first processed (i.e., either at parse time or in response to a DOM call which alters the referencing attribute or property), if the referenced object is not defined at that time, then the reference should be treated as invalid thereafter...
http://www.w3.org/1999/06/25/WD-SVG-19990625/conform.html
CC-MAIN-2018-26
en
refinedweb
Twig: Sandbox Information Disclosure Affected versions¶ Twig 1.0.0 to 1.37.1 and 2.0.0 to 2.6.2 are affected by this security issue. The issue has been fixed in Twig 1.38.0 and 2.7.0. Description¶ This vulnerability affects the sandbox mode of Twig. If you are not using the sandbox, your code is not affected. Twig allows the evaluation of non-trusted templates in a sandbox, where everything is forbidden if not explicitly allowed by a sandbox policy (tags, filters, functions, method calls, ...). For instance, {% if true %}...{% endif %} is not allowed in a sandbox if the if tag has not been explicitly allowed in the sandbox policy. There is an edge case related to how PHP works. When using {{ var }} with var being an object, PHP will automatically cast the object to a string ( echo $var is equivalent to echo $var->__toString()). If you don't allow __toString() on the class of var, this code will throw a sandbox policy exception. But unfortunately, the protection against calling __toString() only works for simple cases like the one mentioned above. It does not work on the following template for instance: {{var|upper }} where __toString() will be called even if not part of the policy. As __toString() is sometimes used in classes to return some debug information, bypassing the policy might disclose sensitive information like database entry ids, usernames, or more. Resolution¶ I have rewritten the current strategy that prevents __toString() to be called when not whitelisted with a different approach that tries to spot when PHP will cast objects to strings automatically ( echo is one of them, concatenation is another one). The patch for this issue is available here for the 1.x branch. Twig: Sandbox Information Disclosure symfony.com/blog/twig-sandbox-information-disclosureTweet this __CERTIFICATION_MESSAGE__ Become a certified developer. Exams are taken online! To ensure that comments stay relevant, they are closed for old posts. Laurent Moreau said on Mar 13, 2019 at 11:07 #1 Thank you
https://symfony.com/blog/twig-sandbox-information-disclosure?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+symfony%2Fblog+%28Symfony+Blog%29
CC-MAIN-2020-24
en
refinedweb
How to create and use Middleware id_admin column in my users table if use have is_admin = 1 then it can access "admins" route. So first create IsAdminMiddleware middleware using bellow command: Create Middleware php artisan make:middleware IsAdminMiddleware Ok, now you can found IsAdminMiddleware.php in app/Http/Middleware directory and open IsAdminMiddleware.php file and put bellow code on that file. In this file i check first if user is not login then it will redirect home route and other if user have not is_admin = 1 then it will redirect home route too. app/Http/Middleware/IsAdminMiddleware.php namespace App\Http\Middleware; use Closure; use Auth; class IsAdminMiddleware { /** * Handle an incoming request. * * @param \Illuminate\Http\Request $request * @param \Closure $next * @return mixed */ public function handle($request, Closure $next) { if(!Auth::check() || Auth::user()->is_admin != '1'){ = [ ...... 'is-admin' => \App\Http\Middleware\IsAdminMiddleware::class, ]; } Now we are ready to use is-admin middleware in routes.php file. so you can see how to use middleware in routes.php file. app/Http/routes.php Route::get('home', ['as'=>'home','uses'=>'HomeController@index']); Route::group(['middleware' => 'is-admin'], function () { Route::get('admins', ['as'=>'admins','uses'=>'HomeController@admins']); }); OR Route::get('home', ['as'=>'home','uses'=>'HomeController@index']); Route::get('admins', ['as'=>'admins','uses'=>'HomeController@admins','middleware' => 'is-admin']); >.7 - QR Code Generator Example - Laravel 5.7 Modular Structure Application Example - PHP Laravel 5.7 - Create Admin Panel Example - Laravel Create Custom Artisan Command with Example - Laravel - How to create custom error page with example - How to create custom facade in laravel 5.2? - How to create custom helper in Laravel?
https://www.itsolutionstuff.com/post/how-to-create-and-use-middleware-in-laravel-5example.html
CC-MAIN-2020-24
en
refinedweb
Another question. If I want to have multi pair traded expert how do I initialize it? In CExpertAdvisorBase::Init() I have to specify a symbol name. Table of Contents - Introduction - The Expert Advisor Class - Initialization - New Bar Detection - OnTick Handler - Expert Advisors Container - Persistence of Data - Examples - Final Remarks - Conclusion Introduction In earlier articles regarding this topic, the expert advisor examples featured have their components scattered all over the expert advisor main header file through the use of custom-defined functions. This article features the classes CExpertAdvisor and CExpertsAdvisors, which aim at the creating a more harmonious interaction between the various components of a cross-platform expert advisor. It also addresses some common problems usually encountered in expert advisors, such as loading and saving volatile data, and new bar detection. The Expert Advisor Class The CExpertAdvisorBase class is shown in the following code snippet. At this point, most of the differences between MQL4 and MQL5 are handled by the other class objects that were discussed in previous articles. class CExpertAdvisorBase : public CObject { protected: //--- trade parameters bool m_active; string m_name; int m_distance; double m_distance_factor_long; double m_distance_factor_short; bool m_on_tick_process; //--- signal parameters bool m_every_tick; bool m_one_trade_per_candle; datetime m_last_trade_time; string m_symbol_name; int m_period; bool m_position_reverse; //--- signal objects CSignals *m_signals; //--- trade objects CAccountInfo m_account; CSymbolManager m_symbol_man; COrderManager m_order_man; //--- trading time objects CTimes *m_times; //--- candle CCandleManager m_candle_man; //--- events CEventAggregator *m_event_man; //--- container CObject *m_container; public: CExpertAdvisorBase(void); ~CExpertAdvisorBase(void); virtual int Type(void) const {return CLASS_TYPE_EXPERT;} //--- initialization bool AddEventAggregator(CEventAggregator*); bool AddMoneys(CMoneys*); bool AddSignal(CSignals*); bool AddStops(CStops*); bool AddSymbol(const string); bool AddTimes(CTimes*); virtual bool Init(const string,const int,const int,const bool,const bool,const bool); virtual bool InitAccount(void); virtual bool InitCandleManager(void); virtual bool InitEventAggregator(void); virtual bool InitComponents(void); virtual bool InitSignals(void); virtual bool InitTimes(void); virtual bool InitOrderManager(void); virtual bool Validate(void) const; //--- container void SetContainer(CObject*); CObject *GetContainer(void); //--- activation and deactivation bool Active(void) const; void Active(const bool); //--- setters and getters string Name(void) const; void Name(const string); int Distance(void) const; void Distance(const int); double DistanceFactorLong(void) const; void DistanceFactorLong(const double); double DistanceFactorShort(void) const; void DistanceFactorShort(const double); string SymbolName(void) const; void SymbolName(const string); //--- object pointers CAccountInfo *AccountInfo(void); CStop *MainStop(void); CMoneys *Moneys(void); COrders *Orders(void); COrders *OrdersHistory(void); CStops *Stops(void); CSignals *Signals(void); CTimes *Times(void); //--- order manager string Comment(void) const; void Comment(const string); bool EnableTrade(void) const; void EnableTrade(bool); bool EnableLong(void) const; void EnableLong(bool); bool EnableShort(void) const; void EnableShort(bool); int Expiration(void) const; void Expiration(const int); double LotSize(void) const; void LotSize(const double); int MaxOrdersHistory(void) const; void MaxOrdersHistory(const int); int Magic(void) const; void Magic(const int); uint MaxTrades(void) const; void MaxTrades(const int); int MaxOrders(void) const; void MaxOrders(const int); int OrdersTotal(void) const; int OrdersHistoryTotal(void) const; int TradesTotal(void) const; //--- signal manager int Period(void) const; void Period(const int); bool EveryTick(void) const; void EveryTick(const bool); bool OneTradePerCandle(void) const; void OneTradePerCandle(const bool); bool PositionReverse(void) const; void PositionReverse(const bool); //--- additional candles void AddCandle(const string,const int); //--- new bar detection void DetectNewBars(void); //-- events virtual bool OnTick(void); virtual void OnChartEvent(const int,const long&,const double&,const string&); virtual void OnTimer(void); virtual void OnTrade(void); virtual void OnDeinit(const int,const int); //--- recovery virtual bool Save(const int); virtual bool Load(const int); protected: //--- candle manager virtual bool IsNewBar(const string,const int); //--- order manager virtual void ManageOrders(void); virtual void ManageOrdersHistory(void); virtual void OnTradeTransaction(COrder*) {} virtual datetime Time(const int); virtual bool TradeOpen(const string,const ENUM_ORDER_TYPE,double,bool); //--- symbol manager virtual bool RefreshRates(void); //--- deinitialization void DeinitAccount(void); void DeinitCandle(void); void DeinitSignals(void); void DeinitSymbol(void); void DeinitTimes(void); }; Most of the class methods declared within this class serve as wrappers to methods of its components. The key methods in this class will be discussed in later sections. Initialization During the initialization phase of the expert advisor, our primary goal is to instantiate the objects needed by the trading strategy (e.g. money management, signals, etc.) and then integrate them with the instance of CExpertAdvisor, which would also need to be created during OnInit. With this goal, when any of the event functions is triggered within the expert advisor, all we need to supply is a single line of code calling the appropriate handler or method of the CExpertAdvisor instance. This is very similar to the way the MQL5 Standard Library CExpert is used. After the creation of an instance of CExpertAdvisor, the next method to call is its Init method. The code of the said method is shown below: bool CExpertAdvisorBase::Init(string symbol,int period,int magic,bool every_tick=true,bool one_trade_per_candle=true,bool position_reverse=true) { m_symbol_name=symbol; CSymbolInfo *instrument; if((instrument=new CSymbolInfo)==NULL) return false; if(symbol==NULL) symbol=Symbol(); if(!instrument.Name(symbol)) return false; instrument.Refresh(); m_symbol_man.Add(instrument); m_symbol_man.SetPrimary(m_symbol_name); m_period=(ENUM_TIMEFRAMES)period; m_every_tick=every_tick; m_order_man.Magic(magic); m_position_reverse=position_reverse; m_one_trade_per_candle=one_trade_per_candle; CCandle *candle=new CCandle(); candle.Init(instrument,m_period); m_candle_man.Add(candle); Magic(magic); return false; } Here, we create the instances of most components that are often found in trading strategies. This includes the symbol or instrument to use (which has to be translated to a type of object), and the default period or timeframe. It also contains the rule on whether or not it should operate its core tasks at every new tick or at the first tick of the each candle only, whether or not it should limit a maximum of one trade per candle only (to prevent multiple entries on the same candle), and whether it should reverse its position at opposite signal (close existing trades and re-enter based on the new signal). At the end of the OnInit function, the instance of CExpertAdvisor would have to make a call to its InitComponents method. The following code shows the the said method of CExpertBase: bool CExpertAdvisorBase::InitComponents(void) { if(!InitSignals()) { Print(__FUNCTION__+": error in signal initialization"); return false; } if(!InitTimes()) { Print(__FUNCTION__+": error in time initialization"); return false; } if(!InitOrderManager()) { Print(__FUNCTION__+": error in order manager initialization"); return false; } if(!InitCandleManager()) { Print(__FUNCTION__+": error in candle manager initialization"); return false; } if(!InitEventAggregator()) { Print(__FUNCTION__+": error in event aggregator initialization"); return false; } return true; } In this method, the Init method of each of the components of the expert advisor instance is called. It is also through this method where the Validate methods of each component is called to see if their settings would pass validation. New Bar Detection Some trading strategies require only to operate at the first tick of a new candle. There are many ways to implement this feature. One of this is the comparison of the open time and open price of the current candle to their previous states, which is the method implemented in the CCandle class. The following code shows the declaration for CCandleBase, from which CCandle is based: class CCandleBase : public CObject { protected: bool m_new; bool m_wait_for_new; bool m_trade_processed; int m_period; bool m_active; MqlRates m_last; CSymbolInfo *m_symbol; CEventAggregator *m_event_man; CObject *m_container; public: CCandleBase(void); ~CCandleBase(void); virtual int Type(void) const {return(CLASS_TYPE_CANDLE);} virtual bool Init(CSymbolInfo*,const int); virtual bool Init(CEventAggregator*); CObject *GetContainer(void); void SetContainer(CObject*); //--- setters and getters void Active(bool); bool Active(void) const; datetime LastTime(void) const; double LastOpen(void) const; double LastHigh(void) const; double LastLow(void) const; double LastClose(void) const; string SymbolName(void) const; int Timeframe(void) const; void WaitForNew(bool); bool WaitForNew(void) const; //--- processing virtual bool TradeProcessed(void) const; virtual void TradeProcessed(bool); virtual void Check(void); virtual void IsNewCandle(bool); virtual bool IsNewCandle(void) const; virtual bool Compare(MqlRates &) const; //--- recovery virtual bool Save(const int); virtual bool Load(const int); }; The checking of the presence of a new candle on the chart is done through its Check method, which is shown below: CCandleBase::Check(void) { if(!Active()) return; IsNewCandle(false); MqlRates rates[]; if(CopyRates(m_symbol.Name(),(ENUM_TIMEFRAMES)m_period,1,1,rates)==-1) return; if(Compare(rates[0])) { IsNewCandle(true); TradeProcessed(false); m_last=rates[0]; } } If checking for a new bar, the expert advisor instance should always call this method every tick. The coder is then free to extend CCxpertAdvisor so that it can perform additional tasks when a new candle appears on the chart. As shown in the code above, the actual comparison of the open time and open price of the bar is done through the Compare method of the class, which is shown in the following code: bool CCandleBase::Compare(MqlRates &rates) const { return (m_last.time!=rates.time || (m_last.open/m_symbol.TickSize())!=(rates.open/m_symbol.TickSize()) || (!m_wait_for_new && m_last.time==0)); } This method of checking for the existence of a new bar depends on three conditions. Satisfying at least one will guarantee a result of true, which indicates the presence of a new candle on the chart: - The last recorded open time is not equal to the open time of the current bar - The last recorded open price is not equal to the open price of the current bar - The last recorded open time is zero and a new bar does not have to be the first tick for that bar The first two conditions involves the direct comparison of the rates of the current bar to the previous recorded state. The third condition only applies to the very first tick that the expert advisor will encounter. As soon as an expert advisor is loaded on a chart, it does not have yet any previous record of the rates (open time and open price), and so the last recorded open time would be zero. Some traders consider this bar as a new bar for their expert advisors, while others prefer to have the expert advisor wait for an actual new bar to appear on the chart after the initialization of the expert advisor. Similar to other types of classes discussed previously, the class CCandle would also have its container, CCandleManager. The following code shows the declaration of CCandleManagerBase: class CCandleManagerBase : public CArrayObj { protected: bool m_active; CSymbolManager *m_symbol_man; CEventAggregator *m_event_man; CObject *m_container; public: CCandleManagerBase(void); ~CCandleManagerBase(void); virtual int Type(void) const {return(CLASS_TYPE_CANDLE_MANAGER);} virtual bool Init(CSymbolManager*,CEventAggregator*); virtual bool Add(const string,const int); CObject *GetContainer(void); void SetContainer(CObject *container); bool Active(void) const; void Active(bool active); virtual void Check(void) const; virtual bool IsNewCandle(const string,const int) const; virtual CCandle *Get(const string,const int) const; virtual bool TradeProcessed(const string,const int) const; virtual void TradeProcessed(const string,const int,const bool) const; //--- recovery virtual bool Save(const int); virtual bool Load(const int); }; An instance of CCandle is created based on the name of the instrument and the timeframe. Having CCandleManager would make it easier for an expert advisor to track multiple charts for a given instrument, for example, having the capacity to check the occurrence of a new candle on EURUSD M15 and EURUSD H1 in the same expert advisor. Instances of CCandle that have the same symbol and timeframe are redundant and should be avoided. When looking for a certain instance of CCandle, one should simply call the appropriate method found on CCandleManager and specify the symbol and timeframe. CCandleManager, in turn, would look for the appropriate CCandle instance and call the intended method. Aside from checking the occurrence of a new candle, CCandle and CCandleManager serve another purpose: checking if a trade has been entered for a given symbol and timeframe within an expert advisor. The recent trade on a symbol can be checked, but not for a timeframe. The toggle for this flag should be set or reset by the instance of the CExpertAdvisor itself, when needed. For both classes, toggle can be set using the TradeProcessed method. For the candle manager, the TradeProcessed methods (getter and setter) only deals with finding the instance of CCandle requested and applying the appropriate value: bool CCandleManagerBase::TradeProcessed(const string symbol,const int timeframe) const { CCandle *candle=Get(symbol,timeframe); if(CheckPointer(candle)) return candle.TradeProcessed(); return false; } For CCandle, the process involves the assigning of a new value to one of its its class members, m_trade_processed. The following methods deal with the setting of the value of the said class member: bool CCandleBase::TradeProcessed(void) const { return m_trade_processed; } CCandleBase::TradeProcessed(bool value) { m_trade_processed=value; } OnTick Handler The OnTick method of CExpertAdvisor is the most used function within the class. It is from this method where most of the action takes place. The core operation of this method is shown in the following diagram: The process begins by toggling the tick flag of the expert advisor. This is to ensure that double processing of a tick cannot occur. The OnTick method of CExpertAdvisor is ideally called only within the OnTick event function, but it can also be called through other means, such as OnChartEvent. In the absence of this flag, if the OnTick method of the class is called while it is still processing an earlier tick, a tick may be processed more than once, and if the tick would generate a trade, this would often result to a duplicate trade. The refreshing of data is also necessary, as this ensures that the expert advisor has access to the most recent market data, and will not reprocess an earlier tick. If the expert advisor fails to refresh the data, it would reset the tick process flag, terminate the method, and wait for a new tick. The next steps are the detection of new bars and checking of trade signals. The checking for this is done every tick by default. However, it is possible to extend this method so that it only checks signals when a new signal is detected (to speed up processing time, especially during backtesting and optimization). The class also provides a member, m_position_reverse, which is intended to reverse position(s) opposite the current signal. The reversal performed here is only for the neutralization of the current position(s). In MetaTrader 4 and MetaTrader 5 hedging mode, this deals with the exit of the trades that are opposite the current signal (those going with the current signal will not be exited). In MetaTrader 5 netting mode, there can only be one position at any given time, so the expert advisor will enter a new position of equal volume and opposite to that of the current position. The trade signal is mostly check using m_signals, but other factors such as trading on new bar only, and time filters, can also prevent the expert advisor from executing a new trade. Only when all the conditions are satisfied will the EA be able to enter a new trade. At the end of the processing of the tick, the expert advisor will set the tick flag to false, and is then allowed to process another tick. Expert Advisors Container Similar to other class objects discussed in previous articles, the CExpertAdvisor class would also have its designated container, which is CExpertAdvisors. The following code shows the declaration for its base class, CExpertAdvisorsBase: class CExpertAdvisorsBase : public CArrayObj { protected: bool m_active; int m_uninit_reason; CObject *m_container; public: CExpertAdvisorsBase(void); ~CExpertAdvisorsBase(void); virtual int Type(void) const {return CLASS_TYPE_EXPERTS;} virtual int UninitializeReason(void) const {return m_uninit_reason;} //--- getters and setters void SetContainer(CObject *container); CObject *GetContainer(void); bool Active(void) const; void Active(const bool); int OrdersTotal(void) const; int OrdersHistoryTotal(void) const; int TradesTotal(void) const; //--- initialization virtual bool Validate(void) const; virtual bool InitComponents(void) const; //--- events virtual void OnTick(void); virtual void OnChartEvent(const int,const long&,const double&,const string&); virtual void OnTimer(void); virtual void OnTrade(void); virtual void OnDeinit(const int,const int); //--- recovery virtual bool CreateElement(const int); virtual bool Save(const int); virtual bool Load(const int); }; This container primarily mirrors the public methods found in the CExpertAdvisor class. An example of this is the OnTick handler. The method simply iterates on each instance of CExpertAdvisor to call its OnTick method: void CExpertAdvisorsBase::OnTick(void) { if(!Active()) return; for(int i=0;i<Total();i++) { CExpertAdvisor *e=At(i); e.OnTick(); } } With this container it is possible to store multiple instances of CExpertAdvisor. This is probably the only way to run multiple expert advisors on a single chart instance. Simply initialize multiple instances of CExpertAdvisor, store their pointers under a single CExpertAdvisors container, and then use the container's OnTick method to trigger the OnTick methods of each CExpertAdvisor instance. The same thing can be also done with each instance of the CExpert class of the MQL5 Standard Library using the CArrayObj class or its heirs. Persistence of Data Some data used in an instance of CExpertAdvisor only reside in computer memory. Normally, the data are often stored in the platform and the expert advisor gets the needed data from the platform itself through a function call. However, for data created dynamically while the expert advisor is running, this is not usually the case. When the OnDeinit event is triggered on an expert advisor, the expert advisor destroys all objects, and thus loses the data. OnDeinit can be triggered in a number of ways, such as closing the entire trading platform (MetaTrader 4 or MetaTrader 5), unloading the expert advisor from the chart, or even the act of recompiling the expert advisor source code. The full list of possible events that can trigger deinitialization can be found using the UninitializeReason function. When an expert advisor loses access on those data, it may behave as if it was just loaded on the chart for the first time. Most of the volatile data in the CExpertAdvisor class can be found in one of its members, which is an instance of COrderManager. This is where instances of COrder and COrderStop (and descendants) are created as the expert advisor performs its usual routine. Since these instances are created dynamically during OnTick, they are not recreated when the expert advisor reinitializes. Therefore, the expert advisor should implement a method to save and retrieve these volatile data. One way to implement this is to use a descendant of the CFileBin class, CExpertFile. The following code snippet shows the declaration of CExpertFileBase, its base class: class CExpertFileBase : public CFileBin { public: CExpertFileBase(void); ~CExpertFileBase(void); void Handle(const int handle) { m_handle=handle; }; uint WriteBool(const bool value); bool ReadBool(bool &value); }; Here, we are extending CFileBin to explicity declare methods to writing and reading data of Boolean type. At the end of the class file, we declare an instance of the CExpertFile class. This instance will be used throughout the expert advisor if it were to save and load volatile data. Alternatively, one may simply rely on the Save and Load methods inherited from CObject, and process the saving and loading of data in the usual way. However, this can be a very rigorous process. A great deal of effort and lines of code can be saved from using CFile (or its heirs) alone. //CExpertFileBase class definition //+------------------------------------------------------------------+ #ifdef __MQL5__ #include "..\..\MQL5\File\ExpertFile.mqh" #else #include "..\..\MQL4\File\ExpertFile.mqh" #endif //+------------------------------------------------------------------+ CExpertFile file; //+------------------------------------------------------------------+ The order manager saves volatile data through its Save method: bool COrderManagerBase::Save(const int handle) { if(handle==INVALID_HANDLE) return false; file.WriteDouble(m_lotsize); file.WriteString(m_comment); file.WriteInteger(m_expiration); file.WriteInteger(m_history_count); file.WriteInteger(m_max_orders_history); file.WriteBool(m_trade_allowed); file.WriteBool(m_long_allowed); file.WriteBool(m_short_allowed); file.WriteInteger(m_max_orders); file.WriteInteger(m_max_trades); file.WriteObject(GetPointer(m_orders)); file.WriteObject(GetPointer(m_orders_history)); return true; } Most of these data are of primitive types, except the last two, which are the orders and historical orders containers. For these data, the WriteObject method of CFileBin is used, which simply calls the Save method of the object to be written. The following code shows the Save method of COrderBase: bool COrderBase::Save(const int handle) { if(handle==INVALID_HANDLE) return false; file.WriteBool(m_initialized); file.WriteBool(m_closed); file.WriteBool(m_suspend); file.WriteInteger(m_magic); file.WriteDouble(m_price); file.WriteLong(m_ticket); file.WriteEnum(m_type); file.WriteDouble(m_volume); file.WriteDouble(m_volume_initial); file.WriteString(m_symbol); file.WriteObject(GetPointer(m_order_stops)); return true; } As we can see here, the process just repeats when saving objects. For primitive data types, the data is simply saved to file as usual. For complex data types, the Save method of the object is called through CFileBin's WriteObject method. In cases where multiple instance of CExpertAdvisor is present, the container CExpertAdvisors should also have the capacity to save data: bool CExpertAdvisorsBase::Save(const int handle) { if(handle!=INVALID_HANDLE) { for(int i=0;i<Total();i++) { CExpertAdvisor *e=At(i); if(!e.Save(handle)) return false; } } return true; } The method calls the Save method of each CExpertAdvisor instance. The single file handle means that there would only be one save file for each expert advisor file. It is possible for each CExpertAdvisor instance to have its own save file, but this would be the more complicated approach. The more complex part is the loading of data. In saving data, the values of some class members are simply written to file. On the other hand, when loading data, the object instances would need to be recreated in ideally the same state prior to saving. The following code shows the Load method of the order manager: bool COrderManagerBase::Load(const int handle) { if(handle==INVALID_HANDLE) return false; if(!file.ReadDouble(m_lotsize)) return false; if(!file.ReadString(m_comment)) return false; if(!file.ReadInteger(m_expiration)) return false; if(!file.ReadInteger(m_history_count)) return false; if(!file.ReadInteger(m_max_orders_history)) return false; if(!file.ReadBool(m_trade_allowed)) return false; if(!file.ReadBool(m_long_allowed)) return false; if(!file.ReadBool(m_short_allowed)) return false; if(!file.ReadInteger(m_max_orders)) return false; if(!file.ReadInteger(m_max_trades)) return false; if(!file.ReadObject(GetPointer(m_orders))) return false; if(!file.ReadObject(GetPointer(m_orders_history))) return false; for(int i=0;i<m_orders.Total();i++) { COrder *order=m_orders.At(i); if(!CheckPointer(order)) continue; COrderStops *orderstops=order.OrderStops(); if(!CheckPointer(orderstops)) continue; for(int j=0;j<orderstops.Total();j++) { COrderStop *orderstop=orderstops.At(j); if(!CheckPointer(orderstop)) continue; for(int k=0;k<m_stops.Total();k++) { CStop *stop=m_stops.At(k); if(!CheckPointer(stop)) continue; orderstop.Order(order); if(StringCompare(orderstop.StopName(),stop.Name())==0) { orderstop.Stop(stop); orderstop.Recreate(); } } } } return true; } The code above for the COrderManager much more complicated in contrast with CExpertAdvisor's Load method. The reason is that unlike the order manager, the instances of CExpertAdvisor are created during OnInit, and so the container would simply have to call the Load method of each instance of CExpertAdvisor, rather than using the ReadObject method of CFileBin. Class instances that were not created during OnInit, will have to be created as well when reloading the expert advisor. This is achieved by extending the CreateElement method of CArrayObj. An object cannot simply create itself on its own, so it has to be created by its parent object or container, or even from the main source or header file itself. An example can be seen in the extended CreateElement method found on COrdersBase. Under this class, the container is COrders (a descendant of COrdersBase), and the object to be created is of type COrder: bool COrdersBase::CreateElement(const int index) { COrder*order=new COrder(); if(!CheckPointer(order)) return(false); order.SetContainer(GetPointer(this)); if(!Reserve(1)) return(false); m_data[index]=order; m_sort_mode=-1; return CheckPointer(m_data[index]); } Here, aside from creating the element, we also set its parent object or container, in order to differentiate if it belongs to the list of active trades (m_orders class member of COrderManagerBase) or the history (m_orders_history of COrderManagerBase). Examples Examples #1-#4 in this article are modified versions of the four examples found on the previous article (see Cross-Platform Expert Advisor: Custom Stops, Trailing, and Breakeven). Let us take a look at the most complex example, expert_custom_trail_ha_ma.mqh, which is a modified version of custom_trail_ha_ma.mqh. Before the OnInit function, we declared the following global object instances: COrderManager *order_manager; CSymbolManager *symbol_manager; CSymbolInfo *symbol_info; CSignals *signals; CMoneys *money_manager; CTimes *time_filters; We replace this with an instance of CExpert. Some of the above can be found within CExpetAdvisor itself (e.g. COrderManager), while the rest have to be instantiated during OnInit (i.e. containers): CExpertAdvisors experts; At the very beginning of the method, we create an instance of CExpertAdvisor. We also call its Init method inputting the most basic settings: int OnInit() { //--- CExpertAdvisor *expert=new CExpertAdvisor(); expert.Init(Symbol(),Period(),12345,true,true,true); //--- other code //--- return(INIT_SUCCEEDED); } CSymbolInfo / CSymbolManager no longer needs to be instantiated since the instance of the CExpertAdvisor class is able to create instances of these classes on its own. The user-defined function would also have to be removed, since our new expert advisor will no longer need these. We removed the global declaration for the containers in our code, so they need to be declared from within OnInit. An example of this is the time filters container (CTimeFilters), as shown in the following code, found within the OnInit function: CTimes *time_filters=new CTimes(); Pointers to containers that are previously “added” to the order manager are instead added to the instance of CExpertAdvisor. All the other containers that are not added to the order manager will have to be added to the CExpertAdvisor instance as well. It would be the instance of COrderManager that would store the pointers. The CExpertAdvisor instance only creates wrapper methods. After this, we add the CExpertAdvisor instance to an instance of CExpertAdvisors. We then call the InitComponents method of the CExpertAdvisors instance. This would ensure the initialization of all instances of CExpertAdvisor and their components. int OnInit() { //--- //--- other code experts.Add(GetPointer(expert)); if(!experts.InitComponents()) return(INIT_FAILED); //--- other code //--- return(INIT_SUCCEEDED); } Finally, we insert the code needed for loading if the expert advisor was interrupted in its operation: int OnInit() { //--- //--- other code file.Open(savefile,FILE_READ); if(!experts.Load(file.Handle())) return(INIT_FAILED); file.Close(); //--- return(INIT_SUCCEEDED); } If the expert advisor fails to load from the file, it would return INIT_FAILED. However, in the event where no save file was supplied (and hence would generate INVALID_HANDLE), the expert advisor will not fail initialization, since the Load methods of CExpertAdvisors and CExpertAdvisor both returns true upon receiving an invalid handle. There is some risk with this approach, but it is very unlikely for a save file to be opened by another program. Just make sure that each instance of expert advisor running on a chart has an exclusive save file (just like magic number). The fifth example cannot be found on the previous article. Rather, it combines all the four expert advisors in this article into a single expert advisor. It simply uses a slightly modified version of the OnInit function of each of the expert advisors and declare it as a custom-defined function. Its return value is of type CExpertAdvisor*. If the creation of the expert advisor failed, it would return NULL instead of INIT_SUCCEEDED. The following code shows the updated OnInit function of the combined expert advisor header file: int OnInit() { //--- CExpertAdvisor *expert1=expert_breakeven_ha_ma(); CExpertAdvisor *expert2=expert_trail_ha_ma(); CExpertAdvisor *expert3=expert_custom_stop_ha_ma(); CExpertAdvisor *expert4=expert_custom_trail_ha_ma(); if (!CheckPointer(expert1)) return INIT_FAILED; if (!CheckPointer(expert2)) return INIT_FAILED; if (!CheckPointer(expert3)) return INIT_FAILED; if (!CheckPointer(expert4)) return INIT_FAILED; experts.Add(GetPointer(expert1)); experts.Add(GetPointer(expert2)); experts.Add(GetPointer(expert3)); experts.Add(GetPointer(expert4)); if(!experts.InitComponents()) return(INIT_FAILED); file.Open(savefile,FILE_READ); if(!experts.Load(file.Handle())) return(INIT_FAILED); file.Close(); //--- return(INIT_SUCCEEDED); } The expert advisor starts by instantiating each instance of CExpertAdvisor. It would then proceed to checking each of the pointers to CExpertAdvisor. If the pointer is not dynamic, then the function returns INIT_FAILED, and the initialization fails. If each of the instances passes the checking of pointers, these pointers are then stored in an instance of CExpertAdvisors. The CExpertAdvisors instance (the container, not the expert advisor instance) would then initialize its components and load previous data if necessary. The expert advisor uses custom-defined functions to create an instance of CExpertAdvisor. The following code shows the function used to create the 4th expert advisor instance: CExpertAdvisor *expert_custom_trail_ha_ma() { CExpertAdvisor *expert=new CExpertAdvisor(); expert.Init(Symbol(),Period(),magic4,true,true,true); CMoneys ); expert.AddMoneys(GetPointer(money_manager)); CTimes *time_filters=new CTimes(); if(time_range_enabled && time_range_end>0 && time_range_end>time_range_start) { CTimeRange *timerange=new CTimeRange(time_range_start,time_range_end); time_filters.Add(GetPointer(timerange)); } if(time_days_enabled) { CTimeDays *timedays=new CTimeDays(sunday_enabled,monday_enabled,tuesday_enabled,wednesday_enabled,thursday_enabled,friday_enabled,saturday_enabled); time_filters.Add(GetPointer(timedays)); } if(timer_enabled) { CTimer *timer=new CTimer(timer_minutes*60); timer.TimeStart(TimeCurrent()); time_filters.Add(GetPointer(timer)); }; } expert.AddTimes(GetPointer(time_filters)); CStops *stops=new CStops(); CCustomStop *main=new CCustomStop("main"); main.StopType(stop_type_main); main.VolumeType(VOLUME_TYPE_PERCENT_TOTAL); main.Main(true); //main.StopLoss(stop_loss); //main.TakeProfit(take_profit); stops.Add(GetPointer(main)); CTrails *trails=new CTrails(); CCustomTrail *trail=new CCustomTrail(); trails.Add(trail); main.Add(trails); expert.AddStops(GetPointer(stops));); CSignals *signals=new CSignals(); signals.Add(GetPointer(signal_ha)); signals.Add(GetPointer(signal_ma)); expert.AddSignal(GetPointer(signals)); //--- return expert; } As we can see, the code looks very similar to the OnInit function of the original expert advisor header file (expert_custom_trail_ha_ma.mqh). The other custom-defined functions are also organized in the same way. Final Notes Before concluding this article, any reader who wishes to use this library should be made aware of these factors that contribute to the library's development: At the time of this writing, the library featured in this article has over 10,000 lines of code (including comments). Despite this, it still remains a work in progress. More work needs to be done in order to fully utilize the capabilities of both MQL4 and MQL5. The author started working on this project prior to the introduction of hedging mode in MetaTrader 5. This has greatly influenced the further development of the library. As a result, the library tends to be more closer to adopting the conventions used in MetaTrader 4 than MetaTrader 5. Furthermore, the author also experienced some compatibility issues with some build updates released over the last few years, which led to some minor and major adjustments to the code (and some delay on publishing some articles). At the time of this writing, the author experienced the build updates for both platforms to be less frequent and more stable over time. This trend is expected to further improve. Nevertheless, future build updates that may cause incompatibilities would still need to be addressed. The library relies on data saved on memory to keep track of its own trades. This causes the expert advisors created using this library to heavily depend on saving and loading data in order to deal with possible interruptions the expert advisor may experience during its execution. Future work on this library, as well as any other library aiming at cross-platform compatibility, should be geared at achieving a stateless or near-stateless implementation, similar to the implementation of the MQL5 Standard Library. As a final remark, the library featured in this article should not be viewed as a permanent solution. Rather, it should be used as an opportunity for a smoother transition from MetaTrader 4 to MetaTrader 5. The incompatibilities between MQL4 and MQL5 presents a huge roadblock to traders who intend to transition to the new platform. As a result, the MQL4 source code of their expert advisors need to be refactored in order to be made compatible with the MQL5 compiler. The library featured in this article is provided as a means to deploy an expert advisor to the new platform with little or no adjustments to the main expert advisor source code. This can help the trader in his decision whether to still use MetaTrader 4, or switch to MetaTrader 5. In the event that he decided to switch, very little adjustments would be necessary, and the trader can operate the usual way with his expert advisors. On the other hand, if he decides to stay on using the old platform, he is provided an option to quickly switch to the new platform once MetaTrader 4 becomes legacy software. Conclusion This article has featured the CExpertAdvisor and CExpertAdvisors class objects, which are used to integrate all the components of a cross-platform expert advisor discussed in this article series. The article discusses how the two classes are instantiated and linked with the other components of a cross-platform expert advisor. It also introduces some solutions to problems usually encountered by expert advisors, such as new bar detection and the saving and loading of volatile data. Programs Used in the Article Class Files Featured in the Article
https://www.mql5.com/en/articles/3622
CC-MAIN-2020-24
en
refinedweb
Hi, I've found a frustrating type inference issue when writing code which uses Jesse Eichar's Scala IO library. Here's a really short example: import scalax.file.ramfs.RamFileSystem object Test extends App { val fs = new RamFileSystem() val root = fs.root / "somedir" createDirectory() val indexDir = root / "_index" createDirectory() println("IndexDir =" + indexDir) } This code compiles OK but the presentation compiler complains about the second call to createDirectory. It seems to have inferred the type of root to be Any when it should be at least Path according to the signature of createDirectory. This of course means that any subsequent code won't type check unless explicit types and casts are used. This was using the latest IDEA EAP (117.281) and originally against Scala Plugin build 683. I also upgraded to build 708 but the same happened. The project uses Scala 2.9.1 with the 0.4 release of scala-io-core and scala-io-file for Scala 2.9.1 from Maven Central. Interestingly I was previously using a SNAPSHOT build of Scala IO and the problem didn't occur then. I'm afraid I don't have and can no longer find the source of the snapshot to see what changed between the two versions. So I guess this should be logged as a bug or is it an already known problem? Thanks, Steve Sowerby. Hi, Please create new ticket to I'll check it and try to fix (and possibly explain actual problem). Best regards, Alexander Podkhlayuzin. I'm a bit puzzled when I see people reporting single "good code red" issues as if they were surprised. I estimate that I have several such errors in most files in my projects. Am I doing something wrong? I think it depends on your code style (and possibly library set is also important). For example hard usage of path dependent types is problem for plugin now and I'm planning to fix it as soon as possible. Fixing such cases is like communism building, however my hope is to reduce amount of possibilies to get red code as much as possible (I think we are on the way now). Best regards, Alexander Podkhalyuzin. Thanks, Bug reported: And I've tried, but as yet failed, to simplify it to not need the external libraries. I've added the current attempt to the bug. On 04/05/2012 16:18, Alexander Podkhalyuzin wrote: > >
https://intellij-support.jetbrains.com/hc/en-us/community/posts/206639445-Type-inference-issue-when-using-Scala-IO-0-4?sort_by=votes
CC-MAIN-2020-24
en
refinedweb
PowerShell Custom Type Module Dr Scripto hand at Windows PowerShell. I was on the Windows PowerShell 1.0 team, with Bruce, Jim, and all the rest. My primary responsibility was for pipeline execution and error handling. I started at Microsoft in September 1989 (25 years ago!). I wrote management UI and infrastructure for Windows Server for 23 years, then a couple years ago, I switched to service engineering (aka operations). I created module CType for use in my operations work. I want to restrict the input to certain functions to be the custom objects that are generated by other functions (in my case, the Connection object). In addition, I have ~100 SQL queries that generate streams of System.Data.DataRow objects, and I want to manage the output objects and their formatting without having to hand code a similar number of custom C# classes. I have been using and improving CType for over six months, and it has been really useful to me. I talked about this idea with Ed at the PowerShell Summit in April 2014. Now I finally have a decent implementation and installation infrastructure. Thanks also to Jason Shirk for reviewing my work and suggesting the “CType {}” syntax. Installing CType If you want to cut to the chase, I have made CType available in several ways: - The MSI installer. This will install the CType module to your user profile. - The NuGet package. NuGet is good for incorporating CType into your Visual Studio project and keeping it up-to-date. - The Chocolatey package. Chocolatey is good for installing CType to many computers and virutal machines via automation and keeping it up-to-date. Run Get-Help about_CType, and you are on your way! Why create custom types in Windows PowerShell? - You can specify parameters such that only an instance of your custom type can be used as input. For example, you could define: Function New-MyConnection { [OutputType([MyCompany.MyModule.Connection])] [CmdletBinding()] param( # parameters ) # … } function Get-MyData { [CmdletBinding()] param( [MyCompany.MyModule.Connection][parameter(Mandatory=$true)]$Connection # other parameters ) # … } In this example, Get-MyData will only accept an object of the type MyCompany.MyModule.Connection, which presumably was emitted by New-MyConnection. Note that only “real” .NET types will work for this purpose—it isn’t enough to simply add TypeName as a decoration with $connection.PSObject.TypeNames.Insert(0,$typename). - You can decorate the type with formatting metadata and other Windows PowerShell type decorations. In this case, it actually would be enough to call $obj.PSObject.TypeNames.Insert(0,$typename). - Windows PowerShell has other issues with the raw System.Data.DataRow class, which I will discuss later. Why use CType to create custom types? Add-Type gives you complete flexibility to define your custom types. You could use Add-Type plus Update-FormatData directly and not bother with CType. However, there are a number of reasons why you might want to use CType: - Add-Type requires you to define your class in CSharp, JScript, or VisualBasic. Scripters may not be comfortable in these programmer languages. - These programming languages make it difficult to create type definitions “downstream in the pipeline,” which well-constructed Windows PowerShell commands can do. For example, the Add-CType parameters include: [string][parameter(Mandatory=$true,ValueFromPipelineByPropertyName=$true)]$PropertyName, [string][parameter(Mandatory=$true,ValueFromPipelineByPropertyName=$true)]$PropertyType, CType takes care of translating this into C#. This allows you to generate types dynamically and in a concise manner. I will demonstrate this later. - Update-FormatData requires input in a very cumbersome XML schema. CType also takes care of generating this. The current implementation of CType only supports simple table formats. What about PowerShell 5.0 class declarations? Windows PowerShell 5.0 introduces syntax for class declarations, where Desired State Configuration (DSC) is the first target scenario. These are “real” .NET classes, and they can be used as function parameter types. (At this writing, I can only comment on the implementation of Windows PowerShell class declarations in the WMF 5.0 November 2014 Tech Preview.) You can define your classes by using Windows PowerShell 5.0 class declarations rather than CType if you like; however, there are several advantages to using the CType module: - CType does not require WMF 5.0. Many customers will not install the still-to-be-released (at this writing) official Windows PowerShell 5.0 across all their servers for years, especially customers with dependencies on previous versions of Windows PowerShell (like old versions of System Center). - The class declaration syntax in CType isn’t really all that different from CSharp. As with Add-Type, it would be difficult to create Windows PowerShell class declarations “downstream in the pipeline.” - At this writing, Windows PowerShell class declarations only support short class names. They do not currently support namespaces or class inheritance. Note It would be possible to implement a variant of CType layered over Windows PowerShell classes rather than over Add-Type. I can’t identify any compelling advantages in making that change. There is nothing wrong with Windows PowerShell classes—they just aren’t targeted at this scenario. Creating a type with function CType CType is very easy to use. Simply call CType to add each of your custom types. If you are writing a module, add CType to RequiredModules in your .psd1 file, and create the types in the module initialization for your module. Note that a .NET type can only be defined once, so you do not want to call CType for any class more than once. Here is an example: CType MyCompany.MyNamespace.MyClass { property string Name -Width 20 property string HostName -Width 20 property DateTime WhenCreated -Width 25 property DateTime WhenDeleted -Width 25 } When the type is defined, you can create an instance by using New-Object. You don’t have to specify all the properties, and you can change their values at any time. New-Object –TypeName MyCompany.MyNamespace.MyClass –Property @{ Name = “StringValue” WhenCreated = [DateTime]’2015.01.01′ WhenDeleted = [DateTime]::Now } One cool thing about this syntax is that you can use Windows PowerShell structures such as loops, and conditionals, inside the CType declaration. Here is a very simple example: CType MyCompany.MyNamespace.MyClass { property string Name -Width 20 property string HostName -Width 20 if ($IncludeTimePropertiesWithThisType) { property DateTime WhenCreated -Width $DateTimeWidth property DateTime WhenDeleted -Width $DateTimeWidth } } Creating a type with Add-CType If you want to create types by using classic Windows PowerShell pipelines rather than the “little language” for function CType, this is an alternate syntax that does exactly the same thing. Note New-Object is only one way to come up with these objects. Select-Object, ConvertFrom-CSV, or any other way to generate objects with properties PropertyType and PropertyName (and optionally PropertyTableWidth and/or PropertyKeywords) will work. @( Class I actually wrote Add-CType first, then Jason Shirk suggested the “function CType” syntax. SQL wrapper classes If you have created wrappers for SQL queries by using Windows PowerShell, you may have noticed some idiosyncrasies using the System.Data.DataRow class in Windows PowerShell, for example: System.DBNull: If a particular result row does not have a defined value for a particular column, you may see some property values come back as a reference to the singleton instance of type System.DBNull. This has the advantage of distinguishing between an empty cell and an actual zero or empty string result value. Unfortunately, when Windows PowerShell converts System.DBNull to Boolean, it comes out as $true, so statements like… if ($row.Property) {DoThis()} …will actually execute DoThis() when the value is System.DBNull. This is pretty confusing, but it probably can no longer be fixed in Windows PowerShell without breaking backward compatibility. Without the wrapper class, you would have to use a workaround like this: if (($row.JoinedProperty –notis [System.DBNull]) –and $row.JoinedProperty) {DoThis()} The wrapper class takes care of this problem, by turning the System.DBNull value back to null. Extra methods System.Data.DataRow objects contain SQL-specific methods, like BeginEdit(), which are probably not relevant to users of your script. The wrapper hides these methods. You create SQL wrapper classes like this: CType MyCompany.MyNamespace.MyWrapperClass { sqlproperty string Name -Width 20 sqlproperty string HostName -Width 20 sqlproperty DateTime WhenCreated -Width 25 sqlproperty DateTime WhenDeleted -Width 25 } ~or~ @( WrapperClass -SqlWrapper You create instances like this: $connection = New-Object System.Data.SqlClient.SqlConnection $connectionString $command = New-Object System.Data.SqlClient.SqlCommand $commandString,$connection $adapter = New-Object System.Data.SqlClient.SqlDataAdapter $command $dataset = New-Object System.Data.DataSet $null = $adapter.Fill($dataSet) $result = $tables[0].Rows | ConvertTo-CTypeSqlWrapper -ClassName MyCompany.MyNamespace.MyWrapperClass If you already have functions that generate DataRow objects, use ConvertTo-CTypeDeclaration to create an initial CType wrapper for them. Simply add your type name, add a parent type name (or remove that clause), add widths, and change the property order as desired for formatting, and your declaration is ready! That’s it! You can contact me through the following Comment section. I would love to see bugs, suggestions, questions, and new scenarios. Feedback is really important to me—it’s how I decide whether to invest more time in a project like this one. I also have a site on GitHub where you can report issues: Welcome to Issues! Also, please contact me if you have any interest in assisting with TeslaFacebookModule. (“Start-Car” anyone?) And tell Elon Musk to hurry up… ~Jon Thank you, J
https://devblogs.microsoft.com/scripting/powershell-custom-type-module/
CC-MAIN-2020-24
en
refinedweb
#include <CGAL/Polygon_set_2.h> CGAL::General_polygon_set_2< Gps_segment_traits_2< Kernel, Container > >. The class Polygon_set_2 represents sets of linear polygons with holes. The first two template parameters ( Kernel and Container) are used to instantiate the type Polygon_2<Kernel,Container>. This type is used to represent the outer boundary of every set member and the boundaries of all holes of every set member. The third template parameter Dcel must. General_polygon_set_2 Gps_segment_traits_2
https://doc.cgal.org/latest/Boolean_set_operations_2/classCGAL_1_1Polygon__set__2.html
CC-MAIN-2020-24
en
refinedweb
#include <stdio.h> void main() { int x, y, Area, Perimeter; int *ptrx = &x ; int *ptry = &y; clrscr(); printf("Enter the sides of a rectangle"); scanf("%d %d", ptrx, ptry ); printf("x = %d\t y = %d\n",*ptrx, *ptry); Area = *ptrx * *ptry; Perimeter= 2*(*ptrx + *ptry); printf("Area of rectangle= %d\n" ,Area); printf("Perimeter = %d\n", Perimeter); } The expected output is as given below. The output is
https://mail.ecomputernotes.com/what-is-c/function-a-pointer/area-and-perimeter-of-a-rectangle-using-pointers
CC-MAIN-2020-24
en
refinedweb
Difference between revisions of "ImageJ Ops" Revision as of 11:01, Ops are a special type of ImageJ plugin, so a basic understanding of the SciJava plugin framework is strongly recommended. In addition to cloning the imagej-ops repository itself, the following components have useful Ops examples: - ImageJ-tutorials - examples of ImageJ plugins using Ops - ImageJ-scripting - provides templates in the Script Editor Tutorials and workshops - Ops: Step-by-step - ImageJ Tutorial: Using Ops - ImageJ Tutorial: Create A New Op - ? Is there a list of Ops somewhere with brief descriptions of their functionalities? If you run the Plugins › Utilities › Browse Ops... command, you can see a tree-based high-level view of all Ops currently available in Fiji sorted by namespace, as well as each available parameter combination for that Op. For the core Ops available, can also provide information about ops or namespaces; e.g., ops.help("add") will return info about available add ops. Are there any Ops for image processing? What are the Ops that need to be developed in the future?
https://imagej.net/index.php?title=ImageJ_Ops&diff=prev&oldid=19053
CC-MAIN-2020-24
en
refinedweb
SPARK-2764 (and some followup commits) simplified PySpark's worker process structure by removing an intermediate pool of processes forked by daemon.py. Previously, daemon.py forked a fixed-size pool of processes that shared a socket and handled worker launch requests from Java. After my patch, this intermediate pool was removed and launch requests are handled directly in daemon.py. Unfortunately, this seems to have increased PySpark task launch latency when running on m3* class instances in EC2. Most of this difference can be attributed to m3 instances' more expensive fork() system calls. I tried the following microbenchmark on m3.xlarge and r3.xlarge instances: import os for x in range(1000): if os.fork() == 0: exit() On the r3.xlarge instance: real 0m0.761s user 0m0.008s sys 0m0.144s And on m3.xlarge: real 0m1.699s user 0m0.012s sys 0m1.008s I think this is due to HVM vs PVM EC2 instances using different virtualization technologies with different fork costs. It may be the case that this performance difference only appears in certain microbenchmarks and is masked by other performance improvements in PySpark, such as improvements to large group-bys. I'm in the process of re-running spark-perf benchmarks on m3 instances in order to confirm whether this impacts more realistic jobs. - relates to SPARK-3333 Large number of partitions causes OOM - Resolved - links to -
https://issues.apache.org/jira/browse/SPARK-3358
CC-MAIN-2019-43
en
refinedweb
string_splitter 0.1.0+1 string_splitter # Utility classes for splitting strings and files into parts. Supports streamed parsing for handling long strings and large files. Usage # string_splitter has 2 libraries, [string_splitter] for parsing strings, and [string_splitter_io] for parsing files. Parsing Strings # import 'package:string_splitter/string_splitter.dart'; [StringSplitter] contains 3 static methods: [split], [stream], and [chunk]. Each method accepts a [String] to split, and [split] and [stream] accept lists of [splitters] and [delimiters] to be used to split the string, while [chunk] splits strings into a set numbers of characters per chunk. [delimiters], if provided, will instruct the parser to ignore [splitters] contained within the delimiting characters. [delimiters] can be provided as an individual string, in which case the same character(s) will be used as both the opening and closing delimiters, or as a [List] containing 2 [String]s, the first string will be used as the opening delimiter, and the second, the closing delimiter. // Delimiters must be a [String] or a [List<String>] with 2 children. List<dynamic> delimiters = ['"', ['<', '>']]; [split] and [stream] have 2 other options, [removeSplitters] and [trimParts]. [removeSplitters], if true, will instruct the parser not to include the splitting characters in the returned parts, and [trimParts], if true, will trim the whitespace around each captured part. [stream] and [chunk] both have a required parameter, [chunkSize], to set the number of characters to split each chunk into. /// Splits [string] into parts, slicing the string at each occurrence /// of any of the [splitters]. static List<String> split( String string, { @required List<String> splitters, List<dynamic> delimiters, bool removeSplitters = true, bool trimParts = false, }); /// For parsing long strings, [stream] splits [string] into chunks and /// streams the returned parts as each chunk is split. static Stream<List<String>> stream( String string, { @required List<String> splitters, List<dynamic> delimiters, bool removeSplitters = true, bool trimParts = false, @required int chunkSize, }); /// Splits [string] into chunks, [chunkSize] characters in length. static List<String> chunk(String string, int chunkSize); Streams return each set of parts in chunks, to capture the complete data set, you'll have to add them into a combined list as they're parsed. Stream<List<String>> stream = StringSplitter.stream( string, splitters: [','], delimiters: ['"'], chunkSize: 5000, ); final List<String> parts = List<String>(); await for (List<String> chunk in stream) { parts.addAll(chunk); } Parsing Files # import 'package:string_splitter/string_splitter_io.dart'; [StringSplitterIo] also contains 3 static methods: [split], [splitSync], and [stream]. Rather than a [String] like [StringSplitter]'s methods, [StringSplitterIo]'s accept a [File], the contents of which will be read and parsed. In addition to the parameters described in the section above, each method also has a parameter to set the file's encoding, or in stream's case the decoder itself, which all default to UTF8. /// Reads [file] as a string and splits it into parts, slicing the string /// at each occurrence of any of the [splitters]. static Future<List<String>> split( File file, { @required List<String> splitters, List<dynamic> delimiters, bool removeSplitters = true, bool trimParts = false, Encoding encoding = utf8, }); /// Synchronously reads [file] as a string and splits it into parts, /// slicing the string at each occurrence of any of the [splitters]. static List<String> splitSync( File file, { @required List<String> splitters, List<dynamic> delimiters, bool removeSplitters = true, bool trimParts = false, Encoding encoding = utf8, }); /// For parsing large files, [stream] streams the contents of [file] /// and returns the split parts in chunks. static Stream<List<String>> stream( File file, { @required List<String> splitters, List<dynamic> delimiters, bool removeSplitters = true, bool trimParts = false, Converter<List<int>, String> decoder, }); [0.1.0] - September 8, 2019 - Initial release. Use this package as a library 1. Depend on it Add this to your package's pubspec.yaml file: dependencies: string_splitter: :string_splitter/string_splitter.dart'; We analyzed this package on Oct 9, 2019, and provided a score, details, and suggestions below. Analysis was completed with status completed using: - Dart: 2.5.1 - pana: 0.12.21 Platforms Detected platforms: Flutter, web, other No platform restriction found in primary library package:string_splitter/string_splitter.dart. Maintenance suggestions Maintain an example. (-10 points) Create a short demo in the example/ directory to show how to use this package. Common filename patterns include main.dart, example.dart, and string_splitter.dart. Packages with multiple examples should provide example/README.md. For more information see the pub package layout conventions.
https://pub.dev/packages/string_splitter
CC-MAIN-2019-43
en
refinedweb
. ListView is broken for me on Android when using custom ViewCells. Some of them are not showing up. I can see the binding updating fine, the ViewCells ctor is being called correctly, but not all items are shown on screen. I have some ListViews that just use TextCell, and these does not cause any trouble for me. Downgrading to 1.4.2.6359 fixes the issue. I have only tested on Android, since this is what I focus on at the moment. My ViewCell looks like this: public class PickListItemCell : ViewCell { private AddressView _addressView = new AddressView(); private ListViewCheckBox _barcode = new ListViewCheckBox (); public PickListItemCell () { View = new StackLayout { Padding = new Thickness(5), Children = { _barcode, _addressView, } }; _barcode.SetBinding (ListViewCheckBox.DefaultTextProperty, "Barcode.Text"); _barcode.SetBinding (ListViewCheckBox.CheckedProperty, "Selected", BindingMode.TwoWay); _addressView.Name.SetBinding (Label.TextProperty, new Binding ("Address.Name")); _addressView.Street.SetBinding (Label.TextProperty, new Binding ("Address.StreetWithHouseNumber")); _addressView.Zip.SetBinding (Label.TextProperty, new Binding ("Address.Zip")); _addressView.City.SetBinding (Label.TextProperty, new Binding ("Address.City")); } } The view model look like this: public class PickListItemViewModel : BindableBase, IEquatable<PickListItemViewModel> { private IBarcode _barcode; public IBarcode Barcode { get { return _barcode; } set{ SetProperty(ref _barcode, value); } } Address _address; public Address Address { get { return _address; } set { SetProperty(ref _address, value); } } bool _selected; public bool Selected { get { return _selected; } set { if (SetProperty(ref _selected, value)) SelectedChanged.Fire(_selected); } } public event Action<bool> SelectedChanged; public MyTask Task { get; set; } // don't raise PropertyChanged event. #region IEquatable implementation public bool Equals (PickListItemViewModel other) { return other == null ? false : _barcode.CompareTo (other.Barcode) == 0; } #endregion public override int GetHashCode () { return _barcode.Text.GetHashCode (); } } Bindable base simply helps with raising property changed events. It is similar to the similar named class from Microsoft. @Kasper I have checked this issue with provided sample code in bug description but not able to reproduce this issue. Could you please provide us a complete sample project so that I can reproduce this issue at my end. Thanks. Since you already converted my code snippets into a standalone project, could you please send the project back to me - then I can build from there. @Kasper I have tried to use your provided sample code but I am getting error message for class missing. Screencast: Sample code: Could you please provide us complete sample project so that I can reproduce it at my end. Thanks. Created attachment 11845 [details] Sample solution Remember to do a package restore, as I excluded the packages from the bundled solution @Parmendra I have downloaded the solution from dropbox. BTW: May I recommend that you clear all build artifacts from the solution directory and also delete the packages directory, before zipping the solution. This way you can squeeze the zip from 66 MiB to 172 kiB. I modified the solution to try and make the smallest viable reproduction. The app simply display a ListView with CustomItemCells derived from ViewCell. A CustomItemCell simply contains a Label wrapped in a stack layout. The Label.TextProperty of the Label is bound to the Text property of a CustomItemViewModel. The ListView.ItemsSource is simply an ObservableCollection<CustomItemViewModel>. Whenever the user hits the "Next Table" button, the observable collection is cleared and an infinite sequence of integers are added (firstr 1,2,3,.. then 2,4,6,.. and so on). The bug is supposed to be that sometimes items are missing when using XF 1.4.3, but not when using 1.4.2. However I must admit I cannot reproduce it currently. I will make a screencast if succeed. Thanks @Kasper I have checked this issue with attached sample in comment #4 and observed that If I have scroll up and click on 'Next table' button, ListView is not showing. Screencast: Please check the screencast and let me know are you getting same behavior or I have missed anything. I am getting same behavior with X.F 1.4.2 and 1.4.3.x ApplicationOutput: Environment info: Xamarin Studio 5.9.4 (build 5) Mono 4.0.2 ((detached/c99aa0c) Xcode 6.2 (6776) Xamarin.iOS : 8.10.2.43 (Enterprise Edition) Xamarin.Android: 5.1.4.16 (Enterprise Edition) Mac OS X 10.9.4
https://xamarin.github.io/bugzilla-archives/31/31565/bug.html
CC-MAIN-2019-43
en
refinedweb
A friendly library for parsing and validating HTTP request arguments, with built-in support for popular web frameworks, including Flask, Django, Bottle, Tornado, Pyramid, webapp2, Falcon, and aiohttp. Project description webargs is a Python library for parsing and validating HTTP request arguments, with built-in support for popular web frameworks, including Flask, Django, Bottle, Tornado, Pyramid, webapp2, Falcon, and aiohttp. from flask import Flask from webargs import fields from webargs.flaskparser import use_args app = Flask(__name__) hello_args = {"name": fields.Str(required=True)} @app.route("/") @use_args(hello_args) def index(args): return "Hello " + args["name"] if __name__ == "__main__": app.run() # curl\?name\='World' # Hello World Install pip install -U webargs webargs supports Python >= 2.7 or >= 3 Project details Release history Release notifications Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/webargs/4.3.0/
CC-MAIN-2019-43
en
refinedweb
Android App Development with Java: All About Android Activities If you look in the app/manifests branch in Android Studio’s Project tool window, you see an AndroidManifest.xml file. The file isn’t written in Java; it’s written in XML. Here is some code from an AndroidManifest.xml file. With minor tweaks, this same code could accompany lots of examples. <activity android: <intent-filter> <action android: <category android: </intent-filter> </activity> Here’s what the code “says” to your Android device: - The code’s actionelement indicates that the activity that’s set forth (the MainActivityclass) is MAIN. Being MAIN means that the program is the starting point of an app’s execution. When a user launches the app, the Android device reaches inside the code and executes the code’s onCreate method. In addition, the device executes several other methods. - The code’s categoryelement adds an icon to the device’s Application Launcher screen. On most Android devices, the user sees the Home screen. Then, by touching one element or another on the Home screen, the user gets to see the Launcher screen, which contains several apps’ icons. By scrolling this screen, the user can find an appropriate app’s icon. When the user taps the icon, the app starts running. The category element’s LAUNCHER value makes an icon for running the MainActivity class available on the device’s Launcher screen. So there you have it. With the proper secret sauce (namely, the action and category elements in the AndroidManifest.xml file), an Android activity’s onCreate method becomes an app’s starting point of execution. Extending a class Often, the words extends and @Override tell an important story — a story that applies to all Java programs, not only to Android apps. Many examples contain the lines import android.support.v7.app.AppCompatActivity; public class MainActivity extends AppCompatActivity { When you extend the android.support.v7.app. The folks at Google have already written thousands of lines of Java code to describe what an Android AppCompatActivity can do. Being an example of an AppCompatActivity in Android means that you can take advantage of all the AppCompatActivity class’s prewritten code. When you extend an existing Java class (such as the AppCompatActivity class), you create a new class with the existing class’s functionality. Overriding methods Often, a MainActivity is a kind of Android AppCompatActivity. So a MainActivity is automatically a screenful of components with lots and lots of handy, prewritten code. Of course, in some apps, you might not want all that prewritten code. After all, being a Republican or a Democrat doesn’t mean believing everything in your party’s platform. You can start by borrowing most of the platform’s principles but then pick and choose among the remaining principles. In the same way, the code declares itself to be an Android AppCompatActivity, but then overrides one of the AppCompatActivity class’s existing methods. If you bothered to look at the code for Android’s built-in AppCompatActivity class, you’d see the declaration of an onCreate method. The word @Override indicates that the listing’s MainActivity doesn’t use the AppCompatActivity class’s prewritten onCreate method. Instead, the MainActivity contains a declaration for its own onCreate method. In particular, the onCreate method calls setContentView(R.layout.activity_main), which displays the material described in the res/layout/activity_main.xml file. The AppCompatActivity class’s built-in onCreate method doesn’t do those things. An activity’s workhorse methods Every Android activity has a lifecycle — a set of stages that the activity undergoes from birth to death to rebirth, and so on. In particular, when your Android device launches an activity, the device calls the activity’s onCreate method. The device also calls the activity’s onStart and onResume methods. You can declare your own onCreate method without declaring your own onStart and onResume methods. Rather than override the onStart and onResume methods, you can silently use the AppCompatActivity class’s prewritten onStart and onResume methods. When an Android device ends an activity’s run, the device calls three additional methods: the activity’s onPause, onStop, and onDestroy methods. So, one complete sweep of your activity, from birth to death, involves the run of at least six methods: onCreate, then onStart, and then onResume, and later onPause, and then onStop, and, finally, onDestroy. As it is with all life forms, “ashes to ashes, dust to dust.” Don’t despair. For an Android activity, reincarnation is a common phenomenon. For example, if you’re running several apps at a time, the device might run low on memory. In this case, Android can kill some running activities. As the device’s user, you have no idea that any activities have been destroyed. When you navigate back to a killed activity, Android re-creates the activity for you and you’re none the wiser. A call to super.onCreate(savedInstanceState) helps bring things back to the way they were before Android destroyed the activity. Here’s another surprising fact. When you turn a phone from Portrait mode to Landscape mode, the phone destroys the current activity (the activity that’s in Portrait mode) and re-creates that same activity in Landscape mode. The phone calls all six of the activity’s lifecycle methods ( onPause, onStop, and so on) in order to turn the activity’s display sideways. It’s similar to starting on the transporter deck of the Enterprise and being a different person after being beamed down to the planet (except that you act like yourself and think like yourself, so no one knows that you’re a completely different person).
https://www.dummies.com/programming/java/android-app-development-java-android-activities/
CC-MAIN-2019-43
en
refinedweb
Provided by: libpcre2-dev_10.32-5_amd64 NAME PCRE2 - Perl-compatible regular expressions (revised API) SYNOPSIS #include <pcre2.h> pcre2_match_data *pcre2_match_data_create(uint32_t ovecsize, pcre2_general_context *gcontext); DESCRIPTION This function creates a new match data block, which is used for holding the result of a match. The first argument specifies the number of pairs of offsets that are required. These form the "output vector" (ovector) within the match data block, and are used to identify the matched string and any captured substrings. There is always one pair of offsets; if ovecsize is zero, it is treated as one. The second argument points to a general context, for custom memory management, or is NULL for system memory management. The result of the function is NULL if the memory for the block could not be obtained. There is a complete description of the PCRE2 native API in the pcre2api page and a description of the POSIX API in the pcre2posix page.
http://manpages.ubuntu.com/manpages/eoan/man3/pcre2_match_data_create.3.html
CC-MAIN-2019-43
en
refinedweb
auth_token in keystone v3 I'm using RDO liberty on CentOS and i'm having problems with getting a useable auth_token from keystone v3 python client: from keystoneclient.v3 import client as v3client from keystoneclient.auth.identity import v3 as v3ident from keystoneclient import session from keystoneclient import client insecure = True auth_url = '' auth = v3ident.Password( auth_url=auth_url, username='admin', password='change_me', project_name='admin', user_domain_name='default', project_domain_name='default' ) sess = session.Session( auth=auth, verify=not insecure) keystone = v3client.Client(session=sess) print "AUTH_TOKEN: %s" % keystone.auth_token # AUTH_TOKEN: None Can someone tell me what's i'm doing wrong here? I am able to use openstack --insecure token issue without problem.
https://ask.openstack.org/en/question/89833/auth_token-in-keystone-v3/?sort=oldest
CC-MAIN-2019-43
en
refinedweb
Java naming conventions are sort of guidelines which application programmers are expected to follow to produce a consistent and readable code throughout the application. If teams do not not follow these conventions, they may collectively write an application code which is hard to read and difficult to understand. Java heavily uses Camel Case notations for naming the methods, variables etc. and TitleCase notations for classes and interfaces. Let’s understand these naming conventions in detail with examples. 1. Packages naming conventions Package names must be a group of words starting with all lowercase domain name (e.g. com, org, net etc). Subsequent parts of the package name may be different according to an organization’s own internal naming conventions. package com.howtodoinjava.webapp.controller; package com.company.myapplication.web.controller; package com.google.search.common; 2. Classes naming conventions In Java, class names generally should be nouns, in title-case with the first letter of each separate word capitalized. e.g. public class ArrayList {} public class Employee {} public class Record {} public class Identity {} 3. Interfaces naming conventions In Java, interfaces names, generally, should be adjectives. Interfaces should be in titlecase with the first letter of each separate word capitalized. In same cases, interfaces can be nouns as well when they present a family of classes e.g. List and Map. public interface Serializable {} public interface Clonable {} public interface Iterable {} public interface List {} 4. Methods naming conventions Methods always should be verbs. They represent an action and the method name should clearly state the action they perform. The method name can be a single or 2-3 words as needed to clearly represent the action. Words should be in camel case notation. public Long getId() {} public void remove(Object o) {} public Object update(Object o) {} public Report getReportById(Long id) {} public Report getReportByName(String name) {} 5. Variables naming conventions All instance, static and method parameter variable names should be in camel case notation. They should be short and enough to describe their purpose. Temporary variables can be a single character e.g. the counter in the loops. public Long id; public EmployeeDao employeeDao; private Properties properties; for (int i = 0; i < list.size(); i++) { } 6. Constants naming conventions Java constants should be all UPPERCASE where words are separated by underscore character (“_”). Make sure to use final modifier with constant variables. public final String SECURITY_TOKEN = "..."; public final int INITIAL_SIZE = 16; public final Integer MAX_SIZE = Integer.MAX; 7. Generic types naming conventions Generic type parameter names should be uppercase single letters. The letter 'T' for type is typically recommended. In JDK classes, E is used for collection elements, S is used for service loaders, and K and V are used for map keys and values. public interface Map <K,V> {} public interface List<E> extends Collection<E> {} Iterator<E> iterator() {} 8. Enumeration naming conventions Similar to class constants, enumeration names should be all uppercase letters. enum Direction {NORTH, EAST, SOUTH, WEST} 9. Annotations naming conventions Annotation names follow title case notation. They can be adjective, verb or noun based the requirements. public @interface FunctionalInterface {} public @interface Deprecated {} public @interface Documented {} public @Async Documented { public @Test Documented { In this post, we discussed the Java naming conventions to be followed for consistent writing of code which make the code more readable and maintainable. Naming conventions are probably the first best practice to follow while writing clean code in any programming language. Happy Learning !! Feedback, Discussion and Comments Oliver Hi, I think No. 8 also contains an error. “Similar to class names, enumeration names should be all uppercase letters.” Class names use titlecase with nouns, I guess you were thinking “constants” when you wrote this. Lokesh Gupta Yeh ! You are right. Thanks for reporting it. Chris I feel that this section: In Java, interfaces names, generally, should be verbs. Should actually be In Java, interfaces names, generally, should be **adjectives**. When people suggest things like ‘Runnable’ or ‘Tasklike’ it’s a description of the kind of object that the interface describes. ‘What kind of object is it? It is a {List|Runnable|Cancelable} type of object’ Lokesh Gupta I think, you are right. Adjectives makes more sense. Even nouns shall be used. Must be thinking something else when I wrote it. Thanks a ton for noticing. User123 should be Lokesh Gupta Thanks for pointing out. I must have been thinking something else. Much appreciated.
https://howtodoinjava.com/java/basics/java-naming-conventions/
CC-MAIN-2019-43
en
refinedweb
Introduction. This is a tutorial for Fibonacci series in Java. The program is given below that get the number from user and print the Fibonacci Series. The program is not extendable. Go enjoy the program. Lets begin…. Program for Fibonacci Series in Java. //import Scanner as we require it. import java.util.Scanner; // the name of our class its public public class FibonacciSeries { //void main public static void main (String[] args) { //declare int variables int x=0,y=1,z,n; //print message System.out.println("How many numbers do you want?"); //Take input Scanner input = new Scanner(System.in); n = input.nextInt(); //print the first two numbers manually. System.out.println("Fibonacci Series:-"); System.out.println(x); System.out.println(y); //go on... //print the fibonacci series. for(int i=0;i<n-2;i++) { z=x+y; x=y; y=z; System.out.println(z); } } } Output How many numbers do you want? 7 Fibonacci Series:- 0 1 1 2 3 5 8 How does it work - You enter the number. - The for loop run and calculates the number of Fibonacci series and prints it. Extending it The program cannot be extended. Explanation. - Import the Scanner. - Declare the class as public - Add the void main function - Add system.out.println() function with the message to enter choice. - Declare input as Scanner. - Take the inputs and save it in variables. - Manually print the first two numbers. - Add a loop. - Calculate and print the numbers. At the end. You learnt creating the Java program for Fibonacci Series. So now enjoy the program. Please comment on the post and share it.
https://techtopz.com/java-programming-fibonacci-series/
CC-MAIN-2019-43
en
refinedweb
Douglas Adams's Whale Ryan Palo Updated on ・6 min read Intro I think it is important to find ways that your background or experience specifically help you to stand out in any given group. If you can pinpoint those areas, the group can optimize its tool set and have a better idea of who would be best suited to a specific task. Personally, while programming is one of my favorite pastimes, I have a degree in Mechanical Engineering and some experience teaching Calculus and Physics. Because of this, I thought I would share some insight into an area that I know everyone has some questions about: the whale in Douglas Adams's Hitchhiker's Guide to the Galaxy. What whale, you ask? In chapter 18, two missiles get randomly transformed into a whale and a potted plant, respectively. Here's the excerpt:. This excerpt leads me to ask: can we simulate it? With a little help from Python, we can find out. I'm going to write this assuming the reader has a working knowledge of basic programming principles and what the difference between the Imperial and Metric system are, but has very little physics background beyond that. Although I'm generally more comfortable using Imperial units like feet and pounds, I'm going to stick to metric units this time, for the sanity of our international friends and because the math works out a little easier. The Forces Involved Let's start with the forces at play here. Basically, the only two I'm going to worry about are gravity acting on the whale and drag working against the whale as it falls. Let's assume this whale is falling from a geosynchronus orbit (orbit in space that would allow it to keep pace as the earth rotates) -- approximately 3.5786 x 107 meters elevation. For those that are interested, I'm going to plop the only real in-depth equation here. Turning the equations into code isn't super difficult. We just need to fill in some of the variables above first. Since getting data on an alien planet and alien whale is more difficult, let's use Earth and an Earthly Blue Whale. The mass of the earth is roughly 5.97 x 1024 kg, and its radius is approximately 6.37 million meters. Blue whales live in the region between about 80-120 metric tons. To make the math nice, let's use 100 (100,000 kg). Fun fact: the largest known dinosaur came in at around 90 metric tons! Anyways, with these constants, and the Universal Gravitational Constant -- 6.674 x 10-11 m3 /kg s2 -- let's turn this into code. def gravity(altitude): """Returns the force of gravity [N] at a given altitude [m]""" earth_mass = 5.972e24 # [kg] earth_radius = 6.367e6 # [m] whale_mass = 100000 # [kg] universal_grav_constant = 6.67384e-11 # [m^3/kg s^2] radius = altitude + earth_radius # [m] # Assumption: the 'radius' of the whale is negligible compared to # the other sizes involved # Here's the important bit: result = universal_grav_constant * whale_mass * earth_mass/(radius**2) return result Drag gets a little more interesting. Because there is a startling lack of data on the aerodynamic characteristics of belly-flopping whales, we'll assume the whale is diving towards the ground head-first. This article is chock-full of informational goodies, such as the drag coefficient of a swimming whale (0.05) and the approximate projected cross-sectional area (10 m2 ). The projected cross-sectional area is sort-of like the size of the shadow the whale would cast if light was shone on it head-on. It is important to note that the density of the atmosphere decreases with elevation, but not in a nice linear fashion. We'll need to model the following relationship in our code (approximately): In order to keep your interest, I'll do some handwaving and leave that code out. Here is the code for drag: def drag(altitude, velocity): """Given altitude [m] and velocity [m/s], outputs the force of drag [N]""" whale_drag_coefficient = 0.05 # [unitless] whale_crossectional_area = 10 # [m^2] result = .5 * whale_drag_coefficient * density(altitude) # VIGOROUS HAND-WAVING! result *= whale_crossectional_area * velociy**2 # Drag always is opposite of the direction of motion. # i.e. if whale falls down, drag is up and vice versa if velocity > 0: result *= -1 return result Great! Two more steps before we get results. First is to get acceleration of the whale. Good ole' F = m * a. def net_acceleration(altitude, velocity): """Sums all forces to calculate a net acceleration for next step.""" gravity_force = gravity(altitude) # [N] drag_force = drag(altitude, velocity) # [N] net_force = drag_force - gravity_force # [N] assuming gravity is down. # Since F=ma, a = F/m! acceleration = net_force / WHALE_MASS # [m/s^2] return acceleration The Simulation Now we need to simulate the whole fall. We'll do this by getting each data point one by one. If we know altitude and velocity at a given time, we can find acceleration with the function above. In order to get the next velocity and position from a given acceleration, we'll need the following function: def integrate(acceleration, current_velocity, current_altitude, timestep): """Gets future velocity and position from a given acceleration""" new_velocity = current_velocity + acceleration * timestep # Sort of a y = mx + b situation new_altitude = current_altitude + current_velocity * timestep # Same, but for altitude return new_velocity, new_position This set of code is probably the least intuitive, but it basically boils down to the idea that if you go 20 miles per hour for 6 hours, you will have travelled 120 total miles (20 * 6). Blah blah blah science handwaving. You can see the full code and comments here. I'm still working on cleaning it up and factoring out the constants. Let's get to the whale. The Results from matplotlib import pyplot as plt import pandas as pd results = pd.read_csv("results.csv", header=0) plt.plot(results["Time"], results["Height"]) plt.show() Looks pretty much like we would expect. First he was up. Then he came down and it was fast. So? Let's look at the forces he felt. results["Force"] = results["Acceleration"] * 100000 # whale mass [kg] plt.plot(results["Time"], results["Force"]) plt.show() Woah! Let's get a better look at that spike. Note that I'm plotting Force vs. the index this time instead of vs. Time. This is to get a more spread-out view of things. You can see he is feeling some crazy gravitational forces in the downward direction until WHAMMO! Checking the height of the whale right around this point shows that he's where we are splitting the upper and lower Stratosphere: ~25000 m. Basically, our whale is faceplanting onto our atmosphere. Realistically, I'm pretty sure that, if our whale hadn't burned up already, we would have a localized whale-splosion and whale-shower. So that's it for now. That's probably all you can stand. For future work, I recommend evaluating the heat generated from drag and estimating the ignition point of an airborne sea mammal to find out if he disintegrates or explodes first. Too gruesome? Yeah, probably. Don't worry. He had a very exciting time on the way down. Never mind, hey, this is really exciting, so much to find out about, so much to look forward to, I’m quite dizzy with anticipation! - The Whale What Advice Would You Give Your 20-year-old Self? If you could go back in time, what advice would you give your 20-year-old self? You calculated for earth, but wasn't it an alien planet? Sorry the math is a tad above me but curious about size of debris field should the remains reach earth. I started to type a response, and then realized that the math (and assumptions required) was a bit more complicated than I initially thought. I think I can come up with an answer, but it'll take blog post sequel. I did find this while researching, however, on this site: Gnarly!
https://practicaldev-herokuapp-com.global.ssl.fastly.net/rpalo/douglas-adamss-whale
CC-MAIN-2019-43
en
refinedweb
- invoking method from dataGridView1_CellClick with passthru to CellDoubleClick - How convert Excel data into DataSet/DataTable? - Getting only datagrid results? - Question about attributes - Generics and Reflection - Retrieve private fields - Using a javascript alert box - running DTS package from windows form using C# - thousands/millions of objects - msbuild whackiness - Problem Downloading PDF with HTTPWebRequesst - events and object lifetime - asp.net radio buttons - Problem with Marshal class in Visual Studio 2005 - What is the hierarchy here? - Calling BackgroundWorker synchronous - How to make sure that data displayed on a form is the newest data? - Static variables thead safe? - Display Publish Version - Comparing dates - OO: Who am I in the hierarchy? - How to set file extension? - Add shortcut on desktop for Published application - Editor behaving strangly - Using the # sign - Building and executing SQL Query dynamically, best practices - reading a pst with C# - open a password protected Zip file in C# code. - DataGridView - PROBLEM WITH THREADS - WIN 2003 - printer mapping. - How to avoid circular references - Generic type implementers, class and interface versions not compatible - Get Mac address - Catch ReleaseComObject exception - type ProcessThread has no constructors defined - GET CONNECTED get 100% results - Cannot see source code of referenced binaries - Using MABLE logic engine with existing .NET applications. - Using DB Transactions in DLL and also in SP - C# && Excel - Problem in System.Management.Instrumentation - Exact (X, Y) position of a file on the desktop - VS2005 - Extract XML Comments from sources in Web Project (Web Site) - how to get selected row after sorting a DataViewGrid ??? - Performance when using paramterised queries - Why can't access this site with WebReqest? - File Search - Resizing FlexGrid control... - Windows Service and notyfication icon... - Error in code after converting to VS.NET 2005? - Hotkey register is of no effect after ShowInTaskBar = false - How Do Interfaces Reduce Dependencies - dynamic path - Can't get a my DataGrid to work .. I'm busted or its busted - NumericUpDown - Towards Separating UI Code From All Other Code - state mangement - windows form event firing order - How to set tab focus? - Crystal Reports Login Screen - Generics and Type casting. - Registering COM DLLs in Set projects - Located assembly's manifest def does not match the assembly reference... - Can't start program in debug mode - doubting my decission to use the CAB - Late binding to access COM+ - VS DataAdapter and linked table - Version Numbers - Starting new threads with arguments - how to read BIOS data? - 'Session' does not exist in the class or namespace .... - switching ConnectionString at runtime - VS2005 References question - Generate PDF and/or RTF file from the DataSet - Handling Key Messages in MDI - alternatives to p/invoke? - Windows service design loading data into database - How to use DTS package in C# - smtp server need authentication - Noobie question - CurrentRowIndex does not exist in DataGridView ??? - UserControls in ASP.NET 2.0 - asp.net hosting recommendations - Data Providers available for 1.1 framework - identifying textboxes - Filter connections for Async Socket Server - progressbars - Installing Multiple Instance of a Windows Service - weird OpenFileDialog Problem with cmd.exe - How to set up 'add new project to source control' like in DS 6? - databinding error - business layer props to my webcontrol - usercontrol - XmlNode to XmlDocument. - using com+ in .net - Locking Question around hashtable - What's the reason to use return value in parenthesis? - AIM Library for C# (Fluent.Toc) - Activator.CreateInstanceFrom(..) question - Listing parameters of the Web Service - problem in 'TreeView' class - Get list of clickable objects on the desktop - Find all colors in an image and compare? - How to get this effect - MDI Childs in VS2005 Pro? - Convert MFC Fonts to c# Fonts - Conversion Question - Error using TcpClient with Stream.Read() - Running a socket server program at Windows XP startup - VS2005 - Dragged web control becomes link in code view??? - Sockets Question: What's the use of SocketFlags enum? - How to develop a HOOK application? - doc generating tool - Web page screen scraping? - Dialing a number - VB.NET or C#.NET ??? - Communicating property changes - Socket Server buffer size - Consuming C++ dlls in C# - struct definition - "new" - Thread-safe - Use image in project as a resource - Binding Datagrid (web control) to my business layer obj - 1/22/2006 .Net 2.0 Redistributable Package Info - Icon operations in .Net 2.0 - Type.GetType returns null for Systems.Windows.Forms.TextBox - HREF QUESTION (cross) - ASP.net loading page - Garbage collector messes up events?? HELP! - Getting Local machine IP address - Converting instances to byte[] - Carriage return - How long is MCSD good for? - Database Driven Object Factory - Am I Nuts? - find the DataRow associated with datagrid row - DNS .Resolve & .NET 2.0 Headache??? - DNS .Resolve & .NET 2.0 Headache??? - Hosting a web browser component in a C# winform - Inheritance question - Unmanaged C++ DLL use in C# - directx or opengl - ComboBox Blank Entry - Events seem to execute on a new thread - why? - how ti sort hashtable by Value - Making C++ DLL to use in C# - Scrolling text - like a marque - C# DataSet question please - autogenerate columns in datagridview - MS Visual Studio 2005 PRO - HTTP Download - Paste event in control library with rich text box - beginners question API - switch vs Select Case - Asynchronous socket server - C# Event to be handled by an external class - Operation not allowed on non-connected sockets - Seralisation the final question - DATATYPE PROBLEM(cross) - XML building basing on Directory list - Accessing an SQL Server database file - ReportViewer and ActiveX - Find specific pattern in a string - UserControl Validation - button images - button images. - jpeg icons - C# DataTable Question - c# Crystal Report SetParameter question - button images.. - test - Recomendation for third party tool to create glyphs/images on toolbars - dd - Speed of switch/case vs hash table for parsing XML - C# version of clsftp? - getting a reference to a property of an object - Problem in debugging a Web Service - Forms Designer and Dynamic Panels - Paste an attached file in Outlook from the Clipboard in Windows Fo - CodeProvider error - ReadInnerXml() Seems to Skips a Node - Enumerate directories on a FTP Server - Which interface to implement - Form.Height depending on Windows style? - delegates and inheritance - Smart Client Problem downloading - quick easy answer for this question - Where to download the Framework from? - A simple question about object re-initialization - Inerop help - A simple question about object re-initialization - What's the name of my own process? - Socket.Send Exception Delay - Excel Data - Dictionary<> and Inheritance - Unable to get a Profile custom (object) collection to bind to Gri. - Crash after loading .NET Managed Provider for Oracle - Retrieving Custom Attributes from a property using StackTrace - Why ProjectName.pdb exists in the release dir? - SHould I use Crystal Reports or Active Reports? - Why can't I use SortedList<T, T>? - Explorer context menu problem - determining the status of a stream - Going from string to enum? - Checking an ENUM - DirectoryServices problem enumerating properties - Mutexes.... - multilingual solutions - C# FTP using .NET 2.0 - char[] to char - DataGrid stupid problem - Repeater Problem - the dismatching error with "unsafe" - C# on Non-MSFT OS's? - C# counterpart to VB.NET Redim - Constants for DB Fields in Front End a good idea? - Ridimensionare user control - Using american dates by default - Xml-Rpc + Mysql.NET = Threading problem - Process Name = ? - Setting properties by name - Global Constants? - Suggestions for Trade Shows - ReleaseComObject - Next Line of code is...? - Forms widget updating - transparent - Dispose() in Partial Class - PropertyValueChanged Event not raised in Collection Editor - Why my C# addin can not implement a MC++ interface? - CodeDom: Equivaluent for: expression is type - PropertyValueChanged Event not raised in Collection Editor of property Grid - Form Maximize problem - serialisation issues - Can't open Workbook...(Excel 2003 and C# Visual Studio 2005) - Tabpage Renaming - ListChangedEventArgs.PropertyDescriptor always null - C# counterpart to VB.NET Implements - DOS exe output? - SQL backup progress - mouse event won't fire again until click on other cell - Registering in GAC - Custom Object used with Profile Object - "&" symbol in querystring - Multithreading and Database operations. - Singleton class in C# - Exception error when call excel version - WMI only loads process in background in remote machine - clickonce deploy with SSL and client certificates - GetProperties and Property Order - can c# translate a language to another one? - Is this right? - form over form - How to get extended attributes about a file/folder in c#? - Make Thread-Safe Calls to Windows Forms Controls - DAL design, assemblies and OO ?(s) - How to XMLSerialze an object which has an object array as filed? - Event and Event Handlers. - Is this a Late Binding question? - Archetictural Question? - DataGridView Flaf (Create columns) - Please help! HTTP protocol violation error using HttpWebRequest - mileage chart program - returning a string back after a HTTP Post - Masked textbox alignment? - Panel and Checkbox have been created and NullReferenceException wasthrown ... - Calling SQLServer Proc: If nulls passed, proc changes value to '0.0' - NVARCHAR and ODBC - Invalid token .... Very Very simply program, why do I get this? - Do the controls have a Tip at run time? - Reading 8 bit chars off a stream - Cursor control - C# 2005 how to remove warning "The file 'C:\Main.cs' does not support code parsing or generation because it is not contained within a project that supports code. - antylogic error - Meaning of VS2K5 icons? - Steps needed when deploying C# application? - Tools to find out Memory Leaks - How do I keep the top item in a list view at the top - Collection.Add() as Copy of the data, not Reference to the the data - .NET V1.1 SP1 or Hotfix causes problems with DataGrid refresh - why is month format different ? - Momory usage with static methods - User Impersonation on Win XP SP2 - Prescise error messages from XmlValidatingReader - possible or not? - Publish wizard and IIS - Enum value not correctly given on XmlSerialization over Sockets - c# names for FCL Types - Threading with WinForm - Help with end-of-line characters - popmenu - Microsoft webcast / Digipen Game Programming - Cell Events in Windows Datagrid - Save usercontrol as bitmap - Monitoring a Messaging Queue - Windows Service startup error - Scrolling two listviews simultaneously - "with blocks" in c# - Summary tag in codebehind - Searching for a particular object in an arraylist - IE control in form and popups / hotpots - Cannot add bitmap to ListView other than in first column - Visual Studio 2005, Compact Framework 2.0 & Windows CE.NET 4.2 - unload picture - Opening and Closing Sockets - array list - switch between external application - Translate BUILTIN to domain name - SQL Connection Example - Regarding applications running in the background. - EXECUTE a WEB PAGE IN C# - DragDrop a Tree node to ListView Control..... - Romoveing dll resorces from application - Static method use - waiting on function - How to delete all files in Temporary Internet Files? - Changing field in object? - Application Block Usage - Disconnected vs .. um .. connected? - Inserting localized / regional setting specific data into sql serv - Reading data from a file - CollectionBase - How to relate product to category object - Finalizer Queue - Monitor a socket port - between operator?? - Communication between threads - Save function - windows service questions - static functions variables - How do you do asynchronous processing in C# - iTunes events do not fire - Passing string from C++ code to C# - Poor array performance - How to calculate tax using C3 objects without using any pre-stored data in database - Monitor.Enter/Exit Question - how to use my class from another class (.Net 2005) - Dynamic paging/scrolling with a DataGridView - Process.Start with Username hangs - Update local C# exe file - warning CS0168: The variable 'el' is declared but never used? - BinaryWriter/Reader big endian - How to find the namespace a class belongs to? - Unsafe pointers - warning CS0168: The variable 'el' is declared but never used? - Adding image to project - Visual Studio 2003 Post Build event multi-line format issues - How do you create a Access Database for a setup project - invokemember setproperty - invokemember setproperty - Closing Web Application Window - Two Masters, one detail - Logging web page from winform - Multiple rows selected in dataGridView is not working - DefaultValue attribute - variable value change breakpoint - Passing cookies from client to webrequest host - Exception ReleaseCOMObject - Datagrids - Best/fastest way to abort worker thread - Efficiently convert Base 1 Array to Base 0 Array? - installing C# fails saying application already requesting reboot - diasble keyboard - Type initializer exception - C# Embedded font for button - Socket.Listen() maximum - without extension - Crystal Report, custom SQL - GridView1.Rows[r].Cells[i].Text problem - Create gui in minimized window - C++ to C# - how to reset datagridview - Non blocking socket send - C# Access2000->Access2003 - setup creation - Permissions issues with strong named assemblies - Correct way of storing a date? - Localized resource files in class library - Need some advise, is this site a hoax - Using XSD tool - Slow response Autocomplete - Printing and Exporting - OWC11 Spreadsheet - Flat file parsing into SQL Server 2000 - "Request Failed" Security Exception - nube needs help on editing csharp codes - Automatic removal of unnecessary namespace - how to set the formatprovider to - DsoFramer on Dispose modifies Application title - Where is the File-based web solution stored? - Some high level C# design advice needed (?multi threading). - Create array of Textbox Dynamically and retrieve data from that textbox in asp .net - Create array of Textbox Dynamically and retrieve data from that textbox in asp .net - Verifying parent type on generic object - Remoting Error - DataGrid Question - ADSI - Creating an AD account but not forcing a "change password on first login" - Class / Project Question - Why don't partial class files group? - ADOX C# example of Foreign-Key Relationship Needed - how do I call dataGridView_RowEnter event from within buttonSave_ Click event? - obtaining value of specific datagridview cell value using RowEnter event? - Collections and Parent Reference - COMException HResults not Transferred to MFC Client - authorization for c# winforms applications - Where to put [assembly: CLSCompliant(true)]? - InvalidApartmentStateChange was detected - any ideas - KeyPreview now useable? (VS 2005) - How to fill the parameters when using databinding - Mouse, is there way of reading mouse button state without relying on event - Create Task List or Task Bar in C# - How to access "DefaultNamespace" - OO Design Best Practice - Please Help - How to Remove "" from my URLs - Messages queue - are they installed and if not can I install them myself? - XML invalid characters in dataset - Why does this cause an error? (2005) - Run Command from a Windows Form - Language filter - Custom Object that has 'nested' collections - ListView SmallImageList Problem - How to marshal code to the original thread - Mimic Service Control Manager Functionality - Pattern/Best Practice question - GET CONNECTED GET BENEFITTED - An Event Listener - Integrating Flash in Windows Form - Developing screensaver in C# - Pay Pal - Using HttpWebRequest - every 2 week function - Problem Extending ObjectDataSource and ObjectDataSourceView - Using property to reference Struct data - design patterns - Set ContentBase in HTML Email using CDO - extending generics? - C# Generics, where keyword, and attributes. - WebClient and the HTTP error code - Can I edit the grid in Design Mode? - Regular Expression question - Including BLOB vaules in INSERT SQL - XML Web Service - How to define sender IP address? - style in Excel Export - Slow Debugger - MSAA in C# - file size - How to access the datasource of a GridView - Is the file on a network share? - Catching "set" operations for exposed reference types (enlist for processing in transaction) - Implementing progressbar for large webservice transfers - Easy way to tell if a program is running. - Invoka a process on a remote machine - C# Express 2005 beta vs C# VS 2005 - Connecting from Windows to Linux server with C# - C# Attribute Overhead - treeview - Expanding nodes. - interop c dll import function returns a structure containing strings - Queue throughput. - Saving text from new richTextBox - SerialPort.DataReceived "Problem" - Strong Naming Assemblies + permissions - I get some information messages with the web config - I get some information messages Web.config - Interview Questions - User Control MouseUp - Control Resizing - Capicom Asn1 error again - ndoc status? - Sort ComboBox - Where to Put DAL and Business Objects in WinForms App? - Logginh Function - Textbox Length - Need help with SP - programmatically add DataGridViewComboBoxColumn - How to rewrite this snippet if you must implement IDisposable - Property objects? - Asynchronous web service not getting kicked of. - equivalent of with - Conditional Define which runtime I'm on? - URGENT - Rotate a pictureBox - Why not need ref when dataAdapter.Fill - Listbox DisplayMember and DateTime Format - modification of XML file - DllImport Char* and string - Dependent assemblies have version conflicts - Invoke member of Userdefined Type - DataGridView - about Datatable and relations - C++ and C# in same proj - minimize, maximize buttons on the top,right hand corner of my form - How to make the console hide passwords that are typed in - Creating Hyperlinks - Best book on new changes in 2.0 for the .Net 1.1 veteran - Example code, String divided. - WebBrowser - how to intercept and modify HTTP requests? - timeGetTime() in C#? - Static Functions in Multi-user web app - MDI problems - Cannot edit a class file - File Referances debug vs. release mode - GDI+ and GDI font size inconsistances - Software deployment Liscensing - How to use variable in case statement? - Project estimation metrics? - fixed arrays - How to RemoveChild from HtmlElement from C# - Debugging Unmanaged ActiveX or COM - Quick DirectShow question - App "Freezes" when getting atom feed - Windows service and net drive - Concatenation - String.Concat - Need some help understanding abstract - Decimal Precision - short code, strange problem - RSS Functionality for monitoring - Where is the app.exe.config supposed to go? - Debugging when files are out of sync - Stream pdf to Response - Raising process exit event - Modal App, that will freeze Windows? - Convert Text (comma delimited) to Access mdb - Speed (bandwidth) limit? - Serialize Class - XsltArgumentList doesn't accept long parameter values - Setting parameters on Crystal Report - Urgent - Can NET SEND raise event in my program? - Delpoyment Issue - date problem - working with System.Xml.XmlDocument. - Toolbar icons disable when setting changed - Create own compiler - Expanding a treeview from a path for any number of levels. - whats wrong with this? - String manupilation - Possably Newbe Question .. - The abovementioned code - Inserting rich text into an MS Access memo field. - SqlCommandBuilder + transaction - C++ to C# type mapping.... - (Me || Compiler) == Stupid - Databinding DropDownList and Label - Clearing selection in InkPicture - C5 generic collection library for C# and CLI - C struct to C# mapping - appropriate collection class - Exception handling in property. - System.Object.ReferenceEquals method - Extended properties in oledbconnection. - installing windows service - .NET application without .NET Framework - Inheritance Resolving, IsSubclassOf vs "is" vs "as" performance - CSharp namespace chart
https://bytes.com/sitemap/f-326-p-76.html
CC-MAIN-2019-43
en
refinedweb
Wikiversity:Colloquium/archives/August 2010 Contents - 1 m:Requests for comment/Global banners - 2 WV:Custodian feedback#Ottava Rima - 3 WV:AGF - 4 The alleged discretionary "rights" of wikiversity custodians - 5 EDP updates need approval - 6 Friendly treat v hostile threat - 7 Help:OTRS - 8 Wikiversitans in London - 9 Call for bureaucrats - 10 Community input needed - 11 Importation proposal - 12 Redirect upload to Commons - 13 THE IMPORTANCE OF MATHEMATICS TO EDUCATION AND GOVERNMENT - 14 Annotated Bibliography - 15 I have source material for a new course. What do I do with it? - 16 sound shapes - phonograms - 17 feedback if you're interested :-) - 18 wikt:MediaWiki:Gadget-WiktSidebarTranslation.js - 19 Vector is coming! - 20 "Wikimedia Studies": perhaps we should have a policy or CR? - 21 Wikiversity:Embassy - 22 Subtitled movies - 23 Difficult navigation - 24 Quiz options - 25 Countervandalism channel IRC - 26 Wikiversity Signpost - 27 New encyclopedism - 28 Solution against the broken external links: back up the Internet - 29 Category casing. - 30 The Sandbox Server II -- The Sandbox strikes back - 31 Adding a maps extension - 32 Semantic Mediawiki extension - 33 Personal attacks ^ --MZMcBride 22:37, 1 August 2010 (UTC) WV:Custodian feedback#Ottava Rima I have opened a custodian feedback section regarding some recent actions of custodian Ottava Rima. Elsewhere on this page, there are links to Community Reviews. When a person has a problem with a specific custodian, Custodianship/Problems with Custodians suggests first filing a report on Custodian feedback, hoping to find community advice regarding the problem, which might possibly resolve it. That step has been skipped with some of the CRs that are open. Accordingly, I'm asking that as many members of the community as possible look at this review and comment, or watch it for a time. The feedback report cannot result in desysopping, but it's an opportunity to provide advice that may avert it (or avert problems with the filing user!). I would think that if someone comments in the Feedback, and if a Community Review is filed because we cannot resolve the dispute at this lower level, those who have commented will be notified on their Talk pages, and prior comment would be referenced in the Community Review, so this is efficient. Thanks. --Abd 22:23, 2 August 2010 (UTC) WV:AGF I've had a bit more spare time than usual over the past week or so, and have spent some of it reading a lot of things here that are frankly pretty depressing. Here's a snippet from WV:AGF, one of our core policies: - When you disagree with someone, remember that they probably believe that they are helping the project. That's an important tip, and I think everyone needs to take that more seriously. --SB_Johnny talk 00:39, 4 August 2010 (UTC) The alleged discretionary "rights" of wikiversity custodians I absolutely retain the right to use my custodian rights in accordance with what I judge to be in the interests of the project and in accordance with the views of the community. . Adambro certainly have mixed "rights" with "priviledges". But that is not my main point. In essense he believes he can competently act as police, judge and jury at the same time, and at every instance that he interests himself in, just based on what he thinks wikiversity should be, and even in the face of community opposition. I do not know if that has ever been the consensus or custom of wikiversity. However, from what I know from the very early days wikiversity custodians has never been explicitly endowed with any "rights" and they only act as functionaries who would act on behalf of the community, and hence the title "custodian", in clear distinction from wikipedia "administrator". [So please don't say we are wikimedia blablabla and it has always been like that blablabla.] The use of the custodial tools by custodians may have changed through practice, and I would like the wikiversity community to establish consensus on what discretions, in particular blocking others from participations, we allow our functionaries to make. Hillgentleman | //\\ |Talk 12:32, 5 August 2010 (UTC) - You might be better raising any concerns you have about the general use of blocks at Wikiversity talk:Blocking policy where the development of a policy regarding this can be discussed. I'm not quite sure how you conclude that I believe I can act "based on what he thinks wikiversity should be, and even in the face of community opposition" when in the quote you highlight I specifically said that I would use my custodian rights in accordance with both "what I judge to be in the interests of the project and in accordance with the views of the community". On the issue of my "custodian rights", I refer to the ability to block as a custodian right because that is what it is widely referred to as. You can find a list of the rights that members of the Custodians group have at Special:ListGroupRights. Adambro 12:46, 5 August 2010 (UTC) - These problems are not here because of missing policies. Policies may help to solve actually problems, but on the other hands, they will build up barriers around custodians, who will no longer be possible to use their brains and fantasy, to fix problems.--Juan de Vojníkov 12:49, 5 August 2010 (UTC) - I agree that would be nice to know. I'd also like to know in general what discretion anyone has to make any independent decisions. Is the impact that a decision has part of what discretion people are willing to give anyone? I believe that is usually true on other wikimedia projects. A decision to rename a resource usually only impacts the people working on the resource and the people reading the resource for example. If most people do the things that Custodians block for that would have more of a direct impact on the Wikiversity community, than if only a few people do the things that Custodians block for. Is having a direct impact a consideration as well? I think where people stand on issues that have no direct impact on them seems to vary. -- darklama 13:05, 5 August 2010 (UTC) - I retain the right to say hi, Hillgentleman, how have you been? Ottava Rima (talk) 13:49, 5 August 2010 (UTC) - Hi, Ottava! Been busy. Hillgentleman | //\\ |Talk 20:44, 6 August 2010 (UTC) - Are you going to be sticking around? Say yes. I'd like to see you involved in the community again (and I don't mean the drama stuff). It would be nice to drag some people back. Ottava Rima (talk) 20:53, 6 August 2010 (UTC) - Hillgentleman wrote, "I would like the wikiversity community to establish consensus on what discretions, in particular blocking others from participations, we allow our functionaries to make". According to Wikiversity policy, Custodians "can protect, delete and restore pages as well as block users from editing as prescribed by policy and community consensus." There are four policies that prescribe how the block tool can be used. As is being documented at the community review, a few rogue sysops have claimed the right to misuse the block tool. One of the proposals arising from the community review is for an official policy on blocking, a policy that will protect the Wikiversity community from further misuse of the block tool by sysops who ignore the existing policy. Such a policy on blocking was developed over the past four years. Darklama has attempted to hijack the proposed policy on blocking. Darklama, as one of the sysops who has misused the block tool, has a conflict of interest and should not be altering the proposed policy on blocking. The Wikiversity community needs to protect itself from further misuse of custodial tools by rogue sysops. --JWSchmidt 18:49, 5 August 2010 (UTC) - You've said similar to the above, that "There are four policies that prescribe how the block tool can be used", a few times recently. Could you just confirm which four policies you are referring to? Adambro 18:52, 5 August 2010 (UTC) - Adambro asked, "which four policies you are referring to?" The policy on custodianship, the policy on bots, the civility policy and the Research policy. --JWSchmidt 19:02, 5 August 2010 (UTC) - Slightly confused. You link to betawikiversity:Wikiversity:Review board/En but referred to it as "Research policy". Did you mean to link to betawikiversity:Wikiversity:Research guidelines/En? In which case, that doesn't seem to be a policy or mention blocking. I note Wikiversity:Bots only mention of blocking is "Bots running anonymously may be blocked". Would you not agree therefore that we don't really have much policy on how blocks should be used? Is it not the case that the only mention of blocking in our policies of any real substance is the small section of Wikiversity:Custodianship? Adambro 19:19, 5 August 2010 (UTC) - Adambro, an actual Custodian would have become familiar with Wikiversity policy while being mentored. I see no evidence that Adambro was mentored as a probationary custodian. Adambro was never listed at Wikiversity:Probationary custodians. During the community discussion of his candidacy for full custodianship, Adambro refused to answer important questions about his participation at Wikiversity, including his policy violations, which continue. Adambro says, "we don't really have much policy on how blocks should be used", but how blocks can be used is explicitly and clearly described in Wikiversity policy. The only problem is a few rogue sysops who ignore Wikiversity policy. The Wikiversity research policy exists on three related pages. --JWSchmidt 21:14, 5 August 2010 (UTC) - You again suggest that how blocks can be used is "explicitly and clearly described in Wikiversity policy" despite me showing that simply isn't the case. Of the four policies you've suggested define how blocks can be used, the Research guidelines on beta doesn't seem to even discuss blocks, the bots policy has one sentence and the civility policy also says little about how blocks should be used. All we have of any substance is the section of the custodianship policy but even that is only one paragraph which provides little guidance as to how blocks can and can't be used on Wikiversity. The first sentence simply says that custodians can block users, IP address or ranges. The second again just states a fact, that block can be temporary or permanent. The third section just describes how blocks are most commonly used, "in response to obvious and repeated vandalism". The fourth just requires a reason to be given in the block log. The fifth sentence again just states a fact about the blocking feature and the final sentence just provides a link to the proposed policy. - What I conclude from all of that is that it isn't accurate to say that "how blocks can be used is explicitly and clearly described in Wikiversity policy", which seems to be supported by your enthusiasm for Wikiversity:Blocking policy to be developed. I note that rather than responding to my points in my previous comment about this you just restated your concerns about how I was made a custodian. What you say may or may not be true but it doesn't help answer the question as to whether ow blocks can be used is actually "explicitly and clearly described in Wikiversity policy". Here's another opportunity for you. You can respond to the points I've raised and demonstrate that I am incorrect to conclude that Wikiversity doesn't have much which says how blocks should be used. Alternatively you could not bother and just restate your opinion that I've abused my custodian rights or whatever. Adambro 22:11, 5 August 2010 (UTC) - Adambro says, You again suggest that how blocks can be used is "explicitly and clearly described in Wikiversity policy" despite me showing that simply isn't the case, but I'm not "suggesting" anything. How blocks can be used is explicitly and clearly described in Wikiversity policy, as anyone can verify by reading the policies, starting with: "A Wikiversity custodian is an experienced and trusted user who can protect, delete and restore pages as well as block users from editing as prescribed by policy and community consensus." "the Research guidelines on beta doesn't seem to even discuss blocks" <-- The research policy says, "...Custodians take action to delete pages or block editors who refuse to follow the research guidelines". "responding to my points" <-- Adambro, I have responded, but you you don't seem to want to follow policy that clearly says how the block tool can be used at Wikiversity. The failure of a few people to follow existing Wikiversity policy is a matter under community review and the reason why Wikiversity needs an official policy on blocking that will protect the community from people who misuse the block tool. Adambro, if there are remaining "points" please make a numbered list so that we can discuss them. --JWSchmidt 06:27, 6 August 2010 (UTC) - Perhaps I should keep things simple and only ask one thing at a time. You've said 'The research policy says, "...Custodians take action to delete pages or block editors who refuse to follow the research guidelines"'. Where in betawikiversity:Wikiversity:Research guidelines/En does it say that? You seem to be referring to Wikiversity:Review board/En (or betawikiversity:Wikiversity:Review board/En) again. Is Wikiversity:Review board an official policy? Adambro 09:12, 6 August 2010 (UTC) - As I said before, the research policy exists on three related pages. --JWSchmidt 21:33, 6 August 2010 (UTC) Adambro has succinctly stated the common-law rule for administrators. In the absence of specific and clear policy to the contrary, this is the most that we can expect from any custodian, it is actually a lesser standard that the custodian would, for example, agree to only act within the clear confines of explicit policy. Wikiversity could establish this latter standard, but I'd highly recommend against it. "Police" are not "judges," they exercise executive power, which only allows temporary, ad-hoc "judgment," pending a deeper process where the community (or government, or university administrator, for, say, campus police, up to and including courts) reviews the actions. If Adambro regularly abuses discretion, that should be specifically addressed, not the principle of discretion, which is essential. Until this community gives much clearer guidance to Adambro, he cannot be deeply faulted for his actions as long as they are not clearly contrary to policy. If someone believes that a specific action is problematic, or a set of specific actions, and this cannot be resolved by direct discussion, that should be taken to a report on Wikiversity:Custodian feedback for the community to advise the custodian and the person(s) with a complaint. Instead, we have a habit of ineffective complaint through useless discussion, here and there, which just wastes everyone's time while accomplishing nothing. --Abd 18:30, 5 August 2010 (UTC) - "the common-law rule for administrators" <-- Abd, what does "the common-law rule for administrators" mean and how is it relevant to Wikiversity? "agree to only act within the clear confines of explicit policy" <-- Wikiversity policy says, Custodians "can protect, delete and restore pages as well as block users from editing as prescribed by policy and community consensus." That means, in particular, that any block not prescribed by policy must be made by community consensus. "campus police" <-- Abd, why are police relevant to Wikiversity? Custodians clean up vandalism. "the principle of discretion" <-- Abd, what "principle" are you talking about and how is it relevant to Wikiversity? "he cannot be deeply faulted for his actions" <-- Policy violations by sysops and misuse of IRC chat channel operator tools are among the problematic actions that are under community review. a report on "Wikiversity:Custodian feedback" <-- Such a "report" already exists. --JWSchmidt 19:23, 5 August 2010 (UTC) - what does "the common-law rule for administrators" mean and how is it relevant to Wikiversity? Good question, thanks. "Common law" refers to what is known and accepted by precedent and shared understanding, in the absence of specific law, statutory law in legal terms. Here it refers to what someone who has administrative experience, and users with general wiki experience, will expect as a norm, quite aside from explicit policies. Legally, policy would trump common law, except that wikis in general also follow some form or other of what is called on Wikipedia Ignore All Rules, which in public common law is called Public Policy. Means the same thing. So, unless you give custodians here guidance through establishing consensus on contrary policy, they will generally follow, providing they have sufficient experience to understand it, "common law." Inexperienced custodians may not, and even experienced custodians, I found, on Wikipedia, sometimes didn't have a clue about what makes wikis really work, the large body of shared experience. Violations of common law will often outrage people, who won't then have a policy to point to prohibiting the action. They just "know" it's wrong. How do they know that? Good policy and guidelines will stay close to common law, or they will confuse people. Deviations from common law should be very well justified by the specific conditions of a wiki. --Abd 20:11, 5 August 2010 (UTC) - Common law?? Custodians do a lot of things, some of which I don't like, big and small. Sometimes I say so and sometimes I don't. So what? It hardly makes any difference since the custodians can always get their way, since most contentious issues have, by defintion, supporters and detractors. If THAT is how wikiversity establishes its precedents, it gives far too much leeway for the custodians. They can easily get away with whatever they want. I have seen custodians allowing themselves more and more discretions through the years. And now I want to say enough is enough. A Wikiversity "Admin" who has a habit of equating what he thinks is the community consensus to the actual consensus can simply cite "my right and my discretion!" to stuff whatever he likes down the throat on the community. Wikiversity isn't supposed to be a place where a caste of admins have an advantage. You are supposed to use persuation, not your drawn tools. Hillgentleman | //\\ |Talk 00:03, 6 August 2010 (UTC) - So, unless you give custodians here guidance through establishing consensus on contrary policy, they will generally follow, providing they have sufficient experience to understand it, "common law." <-- Abd, I don't understand what you are trying to say. Destructive practices from other websites are not relevant to Wikiversity and its Mission. I agree that some sysops were never mentored and they seem not to understand/respect Wikiversity policy. Any sysop who does not understand and respect Wikiversity policy cannot be trusted and should not be a Custodian. Hillgentleman is correct. Custodians are empowered to do what is described in policy. A few sysops who ignore Wikiversity policy and try to give themselves additional powers are disrupting the Wikiversity community. --JWSchmidt 06:41, 6 August 2010 (UTC) - See also: "The voice of one crying in the wilderness" -- KYPark [T] 03:35, 7 August 2010 (UTC) EDP updates need approval Hello, I've proposed two EDP updates here. Please comment and approve. Geoff Plourde 21:49, 7 August 2010 (UTC) Friendly treat v hostile threat - Synopsis - A statistics, for your reference, shows the number of edits, made by editors and custodians, counting more than a hundred in the last 30 days as of 8 August 2010. The number includes a more or less portion of Talks, which in turn includes more or less portions of friendly treats and hostile threats, perhaps depending on the user's temperament, civility, or the respect and love of the community. - -- KYPark [T] 02:28, 9 August 2010 (UTC) Help:OTRS Hi could you proofread this text and optionally place comments, please.--Juan de Vojníkov 11:13, 8 August 2010 (UTC) - Looks good, are we going to add this to the no thanks template? Geoff Plourde 19:57, 8 August 2010 (UTC) Well, it should be there.--Juan de Vojníkov 21:26, 8 August 2010 (UTC) - It can go also to db-copyvio but the previous possition is much important.--Juan de Vojníkov 21:28, 9 August 2010 (UTC) Wikiversitans in London I would be interested to hear whether there are any other wikiversitans in London. I regularly go to the London wikipedia meetups, and perhaps we could meet up there too! Harrypotter 09:05, 10 August 2010 (UTC) Call for bureaucrats Who would you like to see as a bureaucrat?. See also: Current bureaucrats. Current custodians. -- Jtneill - Talk - c 02:11, 12 August 2010 (UTC) - is it just me, or is the custodian list really weird / wrong? Privatemusings 06:25, 12 August 2010 (UTC) - I am a technical guru, and have fixed it. (I think) :-) Privatemusings 06:27, 12 August 2010 (UTC) Community input needed A Wikiversity community member is being prevented from participating at Wikiversity. See Wikiversity:Request custodian action#Ethical Accountability.2C aka Thekohser.2C request unblock. --JWSchmidt 17:19, 12 August 2010 (UTC) - What is your definition of "community member"? I ask because Thekohser has 61 edits whereas KillerChihuahua and Salmon of Doubt had over 100, two people you have stated were not really part of the community and therefore had no right to express opinions about Moulton's ban. Ottava Rima (talk) 18:17, 12 August 2010 (UTC) - Ottava Rima, please provide a link to where I said they "had no right to express opinions about Moulton's ban". If I was forced to define "community member", my definition would involve the idea that a person is editing in support of the Wikiversity Mission. --JWSchmidt 18:24, 12 August 2010 (UTC) - I have yet to see any of that. And JWS, you challenged KillerChihuahua's statements and Salmon of Doubt's statements as being from outsiders quite often. Or are you going to say that they were part of our community and therefore their votes on Moulton's ban were correct? Ottava Rima (talk) 19:47, 12 August 2010 (UTC) - When Wikipedians make decisions in secret, off-wiki, decisions that disrupt the Wikiversity community and deflect Wikiversity from its mission, and when they edit at Wikiversity so as to impose those decisions on the Wikiversity community then it is fair to characterize them as acting as outsiders. "votes on Moulton's ban were correct?" <-- The decision to ban Moulton was made in secret, off-wiki. I don't know of any votes on Moulton's ban that were "correct", certainly there were none that were announced to the community as being a vote to community ban Moulton. There have been quite a few calls for bans at Wikiversity and none of them were justified, thus they were all serious violations of Wikiversity policy. There were some show trials that do not constitute a fair and just treatment of Moulton. "you challenged KillerChihuahua's statements and Salmon of Doubt's statements" <-- I have challenged some of their statements. Can you link to an edit by me where I challenged one of their statements on the basis of them being outsiders? --JWSchmidt 20:07, 12 August 2010 (UTC) - In the document that I keep citing that was used to verify why you needed to be desysopped, he section on IRC abuse included you being unkind to Salmon of Doubt and KC in IRC. Perhaps they did things in secret because you were using anything public to cause them discomfort? Ever think that perhaps you drove people to such? Ottava Rima (talk) 01:04, 13 August 2010 (UTC) - Ottava Rima, the show trial document that you keep linking could not be the basis for a policy-violating emergency desysop when no emergency existed. "unkind to Salmon of Doubt and KC in IRC" <-- Exactly what does "unkind" mean? When they made unsubstantiated claims about Moulton I asked for evidence to support those claims? I objected to a disruptive sockpuppet from Wikipedia coming to Wikiversity on a self-declared mission to get a Wikiversity community member banned? I objected to the unauthorized use of a bot at Wikiversity? If my questioning of their actions caused them "discomfort" the source of their discomfort was their own actions and their inability to explain how their actions supported the Mission of Wikiversity. "drove people to such" <-- A group of bullies decided to violate Wikipedia's BLP policy and use Wikipedia biographical articles in an inept and misguided effort to paint some scientists as being unscientific. When they were caught violating Wikipedia policy, they blocked Moulton from editing. Not satisfied with that, they resorted to vile online harassment which resulted in it being revealed that one of the policy violating Wikipedians was using corporate computing resources to violate Wikiversity policies and carry out online harassment. In an attempt to cover all that up, there was an orchestrated effort to ban Moulton from participation at Wikiversity. It is a truly sad saga, and a huge embarrassment for the Wikimedia Foundation, made even sadder by a few misguided Wikimedia Functionaries who continue to defend the policy-violating Wikipedians who harassed Moulton. --JWSchmidt 05:02, 13 August 2010 (UTC) Gad this spins out quickly. Looks to me like, far from preventing the member from editing, a possible or even probable result is that the impediments to editing will be lifted, in short order. I didn't notify the community here of that Request Custodian Action because, really, I was just looking for a single neutral custodian to look at this and make an ad hoc decision, trying to keep it simple, avoiding Community Review -- maybe -- unless someone wanted to push it. But there is now a poll going on there, it's true. And various fireworks. For example, SB_Johnny's back! As a custodian and bureaucrat again.... --Abd 01:32, 13 August 2010 (UTC) Importation proposal Hello, I find {{Tabbed portal}} a little bit obsolete comparatively to this template, which I've just adapted on the Wikiversité in French today. It would necessitate a Mediawiki:Common.js + Mediawiki:Common.css modification. JackPotte 04:59, 13 August 2010 (UTC) vote for my bot Redirect upload to Commons I would like to propose that Wikiversity will redirect file upload to Wikimedia Commons. There are some pros and contras of course, but I think pros have majority: - Advantages: - Wikimedia Commons is a server for all media used on WMF projects, than files can be used everywhere. There are many useful files on wv, which may be useful also on other projects, but there is now personnel to move them to Commons (Economicaly: why we should do the work of someone else?) - We will have less work, less controlling, less work with moving files to Commons (the true is no one actually do this). - en.wv doesn't have different license policies, so what can be here can be also at Commons. - en.wv doesn't have extended upload, it means file types which differ from file types of Commons. - Disadvantages: - there might be problems with categorizing of some works to Commons, but why not set there special categories such as "Wikiversity works". - Users may have problems moving to different environment. Finally we can prohibit the upload to en.wv at all and redirect everybody directly to Commons. Than in the future if license policy or file types will be extended, we may allow to upload just these specific file types, which doesn't fit to Commons.--Juan de Vojníkov 20:33, 4 August 2010 (UTC) - I agree that we should be working towards directing uploaders to Commons for all media files and licensees which are accepted there. I think we still need to allow local uploads for things like screenshots. I have been doing a bit of work on and off for a while now with the idea of at some point proposing change the upload form. Wikiversity:Upload is part of this work. It is the development of a upload page to direct uploaders to the most appropriate form. Adambro 20:40, 4 August 2010 (UTC) - Yes, as we just talked on IRC. Fair use works can go to Commons, so they probably should stay here. Than just the redirection.--Juan de Vojníkov 20:56, 4 August 2010 (UTC) - Thats what we are talking John, that Fair use works will stay here, but other will go to Commons. It can be done like Adrignola say or there could still exist local upload but kind of hidden. On the end, Fair use may look like open for contributers, but it is not open for other people, who would like to share. Is it possible to use fair use works in Australia, UK, SA or Germany?--Juan de Vojníkov 06:01, 5 August 2010 (UTC) - Wikibooks uses a special group called "uploaders" that administrators can add/remove to allow people to upload fair use files locally. The "upload file" link has been changed to direct to Commons, but people can still visit Special:Upload if they are a member of the uploaders group (admins don't have to add themselves to that group). Keep in mind that fair use files are not permitted at Commons. The system used at Wikibooks directs uploads to Commons while not disabling uploads entirely. Finally, if you look at the fine-grained permissions at Special:ListGroupRights, you'll see that even under that system, people who aren't members of the uploaders group can still overwrite files they've uploaded. It will be a long process to push all the existing files to Commons, but that system takes the burden of file upload license checking off administrators and keeps files from being limited to Wikiversity's use only (which makes cross-project cooperation difficult). Adrignola 00:44, 5 August 2010 (UTC) - Yes, the other option is not to allow Fair use, because our mission is not to collect the higher number of files at all costs but offer the free content. So how many English speaking countries recognize Fair use? All?--Juan de Vojníkov 06:01, 5 August 2010 (UTC) - From Fair dealing, it would appear that there are at least seven: Australia, Canada, New Zealand, Singapore, South Africa, United Kingdom, and United States. I also replied to your other question on my talk page. Adrignola 17:15, 5 August 2010 (UTC) I want to inject support for JWSchmidt here; I have had recent dealings with the WP where I extended myself as a "fisherman" for images. I cannot possibly tell you how much grief I went through. WP seems to be the opposite of free when it comes to sharing information; if you read all the copyright documentation there is now way to describe WP except as a supporter of copyright law. They seem to create their own, needlessly. WP is so opposed to fair use that pictures of buildings are illegal. Fair use is the most important tool for education there is, because all information is built on existing information--all work is derivative of other work! Here is writing on the topic from the WP that attempts to loosen things there. As an educational entity and a genuine wiki, we need to head the opposite direction with respect to uploads. Also, see WP as an educational entity.--John Bessatalk 13:01, 14 August 2010 (UTC) THE IMPORTANCE OF MATHEMATICS TO EDUCATION AND GOVERNMENT --41.138.169.70 12:58, 14 August 2010 (UTC) - Yeah?--Juan de Vojníkov 09:36, 15 August 2010 (UTC) Annotated Bibliography I am reading IF Stone's Trial of Socrates to understand the open-education environment of Athens (look in the upper left corner), and the nature of the Socratic circle especially with respect to Aristotle, inventor of the Scientific Method. I will create an annotated bibliography that includes my reactions creating what will be the first of what I call "mediated citations." I hope to attract opinions' of others, within the rules of MCs, that will very likely say I am all wrong! (pervious text)--John Bessatalk 15:35, 7 August 2010 (UTC) - I read, or re-read, the descriptive sections about Athens, the Socratics, and Greece that are important to me (I find trials boring) and annotated them on post-its. As you may have seen below, I am starting my counseling masters, so I have to ration my time between requirements and interest. - As an aside, the mediation concept is becoming very useful; I developed the term "mediated glandular responses" to describe, for instance, the feeling a gambler is looking for when winning.--John Bessatalk 12:53, 13 August 2010 (UTC) - Getting closer. I have these two examples of conversations that I hope will be typical of discussions: [2], [3]. Since I will have to also start reading psych texts now, I will want to create the same types of bibliographies for the texts. I think that writing per text is somehow more "open," as the text book industry is notorious for "price fixing."--John Bessatalk 14:19, 15 August 2010 (UTC) I have source material for a new course. What do I do with it? I have translated some of Maimonides' work and I think it would make great source material for a survey course in Judaica. Is it something the Wikiversity could use? Is this the right place for it? --Rebele 14:19, 16 August 2010 (UTC) - Is Miamonides' original untranslated work in the public domain or released under the CC-BY-SA license? -- darklama 14:27, 16 August 2010 (UTC) - Maimonides died over 800 years ago. His works are PD. en.wikisource.org tends to have PD collections of works. If they don't want a translation, then you can post here and we can figure out how to accomodate you. Ottava Rima (talk) 15:15, 16 August 2010 (UTC) sound shapes - phonograms --155.150.223.150 14:47, 17 August 2010 (UTC)Have a list US language sounds and their graphic representation like ough_ow - oo - uf - off - all - bough dough through rough caugh bought. A set of cards with the symbol on one side and the sound words on the other side. Munson short hand is similar but with shapes. There are 26 letters and 76 phonograms, very confusing but very flexiable, US language is not like NORWAY. NORWAY HAS GOVERNMENT CONTROL OF SPELLING.14:47, 17 August 2010 (UTC)14:47, 17 August 2010 (UTC)~~ feedback if you're interested :-) any and all feedback on this, a recent post of mine, is most welcome. cheers, Privatemusings 04:13, 19 August 2010 (UTC):11, 19 August 2010 (UTC) --Justin carlo masangya 12:21, 20 August 2010 (UTC) White blood cells (WBCs), or leukocytes (also spelled "leucocytes"), are cells of the immune system involved in --Justin carlo masangya 12:45, 20 August 2010 (UTC)justin carlo masangya meaning deforestation-is the clearance of forest by logging[popularity known as slash and burn.] effects the effects are: 1.erosion of soil 2.disruption of the water cycle 3.loss of biodiversity 4.flooding 5.drought 6.climate change Vector is coming! Guys, Wikimedia Usability has set August 25th as the date when Vector will become the default skin for all other projects, including Wikiversity. Geoff Plourde 07:05, 8 August 2010 (UTC) - Thanks Geoff. Ill be prepared that day, to switch back everywhere.--Juan de Vojníkov 08:31, 8 August 2010 (UTC) - Just a note -- it's the target date, and may move. Just clarifying. Historybuff 14:40, 9 August 2010 (UTC) "Wikimedia Studies": perhaps we should have a policy or CR? Original research projects related to Wikipedia, the Wikimedia Foundation, and so on have proven to be rather problematic for Wikiversity in the past. Should we simply set these studies outside of our scope? There are problems with doing so, of course, since this would in fact be censorship. However, a blanket ban on the subject would be easier to digest than a ban on only those projects that are critical, or bans that only apply to certain people. Thoughts? Comments? Angry rants at the very thought? --SB_Johnny talk 16:33, 16 August 2010 (UTC) - Jimbo himself when asked said research about Wikimedia projects are fine. I see no need to put a blanket ban on the subject. I'm not aware of any current problems, are you? -- darklama 16:47, 16 August 2010 (UTC) - How have research projects related to Wikipedia and the Wikimedia Foundation been problematic? What has been problematic are people who disrupt such projects. Rather than ban useful research, I favor putting in place at Wikiversity some protections for scholarly research projects and researchers. --JWSchmidt 16:54, 16 August 2010 (UTC) - I agree, John: the projects weren't the issue, but the reaction to them did a lot of damage. The point is that any such effort is going to attract the same reaction. Projects that are critical will attract the attention of those who want to defend Wikimedia from criticism, and likewise projects that are not critical will attract the attention of those who feel Wikimedia deserves some criticism. - I don't, of course, think this approach would in any way be good for the sort of academic freedom that WV should ideally stand for and encourage. It might be the only way to survive. --SB_Johnny talk 17:14, 16 August 2010 (UTC) - Moulton commented here, it was properly reverted as being by a blocked user, and I restored it with redaction and a note. This was removed. To respond, I agree with Moulton that the study of wiki ethics could do much to avoid future problems, by delineating existing problems, which may lead to suggestions for improvement. We need be particularly careful, here, to avoid the assignment of blame. In my view, most problems on the wikis are due to defective structure; this is at variance with what seems to be a popular, easily-assumed view that ascribes problems to problem users. --Abd 19:18, 19 August 2010 (UTC) - I think Wikiversity needs to recognize that any research can attract all kinds of people, and some may seek to have their views dominate discussion and research. I think Wikiversity needs people that know how to quickly bring about a cease fire when strong views clash. If Wikiversity needs a policy it might be that dominating discussion and research and seeking "victory" harms Wikiversity, and those are acceptable reasons to block when people don't stop after being asked to cease fire and come to a truce. -- darklama 17:38, 16 August 2010 (UTC) - It might seem contradictory, but the solution may be (at first) more blocks rather than fewer. And even more than more blocks, more warnings for incivility or revert warring or other disruption, followed by short blocks upon disregard. A short block is like a sergeant-at-arms at a meeting asking a disruptive member to leave a meeting. It is not a ban, and the member can come back when there is no more immediate risk of disruption. It is purely procedural, and it's understood that some people can be hot-headed and that the community can and should restrain this. But members aren't punished for being hot-headed, rather the disruption is directly addressed. Temporary exclusion is not punishment and should never be presented as such. A failure to understand this is behind a great deal of tenacious disruption on Wikipedia and here. Basically, if anyone thinks a person is causing disruption, they can and should warn the person. If the person disregards this, any custodian can look and short-block, if warranted. Ideally, the one warning should not be from someone involved in a dispute, and the reason is that people will tend to discount warnings from others who are involved,nor should the custodian be involved, except in an emergency as I've elsewhere described. But it's still okay for someone involved to warn; a reviewing custodian can decide whether or not to proceed with a block or confirm the warning -- and then block for continued disregard beyond that. The point is to gain voluntary compliance, and not to allow the user to believe that they are being excluded. - Generally, a user who has violated agreements many times should be unblocked promptly upon assurances that the user will not continue the blockworthy behavior. The blocking custodian should always consider this, and can even set conditions for prompt unblock, but should not coerce; humiliating conditions and unclear conditions should be avoided, they cause trouble. If the blocking custodian does not wish to unblock, that custodian should never decline an unblock request. If there is no other custodian available, the blocking custodian should simply leave it in place. Blocks should always be applied with utmost civility and with support for acceptable behavior. Wikipedia deprecated "cool-down blocks." I suppose the reason is that, as a block reason, it represents mind-reading and, indeed, that could be offensive. But, in fact, a properly applied block will accomplish cool-down. "Okay, I was out of line there, thanks for considering unblocking me, I'll try not to repeat that." Most adults are capable of that kind of admission, and they will do so sincerely. It's not even any kind of moral offense to be "out of line." We get angry for good reasons, often. But we, if we are sane, also understand that if we start shouting at a judge in a court, for example, we'll be restrained. Only the truly crazy will take this as a personal insult. - A better understanding of block policy would go a long way. We need better documentation, to guide custodians, and also to assure users that they will get fair treatment, if the policy is followed. We should never allow the appearance to arise that a single custodian is "in charge" of an editor's behavior, unless the editor has accepted that arrangement. Even my young children know, instinctively, to resist this kind of control! I'm in charge of what I will permit and what I will prevent, as the parent, but they are always in charge of their own behavior, and if I don't respect that, I'm failing as a parent. - There are deeper solutions that are possible, using bots, allowing for the flexibility of temporary narrow or broad "topic bans" that would be bot-enforced (by automatic reversion), but that's down the road. For now, seeking and encouraging voluntary compliance, with stronger response as needed, with judicious use of the block tool, should be adequate. --Abd 20:24, 16 August 2010 (UTC) - A ban is not needed. What is needed are guidelines accepted by consensus that handle how to avoid unnecessary disruption when individual users or the WMF are criticized or appear to be so. It is easy for such study projects to become wheels on which to grind axes. Now, need these guidelines be developed in advance? No, except for one, which we should write. When WV content becomes controversial because of "cross-wiki issues" -- or even local issues -- we need to have procedures in place to address this and prevent disruption. In the current project started by Privatemusings, I called for work to "come to a screeching halt" when objections appeared, until the objections themselves are addressed and consensus found. Not "cancelled." Not "deleted," except that ordinary content deletion, still in history, should be fine if needed temporarily, while it's under discussion. We should not allow any user to barge ahead with insisting on controversial content. If there is "outing" perhaps revision deletion may be needed, and even short blocks if revision deletion is needed. Otherwise, we need what Moulton calls a social contract, an agreement that provides for means to resolve disputes, and the default situation is blank, i.e., no content. When someone objects to content that has not been established by consensus, it should be blanked or deleted, by default. Then it can be discussed, whether or not to allow it, with the community assisting to keep the discussions civil and to the point. The legitimate needs of "outsiders" must be respected, but also the academic freedom of this community. We need to do both. And it takes time. - The key is to establish process that seeks consensus, not "victory" for one side or another. --Abd 17:09, 16 August 2010 (UTC) - What about a temporary ban. To wait until Wikiversity has grown in learning communities on other issues than Wikimedia? If there is a very large community of users, mostly occupied with topics that have nothing to do with studying Wikimedia, than fights on these kind projects for studying Wikimedia will have far less influence on the whole Wikiversity community.Daanschr 17:27, 16 August 2010 (UTC) - We have a current project which is not causing any disruption. And more are being opened, with no sign of disruption. Why fix it if it isn't broken? See Response_testing/WMF_Projects. Note that if someone objects to some work there, there will indeed be a kind of "temporary ban." I.e., an informal ban will arise upon complaint, enforced by users who have both academic freedom and avoidance of unnecessary disruption as goals, and who will seek consensus before barging ahead. It's really like just about any wiki decision. --Abd 20:30, 16 August 2010 (UTC) < I think CR is 'community review', right? - I suppose that's actually what's happening here - I hope to be able to carry on real slow with the Response testing project (which, following a suggestion from sj, has a 'wmf' section) - I don't think it's really creating any trouble at the mo - and I have a feeling that the root causes of the broo ha ha's are both interesting and important as subjects to discuss, learn about, analyse etc. There's a hint of an intimation in sbj's post that perhaps the closure of wv remains on the table somewhere - personally I'd raise an eyebrow were wmf to shut the project down on the basis that it became critical - but sure, things like sue's blog (she's the executive director of the wmf - so the boss on the staff side) could be read as warnings to pull some heads in. Is wikiversity really seen as harbouring people who" (Sue's paraphrase of Gary Marx's description of how people attack social / political movements here) The idea that the above could in any way be aimed at, well, me I suppose, I find both amusing and troubling - were it shown to be the case that those in ultimate control of this project are forming that view, I think shutting down wv would probably be a good thing - there probably wouldn't be much point in it, I guess? Privatemusings 00:35, 17 August 2010 (UTC - PM, that comment of Sue Gardner was not at all aimed at you. Sue was writing much more generally. However, a shallow understanding of Marx could lead her to think of Wikipedia vs. The Enemies, which would be a serious mistake. --Abd 15:24, 17 August 2010 (UTC) - w:Wikipedia:Wikipedia_Signpost/2010-08-16/Spam_attacks describes a recent "research" aka vandalism project on Wikipedia. Any research which harms or otherwise disrupts other WMF projects shouldn't be permitted here. More generally, we shouldn't have to ban all research of other WMF projects, we just shouldn't pretend that there aren't certain limitations and issues to consider due to Wikiversity being a WMF project. As far as I can tell, the WMF research projects here that have been controversial, such as trying to research into past conflicts on Wikipedia, have failed to recognise some of these issues. Adambro 15:58, 17 August 2010 (UTC) - There is a possible misunderstanding here, based on there being two kinds of "research." There is experimental research, which is something done by someone who undertakes "response testing," for example, and there is research as in the study of evidence already available. The research Adambro mentions (thanks for the link, by the way, fascinating story) was, however, not organized on-wiki, nor was prior response testing research, to my knowledge. In other words, these were not "research projects here." (But they may have been headed in that direction, hence were properly interrupted, given the lack of guidelines and supervision.) - I'll note that the researcher involved in the Signpost report was unblocked per an agreement with ArbComm that did not prohibit further research; rather, it contained it and set up private review processes to precede future projects. My own opinion is that research like that which was done is actually very important, even though it involved "vandalizing" Wikipedia for a very short time. (With fake spam designed to test real user response.) The intention underlying the research was to reduce vandalism and spam and to reduce its persistence. Actions should be judged by intention, as well as by immediate effect, sometimes a negative immediate effect can have a benefit, long-term, that far outweighs the immediate effect. There are ways to address the problem of consent to participation in research involving human beings, and I hope that the WMF obtains some real expert advice in this area. - I do agree with Adambro's conclusion, however. First of all, experimental research involving human beings requires fairly complex ethical guidelines, just as response testing in business is best done under ethical restraints. Study of particular past conflicts, however, is normally research of the second kind, without the same ethical considerations. However, because such research can create what are effectively partial biographies of human beings, there are still serious requirements to respect, and these are guidelines that we need to develop. These should be developed and applied wherever such study takes place, whether here at a WMF project, or on, say, the alternative netknowledge wiki, independently controlled. If undue interference develops here, I assume that the project would move elsewhere. But I don't expect that outcome, except for minor subprojects, perhaps. I expect cooperation between the "academic institutions," which is the norm. --Abd 17:21, 17 August 2010 (UTC) - @ The signpost Adam pointed reminds us of the reality, loud and clear: You can get away with doing the same harmful things or worse more easily if you are powerful enough, like being a developer or a researcher working on a project in a computer science department in a major university. Guys, if you want to make a splash, make a big one, and don't talk about your plan if it isn't mature. Hillgentleman | //\\ |Talk 22:58, 17 August 2010 (UTC) The Arbitration Committee has reviewed your block and the information you have submitted privately, and is prepared to unblock you conditionally. - I think the moral of the story is if you've contributed things of great worth in the past, you are more likely to be forgiven and allowed to get away with doing something harmful. I think waiting until plans are mature to discuss them discourages early collaboration. I think people just need to be absolutely clear that plans aren't final yet and need to indicate when a plan is final when discussing plans. I think people should avoid acting on plans before plans are clearly finalized though. -- darklama 23:26, 17 August 2010 (UTC) - I think HG's point is valid, though. The research project in question was not planned with the approval of ArbComm or the Foundation. It was outside. It did minor harm, short-term. The breaching experiment with a set of unwatched BLPs, done by Greg, was by the cooperation of a WP administrator and the blocked Greg Kohs. The experiment did much less harm, probably, than the university experiment. Yet the admin was desysopped, and that experiment might have been a factor in the global lock, as I recall (what was the timing? I forget). The conclusion and resolution of ArbComm in the university case was one that I agree with. But there is, in fact, a double standard being applied here, and it probably has to do with Greg being a prominent critic. And that sucks, in short. Nevertheless, this is really moot here. As far as I can see we are not going to allow Wikiversity to be a base for organizing "breaching experiments." Period. We might study those, however, sometimes, afterwards. Carefully. --Abd 23:45, 17 August 2010 (UTC) - For simplicity what I'm saying is: Do get permission before acting. Don't wait until a proposal is mature to discuss a proposed experiment. To use the specific experiment being discussed as an example, the researcher should of been able to use Wikiversity to develop there plan and to discuss the plan with other people, and than once people at Wikiversity felt the proposal was mature, the researcher should of sought permission from the Wikimedia Foundation, Wikipedia's ArbCom, or Wikimedia Research Committee, pointed to the development here, and answered any questions that WMF, ArbCom, or the Research Committee had, and ensured any actions or experiment carried out was within the limits that WMF, ArbCom, or the Research Committee permitted or not done it at all if they opposed the proposed research experiment entirely. -- darklama 14:25, 19 August 2010 (UTC) - "Experiment" is a loaded term here. I agree with Darklama, generally, but I wrote the response before carefully reading all of it! So this is an independent take: We can say that experiments involving testing the responses of human beings raise ethical questions that must be addressed before proceeding with actual experiment. If someone proposes such an experiment here, discussion is necessary before action, and, in fact, that discussion should eventually be brought to the attention of those that might be affected. If, for example, some response testing on en.WP were proposed, users here should be discouraged from acting to run the experiment before there is consensus for it; users who disregard that might be sanctioned, if there was activity here that was improper (such as active and specific planning of an experiment, with operational details, etc.) If an experiment is to be run on another wiki, such as en.WP, the proposal should be cleared, first, with either the community of the wiki involved, or, on WP, if confidentiality and some level of secrecy were required, with ArbComm there, or with some WMF body. WP ArbComm has an established procedure, it looks like, for such testing. - However, we do not have to wait for wide consensus to develop resources studying wiki history. There are still issues, but anticipating them all could be difficult, so normal wiki process suggests proceeding with caution, being sensitive to criticism and warnings. Normal process (such as deletion of contributions considered too hot to stand at the surface), avoidance of revert warring, and ordinary discussion should handle this well enough. An Ethics Committee might be formed to consider ethical issues, with, possibly, some special process, but we'll cross that bridge when we come to it. --Abd 17:44, 19 August 2010 (UTC) - Templates could be used to indicate the progress of a research project like: This proposed research project or experiment may still be in development, under discussion, or in the process of gathering approval. You may be sanctioned if you follow suggestions in this draft proposal without approval. This research project or experiment is mature and has gained approval. Please check the edit history to ensure no significant divergences from the approved proposal has happened before following suggestions to avoid any sanctions. This research project or experiment sought approval and was rejected. This resource is kept for historical interest and for people to learn what not to do. - -- darklama 13:35, 20 August 2010 (UTC) - Yes. This would be clearly appropriate for proposals involving response testing planned as part of the study. A somewhat different template would be used for simple documentation research, pointing to guidelines for such. (Suppose I put up a link to all contributions of Editor X. No problem. Suppose I put up a link to selected contributions in a way that make the editor look like a complete bozo. Problem. Maybe! A proper research project would be designed to avoid cherry-picking, would use stated, objective criteria, if selection is to be done.) A goal of stating or inferring blame should be carefully avoided, even the appearance of such a goal should be avoided. It's impossible to anonymize the necessary evidence, on-wiki, but certain pieces of a project could be developed off-wiki, or on-wiki under certain conditions, that would, top-level, draw anonymized conclusions, with raw evidence being buried, not available in current pages, so not searchable under the person's name. For the studies, the identity of the person is, ultimately, not relevant. It's tricky, and no specific rule is likely to apply well under all conditions, sometimes identity is important, which is why the Privacy Policy allows breaking privacy rules when it's needed. And when one does that, a review process is needed, which is, I think, OTRS, though there is also Ombudsman if checkuser is involved. --Abd 14:06, 20 August 2010 (UTC) Wikiversity:Embassy Pardon my French, but It seems to be requiered. JackPotte 02:21, 19 August 2010 (UTC) - Not quite. Nobody was watching or using Wikibooks:English Embassy and there were no complaints when I delinked it from the above page and removed it. Adrignola 12:35, 19 August 2010 (UTC) Subtitled movies Hello, the en.w, fr.w, fr.v et fr.b have installed this gadget, it can be useful as we can use a video in any language by pasting some customized subtitles above, with eventually some hyperlinks. JackPotte 13:48, 21 August 2010 (UTC) - I agree, this would be very useful if someone wants to use videos on their educative posts. Some admin please import it. Diego Grez 17:40, 23 August 2010 (UTC) - Bugzilla advises to put it directly in Mediawiki:Common.js in order to be able to give them enough feedback about the tool. JackPotte 19:48, 24 August 2010 (UTC) --91.65.132.76 12:23, 21 August 2010 (UTC) As a studdent, I find the navigation trough Wikiversity really confused. For example, I was interested in learning Biology, but first I reached the primary school and then a link to "university" level, and from there just a boring list in alphabetic order... I understand the lak of content, but why do I get different places form the same link and viceversa? The spanish wikiversity is oftenly easier to navigate, may be you could get some ideas from there, or what is better, to colaborate together! :) I agree, wikiversity as a whole is kind of all over the place. Perhaps it might be useful to develop some pages for those who would like to make things easier. That is what led me to this page, but there appears to be a series of disparate discussions here!Harrypotter 19:41, 21 August 2010 (UTC) - There also needs to be worthwhile contents. If there are to be links, the best thing would be to link to finished products, or to mention from the start where finished products are not to be found. - The disparate discussions deal with the issue whether conflicts on Wikipedia should be studied here. And these conflicts are not only studied... The rest of Wikiversity didn't cause much trouble as far as i know.--Daanschr 20:53, 21 August 2010 (UTC) I think there is a lot of useful and worthwhile contents, but content is too deeply nested in navigation to find. I think most people give up after 1 or 2 pages, and with the current navigation you could end up having to goto 6 or more pages before finding the course contents: Math portal > Math school > Math department > Math topic > Math category > Math list > Calculus topic > Calculus category > Calculus list > Introduction to calculus. when it should be just 1 or 2 pages away: Math > Calculus course > Introduction to calculus -- darklama 22:28, 21 August 2010 (UTC) - An inventory could be made of all worthwhile contents, and than a good navigation system to get these contents. I got little to do now, so i can work on it, but i don't want to do everything on my own.Daanschr 08:16, 22 August 2010 (UTC) - I think everyone will consider their own work worthwhile. I think navigation needs to start small and grow as more contents becomes available. I think part of the problem is people at the beginning were too ambitious and didn't put enough thought into how to organize contents so it could be easily navigated. I think if we are to avoid repeating that, we need to have a organized plan that most people can agree with, even if only a few people are willing to volunteer their time to implement it. In the past I suggested that the portal, school, and topic namespaces be replaced with a single course namespace. We could always attempt to implement a course pseudo-namespace with the intent to replace those 3 namespaces once all/most works have been organized into it. What do you think? -- darklama 12:35, 22 August 2010 (UTC) - Well i have come across a vast undergrowth of incomplete pages which have been abandoned often a year or two ago. In working on the British Empire, I parked this as an archive, and proceeded developing a much smaller element of this enormous topic (those interested can see Tudor Origins of the British Empire). In a rather chaotic fashion I have stumbled across wonderful navigation aids, quizzes and other useful tools, but largely through trial and error. What I feel would be useful is: - Student Navigation, which would lead potential students to peer-reviewed educational resources - perhaps some thing like wikipedia Good and Featured articles - Teacher Navigation, which would include partially completed material and various resources like quizzes, navigational bars etc. as well as active working groups. It seems to me that there was a lot of enthusiasm a year or two ago, but that activity has declined and that now quite a few people check the site, but after a little while give up. Before being active here, I was active on wikieducator, which has quite different problems. I am currently trying to find a practical way of using both sites to get the benefit of each.Harrypotter 14:11, 22 August 2010 (UTC) - We could make a split in the categorization between featured contents and all contents. And put a warning on the last categorization, that lots of the contents aren't finished learning materials. Determining the difference between featured and non-featured contents will require a lot of politics. How can that be managed in a decent manner? - It would be best to make a categorization of active courses and learning communities, to ensure that new users don't enter a desert, but can become part of a community. One way to stimulate this would be to organize fairs. In the Middle Ages, merchants organized fairs to ensure that they could buy and sell products and wouldn't be alone on a market. If we set time periods, like a week, in which certain topics could be studied as a group activity than that could stimulate new users to stay. I fear that lots of people will turn away when they can choose between 30 featured contents, all developed by a single user of Wikiversity.Daanschr 09:00, 23 August 2010 (UTC) - Harrypotter 09:12, 23 August 2010 (UTC) - By accident I came across this page: Category:Featured resources! Harrypotter 09:19, 23 August 2010 (UTC) - I think Cormac made it. But, i wasn't part of the people who made this category. My main focus here on Wikiversity was to come to some kind of learning communities. The reading groups have been operational, one was aired for a couple of months with weekly activities. Two others had problems with upstarts. - The idea of the fair can be used in several ways. Suppose there will be a fair on history in the first week of February 2011. Than a couple of people can prepare something for this week with the aim to attract more users for Wikiversity who are interested in history. There can be discussions on the chat, there can be the development of a game on history, or we can discuss certain writers or sources. The same can be done with topics like climate change or Einstein's Theory of relativity, or major political philosophers. - I have tried to organize congresses, focused on editing a group of articles on Wikipedia. But, that looks a lot like WikiProjects. It could also be simply about discussing a subject and collectively writing an essay on it. A date can be set when the congress will start. In preparation of this congress, literature can be discussed and read, people can be invited. An organization can be set up in order to manage the congress in such a way that the participants felt comfortable with it. - A fair is more broad and less determined from a congress. In the Middle Ages fairs were attended by merchants and some customers, who traded their products with each other in order to sell them in different areas. On Wikiversity a fair could be a gathering of people, all with different ideas, who want to find some like-minded people in order to get their ideas worked out, with whatever they want to do regarding learning and Wikiversity. I guess it is best not to use the word fair, because it is distracting. What can better be done is to just put a meeting on an agenda to of a field of study (like history, or physics), to talk about this on the Wikiversity chat. But, maybe there are too little people at the moment to man these kind of meetings. - We are both interested in history, i graduated at the university in history. So, we could try this out with history. - The problem with the under construction tag is that most contents on Wikiversity are under construction. If you add the tag, it should also be removed when nothing is done with the article anymore. On Wikipedia there was a campaign to remove tags from articles, otherwise half of the encyclopedia appeared to be under construction. But, in some cases it might be a good idea to use those tags. Suppose we have a busy well-organized community, that cleans up all the mess it leaves behind online, than an under construction tag would work very well.Daanschr 14:55, 23 August 2010 (UTC) - What you say is very interesting and brings to mind the Champagne fairs, which I always link with the development of the narrative form, i.e. through Chrétien de Troyes. Perhaps we should use the term Fair and encourage existing participants to prepare material to showcase during the period, have a number of people who will agree to respond to queries during the week and try to raise the profile of wikiversity outside the existing community, particularly as regards other wikimedia projects and wikieducator - who I feel we could work more with. How does that sound?Harrypotter 18:58, 23 August 2010 (UTC) - I derived the idea from the Champagne fairs. Never heard of Chrétien though. - One way to organize such a fair would be to make an article on the subject and to have people join by stating their own learning projects on it and tell what they are doing now with them. Added to it could be chat sessions, to make the communication quicker. - It won't help with the problem of difficult navigation on wikiversity ;-).Daanschr 20:59, 23 August 2010 (UTC) - I think we should go ahead and see what comes of it. Actually I think we could do an article on the Champagne Fairs. Chrétien was a medieval writer of romances, and the fairs became an important culture focus as well as just trade. I think the navigation issue is vast and somewhat daunting, and the solutions will come about through creating islands of collaboration which can then be linked up. If the Signpost idea comes off as well, and we work together, this could help.Harrypotter 09:41, 24 August 2010 (UTC) - So, what kind of fair do you want to organize? I know it is my own idea, but i still doubt the usefulness and my own satisfaction of it. I would also be happy to do something with history!Daanschr 17:18, 24 August 2010 (UTC) - Well let's start with history. I've been mucking about a bit with Portal:Social Sciences, Portal:History and School:History. Actually it's a bit like stepping on board the Marie Celeste - I keep on expecting to find someone's half eaten sandwich - stale after having been left for 18 months - whenever I follow a link. Perhaps if we do some work together there, we can see how the idea of a fair works out a little bit later?Harrypotter 00:09, 25 August 2010 (UTC) - Okay, i will continue the discussion on the School:History article and talk page.Daanschr 08:58, 25 August 2010 (UTC) Quiz options Making a quiz with single submit options --Rahul08 09:56, 27 August 2010 (UTC) What I am talking about is the ability to submit the answer to one question at a time.For e.g., in english basics 101,[basics 101 numbers] numbers section, you can see a series of questions where the user has to enter an answer.Now if he clicks the submit button for the first question, the present format tends to check all the questions, including the ones he hasn't answered.I was wondering if there was something which would allow only one question to be corrected each time rather than the whole quiz. - There is a (rather experimental) way to do it with recursive conversions. The idea is to write a substituted template and you put the answers in as parameters; if your answer is right then the template will give you the next question; if you answer is wrong then you will be told so and the question repeated. Hillgentleman | //\\ |Talk 14:36, 27 August 2010 (UTC) Countervandalism channel IRC Hello dear Wikiversitarians, It's been a while since the last check, but I noticed the #cvn-wv-en on irc.freenode.net channel is abandoned. The recentchanges-bot from the m:CVN (named MartinBot) has been offline for about a year, but not. The advantage of a CVN-channel over the regular feed from irc.wikimedia.org is that it filters down to suspicious edits (anonymous edits and otherwise notable edits for vandal fighters) and filters these based on a globally shared database of blacklisted and whitelisted usernames and other patterns (these are shared amongst all cvn-channels.). But if the amount of edits isn't up to a point where one can't follow the edits via Special:RecentChanges such a channel may be overkill. Right now when I look at Special:RecentChanges I can look back 2-3 days in the last 100 edits so if there are enough people watching that one could check 'everything' without having to filter them down to a lower number. Either way, setting up the channel is no hassle at all. So state below what you think about it and whether or not you would like such a channel again. Krinkle 14:21, 27 August 2010 (UTC) - Wikiversity is very much more of an experiment on using wiki for education than a development reference resource; I doubt the counter-vandalism heuristics of other sites would work here. There are many new/ip users which are students and they may not be very familiar with wiki editing. Experimentations are actually encouraged. Hillgentleman | //\\ |Talk 14:28, 27 August 2010 (UTC) Wikiversity Signpost I'm trying to start up a Wikiversity equivalent of the Wikipedia Signpost. Does anyone have any article ideas? Would you liekt o write an article yourself? Thanks, Rock drum (talk • contribs) 19:54, 23 August 2010 (UTC) - If me and Harrypotter will succeed, than we might be writing some articles. At the moment though, i don't have any material.Daanschr 21:02, 23 August 2010 (UTC) - Hey, great idea! Try checking in with some of the profs and active editors. User:MrABlair23, for instance, who just created and then blanked a fascinating course. Or Prof. Loc Vu-Quoc who has all of his students post all of their assignments here on WV. –SJ+> 02:20, 24 August 2010 (UTC) - Well, maybe we should post the fair idea there. Do you know what it's going to be called?Harrypotter 09:36, 24 August 2010 (UTC) - SJ, the reason I blanked the course is that I had a bit of a problem going on and that was the only possible solution. It is no big deal and plus, I have thought of quite a better way to deliver that fascinating course. So, if anyone is interested in doing it, please sign up to it now! --MrABlair23 14:32, 24 August 2010 (UTC) - Thanks, MrABlair -- I didn't mean to focus on the blanking, just that you're working on a cool course and making lots of updates to it. A story about the course itself and your ideas for running it would be quite interesting. As an aside, I was just at the NYC Wikiconference this past weekend, and there were dozens of WP editors there interested in Wikiversity once they heard about it. Most of them had only edited Wikipedia, and some didn't even know WV existed... so a signpost, or a regular story in the en:wp signpost, and other ways to share what's happening here, will make a real difference. –SJ+> 06:42, 31 August 2010 (UTC) - This edit, a response to MrABlair23, was made to this section by a blocked editor, and was reverted as such by me, and listed for review (with other edits made the same day), with a comment that it looked "good." Taking responsibility for the content of this edit, today I restored it. However, my restoration was reverted, so I'm making this comment for transparency. --Abd 19:48, 28 August 2010 (UTC) New encyclopedism Just in case you may be interested in any of: - w: Wikipedia:Categories for discussion/Log/2010 August 23#Category:New encyclopedism - w: Talk:New encyclopedism - User:KYPark/Encyclopaedism/Timeline (Maybe more proper here than in Wikipedia.) BTW, please anyone advise me why v: User:KYPark/Encyclopaedism/Timeline doesn't work here, which works at w: Talk:New encyclopedism. -- KYPark [T] 09:44, 30 August 2010 (UTC) - (edit conflict wtih below) Looks like one of the templates you created here, as you had created at Wikipedia, had an error in it (that wasn't on Wikipedia). I think I fixed it. The wikipedia page isn't the one you cited. Rather, it's w:User:KYPark/Encyclopaedism/Timeline. --Abd 15:54, 30 August 2010 (UTC) - Huh! That WP page doesn't exist. I must have been confused. The template w:Template:show-head2 is only used at w:User:KYPark/Sandbox, permanent link, I have no idea now what KYPark means by his comment that the Timeline page doesn't work here, and that page content is not where he referenced it to be. w:Template:show-tail is used on his private Sandbox and on a series of year pages in his user space. The Talk page he references refers to the Timeline page here. --Abd 18:01, 30 August 2010 (UTC) - The development of the British Museum Library Catalogue was very important in what you are describing.Harrypotter 15:50, 30 August 2010 (UTC) - FYI, for templates you can request Import which has the advantage of copying the template exactly and also preserves the edit history to give credit to those who contributed to the work. Another advantage is that the import can also copy subpages (like the /doc documentation for a template) in one easy click. --mikeu talk 15:59, 30 August 2010 (UTC) - Sure. In this case, though, KYPark is the author of the WP templates.... Yes, import is better, for the reason of giving credit, but it also requires a custodian to act, which delays the process; if I just want to experiment with templates to make a page work here, that delay and extra hassle will probably mean that it won't get done. But it would be pretty simple to fix this later. I'll review templates I've brought from WP and make a list to be imported. The process should be done in such a way as to merge the present content, which has often been altered from Wikipedia to make it work here, with the old content underneath in History. That should be simple. Hey, quite a bit of what I do would be simpler with the tools .... but I'll need a mentor. One step at a time, I suppose. --Abd 17:42, 30 August 2010 (UTC) - Thanks FYI, mikeu, but the case is roughly as Abd suggested. The WV version is improved, whereas the WP version was improvised (original), as it were. To be honest, I'm giving WP up for one reason or another. One more reason has been added; the w:Category:New encyclopedism was after all deleted on August 30 unjustly without any deleting guy responding to my w: Talk:New encyclopedism, though individually invited. May Wikipedia pay for this obvious injustice! -- KYPark [T] 09:35, 2 September 2010 (UTC) - Again, why v: User:KYPark/Encyclopaedism/Timeline doesn't work here? - which is just User:KYPark/Encyclopaedism/Timeline. - It is on WP and WV that the very link w: Talk:New encyclopedism works. So I expect v: User:KYPark/Encyclopaedism/Timeline to work on WP and WV as well. As you see, however, this link is red on WV, while the same code works on WP as you experience at w: Talk:New encyclopedism.:29, 31 August 2010 (UTC) Here is an example: do you see the reference at the bottom of wikt:fr:welcome? I've just added it and the archive link is already available. JackPotte 12:23, 3 September 2010 (UTC) Category casing. I have been thinking about putting some hard work cleaning up categories and uncategorized pages. It was suggested that I standardize the casing of category names. But the suggester and I immediately disagreed about which was best. So I thought I would get a quick straw pole about a sense of how the communtiy feels. Let me know what you think. Thenub314 12:49, 3 August 2010 (UTC) strstract Algebraact Algebra - Wikiversity once had a policy page where these kinds of issues were discussed by the community, but User:Darklama made the unilateral decision to disrupt the development of that policy. --JWSchmidt 13:16, 3 August 2010 (UTC) - Ok, but here is our chance to decide what we would like to do regardless of Darklama's edit. Express your opinion one way or the other about the issue on the table, and when enough people have, or I get bored of waiting then I will get to work. Thenub314 17:05, 3 August 2010 (UTC) - "Not very relevant." <-- Wrong. The page that Darklama hid away is the page where Wikiversity community members should decide such matters. --JWSchmidt 17:30, 3 August 2010 (UTC) - While that may be true, until we are ready to enforce page discipline (easily done, it's not censorship), here we are. The man wants an answer, and if that answer, perhaps based on shallow discussion here, conflicts with general practice, we'll have to look at it some more. --Abd 20:39, 3 August 2010 (UTC) - Let me be very clear about why it is not (in my opinion) very relevant. First regardless of that edit, or what the policy said, I would have asked again. Why? Because I have a preference, and for all I know someone who wrote that page many years ago made a choice by eeny-meeny-miny-moe. Before I steel my nerves to make several hundred edits it would take to clean things up, I am going to ask questions to make sure my work reflects the current feelings of the community. I would have always started the discussion at this page instead of the page you point to because it is more visible. Now you can continue to complain about Darklama's edit, that is fine, but I have nothing more to say on the matter. Might I suggest though that it would be more productive to give your opinion below. Because I am not interested in any of this politics, I just want to get stuff done. Thenub314 20:45, 3 August 2010 (UTC) - Production, Politics, Prediction. My prediction is that you will change a bunch of category names and then a year from now someone else who "wants to get stuff done" will change them all back to the way they are are now. I admit that politics often works in futile cycles of needless activity, but I don't think that recording hard-earned cultural wisdom in guidelines and policies is "politics"...it is good community practice. --JWSchmidt 22:47, 4 August 2010 (UTC) Categories should be title cased (as in Category:Abstract Algebra) - Thenub314 12:49, 3 August 2010 (UTC) - For courses, Geoff Plourde 17:42, 3 August 2010 (UTC) #Being an educational entity, I think that title casing is necessary down to second headings. - With some more thought, I think that title casing is necessary for the top few levels of articles/lessons, but not necessarily everything else.--John Bessatalk 12:18, 26 August 2010 (UTC) - Beyond that I think we should be as familiar as possible, and hence capitalize as little as possible.--John Bessatalk 18:09, 25 August 2010 (UTC) - With some experience, casing needs to be in context. When doing more social work (in counseling), casing is more common because of the ego-centric nature of "theories." But with neuroanatomy, casing seems entirely unnecessary as everything is objective, and (easily allows itself to be object-oriented, or OO--or perhaps, functionally-oriented). - Further, ego-centricism in anatomy that has resulted in upper-cased parts should be de-ego-centralized with lower casing.--John Bessatalk 16:01, 3 September 2010 (UTC) Categories should be sentence cased (as in Category:Abnormal psychology) - Abd 20:39, 3 August 2010 (UTC) There are strong convenience reasons to use sentence case. It allows someone to neglect case in citing the page or adding a category from memory, and case is always a little bit harder to type. Because most of us have strong Wikipedia experience, as well, which uses sentence case except for proper nouns, it's more in line with our habits. Wikiversity's somewhat common usage of "title case" -- which is ambiguous in fact, just clear in the example, has often delayed me completing an edit until I figured out what the used form is. Example of ambiguity: Category:Category:Solutions to problems in Abstract Algebra. Abstract Algebra can be taken as a proper noun, but it's easier if we use "title case," i.e, all lower case except for obvious and clear proper nouns, i.e., Category:Solutions from Isaac Newton on integral calculus. And if anyone thinks that a capitalization error will be common, a redirect can be put in. I think that sentence case will require fewer redirects. First letter is by convention capitalized in page names: first letter case is ignored by the software, I believe. Sentence case thus allows someone to type all lower case letters, usually, which can help with, say, an iPhone. I have some vague memory that there may be an exception. --Abd 20:39, 3 August 2010 (UTC) - Categories for courses should be title-cased, but general subject categories like those seen at Wikiversity:Browse should be sentence case. As a side effect, this makes it easier for interwiki adders to match up categories. Adrignola 22:43, 3 August 2010 (UTC) - For general subjects, Geoff Plourde 22:46, 3 August 2010 (UTC) Well I am relatively happy with the compromise suggested by Adrignola, a similar scheme is used at wikibooks. There are just a few things that should be kept in mind: - It is often the case that a resourse may be neither a course nor a subject, but rather some other learning resource. I it may be better to think of things in terms of learning resources and subjects. - Resources (and hence courses) are sentenced cased. So for example, Philosophy of mathematics would have a corresponding category would be Category:Philosophy of Mathematics to hold the subpages for the course. The casing would not match, this is no big deal in my opinion, but thought I would point it out. There is also a small potential for confusion. If someone later creates a category for the subject of the philosophy of mathematics it would be Category:Philosophy of mathematics, which now matches the case of the course. Maybe we should consider the reverse? That is, sentence case categories corresponding to learning resources and title case categories that exist to collect similar resources together. We used this type of scheme at wikibooks to avoid this type of name collision but it took the opposite form since our resources are usually title cased. Of course the other obvious choice is to make one of the names explicit by appending something like a (subject), but this could get a little messy. Thenub314 09:11, 4 August 2010 (UTC) I generally use and encourage sentence casing unless there is a particularly good reason e.g., proper nouns/names. -- Jtneill - Talk - c 11:18, 4 August 2010 (UTC) WV as an educational entity It makes sense for WV to be distinct from WP as WP is an encyclopedia built from wikis, and WP is an educational wiki. Wikis are only a reasonably new concept, being about the same age as the Web, but they continually growing into complex collaborative knowledge construction entities, a concept that be-devils WP because it is only an encyclopedia. The exact opposite is true here; when we finally get this community site harmonized and are able to attract those who are truly wrapped in the wikis' construction potentials, then the WP will be a widely-respected as a source for new and revolutionary information. Wikis are educational by nature, so we can embrace the wiki potential in ways the WP cannot.--John Bessatalk 12:43, 14 August 2010 (UTC) - To prepare for my courses, I have been familiarizing myself with the topics by deconstructing over-view material. It seems to make most sense to title-case, or fully capitalize pages, be they articles or lessons, and also first heading topics (=Topic=), but then sentence case second headings (==Second topic==), and then use as much lower case as possible from then on, so as to poise wiki-structured "byte code" writing to be converted into prose.--John Bessatalk 16:57, 25 August 2010 (UTC) The Sandbox Server II -- The Sandbox strikes back Hi all, We're getting another shot with a sandbox server -- but we need some projects!! Is there a course, learning experiment, interaction or other bit that could utilize a server? Please let us know! We're putting together projects that will be going on to the server when it gets set up, hopefully in the next few weeks. We're hoping to start with 3 strong projects, but once we've got those we'll be rolling out more in the future. So let us know what you've got. --Historybuff 05:55, 9 August 2010 (UTC) Here is fine -- if things get busy, we can move it to another page. Historybuff 14:39, 9 August 2010 (UTC) - Moodle!!! Geoff Plourde 05:14, 10 August 2010 (UTC) Geoff, you are a man of few words. I think you've nominated yourself to help out with the Moodle project -- I like that idea. Any other contributors there, and any other ideas? Historybuff 23:22, 10 August 2010 (UTC) Darklama -- I think Wikimedia (and WM software development) would be a _fantastic_ project, but it could be a large one, and one (at present) which I won't have time to manage or lead. If we can find a tech lead and a learning project manager, I think it would work well. Wordpress is a great idea. I like the idea of just a blog, and I'll fiddle around with this. There are other good ideas (other CMS, LMS, etc) which could be explored. Keep the ideas coming! Historybuff 14:28, 11 August 2010 (UTC) - I can run a Moodle installation, and I',m sure JWS could assist. other LMSs are iffy, I'd say. Geoff Plourde 05:55, 13 August 2010 (UTC) Hi Geoff. Do you have a Learning Project on WV about Moodle, or somewhere to talk on-wiki about it? I think you've got the Moodle thing if you want it, just let me know what's needed to get started. I think we'll be "going live" in a couple weeks. Historybuff 18:50, 17 August 2010 (UTC) - I am on moodle myself for my masters program -- it's OK, not great. I think a wiki implementation would be better with a side-site for social gathering. Where moodle rules is in teacher evaluation of participation over tested grades -- "teach" can monitor all activity and see who is really doing the work. But, as we all know, you can really drill down on activity with mediawiki!--John Bessatalk 13:18, 6 September 2010 (UTC) Adding a maps extension Could somebody please take a look at Geographic touchpoints, and note in the table that maps are not rendering in the way we had gotten them to render at the equivalent page on NetKnowledge.org. Could someone guide me as to how to either adopt that same extension here at Wikiversity, or to modify the touchpoints table so that it will comply with another existing maps extension that is in use here at Wikiversity? -- Thekohser 16:07, 27 August 2010 (UTC) I think the maps extension is unlikely to be enabled here, even if the community demonstrated support for it, because the extension relies on querying servers outside of WMF's control which they would likely be consider a privacy issue, since 3rd parties would have access to "private" information. -- darklama 15:36, 30 August 2010 (UTC) - mmm... DL, I think you meant "querying." Has this been discussed somewhere? --Abd 16:00, 30 August 2010 (UTC) - If we are indeed prohibited by privacy policy from acquiring an external map extension within a Wikimedia Foundation project, then I suppose just text coordinates with an external link to a community-decided "safe enough" external map site, plus perhaps a freely-licensed bitmap image (though static) of the city location (something like this) would be sufficient. I'd like to leave this discussion open, though, for another week or two, just to make sure that Darklama's (helpful) opinion is not mistaken. Certainly, some work has been done to attempt solutions for mapping: - So, some people are clearly working on the problem as we speak. It just may be months or years away from acceptable implementation. Frankly, I find it discouraging that the Wikimedia Foundation hasn't taken a more active role in either launching or acquiring a free, open-source mapping project, but I'll leave that comment where it is. -- Thekohser 16:14, 31 August 2010 (UTC) - Yes there has been work done to try to address the issue as you found. I didn't mention any of them because like you said they may be years away from an acceptable implementation. You could use an imagemap with some image to link to other pages on Wikiversity, if that would work for you. External links are fine because the person acknowledges/accepts the risk by clicking the link. If a person doesn't click the external link than supposedly there is no risk to their privacy. Presumably if an extension required people to opt-in through there preferences to see maps by a 3rd party (like Google Maps) that would be fine too. -- darklama 18:22, 31 August 2010 (UTC) - This kind of sucks. It's a shame for a "university" type of forum to lack anything more technically useful than the old "pull down" maps of the world, Europe, Asia, Africa, South America, and North America that I remember from ninth grade. Now I need to simply decide if the interactive mapping found at NetKnowledge outweighs or not the apparently larger and more active community here at Wikiversity. Of course, I am also open to the suggestion that Geographic touchpoints are simply not worth compiling at all. -- Thekohser 16:01, 7 September 2010 (UTC) Semantic Mediawiki extension Is the Semantic Mediawiki extension installed on English Wikiversity? If not, could it be, or has that been deprecated? If so, I'm thinking that it would be a better framework for the Geographic touchpoints project here. -- Thekohser 16:31, 31 August 2010 (UTC) - No it is not installed. That would be good to have, but I wonder if the developers position would be the same as using the latest DynamicPageList? Bug requests to use the latest DPL have long been quickly closed as WONTFIX. -- darklama 18:25, 31 August 2010 (UTC) Support - Support per SJ's comment above in that case. Wikiversity would be a good test ground for that extension, just like with the Quiz extension. -- darklama 22:25, 7 September 2010 (UTC) Discussion Personal attacks - I think this discussion has produced Yet Another World-shattering NuanceHarrypotter 22:25, 5 September 2010 (UTC) - If we did embrace and even made a custodian of such a user as user:Salmon of Doubt who came here to contribute nothing but to edit war with other wikiversiters, you guys are really making a storm out of very little. Hillgentleman | //\\ |Talk 16:33, 11 September 2010 (UTC)
https://en.wikiversity.org/wiki/Wikiversity:Colloquium/archives/August_2010
CC-MAIN-2019-43
en
refinedweb
Manage all events on workflows Project Description Release History Download Files Install Install the package via pypi: pip install django-workflow-activity Add the installed application in the django settings file: INSTALLED_APPS = ( ... 'workflow_activity' ) Migrate the database: python manage.py migrate Usage To create workflows and permissions, see the following documentations: To use workflow activity methods on a class : from workflow_activity.models import WorkflowManagedInstance class MyClass(WorkflowManagedInstance): ... To add a workflow to an object: myobj = MyClass() myobj.set_workflow('My workflow') Now, you can use methods on your object like: myobj.last_state() myobj.last_transition() myobj.last_actor() myobj.last_action() myobj.allowed_transitions(request.user) myobj.is_editable_by(request.user, permission='edit') myobj.state() myobj.change_state(transition, request.user) ... And managers like: MyClass.objects.filter() MyClass.pending.filter() MyClass.ended.filter() ... Download Files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/django-workflow-activity/
CC-MAIN-2017-34
en
refinedweb
Last updated on MARCH 06, 2013 Applies to:Oracle Communications Order and Service Management - Version 6.3.1 to 6.3.1 [Release 6.3] Information in this document applies to any platform. This problem can occur on any platform. ***Checked for relevance on 06-Mar-2013*** Symptoms -- Problem Statement: com.mslv.oms.metadatahandler.handler.exceptions.NODefaultCartridgeFound: No Default cartridge found for namespace in database:namespace_Construction [java] Import transaction has been rolled back. -- Steps To Reproduce: Import a no-default cartridge only -- Business Impact: Can't import Cause My Oracle Support provides customers with access to over a Million Knowledge Articles and hundreds of Community platforms
https://support.oracle.com/knowledge/More%20Applications%20and%20Technologies/735882_1.html
CC-MAIN-2017-34
en
refinedweb
Playing Sounds In the earlier example, we used the play method. You can add some optional parameters to have more control over playback. The first parameter represents the starting position, the second parameter the number of times to loop the sound. In this example, the sound starts at the three-second position and loops five times: sound.play(3000, 5); When it loops again, it starts at the same position, here the third second. The Sound class does not dispatch an event when it is done playing. The SoundChannel class is used for that purpose, as well as for controlling sound properties such as volume and to stop playback. Create it when a Sound object starts playing. Each sound has its own channel: import flash.media.SoundChannel; var sound = new Sound(); sound.addEventListener(Event.COMPLETE, onLoaded); sound.load(new URLRequest(“mySound.mp3”)); function onLoaded(event:Event):void { sound.removeEventListener(Event.COMPLETE, onLoaded); var channel:SoundChannel = sound.play(); channel.addEventListener(Event.SOUND_COMPLETE, playComplete); } function playComplete(event:Event):void { event.target.removeEventListener(Event.SOUND_COMPLETE, playComplete); trace(“sound done playing”); } Displaying Progress There is no direct way to see playback progress, but you can build a timer to regularly display the channel position in relation to the length of the sound. The sound needs to be fully loaded to acquire its length: import flash.utils.Timer; import flash.events.TimerEvent; var channel:SoundChannel; var sound:Sound; // load sound // on sound loaded var timer:Timer = new Timer(1000); timer.addEventListener(TimerEvent.TIMER, showProgress); channel = sound.play(); channel.addEventListener(Event.SOUND_COMPLETE, playComplete); timer.start(); function showProgress(event:TimerEvent):void { // show progress as a percentage var progress:int = Math.round(channel.position/sound.length*100); } Do not forget to stop the timer when the sound has finished playing: function playComplete(event:Event):void { channel.removeEventListener(Event.SOUND_COMPLETE, playComplete); timer.removeEventListener(TimerEvent.TIMER, showProgress); } You do not know the length of a streaming audio file until it is fully loaded. You can, however, estimate the length and adjust it as it progresses: function showProgress(event:TimerEvent):void { var percentage:int = sound.bytesLoaded/sound.bytesTotal; var estimate:int = Math.ceil(sound.length/percentage); // show progress as a percentage var progress:int = (channel.position/estimate)*100; trace(progress); }
https://www.blograby.com/developer/playing-sounds-displaying-progress.html
CC-MAIN-2017-34
en
refinedweb
Hi Xiao, Played with Struct a bit and seems like a perfect fit for what I was looking for. Thanks a lot for the suggestion. Having some issues while trying to figure out the data type of the value on the python side but most likely that could be because of python runtime version mismatch issue, which I'm discussing in the other mail chain. Advertising Thanks again, and will post the performance numbers in terms of serializing/deserializing in different languages (at least, in python, C++, java and node.js. -Ranadheer On Saturday, April 2, 2016 at 6:20:09 AM UTC+5:30, Feng Xiao wrote: > > > > On Fri, Apr 1, 2016 at 6:06 AM, Ranadheer Pulluru <prana...@gmail.com > <javascript:>> wrote: > >> Hi, >> >> I'm planning to use protobuf for publishing tick data of the financial >> instruments. The consumers can be any of the java/python/node.js >> languages.The tick is expected to contain various fields like (symbol, >> ask_price, bid_price, trade_price, trade_time, trade_size, etc). Basically, >> it is sort of a map from field name to the value, where value type can be >> any of the primitive types. I thought I can define the schema of the Tick >> data structure, using map >> <>and Any >> <>as >> follows >> >> >> syntax = "proto3"; >> >> package tutorial; >> >> import "google/protobuf/any.proto"; >> >> message Tick { >> string subject = 1; // name of the financial instrument - something >> like MSFT, GOOG, etc >> uint64 timestamp = 2; //millis from epoch signifying the timestamp >> at which the object is constructed at the publisher side. >> map<string, google.protobuf.Any> fvmap = 3; // the actual map having >> field name and values. Something like {ask_price: 10.5, bid_price: 9.5, >> trade_price: 10, trade_size=5} >> } >> >> Though I'm able to generate the code in different languages for this >> schema, I'm not sure how to populate the values in the *fvmap*. >> >> public class TickTest >> { >> public static void main(String[] args) >> { >> Tick.Builder tick = Tick.newBuilder(); >> tick.setSubject("ucas"); >> tick.setTimestamp(System.currentTimeMillis()); >> Map<String, Any> fvMap = tick.getMutableFvmap(); >> // fvMap.put("ask", value); // Not sure how to pass values like >> 10.5/9.5/10/5 to Any object here. >> } >> } >> >> >> >> Could you please let me know how to populate the fvMap with different >> fields and values here? Please feel tree to tell me if using map >> <>and Any >> <>is not >> the right choice and if there are any better alternatives. >> > It seems to me a google.protobuf.Struct suites your purpose better: > > > > >> >> Thanks >> Ranadheer >> >> -- >>.
https://www.mail-archive.com/protobuf@googlegroups.com/msg11990.html
CC-MAIN-2017-34
en
refinedweb
IntroductionIn every mobile app,there is requirement for storing data and accessing data. We need to store data in our Phones to persist data when the application is not running.However fortunatley Windows Phone 7 &windows phone 8 provides a secure way to store data into Isolated Store.Isolated Storage, as the name suggests is a special virtualized file system that every application can access for standard IO operations yet the file is unavailable to any other application. Hence the files stored by one application is isolated from another. Note For beginners: 1)Isolated storage are best and simple for storing data,for more info you may visit 2)We can also store and access data using sqlite,i will be introduce it laterely or next article. Source File at :IsolatedStorageList Building the Sample 1)You will have to add a reference to System.Xml.Serialization 2)Needed <tt>System.IO.IsolatedStorage</tt> namespace to access files and/or application settings. Description 1)However in this sample i had taken "MyData" class as well as "MyDataList" C# public class MyData { public string Name { get; set; } public string Location { get; set; } } public class MyDataList : List<MyData>//for storing mydata class items with type of list { } 2) Add item to "MyDataList" like this way C# MyDataList listobj = new MyDataList(); listobj.Add(new MyData { Name = "subbu", Location = "hyd" }); listobj.Add(new MyData { Name = "Venky", Location = "Kadapa" }); listobj.Add(new MyData { Name = "raju", Location = "Kuwait" }); listobj.Add(new MyData { Name = "balu", Location = "US" }); listobj.Add(new MyData { Name = "gopi", Location = "London" }); listobj.Add(new MyData { Name = "rupesh", Location = "USA" }); listobj.Add(new MyData { Name = "ram", Location = "bang" }); 3)Write list or listbox items into IsolatedStorage C# IsolatedStorageFile Settings1 = IsolatedStorageFile.GetUserStoreForApplication(); if (Settings1.FileExists("MyStoreItems")) { Settings1.DeleteFile("MyStoreItems"); } using (IsolatedStorageFileStream fileStream = Settings1.OpenFile("MyStoreItems", FileMode.Create)) { DataContractSerializer serializer = new DataContractSerializer(typeof(MyDataList)); serializer.WriteObject(fileStream, listobj); } 4)Read list or listbox items from IsolatedStorage list or listbox items from IsolatedStorage); } } 5)Remove list or listbox items from IsolatedStorage C# if (Settings1.FileExists("MyStoreItems")) { Settings1.DeleteFile("MyStoreItems"); MessageBox.Show("Items removed successfully."); } 6)Binding listbox items from IsolatedStorage list data); } } Isolistbox.ItemsSource = listobj;//binding isolated storage list data 7)Write selected listbox class item into IsolatedStorage C# private void Isolistbox_SelectionChanged_1(object sender, SelectionChangedEventArgs e) { MyData selecteddata = (MyData)Isolistbox.SelectedItem;//get listbox item data //Write selected class item value into isolated storage if (selecteddata != null) { if (Settings1.FileExists("MySelectedStoreItem")) { Settings1.DeleteFile("MySelectedStoreItem"); } using (IsolatedStorageFileStream fileStream = Settings1.OpenFile("MySelectedStoreItem", FileMode.Create)) { DataContractSerializer serializer = new DataContractSerializer(typeof(MyData)); serializer.WriteObject(fileStream, selecteddata); } MessageBox.Show("Your selected item Name:" + selecteddata.Name.ToString()+" " + "is stored in isolated storage"); } } . IsolatedStorage in windowsphone 8 c# 2. Storing ListBox items in isolatedstorage in windowsphone 3. Removing items from isolatedstorage in windowsphone 4. Reading listbox items from isolated storage 5. Binding listbox items from isolated storag 6. Storing listbox selcted item in isolatedstorage 7.How to get listbox selected item from isolated storage when on restart of application and then on click of write listbox items previous items are lost.Please fix this. Thank you so much and keep it up,i fixed it in source code.Please see and download sample from here. How to delete a particular item from listbox items. listboxobj.Items.RemoveAt(indexitem); i tried but it says "Operation not supported on read-only collection" ther? need help, exception is thrown, i think observablecollection should be used but i do not about that.I need it. one more thing you need to fix. when you write listitems thats working,then read listitems thats working but when you click on remove listitems it says items removed,but without exiting application if you write into listitems,previous items are not deleted,they are added to latest items.This only works when you click remove listitems and exit application, then you can see items are deleted.If you are not exiting application then it is not going to work.Please fix this issue and how to overcome exception "Operation not supported on read-only collection" Thank you so much for your deep involvement of my post.I updated source code and please download it and let me know :) You done it but i need help on deleting individual items,i used contextmenu style for deleting,each time when i try to delete a particular item it throws "Operation not supported on read-only collection".I used Isolistbox.items.remove(Isolistbox.selectedIndex); How can I store a single selected item? I have a list picker, and I would select an item from the list. I want this to be stored. When the app is launched again, the selection should be intact.
http://bsubramanyamraju.blogspot.com/2014/01/how-to-store-listbox-items-into.html
CC-MAIN-2017-34
en
refinedweb
When I compile some old source code downloaded from Internet, I sometimes encounter the problems with #include <fstream.h> #include <iostream.h> The g++ compiler in cygwin claims the following errors: In file included from /usr/lib/gcc/i686-pc-cygwin/3.4.4/include/c++/backward/fstream.h:31, from decode.cpp:5: /usr/lib/gcc/i686-pc-cygwin/3.4.4. in a class like that: I changed the header files as:I changed the header files as:Code:class e { public: e(istream&f); ..... } #include <fstream> #include <iostream> But the g++ still says: decode.cpp:37: error: expected `)' before '&' token decode.cpp:55: error: ISO C++ forbids declaration of `istream' with no type What can I do to fix this problem? Thanks
https://cboard.cprogramming.com/cplusplus-programming/100875-problems-istream.html
CC-MAIN-2017-34
en
refinedweb
Introduction Images are the best attraction for every mobile app.But Users have high expectations when it comes to using applications on their mobile devices. It is important to have good performance and quality in your applications and there are a number of things you can do to deliver great performance.So as s developer it is more quit difficult to overcome out of memory exception,however I’ll list a few that apply to memory, images, data binding and more.and i hope i will definately helpful for who are having same problem.Source File at :WP8ImageFree Building the Sample We need ad namespaces " Microsoft.Phone.Tasks" for camera task and "System.IO.IsolatedStorage,System.Runtime.Serialization" for isolated storage ,and no need for any other external dll's Description However I had taken for the requirements of "Saving Camera captured Images to Isolated Storage" ,"Read the camera captured images from the isolated storage to listbox" and also "How to resize captured images in wp8 c#" Let's start few steps to meet above requirements 1)Saving Camera captured Images to Isolated Storage: Before going to start this section, we must need to know about "isolated storage",so that in previously i had posted about this one at"" and now we are going to cature the image from camera like this way, C# private void CarImage_tap(object sender, System.Windows.Input.GestureEventArgs e) { CameraCaptureTask ccTask = new CameraCaptureTask(); ccTask.Completed += ccTaskCompleted; ccTask.Show(); } private void ccTaskCompleted(object sender, PhotoResult pr) { try { byte[] ImageBytes; if (pr.ChosenPhoto != null) { ImageBytes = new byte[(int)pr.ChosenPhoto.Length]; pr.ChosenPhoto.Read(ImageBytes, 0, ImageBytes.Length); pr.ChosenPhoto.Seek(0, System.IO.SeekOrigin.Begin); var bitmapImage = PictureDecoder.DecodeJpeg(pr.ChosenPhoto, 480, 856); if (bitmapImage.PixelHeight > bitmapImage.PixelWidth) { CarImage.MaxHeight = 450; CarImage.MaxWidth = 252; } else { CarImage.MaxHeight = 252; CarImage.MaxWidth = 450; } this.CarImage.Source = bitmapImage; prkdata.CarImageBytes = ImageBytes; } } catch { } } after capturing the image ,we need to store imagebytes in isolated storage like this way, C# SavedData prkdata = new SavedData(); SavedDataList parkinglistobj = new SavedDataList(); IsolatedStorageFile Settings = IsolatedStorageFile.GetUserStoreForApplication(); prkdata.CarImageBytes = ImageBytes; parkinglistobj.Add(new SavedData { ID = count + 1, Address = "Image" + count.ToString(), CarImageBytes = prkdata.CarImageBytes }); if (Settings.FileExists("ParkignItemList")) { Settings.DeleteFile("ParkignItemList"); } using (IsolatedStorageFileStream fileStream = Settings.OpenFile("ParkignItemList", FileMode.Create)) { DataContractSerializer serializer = new DataContractSerializer(typeof(SavedDataList)); serializer.WriteObject(fileStream, parkinglistobj); } 2)Resizing Camera captured Images in windows phone 8 c#: In above code i had resize captured image to max height 450 and width to 252,How ever to resize image the without loss image quality we need to adjust width and height with equal proprotions,here i had taken with help of paint applicaiton in windows 8 os to calculate apect pixel ratio's image to fixed height of 450. C# var bitmapImage = PictureDecoder.DecodeJpeg(pr.ChosenPhoto, 480, 856); if (bitmapImage.PixelHeight > bitmapImage.PixelWidth) { CarImage.MaxHeight = 450; CarImage.MaxWidth = 252; } else { CarImage.MaxHeight = 252; CarImage.MaxWidth = 450; } 3)Reading Camera captured Images from isolatedstorage to listbox: In this step we need to helper classes 3.1)For storing capture image bytes i.e(SavedData.cs): C# public class SavedData { public int ID { get; set; } public String Address { get; set; } public byte[] CarImageBytes { get; set; } } public class SavedDataList : List<SavedData> { } 3.2)IValueConverter class for converting imagebytes(); } Notes: 1)in ivalueconverter class it is most important thing, i had mention is BitmapImage properties are"DecodePixelHeight,DecodePixelWidth,CreateOptions" to know more about this ones And is is very important to set bitimage.DecodePixelWidth = 56; bitimage.DecodePixelHeight = 100;. C# BitmapImage image = new BitmapImage(); image.DecodePixelType = DecodePixelType.Logical; image.CreateOptions = BitmapCreateOptions.BackgroundCreation; image.CreateOptions = BitmapCreateOptions.DelayCreation; image.DecodePixelWidth = 56; image.DecodePixelHeight = 100; 2)bitimage.DecodePixelWidth and bitimage.DecodePixelHeight are only supported in windowsphone 8 os 3.2)Binding listbox with bitmapimage from captured imagebytes : We can bind listbox items with imagebytes in xaml using above ivalueconverter class i.e "BytesToImageConverter.cs" .so that we need to reference above ivalueconverter class like this way XAML xmlns: </phone:PhoneApplicationPage.Resources> and bind listbox image with ivaluecoverter like this way, XAML <Image Source="{Binding CarImageBytes, Converter={StaticResource ByteToImage}}" Stretch="None" VerticalAlignment="Center" HorizontalAlignment="Center" /> finally assign the list of captured isolated storage images to listbox like this way, C# if (Settings.FileExists("ParkignItemList")) { using (IsolatedStorageFileStream fileStream = Settings.OpenFile("ParkignItemList", FileMode.Open)) { DataContractSerializer serializer = new DataContractSerializer(typeof(SavedDataList)); parkinglistobj = (SavedDataList)serializer.ReadObject(fileStream); } } ParkListBox.ItemsSource = parkinglistobj.OrderByDescending(i => i.ID).ToList(); 4)ScreenShots : Note: screens are taken from the emulator Hi, Subbu. Awesome blog you have written, i need your help in my case, in my case, i need to load more than 1000 thumbnails of a book from the isolatedStorage into my ListBox as well as i need to show the status as red button for new thumbnails, i have written a code and its executing successfully, it gives exception when i navigate thumbnail page twice, it generate "out of memory exception ". I tried a lot but failed to solve, Can you please help me how can i reduce the Memory consumption on again and again visit that thumbnail page. ? i really re commend this blog to programmers ... im now a real programmer, making several dollars , kidnapped programming techniques from this blog , thanks sir for your good job Hi Subbu sir, Tq for your Samples their were awesome. I am new developer I need a small help from you regarding same sample.(In my app same functionality.) when the user taps on particular item it will navigate to new page(There using another list Box I displayed only that Item using ID). My problem is when the user see the full details in new page he can delete that item. How can I do this. I tried many ways: ParkListBox.SelectedItem = parkinglistobj.Select(i => i.ID == index); //SavedData item = ParkListBox.DataContext as SavedData; SavedData item = parkinglistobj.Select(i => i.ID == index) as SavedData; parkinglistobj.Remove(item); MessageBox.Show("Deleted Successfully"); But it was not deleting from the list. Please help this. 2nd problem is When the user taps on the image he can save/share that image to phone/Others. How can I do this? Tq in advance... :) Have a nice day sir.
http://bsubramanyamraju.blogspot.com/2014/01/windows-phone-8-listbox-images-out-of.html
CC-MAIN-2017-34
en
refinedweb
WCF’s fundamental communication mechanism is SOAP-based Web services. Because WCF implements Web services technologies defined by the WS-* specifications, other software which is based on SOAP and supports WS-* specifications can communicate with the WCF-based applications. To build up a cross-platform WCF server, you can use Metro. Metro is a Web Services framework that provides tools and infrastructure to develop Web Services solutions for the end users and middleware developers. It depends on Java programming language. The latest version of Metro is 1.4. In the development process of SCM Anywhere (a SCM tool, with fully integrated Version Control, Issue Tracking and Build Automation, developed with WCF and METRO/WSIT), we found that METRO is NOT as mature as WCF. There are lots of small issues in METRO/WSIT. Luckily, METRO is an open source project and keeps evolving all the time. Our experience is that if you find some features are not working properly in METRO, keep downloading the latest version from Java.net. Several weeks later, you may discover that the features are working properly. To implement a Java client to communicate with the WCF server, you can follow the steps below: 1. Download METRO/WSIT from the home page of Metro:. 2. Download Eclipse. We use Eclipse + Metro to develop Dynamsoft SCM Anywhere. 3. Install Metro by executing the command: java –jar metro-1_4.jar. The installation folder of Metro contains some documents, tools and samples. You can find the documents in the “docs” folder. 4. Use the C# project “WcfService1” (provided in my WCF client and WCF service article) as the WCF server. Go to Property of the WCF project, and set the server port to one that is not occupied by other services. Here we used 8888 for example. In the “web.config” file, change the string “wsHttpBinding” to “basicHttpBinding”. 5. This is the key step. We use the wsimport tool included in Metro to generate the Java client code. Create a file named “service1.xml” and copy the following code to the file: <bindings xmlns:xsd="" xmlns: <bindings node="wsdl:definitions"> <enableAsyncMapping>true</enableAsyncMapping> </bindings> </bindings> The parameter “enableAsyncMapping” means generating the asynchronous method communicating to WCF server. Save this file, and execute the following command: bin\wsimport -extension -keep -Xnocompile -b TestService.xml Then you can find two new directories in Metro folder: “org” and “com”. They are the generated Java code. 6. Open Eclipse IDE, create a new Java project named “SimpleWCFClient”, and copy the two new directories “org” and “com” to the “src” folder of the project. Refresh the project, and you can find that some new code files are in the project. 7. Create a test class named “WCFTest” and write the following code to the file: import java.net.URL; import javax.xml.namespace.QName; import javax.xml.ws.BindingProvider; import org.tempuri.IService1; import org.tempuri.Service1; public class WCFTest { public static void main(String[] strArgs) { try { Service1 service1 = new Service1(new URL(“”), new QName(“”, “Service1”)); IService1 port = service1.getBasicHttpBindingIService1(); ((BindingProvider)port).getRequestContext().put(BindingProvider.ENDPOINT_ADDRESS_PROPERTY, “”); int input = 100; String strOutput = port.getData(input); System.out.println(strOutput); } catch(Exception e) { e.printStackTrace(); } } } 8. Four .jar files need to be added to the Java project. You can get these files in the “lib” folder of Metro: webservices-api.jar, webservices-extra.jar, webservices-extra-api.jar, webservices-rt.jar. Then go to Property of the project and add these jars to the project. 9. Compile and run the Java project. If the Eclipse console outputs “You entered: 100”, congratulations, you are successful. You can download these code from here. When you are familiar with these, you will find it’s very convenient to write a Java application communicating with a WCF client. Links: Previous article >>>>: WCF client and WCF service Next article >>>>: Data types between Java and WCF WCF & Java Interop series home page: WCF & Java Interop Pingback: Fix Webservices-api.jar Errors - Windows XP, Vista, 7 & 8 Pingback: All in One- Bookmarks Throught Career | MUHAMMAD ARSHAD
http://www.codepool.biz/java-client-and-wcf-server.html
CC-MAIN-2017-34
en
refinedweb
Booting Linux: EFI and Management Processor - EFI and POSSE - Configuring the Management Processor (MP). EFI and POSSE EFI is an interface between your operating systems and platform firmware. POSSE is the HP implementation of EFI that contains additional commands beyond the ones available through EFI alone. You'll use the EFI acronym in this chapter, but you should be aware that some HP documentation may use POSSE. EFI is a component that is independent of the operating system and provides a shell for interfacing to multiple operating systems The interface consists of data tables that contain platform-related information along with boot and runtime service calls that are available to the operating system and its loader. These components work together to provide a standard environment for booting multiple operating systems. If you are interested in finding out more about EFI than what's documented here, take a look at Intel EFI Web site. At the time of this writing, EFI information can be found at. As you can see in Figure 1-1, EFI on HP Integrity servers contains several layers. The hardware layer contains disk with an EFI partition, which in turn has in it an operating system loader. This layer also contains one or more operating system partitions. Figure 1-1 EFI on HP Integrity Servers In addition, the EFI system partition itself consists of several different components as shown in Figure 1-2. Figure 1-2 EFI System Partition The Logical Block Addresses (LBAs) are shown across the top of the diagram. The Master Boot Record (MBR) is the first LBA. There is then a partition table. Three partitions are shown on this disk. Note that multiple operating system partitions can be loaded on the same disk. At the time of this writing, Windows Server 2003 and Linux can be loaded on the same disk. The EFI partition table on the right is a backup partition table. Booting an operating system with EFI on an Integrity servers involves several steps. Figure 1-3 depicts the high-level steps. Figure 1-3 Load and Run an Operating System The first step is to initialize the hardware. This takes place at the lowest level (BIOS) before EFI or the operating systems play any part in the process. Next, the EFI and boot loader are loaded and run. After an operating system is chosen, the operating system loader is loaded and run for the specific operating system being booted. Finally, the operating system itself is loaded and run. There are no specific operating systems cited in Figure 1-3 because the process is the same regardless of the operating system being loaded. In the examples in this book, Linux, HP-UX, and Windows are used and all these operating systems would load in the same manner. Working with EFI Traversing the EFI menu structure and issuing commands is straight forward. You make your desired selections and then traverse a menu hierarchy. To start EFI, when the system self test is complete, hit any key to break the normal boot process. The main EFI screen appears. Figure 1-4 shows the EFI Boot Administration main screen from which you can make various boot-related selections. Figure 1-4 EFI Boot Administration Main Screen Figure 1-4 shows that there are two operating systems installed on this Integrity server an HP-UX Primary Boot and a Red Hat Linux Advanced Server. Either of these can be booted. From the main screen, you can also choose either EFI Shell [Built-in], Boot option maintenance menu, or Security/Password menu. The first item shown in the EFI main screen is the default. Use the arrow keys to scroll and highlight a selection. After the item you need is highlighted, press Enter to select it. For example, if you were to select Boot option maintenance menu, you would see a screen resembling the one shown in Figure 1-5. Figure 1-5 EFI Boot Maintenance Manager Main Menu Figures 1-4 and 1-5 give you a feeling for the menu-driven nature of EFI and EFI selections. One of the important things a system administrator might need to know is a given system's device mappings. To view mappings using EFI, you need to get to a console-like EFI Shell> prompt. To do so, you must select EFI Shell in Figure 1-4. Once at the prompt, there are a variety of commands that you can run including map. map is the EFI command that shows device mapping on the Integrity server. This listing shows the output from the map command: Shell> map fs0 : Acpi(HWP0002,100)/Pci(1|0)/Scsi(Pun0,Lun0)/HD(Part1,Sig8E89981A-0B97-11D7-9C4C - AF87605217DA) fs1 : Acpi(HWP0002,100)/Pci(1|0)/Scsi(Pun1,Lun0)/HD(Part1,Sig7C0F0000) blk0: Acpi(HWP0002,0)/Pci(2|0)/Ata(Primary,Master) blk1: Acpi(HWP0002,100)/Pci(1|0)/Scsi(Pun0,Lun0) blk2: Acpi(HWP0002,100)/Pci(1|0)/Scsi(Pun0,Lun0)/HD(Part1,Sig8E89981A-0B97-11D7-9C4C - AF87605217DA) blk3 : Acpi(HWP0002,100)/Pci(1|0)/Scsi(Pun0,Lun0)/HD(Part2,SigC9D59DF0-0BA7-11D7-9B31 - FBA1AECDAF7E) blk4 : Acpi(HWP0002,100)/Pci(1|0)/Scsi(Pun0,Lun0)/HD(Part3,SigC9D7945C-0BA7-11D7-9B31 - FBA1AECDAF7E) blk5 : Acpi(HWP0002,100)/Pci(1|0)/Scsi(Pun1,Lun0) blk6 : Acpi(HWP0002,100)/Pci(1|0)/Scsi(Pun1,Lun0)/HD(Part1,Sig7C0F0000) blk7 : Acpi(HWP0002,100)/Pci(1|0)/Scsi(Pun1,Lun0)/HD(Part2,Sig7C0F0000) The device mappings can be difficult to read. Much of the information is intended for programmers and technicians. As a system administrator, however, you need to know which entries correspond to which devices and which entries are for your partitions and file systems. To determine that, let's take a look at each entry. As you probably guessed, file systems begin with fs and block devices begin with blk in this listing. To understand these entries however, let's look at them individually. To make their meaning clearer, I've grouped the entries differently than they originally appeared in the EFI listing previously shown. Red Hat Advanced Server disk and related entries: blk1 physical disk blk2 is first partition on blk1 blk3 is second partition on blk1 blk4 is third partition on blk1 fs0 is first file system on blk1 HP-UX 11i disk and related entries: blk5 physical disk blk6 is first partition on blk5 blk7 is second partition on blk5 fs1 is first file system on blk5 Let's analyze one of the block (blk) entries: blk1 : Acpi(HWP0002,100)/Pci(1|0)/Scsi(Pun0,Lun0) blk is the label assigned to the physical drive and the 1 is the number of the physical drive (blk can also be a partition on a physical drive as we'll see shortly). Acpi(HWP0002,100) first shows a device type of HWP0002 with a PCI host number of 100. This PCI host number is often called the ROPE. The ROPE is the circuitry that handles I/O for the PCI interface. Although this information is most often used by programmers, it is sometimes handy to know the ROPE since it also defines the I/O card slot. The following types of devices are the most common: HWP0001: Single I/O Controller Single Block Address w/o I/O Controller in the namespace. HWP0002: Logical Block Address (LBA) device. HWP0003: AGP LBA device. After the ROPE we find a Pci entry. This entry indicates that the device/slot number is 1 and the function number is 0. The Scsi Pun (physical unit) will be either 0 or 1 depending on which is the SCSI address of the disk. The Lun (logical unit) will always be 0 in this case because you're not assigning any Logical Units on the disks. Now, let's look at the blk2 entry: blk2 : Acpi(HWP0002,100)/Pci(1|0)/Scsi(Pun0,Lun0)/ HD(Part1,Sig8E89981A-0B97-11D7-9C4C-AF87605217DA) The blk2 entry is a partition on the blk1 device. All of the information is the same for the two entries except for the additional partition-related information beginning with Part1. Part1 indicates this is the first partition on physical device blk1 with an EFI signature beginning with Sig. blk3 and blk4 are additional partitions that we created when we loaded Advanced Server on this disk and created three partitions. The first group ends with the fs0 entry: fs0 : Acpi(HWP0002,100)/Pci(1|0)/Scsi(Pun0,Lun0)/ HD(Part1,Sig8E89981A-0B97-11D7-9C4C-AF87605217DA) Notice that the fs0 line also matches blk1 and blk2 from a path perspective. This is a file system readable by EFI; hence, it begins with fs. To summarize, what we see is a physical device blk1 that has on it an EFI partition blk2 and a file system fs0. All three of these are listed as separate entries in EFI. In addition, blk3 and blk4 are partitions on the same physical device blk1. The same applies to physical unit 1, which is Pun1. This is blk5. It too has two partitions at blk6 and blk7. fs1 is the file system that is on disk. This is the HP-UX disk in our configuration: blk0 : Acpi(HWP0002,0)/Pci(2|0)/Ata(Primary,Master) The blk0 device in the list is the DVD-ROM. The Ata (Advanced Technology Attachment) is the official name that American National Standards Institute group X3T10 uses for what the computer industry calls Integrated Drive Electronics (IDE) in this case, a DVD-ROM. Table 1-1 summarizes the fields that we analyzed for all the entries. Table 1-1. Description of EFI Device Mappings Field Keep in mind that some of this information, such as the Acpi data, is most often necessary to help analyze the system in the event of a problem. Keep in mind that the file system numbers may change when you remap devices or when components are added or removed, such as a DVD device. There are many more EFI commands that you may want to use in addition to map. Table 1-2 summarizes many of the most used EFI commands. We'll take a look at some of them in the next section. Table 1-2. Commonly Used EFI Commands Using EFI, you can control the boot-related setup on your Integrity server. Because of the number of operating systems you can run on Integrity servers, you'll use this interface often to coordinate and manage them. EFI Command Examples As previously mentioned, traversing the EFI menu structure and issuing commands is straightforward. When the system boots, you are given the option to interrupt the autoboot. (If you don't interrupt it, the autoboot will load the first operating system listed which, in our case, is Red Hat Advanced Server.) At system startup, the EFI Boot Manager presents the boot option menu (as shown in the following output). Here, you have five seconds to enter a selection before Red Hat Linux Advanced Server is started: EFI Boot Manager ver 1.10 [14.60] Firmware ver 1.61 [4241] Please select a boot option Red Hat Linux Advanced Server HP-UX Primary Boot: 0/1/1/0.1.0 EFI Shell [Built-in] Boot option maintenance menu Security/Password Menu Use ^ and v to change option(s). Use Enter to select an option Default boot selection will be booted in 5 seconds You can use the arrow, or the u and d keys, to move up and down respectively. We used the ‡ key (down arrow) to select EFI Shell [Built-in]. This brought us to the Shell> prompt. From there, we can issue EFI commands. Similarly, once your at the Shell> prompt, help is always available. To get a listing of the classes of commands available in Shell>, simply enter help and press Enter: Shell> help List of classes of commands: boot -- Booting options and disk-related commands configuration -- Changing and retrieving system information device -- Getting device, driver and handle information memory -- Memory related commands shell -- Basic shell navigation and customization scripts -- EFI shell-script commands Use 'help <class>' for a list of commands in that class Use 'help <command>' for full documentation of a command Use 'help -a' to display list of all commands When using Linux from a network connection from another system you may have to use the ^ and v to move up and down the menu structure respectively. You can also issue help requests for any EFI commands at any level. For example, if you want to know more about your current cpu configuration, you would start with help configuration to determine the help command for cpu configuration, and then help cpuconfig: Shell> help configuration Configuration commands: cpuconfig -- Deconfigure or reconfigure cpus date -- Displays the current date or sets the date in the system err -- Displays or changes the error level esiproc -- Make an ESI call errdump -- View/Clear logs info -- Display hardware information monarch -- View or set the monarch processor palproc -- Make a PAL call. salproc -- Make a SAL call time -- Displays the current time or sets the time of the system ver -- Displays the version information Use 'help <command>' for full documentation of a command Use 'help -a' to display list of all commands Shell> help cpuconfig cpu Specifies which cpu to configure CPUCONFIG [cpu] [on|off] on|off Specifies to configure or deconfigure a cpu Note: 1. Cpu status will not change until next boot. 2. Specifying a cpu number without a state will display configuration status. Examples: * To deconfigure CPU 0 fs0:\> cpuconfig 0 off Cpu will be deconfigured on the next boot. * To display configuration status of cpus fs0:\> cpuconfig PROCESSOR INFORMATION Proc Arch Processor CPU Speed Rev Model Family Rev State --- ---------- ---- ----- ------ ---- ------------- 0 560 MHz B1 0 31 0 Sched Deconf 1 560 MHz B1 0 31 0 Active Shell> As a result of having issued this help cpuconfig, we now know how to manipulate the CPUs in our system. The following output shows what happens when you issue the cpuconfig command with a few options: Shell> cpuconfig PROCESSOR INFORMATION Proc Arch Processor CPU Speed Rev Model Family Rev State --- ---------- ---- ----- ------ ---- ------------- 0 1000 MHz B3 0 31 0 Active 1 1000 MHz B3 0 31 0 Active Shell> cpuconfig 1 off CPU will be deconfigured on next boot. Shell> cpuconfig PROCESSOR INFORMATION Proc Arch Processor CPU Speed Rev Model Family Rev State --- ---------- ---- ----- ------ ---- ------------- 0 1000 MHz B3 0 31 0 Active 1 1000 MHz B3 0 31 0 Sched Deconf Shell> cpuconfig 1 on CPU will be configured on next boot. Shell> cpuconfig PROCESSOR INFORMATION Proc Arch Processor CPU Speed Rev Model Family Rev State --- ---------- ---- ----- ------ ---- ------------- 0 1000 MHz B3 0 31 0 Active 1 1000 MHz B3 0 31 0 Active Shell> You used cpuconfig to view the current CPU configuration showing that both processors are Active. Then you turned off processor 1 (cpuconfig 1 off). You then viewed the CPU configuration again to confirm that processor 1 had been turned off as indicated by the Sched Deconf (cpuconfig). After that, we turned processor 1 on again (cpuconfig 1 on). Finally, we confirmed that both processors are again Active (cpuconfig). As you can see, a lot of useful configuration information about your system is available using EFI. In addition to cpuconfig, you can also use info to get important system information. The following listing first shows the results of the info command. info, with no argument, lists all the differing information options available (such as all, boot, cache, and so on). After you see all the info options, you use info all to get a complete rundown on your system: Shell> info Usage: INFO [-b] [target] target : all, boot, cache, chiprev, cpu, fw, io, mem, sys, warning Shell> info all SYSTEM INFORMATION Product Name: server rx2600 Serial Number: US24758356 UUID: B831CE57-19C2-11D7-A034-3483D23C4340 PROCESSOR INFORMATION Proc Arch Processor CPU Speed Rev Model Family Rev State --- ---------- ---- ----- ------ ---- ------------- 0 1000 MHz B3 0 31 0 Active 1 1000 MHz B3 0 31 0 Active CACHE INFORMATON Instruction Data Unified CPU L1 L1 L2 L3 --- -------- -------- -------- -------- 0 16 KB 16 KB 256 KB 3072 KB 1 16 KB 16 KB 256 KB 3072 KB MEMORY INFORMATION ---- DIMM A ----- ---- DIMM B ----- DIMM Current DIMM Current --- ------ ---------- ------ ---------- 0 256MB Active 256MB Active 1 256MB Active 256MB Active 2 256MB Active 256MB Active 3 256MB Active 256MB Active 4 ---- ---- 5 ---- ---- Active Memory : 2048 MB Installed Memory : 2048 MB I/O INFORMATION BOOTABLE DEVICES Order Media Type Path ----- ---------- --------------------------------------- 1 HARDDRIVE Acpi(HWP0002,100)/Pci(1|0)/Scsi(Pun0,Lun0)/ HD(Part1,Sig8E89981A-0B97-11D7-9C4C-AF87605217DA) 2 HARDDRIVE Acpi(HWP0002,100)/Pci(1|0)/Scsi(Pun1,Lun0)/HD(Part1,Sig7C0F0000) Seg Bus Dev Fnc Vendor Device Slot # # # # ID ID # Path --- --- --- --- ------ ------ --- ----------- 00 00 01 00 0x1033 0x0035 XX Acpi(HWP0002,0)/Pci(1|0) 00 00 01 01 0x1033 0x0035 XX Acpi(HWP0002,0)/Pci(1|1) 00 00 01 02 0x1033 0x00E0 XX Acpi(HWP0002,0)/Pci(1|2) 00 00 02 00 0x1095 0x0649 XX Acpi(HWP0002,0)/Pci(2|0) 00 00 03 00 0x8086 0x1229 XX Acpi(HWP0002,0)/Pci(3|0) 00 20 01 00 0x1000 0x0030 XX Acpi(HWP0002,100)/Pci(1|0) 00 20 01 01 0x1000 0x0030 XX Acpi(HWP0002,100)/Pci(1|1) 00 20 02 00 0x14E4 0x1645 XX Acpi(HWP0002,100)/Pci(2|0) 00 40 01 00 0x1011 0x0019 03 Acpi(HWP0002,200)/Pci(1|0) 00 80 01 00 0x1011 0x0019 01 Acpi(HWP0002,400)/Pci(1|0) 00 E0 01 00 0x103C 0x1290 XX Acpi(HWP0002,700)/Pci(1|0) 00 E0 01 01 0x103C 0x1048 XX Acpi(HWP0002,700)/Pci(1|1) 00 E0 02 00 0x1002 0x5159 XX Acpi(HWP0002,700)/Pci(2|0) BOOT INFORMATION Monarch CPU: Current Preferred Monarch Monarch Possible Warnings ------- --------- ----------------- 0 0 AutoBoot: ON - Timeout is : 10 sec Boottest: LAN Address Information: LAN Address Path ----------------- ---------------------------------------- Mac(00306E39B724) Acpi(HWP0002,0)/Pci(3|0)/Mac(00306E39B724)) *Mac(00306E3927B0) Acpi(HWP0002,100)/Pci(2|0)/Mac(00306E3927B0)) FIRMWARE INFORMATION Firmware Revision: 1.61 [4241] PAL A Revision: 7.31 PAL B Revision: 7.36 SAL Spec Revision: 0.20 SAL A Revision: 2.00 SAL B Revision: 1.60 EFI Spec Revision: 1.10 EFI Intel Drop Revision: 14.60 EFI Build Revision: 1.22 POSSE Revision: 0.10 ACPI Revision: 7.00 BMC Revision 1.30 IPMI Revision: 1.00 SMBIOS Revision: 2.3.2a Management Processor Revision: E.02.07 WARNING AND STOP BOOT INFORMATION CHIP REVISION INFORMATION Chip Logical Device Chip Type ID ID Revision ------------------- ------- ------ -------- Memory Controller 0 122b 0022 Root Bridge 0 1229 0022 Host Bridge 0000 122e 0032 Host Bridge 0001 122e 0032 Host Bridge 0002 122e 0032 Host Bridge 0003 122e 0032 Host Bridge 0004 122e 0032 Host Bridge 0006 122e 0032 Host Bridge 0007 122e 0032 Other Bridge 0 0 0002 Other Bridge 0 0 0007 Baseboard MC 0 0 0130 Shell> As you can see, info all produces a great overview of the system configuration. Note that two bootable devices are listed in the order in which they appear in the main EFI screen (see the "EFI Boot Administration Main Screen" on page 5). In addition to selecting the EFI Shell, which we have been doing in our examples to this point, we also have other options. Selecting Boot option maintenance menu produces the selections shown here: EFI Boot Maintenance Manager ver 1.10 [14.60] Manage BootNext setting. Select an Operation Red Hat Linux Advanced Server HP-UX Primary Boot: 0/1/1/0.1.0 EFI Shell [Built-in] Reset BootNext Setting Save Settings to NVRAM Help Exit You have some of the same selections that you had at the main menu level, including the two installed operating systems and the EFI shell, but you also have some new selections. If you were to select Reset BootNext Setting, the following selections would be produced: EFI Boot Maintenance Manager ver 1.10 [14.60] Main Menu. Select an Operation Boot from a File Add a Boot Option Delete Boot Option(s) Change Boot Order Manage BootNext setting Set Auto Boot TimeOut Select Active Console Output Devices Select Active Console Input Devices Select Active Standard Error Devices Cold Reset Exit At this point, you could perform a variety of functions. On your system, for instance, we could use Change Boot Order to make the HP-UX partition the default boot selection instead of the Linux Advanced Server. Although you are going to perform a console-only installation of Red Hat Advanced Server in Chapter 2, the rx2600 we're working on does indeed have a built-in VGA port and graphics display attached. If you wanted to enable the graphics display, we would select Select Active Console Output Devices from the menu above. The Select the Console Output Device(s) menu appears listing all possible console devices. The first device with an * is the serial port that was selected by default. To enable the graphics as well, you must select the last console device. You know that this is the graphics device port because it does not contain Uart as part of the selection. Note that Uart devices are always serial devices. Also note that in the following example, the graphical device port is already selected (indicated by an * in front of it). We then select Save Settings to NVRAM, which saves the console device settings, and then exit:) Save Settings to NVRAM Exit This setting enables both the serial and graphics consoles. The early boot messages will go to the serial console. After the graphics server is started, the Advanced Server-related boot messages will display graphically. After the installation is complete, this would be true during the load of Advanced Server as well. The early selections would take place on the console and then the majority of the load selections would display graphically. The first four entries with PNP0501 in them are on the nine pin serial port. The next four entries with HWP0002 in them are on the three cable device that fits into the 25 pin connector. Be sure to enable a serial console on only one of the two devices since the Linux kernel expects only one serial console to be enabled. As you have seen, EFI is a useful and relatively easy to use tool. Of course, no one expects you to remember all you have seen here. If you need help, refer to "Commonly Used EFI Commands" on page 9 in Table 1-2. You can also type help at the Shell> prompt, or take a look at the tables summarizing EFI at the end of this chapter. Between them all, you'll be able to perform many useful functions using the EFI interface.
http://www.informit.com/articles/article.aspx?p=359419
CC-MAIN-2017-34
en
refinedweb
Class: QgsLayerTreeNode¶ - class qgis.core. QgsLayerTreeNode(t: QgsLayerTreeNode.NodeType, checked: bool = True)¶ Bases: PyQt5.QtCore.QObject Constructor QgsLayerTreeNode(other: QgsLayerTreeNode) This class is a base class for nodes in a layer tree. Layer tree is a hierarchical structure consisting of group and layer nodes: - group nodes are containers and may contain children (layer and group nodes) - layer nodes point to map layers, they do not contain further children Layer trees may be used for organization of layers, typically a layer tree is exposed to the user using QgsLayerTreeView widget which shows the tree and allows manipulation with the tree. Ownership of nodes: every node is owned by its parent. Therefore once node is added to a layer tree, it is the responsibility of the parent to delete it when the node is not needed anymore. Deletion of root node of a tree will delete all nodes of the tree. Signals: signals are propagated from children to parent. That means it is sufficient to connect to root node in order to get signals about updates in the whole layer tree. When adding or removing a node that contains further children (i.e. a whole subtree), the addition/removal signals are emitted only for the root node of the subtree that is being added or removed. Custom properties: Every node may have some custom properties assigned to it. This mechanism allows third parties store additional data with the nodes. The properties are used within QGIS code (whether to show layer in overview, whether the node is embedded from another project etc), but may be also used by third party plugins. Custom properties are stored also in the project file. The storage is not efficient for large amount of data. Custom properties that have already been used within QGIS: - “loading” - whether the project is being currently loaded (root node only) - “overview” - whether to show a layer in overview - “showFeatureCount” - whether to show feature counts in layer tree (vector only) - “embedded” - whether the node comes from an external project - “embedded_project” - path to the external project (embedded root node only) - “legend/…” - properties for legend appearance customization - “expandedLegendNodes” - list of layer’s legend nodes’ rules in expanded state See also also() New in version 2.4: Enums Methods Signals Attributes checkedLayers(self) → List[QgsMapLayer]¶ Returns a list of any checked layers which belong to this node or its children. New in version 3.0. children(self) → List[QgsLayerTreeNode]¶ Gets list of children of the node. Children are owned by the parent customProperty(self, key: str, defaultValue: Any = None) → Any¶ Read a custom property from layer. Properties are stored in a map and saved in project file. customPropertyChanged¶ Emitted when a custom property of a node within the tree has been changed or removed [signal] expandedChanged¶ Emitted when the collapsed/expanded state of a node within the tree has been changed [signal] insertChildrenPrivate(self, index: int, nodes: Iterable[QgsLayerTreeNode])¶ Low-level insertion of children to the node. The children must not have any parent yet! isItemVisibilityCheckedRecursive(self) → bool¶ Returns whether this node is checked and all its children. New in version 3.0. isItemVisibilityUncheckedRecursive(self) → bool¶ Returns whether this node is unchecked and all its children. New in version 3.0. isVisible(self) → bool¶ Returns whether a node is really visible (ie checked and all its ancestors checked as well) New in version 3.0. itemVisibilityChecked(self) → bool¶ Returns whether a node is checked (independently of its ancestors or children) New in version 3.0. nodeType(self) → QgsLayerTreeNode.NodeType¶ Find out about type of the node. It is usually shorter to use convenience functions from QgsLayerTree namespace for that parent(self) → QgsLayerTreeNode¶ Gets pointer to the parent. If parent is None, the node is a root node readXml(element: QDomElement, context: QgsReadWriteContext) → QgsLayerTreeNode¶ Read layer tree from XML. Returns new instance. Does not resolve textual references to layers. Call resolveReferences() afterwards to do it. readXml(element: QDomElement, project: QgsProject) -> QgsLayerTreeNode Read layer tree from XML. Returns new instance. Also resolves textual references to layers from the project (calls resolveReferences() internally). New in version 3.0. removeChildrenPrivate(self, from_: int, count: int, destroy: bool = True)¶ Low-level removal of children from the node. removeCustomProperty(self, key: str)¶ Remove a custom property from layer. Properties are stored in a map and saved in project file. removedChildren¶ Emitted when one or more nodes has been removed from a node within the tree [signal] resolveReferences(self, project: QgsProject, looseMatching: bool = False)¶ Turn textual references to layers into map layer object from project. This method should be called after readXml() If looseMatchingis Truethen a looser match will be used, where a layer will match if the name, public source, and data provider match. This can be used to match legend customization from different projects where layers will have different layer IDs. New in version 3.0. setCustomProperty(self, key: str, value: Any)¶ Sets a custom property for the node. Properties are stored in a map and saved in project file. setExpanded(self, expanded: bool)¶ Sets whether the node should be shown as expanded or collapsed in GUI setItemVisibilityChecked(self, checked: bool)¶ Check or uncheck a node (independently of its ancestors or children) New in version 3.0. setItemVisibilityCheckedParentRecursive(self, checked: bool)¶ Check or uncheck a node and all its parents New in version 3.0. setItemVisibilityCheckedRecursive(self, checked: bool)¶ Check or uncheck a node and all its children (taking into account exclusion rules) New in version 3.0. willRemoveChildren¶ Emitted when one or more nodes will be removed from a node within the tree [signal]
https://qgis.org/pyqgis/master_temp/core/QgsLayerTreeNode.html
CC-MAIN-2019-39
en
refinedweb
This post describes briefly how to use Entity Framework 6.1.1 to create a database out of a Model defined in code. The goal is to have a summary with all the steps needed with the minimum overhead of information. I will use a Console Application. Pre Requisites Install Entity Framework from Nuget. By default the created App.config will create the database locally. Define Entities Each entity will create a Table in the database. Entities are defined with a class that must contains a primary key. This primary key must be the class name + Id or a property with the annotation [Key]. Let-s create some sample entities: public class Food { [Key] // Primary Key, when the PK is the name of the class + id the annotation is not needed public int FoodId { get; set; } public string Name { get; set; } public DateTime ExpireDate { get; set; } public bool StillInFridge { get; set; } // Define a relationship Many to Many between Food and Recipe, an intermediate table will be created public virtual List<Recipe> UsedForRecipes { get; set; } } public class Recipe { // Primary Key public int RecipeId { get; set; } // EF needs the virtual attribute to enable Lazy loading public virtual List<Food> Ingredients { get; set; } } DbContext The context is the object that will allows to make queries to our objects. The context must extends the class DbContext and define a DbSet for each table of the database. The method OnModelCreating can be used to customize the database using the Fluent AP. E.g. to give a column a name different to the name defined in the Entity property. public class FoodContext : DbContext { public DbSet<Food> Foods { get; set; } public DbSet<Recipe> Recipes { get; set; } protected override void OnModelCreating(DbModelBuilder modelBuilder) { // This method can be used if we need a column with a name different than the // property modelBuilder.Entity<Food>() .Property(recipe => recipe.StillInFridge) .HasColumnName("InFridge"); } } Test it! We have everything already! Let’s create an entry in our database… We just need to add an object to our DbContext and SaveChanges: class Program { static void Main() { using (FoodContext context = new FoodContext()) { context.Foods.Add(new Food { ExpireDate = new DateTime(2014, 7, 30), Name = "Eggs" }); // Save changes to database context.SaveChanges(); } } } To see this database within Visual Studio, open the SQL Server Object Explorer View and add a new local server with the name “(localdb)\v11.0”. This is the name of the database that EF will create by default. If everything was ok now you should be able to see the create database using the SQL Server Object Explorer View: References One thought on “Entity Framework 6 – Code First”
https://softwarejuancarlos.com/2014/07/25/entity-framework-6-code-first/
CC-MAIN-2019-39
en
refinedweb
Now, we will see how to use the LSTM network to generate Zayn Malik's song lyrics. The dataset can be downloaded from here () ,which has a collection of Zayn's song lyrics. First, we will import the necessary libraries: import tensorflow as tfimport numpy as np Now, we will read our file containing the song lyrics: with open("Zayn_Lyrics.txt","r") as f: data=f.read() data=data.replace('\n','') data = data.lower() Let's see what we have in our data: data[:50]"now i'm on the edge can't find my way it's inside " Then, we store all the characters in the all_chars ...
https://www.oreilly.com/library/view/hands-on-reinforcement-learning/9781788836524/9084fc18-8040-4528-b939-3455fa652915.xhtml
CC-MAIN-2019-39
en
refinedweb
First issue: the new "spinlock" appears to use CAS more than necessary, I think it can be simplified to this: Index: subversion/libsvn_fs_base/bdb/env.c =================================================================== --- subversion/libsvn_fs_base/bdb/env.c (revision 18266) +++ subversion/libsvn_fs_base/bdb/env.c (working copy) @@ -362,17 +362,13 @@ if (apr_err) { /* Tell other threads that the initialisation failed. */ - svn__atomic_cas(&bdb_cache_state, - BDB_CACHE_INIT_FAILED, - BDB_CACHE_START_INIT); + svn__atomic_set(&bdb_cache_state, BDB_CACHE_INIT_FAILED); return svn_error_create(apr_err, NULL, "Couldn't initialize the cache of" " Berkeley DB environment descriptors"); } - svn__atomic_cas(&bdb_cache_state, - BDB_CACHE_INITIALIZED, - BDB_CACHE_START_INIT); + svn__atomic_set(&bdb_cache_state, BDB_CACHE_INITIALIZED); #endif /* APR_HAS_THREADS */ } #if APR_HAS_THREADS @@ -387,9 +383,7 @@ " Berkeley DB environment descriptors"); apr_sleep(APR_USEC_PER_SEC / 1000); - cache_state = svn__atomic_cas(&bdb_cache_state, - BDB_CACHE_UNINITIALIZED, - BDB_CACHE_UNINITIALIZED); + cache_state = svn__atomic_read(&bdb_cache_state); } #endif /* APR_HAS_THREADS */ Second issue: bdb_cache_state is declared as "volatile svn__atomic_t" and always accessed through the svn__atomic_xxx functions, and that combination is obviously intended to provide "thread safety". I guess it's reasonable in practice :) However, the svn__atomic_xxx functions are also used to access the the panic flag in bdb_env_t and yet that flag is declared as plain svn_boolean_t, that's confusing. If the svn__atomic_xxx calls are necessary then I think the panic flag should be "volatile svn__atomic_t", however I don't really see why the svn__atomic_xxx calls are needed or even why the flag is needed. In bdb_cache_get the flag is redundant, it serves to avoid a BDB get_flags() call but only when the flag is TRUE and there is little point in such micro-optimisations in the "recovery needed" path. The other uses of the panic flag are in the code used to close the database, again it seems unnecessary to avoid a get_flags() call in that code. I think we should remove the panic flag and substitute get_flags() calls in the code to close the database. -- Philip Martin --------------------------------------------------------------------- To unsubscribe, e-mail: dev-unsubscribe@subversion.tigris.org For additional commands, e-mail: dev-help@subversion.tigris.org This is an archived mail posted to the Subversion Dev mailing list.
https://svn.haxx.se/dev/archive-2006-01/0925.shtml
CC-MAIN-2019-39
en
refinedweb
import "gopkg.in/mafredri/cdp.v0/rpcc" Package rpcc provides an RPC client connection with support for the JSON-RPC 2.0 specification, not including Batch requests. Server side RPC notifications are also supported. Dial connects to an RPC server listening on a websocket using the gorilla/websocket package. conn, err := rpcc.Dial("ws://127.0.0.1:9999/f39a3624-e972-4a77-8a5f-6f8c42ef5129") // ... The user must close the connection when finnished with it: conn, err := rpcc.Dial("ws://127.0.0.1:9999/f39a3624-e972-4a77-8a5f-6f8c42ef5129") if err != nil { // Handle error. } defer conn.Close() // ... A custom dialer can be used to change the websocket lib or communicate over other protocols. netDial := func(ctx context.Context, addr string) (io.ReadWriteCloser, error) { conn, err := net.Dial("tcp", addr) if err != nil { // Handle error. } // Wrap connection to handle writing JSON. // ... return conn, nil } conn, err := rpcc.Dial("127.0.0.1:9999", rpcc.WithDialer(netDial)) // ... Send a request using Invoke: ctx, cancel := context.WithCancel(context.Background()) defer cancel() err := rpcc.Invoke(ctx, "Domain.method", args, reply, conn) // ... Receive a notification using NewStream: stream, err := rpcc.NewStream(ctx, "Domain.event", conn) if err != nil { // Handle error. } err = stream.RecvMsg(&reply) if err != nil { // Handle error. } The stream should be closed when it is no longer used to avoid leaking memory: stream, err := rpcc.NewStream(ctx, "Domain.event", conn) if err != nil { // Handle error. } defer stream.Close() When order is important, two streams can be synchronized with Sync: err := rpcc.Sync(stream1, stream2) if err != nil { // Handle error. } call.go conn.go doc.go socket.go stream.go stream_sync.go var ( // ErrConnClosing indicates that the operation is illegal because // the connection is closing. ErrConnClosing = errors.New("rpcc: the connection is closing") ) var ( // ErrStreamClosing indicates that the operation is illegal because // the stream is closing and there are no pending messages. ErrStreamClosing = errors.New("rpcc: the stream is closing") ) Invoke sends an RPC request and blocks until the response is received. This function is called by generated code but can be used to issue requests manually. Sync takes two or more streams and sets them into synchronous operation, relative to each other. This operation cannot be undone. If an error is returned this function is no-op and the streams will continue in asynchronous operation. All streams must belong to the same Conn and they must not be closed. Passing multiple streams of the same method to Sync is not supported and will return an error. A stream that is closed is removed and has no further affect on the streams that were synchronized. When two streams, A and B, are in sync they will both receive messages in the order that they arrived on Conn. If a message for both A and B arrives, in that order, it will not be possible to receive the message from B before the message from A has been received. type Codec interface { // WriteRequest encodes and writes the request onto the // underlying connection. Request is re-used between writes and // references to it should not be kept. WriteRequest(*Request) error // ReadResponse decodes a response from the underlying // connection. Response is re-used between reads and references // to it should not be kept. ReadResponse(*Response) error } Codec is used by recv and dispatcher to send and receive RPC communication. Conn represents an active RPC connection. func Dial(target string, opts ...DialOption) (*Conn, error) Dial connects to target and returns an active connection. The target should be a WebSocket URL, "ws://" for HTTP and "wss://" for HTTPS. Example: "ws://localhost:9222/target". DialContext is like Dial, with a caller provided Context. A nil Context will panic. Close closes the connection. Context returns the underlying context for this connection. SetCompressionLevel sets the flate compressions level for writes. Valid level range is [-2, 9]. Returns error if compression is not enabled for Conn. See package compress/flate for a description of compression levels. DialOption represents a dial option passed to Dial. func WithCodec(f func(conn io.ReadWriter) Codec) DialOption WithCodec returns a DialOption that sets the codec responsible for encoding and decoding requests and responses onto the connection. This option overrides the default json codec. func WithCompression() DialOption WithCompression returns a DialOption that enables compression for the underlying websocket connection. Use SetCompressionLevel on Conn to change the default compression level for subsequent writes. func WithDialer(f func(ctx context.Context, addr string) (io.ReadWriteCloser, error)) DialOption WithDialer returns a DialOption that sets the dialer for the underlying connection. It can be used to replace the WebSocket library used by this package or to communicate over a different protocol. This option overrides the default WebSocket dialer and all of: WithCompression, WithTLSConfig and WithWriteBufferSize become no-op. func WithTLSClientConfig(c *tls.Config) DialOption WithTLSClientConfig specifies the TLS configuration to use with tls.Client. func WithWriteBufferSize(n int) DialOption WithWriteBufferSize returns a DialOption that sets the size of the write buffer for the underlying websocket connection. Messages larger than this size are fragmented according to the websocket specification. The maximum buffer size for recent versions of Chrome is 104857586 (~100MB), for older versions a maximum of 1048562 (~1MB) can be used. This is because Chrome does not support websocket fragmentation. type Request struct { ID uint64 `json:"id"` // ID chosen by client. Method string `json:"method"` // Method invoked on remote. Args interface{} `json:"params,omitempty"` // Method parameters, if any. } Request represents an RPC request to be sent to the server. type Response struct { // RPC response to a Request. ID uint64 `json:"id"` // Echoes that of the Request. Result json.RawMessage `json:"result"` // Result from invokation, if any. Error *ResponseError `json:"error"` // Error, if any. // RPC notification from remote. Method string `json:"method"` // Method invokation requested by remote. Args json.RawMessage `json:"params"` // Method parameters, if any. } Response represents an RPC response or notification sent by the server. type ResponseError struct { Code int64 `json:"code"` Message string `json:"message"` Data string `json:"data"` } ResponseError represents the RPC response error sent by the server. func (e *ResponseError) Error() string type Stream interface { // Ready returns a channel that is closed when a message is // ready to be received via RecvMsg. Ready indicates that a call // to RecvMsg is non-blocking. // // Ready must not be called concurrently while relying on the // non-blocking behavior of RecvMsg. In this case both // goroutines will be competing for the same message and one // will block until the next message is available. // // Calling Close on the Stream will close the Ready channel // indefinitely, pending messages may still be received via // RecvMsg. // // Ready is provided for use in select statements. Ready() <-chan struct{} // RecvMsg unmarshals pending messages onto m. Blocks until the // next message is received, context is canceled or stream is // closed. // // When m is a *[]byte the message will not be decoded and the // raw bytes are copied into m. RecvMsg(m interface{}) error // Close closes the stream and no new messages will be received. // RecvMsg will return ErrStreamClosing once all pending messages // have been received. Close() error } Stream represents a stream of notifications for a certain method. NewStream creates a new stream that listens to notifications from the RPC server. This function is called by generated code. ☞ Chrome does not support websocket fragmentation (continuation messages) or messages that exceed 1MB in size. This limit was bumped in more recent versions of Chrome which can receive messages up to 100MB in size. See and. Package rpcc imports 10 packages (graph). Updated 2019-07-28. Refresh now. Tools for package owners.
https://godoc.org/gopkg.in/mafredri/cdp.v0/rpcc
CC-MAIN-2019-39
en
refinedweb
id='19' var dataSource = [ { id: 'table1-1', desc: 'Table 1 Foo', realId: 1 }, { id: 'table1-2', desc: 'Table 1 Bar', realId: 2 }, { id: 'table2-1', desc: 'Table 2 Foo', realId: 1 }, { id: 'table2-2', desc: 'Table 2 Bar', realId: 2 }, ... ]; Jeremy Falcon wrote:with IDs like "table-id" to namespace them in essence. id: "sx" + st.Id item.id.substring(2) Marc Clifton wrote:My original rant took longer to write than the refactor the code .
https://www.codeproject.com/Lounge.aspx?msg=5464564
CC-MAIN-2019-39
en
refinedweb
- Dan Williams authored commit a95c90f1 upstream. The last step before devm_memremap_pages() returns success is to allocate a release action, devm_memremap_pages_release(), to tear the entire setup down. However, the result from devm_add_action() is not checked. Checking the error from devm_add_action() is not enough. The api currently relies on the fact that the percpu_ref it is using is killed by the time the devm_memremap_pages_release() is run. Rather than continue this awkward situation, offload the responsibility of killing the percpu_ref to devm_memremap_pages_release() directly. This allows devm_memremap_pages() to do the right thing relative to init failures and shutdown. Without this change we could fail to register the teardown of devm_memremap_pages(). The likelihood of hitting this failure is tiny as small memory allocations almost always succeed. However, the impact of the failure is large given any future reconfiguration, or disable/enable, of an nvdimm namespace will fail forever as subsequent calls to devm_memremap_pages() will fail to setup the pgmap_radix since there will be stale entries for the physical address range. An argument could be made to require that the ->kill() operation be set in the @pgmap arg rather than passed in separately. However, it helps code readability, tracking the lifetime of a given instance, to be able to grep the kill routine directly at the devm_memremap_pages() call site. Link:: Dan Williams <dan.j.williams@intel.com> Fixes: e8d51348 ("memremap: change devm_memremap_pages interface...") Reviewed-by: "Jérôme Glisse" <jglisse@redhat.com> Reported-by: Logan Gunthorpe <logang@deltatee.com> Reviewed-by: Logan Gunthorpe <logang@deltatee.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Cc: Balbir Singh <bsingharora@gmail.com> Cc: Michal Hocko <mhocko@suse>6e6a8b24
https://gitlab.com/post-factum/pf-kernel/blob/4c8e581506abda82392b619b6f38afd9f074eff8/include/linux/memremap.h
CC-MAIN-2019-39
en
refinedweb
This page describes how to schedule recurring queries in BigQuery. Overview You can schedule queries to run on a recurring basis. Scheduled queries must be written in standard SQL, which can include Data Definition Language (DDL) and Data Manipulation Language (DML) statements. The query string and destination table can be parameterized, allowing you to organize query results by date and time. Before you begin Before you create a scheduled query: Scheduled queries use features of the BigQuery Data Transfer Service. Verify that you have completed all actions required in Enabling the BigQuery Data Transfer Service. If you are creating the scheduled query by using the classic BigQuery web UI, allow pop-ups in your browser from bigquery.cloud.google.com, so that you can view the permissions window. You must allow the BigQuery Data Transfer Service permission to manage your scheduled query. Required permissions Before scheduling a query:, see Predefined roles and permissions. Configuration options Query string The query string must be valid and written in standard SQL. Each run of a scheduled query can receive the following query parameters. To manually test a query string with @run_time and @run_date parameters before scheduling a query, use the command line interface. Available parameters Example The @run_time parameter is part of the query string in this example, which queries a public dataset named hacker_news.stories. SELECT @run_time AS time, title, author, text FROM `bigquery-public-data.hacker_news.stories` LIMIT 1000 Destination table When you set up the scheduled query, if the destination table for your results doesn't exist, BigQuery attempts to create the table for you. If you are using a DDL or DML query: - In the GCP Console, choose the Processing location or region. Processing location is required for DDL or DML queries that create the destination table. - In the classic BigQuery web UI, leave Destination table blank. If the target table does exist, the destination table's schema might be updated based on the query results, if you add columns to the schema ( ALLOW_FIELD_ADDITION) or relax a column's mode from REQUIRED to NULLABLE ( ALLOW_FIELD_RELAXATION). In all other cases, table schema changes between runs cause the scheduled query to fail. Queries can reference tables from different projects and different datasets. When configuring your scheduled query, you don't need to include the destination dataset in the table name. You specify the destination dataset separately. Write preference The write preference you select determines how your query results are written to an existing destination table. WRITE_TRUNCATE: If the table exists, BigQuery overwrites the table data. WRITE_APPEND: If the table exists, BigQuery appends the data to the table. If you are using a DDL or DML query: - In the GCP Console, the write preference option will not appear. - In the classic BigQuery web UI, leave the Write preference blank. Creating, truncating, or appending a destination table only happens if BigQuery is able to successfully complete the query. Creation, truncation, or append actions occur as one atomic update upon job completion. Clustering Scheduled queries can create clustering on new tables only, when the table is made with a DDL CREATE TABLE AS SELECT statement. See Creating a clustered table from a query result on the Using Data Definition Language statements page. Partitioning options Scheduled queries can create partitioned or non-partitioned destination tables. Partitioning is not available in the GCP Console, but is available in the classic BigQuery web UI, CLI and API setup methods. If you are using a DDL or DML query with partitioning, leave the Partitioning field blank. There are two types of table partitioning in BigQuery: - Tables partitioned by ingestion time: Tables partitioned based on the scheduled query's run time. - Tables partitioned on a column: Tables that are partitioned based on a TIMESTAMPor DATEcolumn. For tables partitioned on a column: - In the classic BigQuery web UI, if the destination table will be partitioned on a column, you'll specify the column name in the Partitioning field when Setting up a scheduled query. For ingestion-time partitioned tables and non-partitioned tables, leave the Partitioning field blank. For ingestion-time partitioned tables: - Indicate the date partitioning in the destination table's name. See the table name templating syntax, explained below. Partitioning examples - Table with no partitioning - Destination table - mytable - Partitioning field - leave blank - Ingestion-time partitioned table - Destination table - mytable$YYYYMMDD - Partitioning field - leave blank - Column-partitioned table - Destination table - mytable - Partitioning field - name of the TIMESTAMPor DATEcolumn used to partition the table Available parameters When setting up the scheduled query, you can specify how you want to partition the destination table with runtime parameters. Templating system Scheduled queries support runtime parameters in the destination table name with. Setting up a scheduled query Console Open the BigQuery web UI in the GCP Console. Run the query that you're interested in. When you are satisfied with your results, click Schedule query and Create new scheduled query. The scheduled query options open in the New scheduled query pane. On the New scheduled query pane: - For Name for the scheduled query, enter a name such as My scheduled query. The scheduled query name can be any value that allows you to easily identify the scheduled query if you need to modify it later. (Optional) For Schedule options, you can leave the default value of Daily (every 24 hours, based on creation time), or click Schedule start time to change the time. You can also change the interval to Weekly, Monthly, or Custom. When selecting Custom, a Cron-like time specification is expected, for example every 3 hours. The shortest allowed period is 15 minutes. See the schedulefield under TransferConfigfor more valid API values. For DDL/DML queries, you'll choose the Processing location or region. For a standard SQL SELECTquery, provide information about the destination dataset. - For Dataset name, choose the appropriate destination dataset. - For Table name, enter the name of your destination table. - For a DDL or DML query, this option is not shown. - For Destination table write preference, choose either WRITE_TRUNCATEto overwrite the destination table or WRITE_APPENDto append data to the table. - For a DDL or DML query, this option is not shown. (Optional) For Advanced options, if you use customer-managed encryption keys, you can select Customer-managed key here. A list of your available CMEKs will appear for you to choose from. For all queries: - (Optional) Check Send email notifications to allow email notifications of transfer run failures. Click Schedule. To view the status of your scheduled queries, click Scheduled queries in the navigation pane. Refresh the page to see the updated status of your scheduled queries. Click one to get more details about that scheduled query. Classic UI Go to the classic BigQuery web UI. Go to the classic BigQuery web UI Run the query that you're interested in. When you are satisfied with your results, click Schedule Query. The scheduled query options open underneath the query box. On the New Scheduled Query page: - For Destination dataset, choose the appropriate dataset. - For Display name, enter a name for the scheduled query such as My scheduled query. The scheduled query name can be any value that allows you to easily identify the scheduled query if you need to modify it later. - For Destination table: - For a standard SQL query, enter the name of your destination table. - For a DDL or DML query, leave this field blank. - For Write preference: - For a standard SQL query, choose either WRITE_TRUNCATEto overwrite the destination table or WRITE_APPENDto append data to the table. - For a DDL or DML query, choose Unspecified. (Optional) For Partitioning field: - For a standard SQL query, if the destination table is a column-partitioned table, enter the column name where the table should be partitioned. Leave this field blank for ingestion-time partitioned tables and non-partitioned tables. - For a DDL or DML query, leave this field blank. (Optional) For Destination table KMS key, if you use customer-managed encryption keys, you can enter a customer-managed encryption key here. 3 hours. The shortest allowed period is fifteen minutes. See the schedulefield under TransferConfigfor more valid API values. (Optional) Expand the Advanced section and configure notifications. - For Cloud Pub/Sub topic, enter your Cloud Pub/Sub topic name, for example, projects/myproject/topics/mytopic. Check Send email notifications to allow email notifications of transfer run failures. Click Add. To view the status of your scheduled queries, click Scheduled queries in the navigation pane. Refresh the page to see the updated status of your scheduled queries. Click one to get more details about that scheduled query. CLI Option 1:is an alternative way to name the target dataset for the query results, when used with DDL/DML queries. - Use either --destination_tableor --target_dataset, but not both. - interval, when used with bq querymakes a query a recurring scheduled query. A schedule for how often the query should run is required. Examples: --schedule='every 24 hours' --schedule='every 3 hours' Optional flags: --project_idis your project ID. If --project_idisn't specified, the default project is used. --replacewill truncate the destination table and write new results with every run of the scheduled query. --append_tablewill append results to the destination table. For example, the following command creates a scheduled query named My Scheduled Query using the simple query SELECT 1 from mydataset.test. The destination table is mytable in the dataset mydataset. The scheduled query is created in the default project: bq query \ --use_legacy_sql=false \ --destination_table=mydataset.mytable \ --display_name='My Scheduled Query' \ --replace=true \ 'SELECT 1 FROM mydataset.test' Option 2: Use the bq mk command. Scheduled Queries are a kind of transfer. To schedule a query, you can use the BigQuery Data Transfer Service CLI to make a transfer configuration. Queries must be in StandardSQL dialect to be scheduled. Enter the bq mk command and supply the transfer creation flag --transfer_config. The following flags are also required: --data_source --target_dataset(Optional for DDL/DML queries.) --display_name --params Optional flags: --project_idis your project ID. If --project_idisn't specified, the default project is used. --scheduleis how often you want the query to run. If --scheduleisn't specified, the default is 'every 24 hours' based on creation time. For DDL/DML queries, you can also supply the --locationflag to specify a particular region for processing. If --locationisn't specified, the global Google Cloud Platform location is used. bq mk \ --transfer_config \ --project_id=project_id \ --target_dataset=dataset \ --display_name=name \ --params='parameters' \ --data_source=data_source Where: - dataset is the target dataset for the transfer configuration. - This parameter is optional for DDL/DML queries. It is required for all other queries. - name is the display name for the transfer configuration. The display name can be any value that allows you to easily identify the scheduled query (transfer) if you need to modify it later. - parameters contains the parameters for the created transfer configuration in JSON format. For example: --params='{"param":"param_value"}'. For a scheduled query, you must supply the queryparameter. - The destination_table_name_templateparameter is the name of your destination table. - This parameter is optional for DDL/DML queries. It is required for all other queries. - For the write_dispositionparameter, you can choose WRITE_TRUNCATEto truncate (overwrite) the destination table or WRITE_APPENDto append the query results to the destination table. - This parameter is optional for DDL/DML queries. It is required for all other queries. - (Optional) The destination_table_kms_keyparameter is for customer-managed encryption keys. - data_source is the data source — scheduled_query. For example, the following command creates a scheduled query transfer configuration named My Scheduled Query using the simple query SELECT 1 from mydataset.test. The destination table mytable is truncated for every write, and the target dataset is mydataset. The scheduled query is created in the default project: bq mk \ --transfer_config \ --target_dataset=mydataset \ --display_name='My Scheduled Query' \ --params='{"query":"SELECT 1 from mydataset.test","destination_table_name_template":"mytable","write_disposition":"WRITE_TRUNCATE"}' \ --data_source=scheduled_query The first time you run the command,. Python Before trying this sample, follow the Python setup instructions in the BigQuery Quickstart Using Client Libraries . For more information, see the BigQuery Python API reference documentation . from google.cloud import bigquery_datatransfer_v1 import google.protobuf.json_format client = bigquery_datatransfer_v1.DataTransferServiceClient() # TODO(developer): Set the project_id to the project that contains the # destination dataset. # project_id = "your-project-id" # TODO(developer): Set the destination dataset. The authorized user must # have owner permissions on the dataset. # dataset_id = "your_dataset_id" # TODO(developer): The first time you run this sample, set the # authorization code to a value from the URL: # # # authorization_code = "_4/ABCD-EFGHIJKLMNOP-QRSTUVWXYZ" # # You can use an empty string for authorization_code in subsequent runs of # this code sample with the same credentials. # # authorization_code = "" # Use standard SQL syntax for the query. query_string = """ SELECT CURRENT_TIMESTAMP() as current_time, @run_time as intended_run_time, @run_date as intended_run_date, 17 as some_integer """ parent = client.project_path(project_id) transfer_config = google.protobuf.json_format.ParseDict( { "destination_dataset_id": dataset_id, "display_name": "Your Scheduled Query Name", "data_source_id": "scheduled_query", "params": { "query": query_string, "destination_table_name_template": "your_table_{run_date}", "write_disposition": "WRITE_TRUNCATE", "partitioning_field": "", }, "schedule": "every 24 hours", }, bigquery_datatransfer_v1.types.TransferConfig(), ) response = client.create_transfer_config( parent, transfer_config, authorization_code=authorization_code ) print("Created scheduled query '{}'".format(response.name)) Setting up a manual run on historical dates In addition to scheduling a query to run in the future, you can also trigger immediate runs manually. Triggering an immediate run would be necessary if your query uses the run_date parameter, and there were issues during a prior run. For example, every day at 09:00 you query a source table for rows that match the current date. However, you find that data wasn't added to the source table for the last three days. In this situation, you can set the query to run on historical data within a date range that you specify. Your query is run using combinations of run_date and run-time that correspond to the dates you configured in your scheduled query. After setting up a scheduled query, here's how you can run the query by using a historical date range: Console After clicking Schedule to save your scheduled query, you can click the Scheduled queries button to see the list of currently scheduled queries. Click any display name to see the query schedule's details. At the top right of the page, click Schedule backfill to specify a historical date range. The run times chosen are all within your selected range, including the first date and excluding the last date. Example 1 Your scheduled query is set to run every day 09:00 Pacific Time. You're missing data from Jan 1, Jan 2, and Jan 3. Choose the following historic date range: Start Time = 1/1/19 End Time = 1/4/19 Your query runs using run_date and run_time parameters that correspond to the following times: - 1/1/19 09:00 Pacific Time - 1/2/19 09:00 Pacific Time - 1/3/19 09:00 Pacific Time Example 2 Your scheduled query is set to run every day 23:00 Pacific Time. You're missing data from Jan 1, Jan 2, and Jan 3. Choose the following historic date ranges (later dates are chosen because UTC has a different date at 23:00 Pacific Time): Start Time = 1/2/19 End Time = 1/5/19 Your query runs using run_date and run_time parameters that correspond to the following times: - 1/2/19 09:00 UTC, or 1/1/2019 23:00 Pacific Time - 1/3/19 09:00 UTC, or 1/2/2019 23:00 Pacific Time - 1/4/19 09:00 UTC, or 1/3/2019 23:00 Pacific Time After setting up manual runs, refresh the page to see them in the list of runs. Classic UI After clicking Add to save your scheduled query, you'll see the details of your scheduled query displayed. Below the details, click the Start Manual Runs button to specify a historical date range. You can further refine the date range to have a start and end time, or leave the time fields as 00:00:00. Example 1 If your scheduled query is set to run every day 14:00, and you apply the following historic date range: Start Time = 2/21/2018 00:00:00 AM End Time = 2/24/2018 00:00:00 AM Your query runs at the following times: - 2/21/2018 14:00:00 - 2/22/2018 14:00:00 - 2/23/2018 14:00:00 Example 2 If your scheduled query is set to run every fri at 01:05 and you apply the following historic date range: Start Time = 2/1/2018 00:00:00(a Thursday) End Time = 2/24/2018 00:00:00 AM (also a Thursday) Your query runs at the following times: - 2/2/2018 01:05:00 - 2/9/2018 01:05:00 CLI To manually run the query on a historical date range: Enter the bq mk command and supply the transfer run flag --transfer_run. The following flags are also required: --start_time --end_time bq mk \ --transfer_run \ --start_time='start_time' \ --end_time='end_time' \ resource_name Where: - start_time and end_time are timestamps that end in Z or contain a valid time zone offset. Examples: - 2017-08-19T12:11:35.00Z - 2017-05-25T00:00:00+00:00 - resource_name is the scheduled query's (or transfer's) Resource Name. The Resource Name is also known as the transfer configuration. For example, the following command schedules a backfill for scheduled query resource (or transfer configuration): projects/myproject/locations/us/transferConfigs/1234a123-1234-1a23-1be9-12ab3c456de7. bq mk \ --transfer_run \ --start_time 2017-05-25T00:00:00Z \ --end_time 2017-05-25T00:00:00Z \ projects/myproject/locations/us/transferConfigs/1234a123-1234-1a23-1be9-12ab3c456de7 API Use the projects.locations.transferConfigs.scheduleRun method and supply a path of the TransferConfig resource. Quotas A scheduled query is executed with the creator's credentials and project, as if you were executing the query yourself. A scheduled query is subject to the same BigQuery Quotas and limits as manual queries. Scheduled queries are priced the same as manual BigQuery queries. Known issues and limitations Regions Cross-region queries are not supported, and the destination table for your scheduled query must be in the same region as the data being queried. See Dataset locations for more information about regions and multi-regions. Google Drive You can query Google Drive data in a scheduled query. If you're scheduling an existing query, you might need to click "Update Credentials" in the scheduled query details screen. Allow 10—20 minutes for the change to take effect. You might need to clear your browser's cache. Credentials are automatically up to date for new scheduled queries.
https://cloud.google.com/bigquery/docs/scheduling-queries
CC-MAIN-2019-39
en
refinedweb
Kumba <kumba@gentoo.org> writes: > Richard Sandiford wrote: >>; >> } >> > > Done. This is referenced in the first patch > (gcc-4.4-trunk-fixr10k-z1.patch). The second patch > (gcc-4.4-trunk-fixr10k-z2.patch) contains a form whereby I just > re-declared mips_branch_likely and set it once-per template. More on > this below. The first version looks good, except for a couple of formatting issues. The second version doesn't work: those static mips_branch_likely variables are local to insn-output.c, so mips.c:print_operand will never see them. > Yeah, ~ is one of the last characters that doesn't seem to be > completely used up and looks good. That still leaves !, &, {, }, and > a comma. But those could look confusing with surrounding characters. Ah, well, that's not too bad. I wouldn't have many qualms about using %! and %& if need be. > So about the two patches. Both of these appears to accomplish the > job, and allow gcc to begin compiling, but at one point about two > hours into the build, genautomata will segfault when attempting to > output tmp-automata.c. I don't know which stage this is in...it's one > of the early stages, and it's using xgcc at this point. > > I tried running gdb on that particular invocation of genautomata, but > it there's not much data I could gather, since the -O2 optimization > removes some of the useful debugging info. It segfaults at an > fprintf() invocation, and tmp-automata.c is 0 bytes. > > Here's the last few lines I get: > > /usr/cvsroot/gcc/host-mips-unknown-linux-gnu/prev-gcc/xgcc > -B/usr/cvsroot/gcc/host-mips-unknown-linux-gnu/prev-gcc/ > -B/usr//mips-unknown-linux-gnu/bin/ -g -O2 -DIN_GCC -W -Wall > -Wwrite-strings > -Wstrict-prototypes -Wmissing-prototypes -Wcast-qual -Wold-style-definition > -Wc++-compat -Wmissing-format-attribute -pedantic -Wno-long-long > -Wno-variadic-macros -Wno-overlength-strings -DHAVE_CONFIG_H > -DGENERATOR_FILE > -o build/genautomata \ > build/genautomata.o build/rtl.o build/read-rtl.o > build/ggc-none.o > build/vec.o build/min-insn-modes.o build/gensupport.o build/print-rtl.o > build/errors.o ../../host-mips-unknown-linux-gnu/libiberty/libiberty.a -lm > build/genautomata ../.././gcc/config/mips/mips.md \ > insn-conditions.md > tmp-automata.c > /bin/sh: line 1: 28620 Segmentation fault build/genautomata > ../.././gcc/config/mips/mips.md insn-conditions.md > tmp-automata.c > make[3]: *** [s-automata] Error 139 > make[3]: Leaving directory `/usr/cvsroot/gcc/host-mips-unknown-linux-gnu/gcc' > make[2]: *** [all-stage2-gcc] Error 2 > make[2]: Leaving directory `/usr/cvsroot/gcc' > make[1]: *** [stage2-bubble] Error 2 > make[1]: Leaving directory `/usr/cvsroot/gcc' > make: *** [all] Error 2 > > > I thought at first, it was the use of the helper function, so I backed > that out and went with the form seen in the second patch, but that > didn't help things either. So I'm assuming this is related to the > changes to the atomic macro templates, and xgcc must have something > inside itself that's a little wonky. Not real sure how to approach > this. > > However, there's more. If I rebuild genautomata by hand (using args > from the command line), and I drop the optimization down a notch to > -O1, then I can run the command to create tmp-automata.c, and it'll > complete successfully (and the output in that file looks good). So > I'm a bit baffled. I assume the issue is caused by my patch, unless > I'm running into a regression in trunk that my patch simply exposes. > > Is there another way to maybe extract some info on what's causing this? For avoidance of doubt, I suppose the first thing to ask is: do you get the segfault with the same checkout after you revert your patch? It could certainly be transient breakage on trunk, like you say. > @@ -13824,6 +13841,17 @@ mips_override_options (void) > warning (0, "the %qs architecture does not support branch-likely" > " instructions", mips_arch_info->name); > > + /* Check to see whether branch-likely instructions are not available > + when using -mfix-r10000. This will be true if: > + 1. -mno-branch-likely was passed. > + 2. The selected ISA does not support branch-likely and > + the command line does not include -mbranch-likely */ Nitlet, but "to see" is redundant. Maybe: /* Make sure that branch-likely instructions available when using -mfix-r10000. The instructions are not available if either: 1. -mno-branch-likely was passed. 2. The selected ISA does not support branch-likely and the command line does not include -mbranch-likely. */ > + if ((TARGET_FIX_R10000 > + && (target_flags_explicit & MASK_BRANCHLIKELY) == 0) > + ? !ISA_HAS_BRANCHLIKELY > + ? !TARGET_BRANCHLIKELY : false : false) Should just be: if (TARGET_FIX_R10000 && ((target_flags_explicit & MASK_BRANCHLIKELY) == 0 ? !ISA_HAS_BRANCHLIKELY : !TARGET_BRANCHLIKEL)) sorry ("branch-likely instructions not available"); And the check should go after... > @@ -13971,6 +13999,12 @@ mips_override_options (void) > && mips_matching_cpu_name_p (mips_arch_info->name, "r4400")) > target_flags |= MASK_FIX_R4400; > > + /* Default to working around R10000 errata only if the processor > + was selected explicitly. */ > + if ((target_flags_explicit & MASK_FIX_R10000) == 0 > + && mips_matching_cpu_name_p (mips_arch_info->name, "r10000")) > + target_flags |= MASK_FIX_R10000; > + > /* Save base state of options. */ > mips_base_target_flags = target_flags; > mips_base_delayed_branch = flag_delayed_branch; ...this. .) > @@ -76,9 +76,9 @@ > "GENERATE_LL_SC" > { > if (which_alternative == 0) > - return MIPS_COMPARE_AND_SWAP_12 (MIPS_COMPARE_AND_SWAP_12_NONZERO_OP); > + return mips_output_sync_insn (MIPS_COMPARE_AND_SWAP_12 > (MIPS_COMPARE_AND_SWAP_12_NONZERO_OP)); > else > - return MIPS_COMPARE_AND_SWAP_12 (MIPS_COMPARE_AND_SWAP_12_ZERO_OP); > + return mips_output_sync_insn (MIPS_COMPARE_AND_SWAP_12 > (MIPS_COMPARE_AND_SWAP_12_ZERO_OP)); Break lines longer than 80 chars. Here and elsewhere, it's probably best to use: return (mips_output_sync_insn (...stuff...)); rather than things like: > @@ -160,8 +160,9 @@ > (clobber (match_scratch:SI 5 "=&d"))] > "GENERATE_LL_SC" > { > - return MIPS_SYNC_OLD_OP_12 ("<insn>", MIPS_SYNC_OLD_OP_12_NOT_NOP, > - MIPS_SYNC_OLD_OP_12_NOT_NOP_REG); > + return mips_output_sync_insn (MIPS_SYNC_OLD_OP_12 ("<insn>", > + MIPS_SYNC_OLD_OP_12_NOT_NOP, > + MIPS_SYNC_OLD_OP_12_NOT_NOP_REG)); ...this. Arguments should generally be indented at least as far as the opening "(". Looks good otherwise, thanks. We just need to sort out the build problem. Richard
https://www.linux-mips.org/archives/linux-mips/2008-11/msg00114.html
CC-MAIN-2017-30
en
refinedweb
#include "apache_message_handler.h" Implementation of an HTML parser message handler that uses Apache logging to emit messsages. Installs a signal handler for common crash signals that tries to print out a backtrace. These methods don't perform any formatting on the string, since it turns out delegating message_handlers generally only need to format once at the top of the stack and then propagate the formatted string inwards. Reimplemented from net_instaweb::GoogleMessageHandler.
http://modpagespeed.com/psol/classnet__instaweb_1_1ApacheMessageHandler.html
CC-MAIN-2017-30
en
refinedweb
? When VS.NET 2003 was announced, Microsoft made it clear that it wasn't a service pack for VS.NET 2002 you had to pay for, it was a new release and that they'd release a service pack for VS.NET 2002 'shortly' after the release of VS.NET 2003. Well... it's been 3 months now today, since VS.NET 2003 is on the market and no sign of any service pack for VS.NET 2002.! After the recent sourcecode control debate I started thinking: why on earth are we still using the 'file' as the base unit to store sourcecode in? The whole 'file' concept is pretty bad and limiting when it comes to sourcecode control, code reuse and overall code management. Much better would it be if we could work with a code repository as the container for our sourcecode which would work with sourcecode elements like we know, e.g.: namespaces, classes, assemblies, resource objects etc. etc. It's not Visual SourceSafe's v^Hfault Lately, some people started blogging about a new source control system, Vault. I haven't used it, nor am I intending to do so. The reason is not that I don't like Vault or Eric Sink, but because I don't have problems with what I currently use, Visual SourceSafe (VSS). I personally believe Eric Sink knows what he's talking about big time, as his blogs are one of the few which truly show some vision on the total scope of software development, and therefore I think Vault must be a product that can live up the hype that is being build up around Eric's new source control system. LLBLGen v1.21 has been released! (Mostly bugfixes) A new LLBLGen v1.x has been released today! Version 1.21.2003.712 to be exact, is a QFE (Quick Fix Engineering) release, which means only minor new features are added and for the rest just bugfixes. This is the final release, no more updates will be released after this release, since the next version of LLBLGen, LLBLGen Pro, is in development and will be released later this summer. The sourcecode comes in a project that is compatible with VS.NET 2003 but not with VS.NET 2002. You need a converter to load it into VS.NET 2002..
https://weblogs.asp.net/fbouma/archive/2003/07
CC-MAIN-2017-30
en
refinedweb
Python's generators sure are handy While rewriting some older code today, I ran across a good example of the clarity inherent in Python's generator expressions. Some time ago, I had written this weirdo construct: for regex in date_regexes: match = regex.search(line) if match: break else: return # ... do stuff with the match The syntax highlighting makes the problem fairly obvious: there's way too much syntax! First of all, I used the semi-obscure "for-else" construct. For those of you who don't read the Python BNF grammar for fun (as in: the for statement), the definition may be useful: So long as the for loop isn't (prematurely) terminated by a break statement, the code in the else suite gets evaluated. To restate (in the contrapositive): the code in the else suite doesn't get evaluated if the for loop is terminated with a break statement. From this definition we can deduce that if a match was found, I did not want to return early. That's way too much stuff to think about. Generators come to the rescue! def first(iterable): """:return: The first item in the iterable that evaluates as True. """ for item in iterable: if item: return item return None match = first(regex.search(line) for regex in regexes) if not match: return # ... do stuff with the match At a glance, this is much shorter and more comprehensible. We pass a generator expression to the first function, which performs a kind of short-circuit evaluation — as soon as a match is found, we stop running regexes (which can be expensive). This is a pretty rockin' solution, so far as I can tell. Prior to generator expressions, to do something similar to this we'd have to use a list comprehension, like so: match = first([regex.search(line) for regex in regexes]) if not match: return # ... do stuff with the match We dislike this because the list comprehension will run all of the regexes, even if one already found a match. What we really want is the short circuit evaluation provided by generator expressions and the any builtin, as shown above. Huzzah! Edit Originally I thought that the any built-in returned the first object which evaluated to a boolean True, but it actually returns the boolean True if any of the objects evaluate to True. I've edited to reflect my mistake.
http://blog.cdleary.com/2008/01/pythons-generators-sure-are-handy/
CC-MAIN-2017-30
en
refinedweb
Interactive initiate This is the most basic way of automatically start a workflow when an object is created through the user interface. This technique will not work for objects created by MIF, workflows or escalations. To enable workflow auto-initiate open your workflow in Workflow Designer and select the Interactive Initiate checkbox. Escalation Another way of activating a workflow is through an escalation. Go to the Actions application and create a new action like this: - Action: STARTWF - Description: Start TESTWF workflow - Object: WORKORDER - Type: APPACTION - Value: WFINITIATE - Parameter/Attribute: TESTWF - Accessible from: ESCALATION Now setup an escalation to trigger the STARTWF action: - Escalation: STARTWF - Applies to: WORKORDER - Condition: - Schedule: 5m,*,*,*,*,*,*,*,*,* (every 5 minutes) - Escalation Points - Escalation Point: 1 - Repeat: false - Leave other fields empty - Actions - Action: STARTWF Application Toolbar Button, Action Menu or pushbutton If you wand to let the user manually trigger a workflow from an application you have several options. Typically this is done creating a new button on the toolbar but you can also create a new action in the menu or add a pushbutton to the application itself. Open your application in the Application Designer. Add the control to which you want to attach the workfow start Set the control properties as follows: - mxevent: ROUTEWF - value: [MYWF] Obviously you have to replace [MYWF] with the name of your workflow. Java There are two useful methods that can be used to start and stop a workflow on an MBO. Both are located into psdi.workflow.WFInstance class. initiateWorkflow(String memo, WFProcess wfProcess) stopWorkflow(String memo) This article has a nice example of it. Another option is to use psdi.workflow.WorkFlowService class. The initiateWorkflow(String processName, MboRemote target) allows to easily start a workflow on a specific MBO. The Java code should be something like this MXServer mx = MXServer.getMXServer(); WorkFlowServiceRemote wfsr = (WorkFlowServiceRemote)mx.lookup("WORKFLOW"); MBORemote mbo = (MBORemote)getMbo(); wfsr.initiateWorkFlow("[MYWF]", mbo); Script Similarly to Java it is possible to use the above methods to start a workflow using TPAE scripting. from psdi.server import MXServer MXServer.getMXServer().lookup("WORKFLOW").initiateWorkflow("[MYWF]",mbo); Integration Framework In this IBM TechNote is described a new feature introduced in TPAE 7.1.1.6 to start a workflow with an HTTP call to MIF. There is also another TechNote about this. A more general approach would be to add a custom YORN field named STARTWF and set it to 1 through MIF. An escalation can then start the workflow for all the objects that has STARTWF=1 and then reset the STARTWF flag to 0. REST A REST call can also be used. An example URI is the following: POST /maxrest/rest/mbo/po/6789?wfname=MYWF HTTP/1.1 x-http-method-override: "initiateWorkflow" Replace PO with your MBO name, 6789 with the ID of the new record and MYWF with your workflow name. One can also start WF from automation script, just like Java ;) Thank you. I have updated the article. Hi Bruno, how're you? I'm trying to apply a SLA automatically when the user save a SR, but, some times, the SLA has already been applied and because changes in SR must be reapplied. So, I create a WF to apply my SLA and in the first moment, I tried make this by bean class thus: public int SAVE() throws MXException, RemoteException { int result = super.SAVE(); try { MXServer mxs = MXServer.getMXServer(); WorkFlowServiceRemote wsrmt = (WorkFlowServiceRemote)mxs.lookup("WORKFLOW"); SRRemote mbo = (SRRemote)getMbo(); wsrmt.initiateWorkflow("WFSLASR", mbo); fireStructureChangedEvent(); refreshTable(); this.sessionContext.queueRefreshEvent(); } catch (Exception e) { e.printStackTrace(); } return result; } But here, I can't access the method getMboValue().getPreviusValue() to verify if an attribute of my SR was changed. So I decided to make this in a Mbo Class. Thus, I'm trying to do that way: protected void save() throws MXException, RemoteException { boolean startWF = applyAutomaticSLA(); //here I test if is needed to reapply the SLA super.save(); if(startWF){ try { //Start the WF to apply the SLA MXServer mxs = MXServer.getMXServer(); WorkFlowServiceRemote wsrmt = (WorkFlowServiceRemote) mxs.lookup("WORKFLOW"); wsrmt.initiateWorkflow("WFSLASR", (SRRemote) this); //Here, I want to show to end user a status message saying to wait because the system is recalculating the SLA this.getThisMboSet().addWarning(new MXApplicationException("ticket", "recalcSLA")); } catch (Exception e) { e.printStackTrace(); } } } What my problem is... When I change the field to reapply my SLA and I click in the SAVE button, the Maximo stay in loop, saving the SR repeatedly. How can I do this? Thanks for your help. Hi Bruno. Debugging my code, a discovery that problem is when I try to initiate the workflow. When I do this: wsrmt.initiateWorkflow("WFSLASR", (SRRemote) this); the record don't save and return to begin. Can you help please? Thank you. Hi Bruno Do you have an idea how we can call maximo internal action i.e. REVPO ( revise PO ) from workflow? I want to make a workflow to process PO Revision ? This comment has been removed by the author. Hi , what is wakup action is used for ? Hi Bruno, I am trying to user your example on to initiate workflow through the REST API and i can't get it to work. I followed your example and changed the MBO to WORKORDER and added my wonum and replaced MYWF with my workflow name and the status never changed. Did i do something wrong? Any help would be appreciated. you can run the report in a new window with a script? Hi Bruno, As you said in first point , Interactive Initiate is working fine for TSRM created TTs(Created in TSRM portal) but not working for auto TTs from External System integrated with TSRM(IBM Netcool) and for Child Tickets. How can auto assign workflow to Auto TTs and child TTs.. Pls Suggest... Thanks In one application, I removed the SAVE button and created a workflow go button that was assigned the SAVE icon. So, every time the user saved, it performed the intrinsic save and ran the small workflow (really more of a script in this case). The drawback of this approach is if the user saves through another method, such as clicking back to list view and getting the save prompt, of course it won't run. But in the case where I used it, it was not critical to run the workflow every time it saved. The funny thing to me was that when I made the change, no one ever commented that the save button had moved to a different place. Hi Bruno, I wonder if it possible to initiate a workflow using a Runtime event (like in Siebel if you are familiar). Example of runtime event: Run an Action if field Status of Object 'SR' is set Example of runtime event (2): Run an Action if a record in object 'SR' is Saved Thanks, Fotis
http://maximodev.blogspot.com/2014/01/how-to-launch-workflow.html
CC-MAIN-2017-30
en
refinedweb
#include "google_analytics_filter.h" Filter <script> tags. Rewrite qualifying sync loads of Google Analytics as async loads.. Called for an IE directive; typically used for CSS styling. See.
http://modpagespeed.com/psol/classnet__instaweb_1_1GoogleAnalyticsFilter.html
CC-MAIN-2017-30
en
refinedweb
The full form of UUID is Universally Unique Identifier. A UUID represents a 128-bit value that is unique. The standard representation of UUID uses hex digits. For example: 3c0969ac-c6e3-40f2-9fc8-2a59b8987918 cb7125cc-d78a-4442-b21b-96ce9227ef51 The UUID class in the java.util package bundles the method to generate random UUID. There are four different versions of UUID. They are as follows: This shot covers how to generate a randomly generated UUID. The randomly generated UUID uses a random number as the source to generate the UUID. In Java, the randomUUID() static method is used to generate a random UUID. The method internally uses SecureRandom class, which provides a cryptographically strong random number generator. Every UUID is associated with a version number. The version number describes how the UUID was generated. The .version() method is used to get the version of the generated UUID. import java.util.UUID; public class GenUUID { public static void main(String[] args) { UUID uuid = UUID.randomUUID(); System.out.println("UUID generated - " + uuid); System.out.println("UUID Version - " + uuid.version()); } } RELATED TAGS CONTRIBUTOR View all Courses
https://www.educative.io/answers/how-to-generate-random-uuid-in-java
CC-MAIN-2022-33
en
refinedweb
FilterFilter A regex based adaptable profanity filter. npm i regexcensor Enabling the FilterEnabling the Filter import Filter from 'regexcensor' const filter = Filter('*') Predefined Filter SetsPredefined Filter Sets If the input is a string, the filter will be set to a predefined swear preset. const filter = Filter('PG13') Configuring the FilterConfiguring the Filter If you want to configure a certain preset of of censored words, you can pass a configuring object like follows: const filter = Filter({ fields: ['*'], }) FieldsFields MethodsMethods Filter.add([RegExp, ...]) Adds the patterns to the filter. Filter.check([string, ...] | string) : boolean Checks one or several strings for profanity defined by the Filter. Returns true if any string triggers the patterns and false otherwise. Filter.find([string, ...] | string) : string[] Checks one or several strings for profanity defined by the Filter. Returns all emitted profane words. Filter.match([string, ...] | string) : RegExp[] Checks one or several strings for profanity defined by the Filter. Returns all emitted regex patterns. Filter.replace(string) : string Replaces profanity in the string with asterisks. Testing the RepositoryTesting the Repository - Run npm run testto test the code with mocha. - Run npm run checkto test different phrases in the console. SourcesSources Inspired from: Inspired from: No content was directly cloned or copied, I took my time to rethink every regex. The only thing that are similar are the words.
https://www.npmjs.com/package/regexcensor
CC-MAIN-2022-33
en
refinedweb
This page shows how to enable node auto repair for GKE on-prem clusters. The node auto repair feature continuously detects and repairs unhealthy nodes in a cluster. The feature is disabled by default. You can enable the feature during admin or user cluster creation. You can also enable the feature for an existing user cluster, but not for an existing. Repair strategy GKE on-prem. v1 cluster configuration files To enable node auto repair, you must use a v1 configuration file, either the v1 admin cluster configuration file or the v1 user cluster configuration file. Node auto repair is not supported in the v0 configuration file. Enabling node auto repair for a new cluster In your admin or user cluster configuration file, set autoRepair.enabled to true: autoRepair: enabled: true Continue with the steps for creating your admin or user cluster. Enabling node auto repair for an existing user cluster: In your user cluster configuration file, set autoRepair.enabled to true: autoRepair: enabled: true Update the cluster: gkectl update cluster --config USER_CLUSTER_CONFIG --kubeconfig ADMIN_KUBECONFIG Replace the following: USER_CLUSTER_CONFIG: the path of your user cluster configuration file ADMIN_KUBECONFIG: the path of your admin cluster kubeconfig file Disabling node auto repair for a user cluster In your user cluster configuration file, set autoRepair.enabled to false: autoRepair: enabled: false Update the cluster: gkectl update cluster --config USER_CLUSTER_CONFIG --kubeconfig ADMIN_KUBECONFIG Disabling node auto repair for an admin cluster To disable node auto repair for an admin cluster, delete the cluster-health-controller Deployment: kubectl --kubeconfig ADMIN_KUBECONFIG] delete deployment cluster-health-controller --namespace kube-system Debugging node auto repair You can investigate issues with node auto repair by describing the Machine and Node objects in the admin cluster. Here's an example: List the machine objects: kubectl --kubeconfig kubeconfig get machines Output: default gke-admin-master-w kubeconfig describe machine gke-admin-master-wcbrj In the output, look for events from cluster-health-controller. Similarly, you can list and describe node objects. For example: kubectl --kubeconfig kubeconfig get nodes ... kubectl --kubeconfig KUBECONFIG get machines Replace KUBECONFIG with the path of your admin or user cluster kubeconfig file. Add the repair annotation to the Machine object: kubectl annotate --kubeconfig KUBECONFIG machine MACHINE_NAME onprem.cluster.gke.io/repair-machine=true Replace MACHINE_NAME with the name of the Machine object. Delete the Machine object: kubectl delete --kubeconfig KUBECONFIG machine MACHINE_NAME
https://cloud.google.com/anthos/clusters/docs/on-prem/1.5/how-to/node-auto-repair
CC-MAIN-2022-33
en
refinedweb
Debugging "Plugin Crash" In the meantime you might want to checkout Tools/Xml Tools 2.4.9 Unicode/ I’ve forked the XPatherizerNPP github project and it can be downloaded here: Thank you for suggestion on the other plugin! XML Tools seems a bit unstable, also its XPath component doesn’t seem to work correctly. Attempting to evaluate “.” in a Microsoft sample XML file does not return any nodes… The plugin loads, but the XPath functionality doesn’t seem to work. All dependencies have been installed per the installation readme. Please let me know if there are any issues with my project upload; I have limited experience with git. I’ve spent a long time working IT but only dabbled with development. Thanks very much! Stating what is probably obvious, it appears the the plugin infrastructure can’t handle the messaging protocols of the current version of Notepad++. I am trying to borrow newer implementations of the infrastructure from other plugins, though it will be patch work at best as I don’t understand most of the code I am looking at. See for the current version also the files are splitted up more, but has nicer interfaces to access data. Have you checked if the 32bit build suffers from the same issues? @chcg - I saw your update to the DLLexport folder; didn’t catch it before I made a new update: I’m seeing a new error now: “XPatherizerNPP.dll just crashed in runPluginCommand(size_t i : 22)” Not a lot of reference material for that online. Otherwise I solved the SCNotification issues by remapping the main.cs calls to “ScNotification” in the Scintilla_iface.cs file, changed the “nc.nmhdr.code” calls to “nc.Header.Code” and redefined all the namespaces from “XPatherizerNPP” to “Kbg.NppPluginNET” and “Kbg.NppPluginNET.PluginInfrastructure” I borrowed from the plugin and used all infrastructure there. Any suggestions are still appreciated! Like to note that I only see the error “XPatherizerNPP.dll just crashed in runPluginCommand(size_t i : 22)” when I try to “Show XPatherizer Windows” So it’s choking on the ScNotification messages when Notepad++ runs “UnmanagedExports.cs”. The form object is “null” and I’m not sure how it creates an instance of the form… It gets through line 66 of “UnmanagedExports.cs” when trying to load the form and crashes. I’ve not given up… I discovered that the “Notification” routine in Main.CS is no longer triggered by UnmanagedExports.cs. I changed the function name to “OnNotification” and modified the argument to match the object being passed in from “beNotified” in the UnmanagedExports.cs callback. This caused the event handler to correctly trigger when NPP finishes loading, initializing the search & result forms prior to them being called. The “XPatherizerNPP.dll just crashed in runPluginCommand(size_t i : 22)” error is now gone as it is not trying to load a null form object. Now the problem is that the Scintilla interface does not appear to be returning the correct memory address for the file that is meant to be analyzed. From line 42 in NppPluginNETBase.cs: Win32.SendMessage(nppData._nppHandle, (uint) NppMsg.NPPM_GETCURRENTSCINTILLA, 0, out curScintilla); This returns an address that, when passed into “SciMsg.SCI_GETTEXT” returns a whole big bunch of gibberish (looks like Asian characters of some kind). Soooo I’m trying to figure out what’s going on with that. Slow progress! Learning as I go… I’ve updated the code in the git repo: l Did you build the plugin for 32bit and checked that it is working there as expected? Additionally maybe remove the int casts in NppPluginNETHelper.cs from public ClikeStringArray(int num, int stringCapacity) { _nativeArray = Marshal.AllocHGlobal((num + 1) * IntPtr.Size); _nativeItems = new List<IntPtr>(); for (int i = 0; i < num; i++) { IntPtr item = Marshal.AllocHGlobal(stringCapacity); Marshal.WriteIntPtr((IntPtr)(_nativeArray + (i * IntPtr.Size)), item); _nativeItems.Add(item); } Marshal.WriteIntPtr((IntPtr)(_nativeArray + (num * IntPtr.Size)), IntPtr.Zero); } @chcg Thanks! I have changed the plugin to 32 bit. I’ve actually got the SCI_GETTEXT working by utilizing the “ScintillaGateway” object as defined in the plugin infrastructure I’ve integrated. It appears that the old “SciMsg.SCI_GETTEXT” was returning ASCII when I needed Unicode. The plugin infrastructure somehow resolves this, and I’m simply leaning on that to get it done. Now my issue appears to be that the XML Nodes are being parsed incorrectly… The parent node is being listed as a child of itself in the output… I’m slowly digging through the source code to figure out what element is chopping up the XML. Thanks for the help and suggestions! I’ve uploaded the new code to GIT again. Ok, I’ve merged the branch back into master and it seems like everything works… I don’t know what the XPML functionality is supposed to be, and without a working version I’m not sure how to troubleshoot it. I’ve removed the XPML options from the menu. The functional 32 bit version of the plugin is available here:
https://community.notepad-plus-plus.org/topic/15320/debugging-plugin-crash/13?lang=en-US
CC-MAIN-2022-33
en
refinedweb
Java collections have a default toString implementation, but its output is fixed and it’s not always what you need. Almost every project has its own StringUtil class with a bunch of functions including joinToString. The Kotlin standard library is no exception. >>> val list = arrayListOf(1, 2, 3) >>> println(list) ❶ [1, 2, 3] ❶ invokes toString() Imagine we need the elements to be separated by semicolon and surrounded by round brackets instead of the default square ones: >>> println(joinToString(list, "; ", "(", ")")) (1; 2; 3) Let’s introduce the function joinToString without using any of Kotlin’s new features in this area, and then rewrite it in a more idiomatic style. The function just appends the elements of the collection to a StringBuilder, with a separator between them and surrounded by prefix and postfix. The function is generic: it works on collections that contain elements of any type. As you can see, the syntax for generics is very similar to Java. There are a few differences, but we’ll leave them for a later chapter. fun <T> joinToString( collection: Collection<T>, separator: String, prefix: String, postfix: String ): String { val result = StringBuilder(prefix) var count = 0 for (element in collection) { if (count++ > 0) result.append(separator) ❶ result.append(element) } result.append(postfix) return result.toString() } ❶ don’t append a separator before the first element The implementation above is fine, and we’ll mostly leave it as is. What we’ll focus on is the declaration: how can we change it to make calls of this function less verbose? Maybe we could avoid having to pass four arguments every time we’re calling the function? Let’s see what we can do. Named Arguments The first problem that we’re going to address has to do with the readability of function calls. For example, look at the following call of the joinToString function: joinToString(collection, " ", " ", ".") Can you tell what parameters all these Strings correspond to? Are the elements separated by the whitespace or the dot? These questions are very hard to answer without looking at the signature of the function. Maybe you remember it, or maybe your IDE can help you, but it’s not obvious from the calling code. This problem is especially common with boolean flags. To solve it, some Java coding styles recommend creating enum types instead of using booleans. Others even require you to specify the parameter names explicitly in a comment (like for String arguments below). joinToString(collection, /* separator */ " ", /* prefix */ " ", /* postfix */ "."); With Kotlin, we can do better than that. joinToString(collection, separator = " ", prefix = " ", postfix = ".") When calling a method written in Kotlin, we can specify the names of some arguments that we’re passing to the function. Needless to say, IntelliJ IDEA will keep the names up to date if we rename the parameter of the function being called. If you specify the names of some arguments, the remaining arguments should go accordingly to their order in the parameter list. LIBRARIES AND NAMED ARGUMENTS Even though named arguments are great for developers that need to call a library function, they make life somewhat more difficult for a library developer. In Java, when you’re writing a library and want to maintain backwards compatibility with previous versions of your library, you only need to make sure that your method names, parameter types and return types stay the same. Parameter names do not matter, because they are not part of a method signature. This means that you can always rename parameters, and it can never break anything in code that uses your library. In Kotlin, the situation is different. Because any client of your library can use named arguments when calling any of your functions, parameter names also become part of the API of your library. If you rename a parameter of a public API method, compiled code that uses your library will continue to work. However, if a client of your library was passing a value to that parameter using the named parameter syntax, the client code will no longer compile after you rename the parameter, because it will refer to a parameter name that no longer exists. Therefore, you need to make sure that you don’t change parameter names as you evolve your library. Named arguments work especially well with default parameter values, which we’re going to look at next. Default Parameter Values Another Java problem that comes into play fairly often is the over-abundance of overloaded methods in some classes. Just look at java.lang.Thread and its eight constructors ! There are multiple reasons why this can happen – maybe we’ve had to [1] introduce new parameters without breaking backwards compatibility, or maybe we want to provide maximum convenience to the users of our API – but the end result is the same: duplication. The parameter names are repeated over and over, and if we’re being good citizens, we also have to repeat most of the documentation in every overload. At the same time, if we call an overload that omits some parameters, it’s not always clear which values are used for them. Kotlin gives us a much better solution by letting us specify default values for parameters in a function declaration. Let’s use that to improve our joinToString function! For most cases the strings could be separated by commas without any prefix or postfix. So let’s make these values the default ones: fun <T> joinToString( collection: Collection<T>, separator: String = ", ", ❶ prefix: String = "", ❶ postfix: String = "" ❶ ): String ❶ default parameter values Now we can either invoke the function with all the arguments or omit some of them: >>> joinToString(list) 1, 2, 3 >>> joinToString(list, "; ") 1; 2; 3 When using the regular call syntax, you can omit only trailing arguments. If you use named arguments, you can omit some arguments from the middle of the list and specify only the ones you need: >>> joinToString(list, prefix = "# ") # 1, 2, 3 Note that the default values of the parameters are encoded in the class being called, not in the calling class. If you change the default value and recompile the class containing the function, the callers which haven’t specified a value for the parameter will start using the new default value. We wrote a nice utility function without paying much attention to the surrounding context. Surely, it must have been a method of some class, and we have simply omitted the surrounding class declaration, right? In fact, Kotlin makes this unnecessary. Getting Rid of Static Utility Classes: Top-level Functions and Properties You all know that Java, as an object-oriented language, requires all code to be written as methods of classes. Usually, this works out nicely, but in reality almost every large project ends up with a lot of code that does not clearly belong to any single class. Sometimes an operation works with objects of two different classes which play an equally important role for it, and sometimes there is one primary object but you don’t want the operation to be its instance method in order to avoid bloating the API. As a result of this, you end up with classes that don’t contain any state or any instance methods, and act simply as containers for a bunch of static methods. A perfect example of such a class is the Collections class in the JDK. To find examples of such classes in your own code, simply look for classes which have Util as part of the name. Kotlin lets you avoid creating all those meaningless classes by allowing you to place functions directly at the top level of a source file, outside of any class. Such functions are still members of the package declared on the top of the file, and you still need to import them if you want to call them from other packages, but the extra unnecessary level of nesting no longer exists. Let’s put the joinToString function into the strings package directly. We will create a file join.kt with the following contents: package strings public fun joinToString(...): String { ... } Now, how does this run? You know that, when you compile that file, some classes will be produced, because the JVM can only execute code in classes. When you work only with Kotlin, that’s all you need to know. However, if you’re gradually introducing Kotlin into an existing Java project, you need to understand which classes and methods exactly will be generated, so that you would know how to call the methods. In order to make it clear, let’s look at the Java code that would compile to the same class: package strings; public class JoinKt { ❶ public static String joinToString(...) { ... } } ❶ corresponds to “join.kt”, the filename of the previous example You can see that the name of the class generated by the Kotlin compiler corresponds to the name of the file in which the function was contained. All top-level functions in the file are compiled to static methods in that class. Therefore, calling this method from Java is as easy as calling any other static method: import strings.JoinKt; ... JoinKt.joinToString(list, ", ", "", ""); TOP-LEVEL PROPERTIES Just like functions, properties can be placed at the top level of a file as well. Storing individual pieces of data outside of a class is not as often needed, but is still useful. The most common case is probably constants: val UNIX_LINE_SEPARATOR = "\n" As you probably expect for a constant, this will be compiled to a public static final field, equivalent to the following Java code: public static final String UNIX_LINE_SEPARATOR = "\n"; var properties are supported as well. For example, you can use a var property to count the number of times some operation has been performed. var operationCount = 0 ❶ fun performOperation() { operationCount++ ❷ // ... } fun reportOperationCount() { println("Operation performed $operationCount times") ❸ } ❶ Package-level property declaration ❷ Change the value of the property ❸ Read the value of the property The value of such a property will be stored in a static field. You’ve improved our joinToString utility function quite a lot. Now you are ready to make it even more handy!
https://freecontent.manning.com/making-functions-easier-to-call-in-kotlin/
CC-MAIN-2022-33
en
refinedweb
A Lightweight Multi-Process Execution Pool with load balancing and customizable resource consumption constraints. \author: (c) Artem Lutov [email protected] \license: Apache License, Version 2.0 \organizations: eXascale Infolab, Lumais, ScienceWise \date: 2015-07 v1, 2017-06 v2, 2018-05 v3 \grants: Swiss National Science Foundation grant number CRSII2_147609, European Commission grant Graphint 683253 BibTeX: @misc{pyexpool, author = {Artem Lutov and Philippe Cudr-Mauroux}, url = {}, title = {PyExPool-v.3: A Lightweight Execution Pool with Constraint-aware Load-Balancer.}, year = {2018} } A Lightweight Multi-Process Execution Pool with load balancing to schedule Jobs execution with per-job timeout, optionally grouping them into Tasks and specifying optional execution parameters considering NUMA architecture peculiarities: Automatic rescheduling of the workers on low memory condition for the in-RAM computations is an optional and the only feature that requires an external package, psutil. All scheduling jobs share the same CPU affinity policy, which is convenient for the benchmarking, but not so suitable for scheduling both single and multi-threaded apps with distinct demands for the CPU cache. All main functionality is implemented as a single-file module to be easily included into your project and customized as a part of your distribution (like in PyCaBeM to execute muliple apps in parralel on the dedicated CPU cores and avoiding their swapping from the main memory), also it can be installed as a library. An optional minimalistic Web interface is provided in the separate file to inspect and profile the load balancer and execution pool. The main purpose of the main single-file module is the concurrent execution of modules and external executables with custom resource consumption constraints, cache / parallelization tuning and automatic balancing of the worker processes for the in memory computations on the single server. PyExPool is typically used as an application framework for benchmarking or heavy-loaded multi-process execution activities on constrained computational resources. If the concurrent execution of Python functions is required, usage of external modules is not a problem and the automatic jobs scheduling for the in-RAM computations is not necessary, then a more handy and straightforward approach is to use Pebble library. A pretty convenient transparent parallel computations are provided by the Joblib. If a distributed task queue is required with advanced monitoring and reporting facilities then Celery might be a good choice. For the comprehensive parallel computing Dask is a good choice. For the parallel execution of only the shell scripts the GNU parallel might be a good option. The only another existing open-source load balancer I'm aware about, which has wider functionality than PyExPool (but can not be integrated into your Python scripts so seamlessly) is Slurm Workload Manager. The load balancing is enabled when the global variables _LIMIT_WORKERS_RAM and _CHAINED_CONSTRAINTS are set, jobs .category and relative .size (if known) specified. The balancing is performed to use as much RAM and CPU resources as possible performing in-RAM computations and meeting the specified timeout and memory constraints for each job and for the whole pool. Large executing jobs can be postponed for the later execution with less number of worker processes after completion of the smaller jobs. The number of workers is reduced automatically (balanced) on the jobs queue processing to meet memory constraints. It is recommended to add jobs in the order of the increasing memory/time complexity if possible to reduce the number of worker processes terminations on jobs postponing (rescheduling). Demo of the scheduling with memory constraints for the worker processes: Demo of the scheduling with cache L1 maximization for single-threaded processes on the server with cross-node CPUs enumeration. Whole physical CPU core consisting of two hardware threads assigned to each worker process, so the L1 cache is dedicated (not shared), but the maximal loading over all CPUs is 50%: Demo of the WebUI for the Jobs and Tasks tracing and profiling: Exactly the same fully functional interface is accessible from the console using w3m or other terminal browsers: To explore the WebUI demo execute the following testcase $ MANUAL=1 python -m unittest mpetests.TestWebUI.test_failures and open (or :8080) in the browser. Include the following modules: These modules can be install either manually from GitHub or from the pypi repository: $ pip install pyexpool WebUI( mpewuimodule) renders interface from the bottle html templates located in the ., ./views/or any other folder from the bottle.TEMPLATE_PATHlist, where custom views can be placed to overwrite the default pages. Additionally, hwloc / lstopo should be installed if customized CPU affinity masking and cache control are required, see Requirements section. Multi-Process Execution Pool can be run without any external modules with automatically disabled load balancing. The external modules / apps are required only for the extended functionality: lstopo) is required to identify enumeration type of logical CPUs to perform correct CPU affinity masking. Required only for the automatic affinity masking with cache usage optimization and only if the CPU enumeration type is not specified manually. $ sudo apt-get install -y hwloc psutil is required for the dynamic jobs balancing to perform the in-RAM computations ( _LIMIT_WORKERS_RAM = True) and limit memory consumption of the workers. $ sudo pip install psutil To perform in-memory computations dedicating almost all available RAM (specifying memlimit ~= physical memory), it is recommended to set swappiness to 1 .. 10: $ sudo sysctl -w vm.swappiness=5or set it permanently in /etc/sysctl.conf: vm.swappiness = 5. bottle is required for the minimalistic optional WebUI to monitor executing jobs. $ sudo pip install bottle WebUI( mpewuimodule) renders interface from the bottle html templates located in the ., ./views/or any other folder from the bottle.TEMPLATE_PATHlist, where custom views can be placed to overwrite the default pages. mock is required exclusively for the unit testing under Python2, mock is included in the standard lib of Python3. $ sudo pip install mock All Python requirements are optional and installed automatically from the pip distribution ( $ pip install pyexpool) or can be installed manually from the pyreqsopt.txt file: $ sudo pip install -r pyreqsopt.txt lstopoapp of hwlocpackage is a system requirement and should be installed manually from the system-specific package repository or built from the sources. Flexible API provides automatic CPU affinity management, maximization of the dedicated CPU cache, limitation of the minimal dedicated RAM per worker process, balancing of the worker processes and rescheduling of chains of the related jobs on low memory condition for the in-RAM computations, optional automatic restart of jobs on timeout, access to job's process, parent task, start and stop execution time and more... ExecPool represents a pool of worker processes to execute Jobs that can be grouped into the hierarchy of Taskss for more flexible management. # Global Parameters # Limit the amount of memory (<= RAM) used by worker processes # NOTE: requires import of psutils _LIMIT_WORKERS_RAM = True # Use chained constraints (timeout and memory limitation) in jobs to terminate # also related worker processes and/or reschedule jobs, which have the same # category and heavier than the origin violating the constraints CHAINED_CONSTRAINTS = True Job(name, workdir=None, args=(), timeout=0, rsrtonto=False, task=None #,* , startdelay=0., onstart=None, ondone=None, onfinish=None, params=None, category=None, size=0, slowdown=1. , omitafn=False, memkind=1, memlim=0., stdout=sys.stdout, stderr=sys.stderr, poutlog=None, perrlog=None): """Initialize job to be executed Job is executed in a separate process via Popen or Process object and is managed by the Process Pool Executor Main parameters: name: str - job name workdir - working directory for the corresponding process, None means the dir of the benchmarking args - execution arguments including the executable itself for the process NOTE: can be None to make make a stub process and execute the callbacks timeout - execution timeout in seconds. Default: 0, means infinity rsrtonto - restart the job on timeout, Default: False. Can be used for non-deterministic Jobs like generation of the synthetic networks to regenerate the network on border cases overcoming getting stuck on specific values of the rand variables. task: Task - origin task if this job is a part of the task startdelay - delay after the job process starting to execute it for some time, executed in the CONTEXT OF THE CALLER (main process). ATTENTION: should be small (0.1 .. 1 sec) onstart - a callback, which is executed on the job starting (before the execution started) in the CONTEXT OF THE CALLER (main process) with the single argument, the job. Default: None. If onstart() raises an exception then the job is completed before been started (.proc = None) returning the error code (can be 0) and tracing the cause to the stderr. ATTENTION: must be lightweight NOTE: - It can be executed several times if the job is restarted on timeout - Most of the runtime job attributes are not defined yet ondone - a callback, which is executed on successful completion of the job in the CONTEXT OF THE CALLER (main process) with the single argument, the job. Default: None ATTENTION: must be lightweight onfinish - a callback, which is executed on either completion or termination of the job in the CONTEXT OF THE CALLER (main process) with the single argument, the job. Default: None ATTENTION: must be lightweight params - additional parameters to be used in callbacks stdout - None or file name or PIPE for the buffered output to be APPENDED. The path is interpreted in the CONTEXT of the CALLER stderr - None or file name or PIPE or STDOUT for the unbuffered error output to be APPENDED ATTENTION: PIPE is a buffer in RAM, so do not use it if the output data is huge or unlimited. The path is interpreted in the CONTEXT of the CALLER poutlog: str - file name to log non-empty piped stdout pre-pended with the timestamp. Actual only if stdout is PIPE. perrlog: str - file name to log non-empty piped stderr pre-pended with the timestamp. Actual only if stderr is PIPE. Scheduling parameters: omitafn - omit affinity policy of the scheduler, which is actual when the affinity is enabled and the process has multiple treads category - classification category, typically semantic context or part of the name, used to identify related jobs; requires _CHAINED_CONSTRAINTS size - expected relative memory complexity of the jobs of the same category, typically it is size of the processing data, >= 0, 0 means undefined size and prevents jobs chaining on constraints violation; used on _LIMIT_WORKERS_RAM or _CHAINED_CONSTRAINTS slowdown - execution slowdown ratio, >= 0, where (0, 1) - speedup, > 1 - slowdown; 1 by default; used for the accurate timeout estimation of the jobs having the same .category and .size. requires _CHAINED_CONSTRAINTS memkind - kind of memory to be evaluated (average of virtual and resident memory to not overestimate the instant potential consumption of RAM): 0 - mem for the process itself omitting the spawned sub-processes (if any) 1 - mem for the heaviest process of the process tree spawned by the original process (including the origin itself) 2 - mem for the whole spawned process tree including the origin process memlim: float - max amount of memory in GB allowed for the job execution, 0 - unlimited Execution parameters, initialized automatically on execution: tstart - start time, filled automatically on the execution start (before onstart). Default: None tstop - termination / completion time after ondone NOTE: onstart() and ondone() callbacks execution is included in the job execution time proc - process of the job, can be used in the ondone() to read its PIPE pipedout - contains output from the PIPE supplied to stdout if any, None otherwise NOTE: pipedout is used to avoid a deadlock waiting on the process completion having a piped stdout pipederr - contains output from the PIPE supplied to stderr if any, None otherwise NOTE: pipederr is used to avoid a deadlock waiting on the process completion having a piped stderr mem - consuming memory (smooth max of average of VMS and RSS, not just the current value) or the least expected value inherited from the jobs of the same category having non-smaller size; requires _LIMIT_WORKERS_RAM terminates - accumulated number of the received termination requests caused by the constraints violation NOTE: > 0 (1 .. ExecPool._KILLDELAY) for the apps terminated by the execution pool (resource constrains violation or ExecPool exception), == 0 for the crashed apps wkslim - worker processes limit (max number) on the job postponing if any, the job is postponed until at most this number of worker processes operate; requires _LIMIT_WORKERS_RAM chtermtime - chained termination: None - disabled, False - by memory, True - by time; requires _CHAINED_CONSTRAINTS """ Task(name, timeout=0, onstart=None, ondone=None, onfinish=None, params=None , task=None, latency=1.5, stdout=sys.stdout, stderr=sys.stderr): """Initialize task, which is a group of subtasks including jobs to be executed Task is a managing container for subtasks and Jobs. Note: the task is considered to be failed if at least one subtask / job is failed (terminated or completed with non-zero return code). name: str - task name timeout - execution timeout in seconds. Default: 0, means infinity. ATTENTION: not implemented onstart - a callback, which is executed on the task start (before the subtasks/jobs execution started) in the CONTEXT OF THE CALLER (main process) with the single argument, the task. Default: None ATTENTION: must be lightweight ondone - a callback, which is executed on the SUCCESSFUL completion of the task in the CONTEXT OF THE CALLER (main process) with the single argument, the task. Default: None ATTENTION: must be lightweight onfinish - a callback, which is executed on either completion or termination of the task in the CONTEXT OF THE CALLER (main process) with the single argument, the task. Default: None ATTENTION: must be lightweight params - additional parameters to be used in callbacks task: Task - optional owner super-task latency: float - lock timeout in seconds: None means infinite, <= 0 means non-bocking, > 0 is the actual timeout stdout - None or file name or PIPE for the buffered output to be APPENDED stderr - None or file name or PIPE or STDOUT for the unbuffered error output to be APPENDED ATTENTION: PIPE is a buffer in RAM, so do not use it if the output data is huge or unlimited Automatically initialized and updated properties: tstart - start time is filled automatically on the execution start (before onstart). Default: None tstop - termination / completion time after ondone. numadded: uint - the number of direct added subtasks numdone: uint - the number of completed DIRECT subtasks (each subtask may contain multiple jobs or sub-sub-tasks) numterm: uint - the number of terminated direct subtasks (including jobs) that are not restarting numdone + numterm <= numadded """ AffinityMask(afnstep, first=True, sequential=cpusequential()) """Affinity mask Affinity table is a reduced CPU table by the non-primary HW treads in each core. Typically, CPUs are enumerated across the nodes: NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30 NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31 In case the number of HW threads per core is 2 then the physical CPU cores are 1 .. 15: NUMA node0 CPU(s): 0,2,4,6,8,10,12,14 (16,18,20,22,24,26,28,30 - 2nd HW treads) NUMA node1 CPU(s): 1,3,5,7,9,11,13,15 (17,19,21,23,25,27,29,31 - 2nd HW treads) But the enumeration can be also sequential: NUMA node0 CPU(s): 0,(1),2,(3),... ... Hardware threads share all levels of the CPU cache, physical CPU cores share only the last level of the CPU cache (L2/3). The number of worker processes in the pool should be equal to the: - physical CPU cores for the cache L1/2 maximization - NUMA nodes for the cache L2/3 maximization NOTE: `hwloc` utility can be used to detect the type of logical CPUs enumeration: `$ sudo apt-get install hwloc` See details: afnstep: int - affinity step, integer if applied, allowed values: 1, CORE_THREADS * n, n E {1, 2, ... CPUS / (NODES * CORE_THREADS)} Used to bind worker processes to the logical CPUs to have warm cache and, optionally, maximize cache size per a worker process. Groups of logical CPUs are selected in a way to maximize the cache locality: the single physical CPU is used taking all its hardware threads in each core before allocating another core. Typical Values: 1 - maximize parallelization for the single-threaded apps (the number of worker processes = logical CPUs) CORE_THREADS - maximize the dedicated CPU cache L1/2 (the number of worker processes = physical CPU cores) CPUS / NODES - maximize the dedicated CPU cache L3 (the number of worker processes = physical CPUs) first - mask the first logical unit or all units in the selected group. One unit per the group maximizes the dedicated CPU cache for the single-threaded worker, all units should be used for the multi-threaded apps. sequential - sequential or cross nodes enumeration of the CPUs in the NUMA nodes: None - undefined, interpreted as cross-nodes (the most widely used on servers) False - cross-nodes True - sequential For two hardware threads per a physical CPU core, where secondary HW threads are taken in brackets: Crossnodes enumeration, often used for the server CPUs NUMA node0 CPU(s): 0,2(,4,6) NUMA node1 CPU(s): 1,3(,5,7) Sequential enumeration, often used for the laptop CPUs NUMA node0 CPU(s): 0(,1),2(,3) NUMA node1 CPU(s): 4(,5),6(,7) """ ExecPool(wksnum=max(cpu_count()-1, 1), afnmask=None, memlimit=0., latency=0., name=None, webuiapp=None) """Multi-process execution pool of jobs A worker in the pool executes only a single job, a new worker is created for each subsequent job. wksnum: int - number of resident worker processes, >=1. The reasonable value <= logical CPUs (returned by cpu_count()) = NUMA nodes * node CPUs, where node CPUs = CPU cores * HW treads per core. The recommended value is max(cpu_count() - 1, 1) to leave one logical CPU for the benchmarking framework and OS applications. To guarantee minimal average RAM per a process, for example 2.5 GB without _LIMIT_WORKERS_RAM flag (not using psutil for the dynamic control of memory consumption): wksnum = min(cpu_count(), max(ramfracs(2.5), 1)) afnmask - affinity mask for the worker processes, AffinityMask None if not applied memlimit - limit total amount of Memory (automatically reduced to the amount of physical RAM if the larger value is specified) in gigabytes that can be used by worker processes to provide in-RAM computations, >= 0. Dynamically reduces the number of workers to consume not more memory than specified. The workers are rescheduled starting from the most memory-heavy processes. NOTE: - applicable only if _LIMIT_WORKERS_RAM - 0 means unlimited (some jobs might be [partially] swapped) - value > 0 is automatically limited with total physical RAM to process jobs in RAM almost without the swapping latency - approximate minimal latency of the workers monitoring in sec, float >= 0; 0 means automatically defined value (recommended, typically 2-3 sec) name - name of the execution pool to distinguish traces from subsequently created execution pools (only on creation or termination) webuiapp: WebUiApp - WebUI app to inspect load balancer remotely Internal attributes: alive - whether the execution pool is alive or terminating, bool. Should be reseted to True on reuse after the termination. NOTE: should be reseted to True if the execution pool is reused after the joining or termination. failures: [JobInfo] - failed (terminated or crashed) jobs with timestamps. NOTE: failures contain both terminated, crashed jobs that jobs completed with non-zero return code excluding the jobs terminated by timeout that have set .rsrtonto (will be restarted) jobsdone: uint - the number of successfully completed (non-terminated) jobs with zero code tasks: set(Task) - tasks associated with the scheduled jobs """ execute(job, concur=True): """Schedule the job for the execution job: Job - the job to be executed, instance of Job concur: bool - concurrent execution or wait until execution completed NOTE: concurrent tasks are started at once return int - 0 on successful execution, process return code otherwise """ join(timeout=0): """Execution cycle timeout: int - execution timeout in seconds before the workers termination, >= 0. 0 means unlimited time. The time is measured SINCE the first job was scheduled UNTIL the completion of all scheduled jobs. return bool - True on graceful completion, False on termination by the specified constraints (timeout, memory limit, etc.) """ clear(): """Clear execution pool to reuse it Raises: ValueError: attempt to clear a terminating execution pool """ __del__(): """Force termination of the pool""" __finalize__(): """Force termination of the pool""" A simple Web UI is designed to profile Jobs and Tasks, interactively trace their failures and resource consumption. It is implemented in the optional module mpewui and can be spawned by instantiating the WebUiApp class. A dedicated WebUiApp instance can be created per each ExecPool, serving the interfaces on the dedicated addresses (host:port). However, typically, a single global instance of WebUiApp is created and supplied to all employed ExecPool instances. Web UI module requires HTML templates installed by default from the pip distribution, which can be overwritten with the custom pages located in the views directory. See WebUI queries manual for API details. An example of the WebUI usage is shown in the mpetests.TestWebUI.test_failures of the mpetests. WebUiAppinstance works in the dedicated thread of the load balancer application and designed for the internal profiling with relatively small number of queries but not as a public web interface for the huge number of clients. WARNING: high loading of the WebUI may increase latency of the load balancer. WebUiApp(host='localhost', port=8080, name=None, daemon=None, group=None, args=(), kwargs={}) """WebUI App starting in the dedicated thread and providing remote interface to inspect ExecPool ATTENTION: Once constructed, the WebUI App lives in the dedicated thread until the main program exit. Args: uihost: str - Web UI host uiport: uint16 - Web UI port name: str - The thread name. By default, a unique name is constructed of the form Thread-N where N is a small decimal number. daemon: bool - Start the thread in the daemon mode to be automatically terminated on the main app exit. group - Reserved for future extension when a ThreadGroup class is implemented. args: tuple - The argument tuple for the target invocation. kwargs: dict - A dictionary of keyword arguments for the target invocation. Internal attributes: cmd: UiCmd - UI command to be executed, which includes (reserved) attribute(s) for the invocation result. """ UiCmdId = IntEnum('UiCmdId', 'FAILURES LIST_JOBS LIST_TASKS API_MANUAL') """UI Command Identifier associated with the REST URL""" def ramfracs(fracsize): """Evaluate the minimal number of RAM fractions of the specified size in GB Used to estimate the reasonable number of processes with the specified minimal dedicated RAM. fracsize - minimal size of each fraction in GB, can be a fractional number return the minimal number of RAM fractions having the specified size in GB """ def cpucorethreads(): """The number of hardware treads per a CPU core Used to specify CPU affinity dedicating the maximal amount of CPU cache L1/2. """ def cpunodes(): """The number of NUMA nodes, where CPUs are located Used to evaluate CPU index from the affinity table index considering the NUMA architecture. """ def cpusequential(): """Enumeration type of the logical CPUs: cross-nodes or sequential The enumeration can be cross-nodes starting with one hardware thread per each NUMA node, or sequential by enumerating all cores and hardware threads in each NUMA node first. For two hardware threads per a physical CPU core, where secondary hw threads are taken in brackets: Crossnodes enumeration, often used for the server CPUs NUMA node0 CPU(s): 0,2(,4,6) => PU L#1 (P#4) NUMA node1 CPU(s): 1,3(,5,7) Sequential enumeration, often used for the laptop CPUs NUMA node0 CPU(s): 0(,1),2(,3) => PU L#1 (P#1) - indicates sequential NUMA node1 CPU(s): 4(,5),6(,7) ATTENTION: `hwloc` utility is required to detect the type of logical CPUs enumeration: `$ sudo apt-get install hwloc` See details: return - enumeration type of the logical CPUs, bool or None: None - was not defined, most likely cross-nodes False - cross-nodes True - sequential """ Target version of the Python is 2.7+ including 3.x, also works fine on PyPy. The workflow consists of the following steps: See unit tests ( TestExecPool, TestProcMemTree, TestTasks classes) for the advanced examples. from multiprocessing import cpu_count from sys import executable as PYEXEC # Full path to the current Python interpreter from mpepool import AffinityMask, ExecPool, Job, Task # Import all required classes # 1. Create Multi-process execution pool with the optimal affinity step to maximize the dedicated CPU cache size execpool = ExecPool(max(cpu_count() - 1, 1), cpucorethreads()) global_timeout = 30 * 60 # 30 min, timeout to execute all scheduled jobs or terminate them # 2. Schedule jobs execution in the pool # 2.a Job scheduling using external executable: "ls -la" execpool.execute(Job(name='list_dir', args=('ls', '-la'))) # 2.b Job scheduling using python function / code fragment, # which is not a goal of the design, but is possible. # 2.b.1 Create the job with specified parameters jobname = 'NetShuffling' jobtimeout = 3 * 60 # 3 min # The network shuffling routine to be scheduled as a job, # which can also be a call of any external executable (see 2.alt below) args = (PYEXEC, '-c', """import os import subprocess basenet = '{jobname}' + '{_EXTNETFILE}' #print('basenet:', basenet, file=sys.stderr) for i in range(1, {shufnum} + 1): netfile = ''.join(('{jobname}', '.', str(i), '{_EXTNETFILE}')) if {overwrite} or not os.path.exists(netfile): # sort -R pgp_udir.net -o pgp_udir_rand3.net subprocess.call(('sort', '-R', basenet, '-o', netfile)) """.format(jobname=jobname, _EXTNETFILE='.net', shufnum=5, overwrite=False)) # 2.b.2 Schedule the job execution, which might be postponed # if there are no any free executor processes available execpool.execute(Job(name=jobname, workdir='this_sub_dir', args=args, timeout=jobtimeout # Note: onstart/ondone callbacks, custom parameters and others can be also specified here! )) # Add another jobs # ... # 3. Wait for the jobs execution for the specified timeout at most execpool.join(global_timeout) # 30 min In case the execution pool is required locally then it can be used in the following way: ... # Limit of the memory consumption for the all worker processes with max(32 GB, RAM) # and provide latency of 1.5 sec for the jobs rescheduling with ExecPool(max(cpu_count()-1, 1), vmlimit=32, latency=1.5) as xpool: job = Job('jmem_proc', args=(PYEXEC, '-c', TestProcMemTree.allocAndSpawnProg( allocDelayProg(inBytes(amem), duration), allocDelayProg(inBytes(camem), duration))) , timeout=timeout, memkind=0, ondone=mock.MagicMock()) jobx = Job('jmem_max-subproc', args=(PYEXEC, '-c', TestProcMemTree.allocAndSpawnProg( allocDelayProg(inBytes(amem), duration), allocDelayProg(inBytes(camem), duration))) , timeout=timeout, memkind=1, ondone=mock.MagicMock()) ... xpool.execute(job) xpool.execute(jobx) ... xpool.join(10) # Timeout for the execution of all jobs is 10 sec [+latency] The code shown above is fetched from the TestProcMemTree unit test. To perform graceful termination of the Jobs in case of external termination of your program, signal handlers can be set: import signal # Intercept kill signals # Use execpool as a global variable, which is set to None when all jobs are done, # and recreated on jobs scheduling execpool = None def terminationHandler(signal=None, frame=None, terminate=True): """Signal termination handler signal - raised signal frame - origin stack frame terminate - whether to terminate the application """ global execpool if execpool: del execpool # Destructors are called later # Define _execpool to avoid unnecessary trash in the error log, which might # be caused by the attempt of subsequent deletion on destruction execpool = None # Note: otherwise _execpool becomes undefined if terminate: sys.exit() # exit(0), 0 is the default exit code. # Set handlers of external signals, which can be the first lines inside # if __name__ == '__main__': signal.signal(signal.SIGTERM, terminationHandler) signal.signal(signal.SIGHUP, terminationHandler) signal.signal(signal.SIGINT, terminationHandler) signal.signal(signal.SIGQUIT, terminationHandler) signal.signal(signal.SIGABRT, terminationHandler) # Ignore terminated children procs to avoid zombies # ATTENTION: signal.SIG_IGN affects the return code of the former zombie resetting it to 0, # where signal.SIG_DFL works fine and without any the side effects. signal.signal(signal.SIGCHLD, signal.SIG_DFL) # Define execpool to schedule some jobs execpool = ExecPool(max(cpu_count() - 1, 1)) # Failsafe usage of execpool ... Also it is recommended to register the termination handler for the normal interpreter termination using atexit: import atexit ... # Set termination handler for the internal termination atexit.register(terminationHandler, terminate=False) Note: Please, star this project if you use it. stderr
https://awesomeopensource.com/project/eXascaleInfolab/PyExPool
CC-MAIN-2022-33
en
refinedweb
Question - Create a function that returns the letter position where the ball final position is once the swapping is finish. description - There are three cups on a table, at positions A, B, and C. At the start, there is a ball hidden under the cup at position B. - There will be several swap perform, represented by two letters. - For example, if I swap the cups at positions A and B, this can be represented as ABor BA. Examples cup_swapping(["AB", "CA"]) ➞ "C" cup_swapping(["AC", "CA", "CA", "AC"]) ➞ "B" cup_swapping(["BA", "AC", "CA", "BC"]) ➞ "A" My solution - 1. initialise a current position "B" - 2. iterate over the list of swap combination - 2.1 iterate over each swap combination - 3. check the ball have been swap - 3.1 if current position is exist in the swap combination, it is swapped - 3.2 update the current position - 3.2.1 if current_position equals to the first letter - 3.2.2 then final letter is the current position - 3.3.1 if current_position not equal the first letter - 3.3.2 then first letter is the current position - 4. print the final position def cup_swapping(swaps): current_position = "B" for move in swaps: if current_position in move: if current_position == move[0]: current_position = move[1] else: current_position = move[0] return current_position Solution by others Method 1 - shorten version of my answer def cup_swapping(swaps): current_position = "B" for move in swaps: if current_position in move: current_position = move[1] if move[0] == current_position else move[0] return current_position Method 2 def cup_swapping(swaps, current_position="B"): for move in swaps: current_position = move.replace(current_position, "") if current_position in move else current_position return current_position - key point - iterate over each swap - in the swap, there is two letter - if the current position appear in the swap, replace that letter with empty string - i.e. replace "AB" to "A" if current position is "A" - and also update the current position to the remaining single letter - if the current position do not appear in the swap, keep the current position My reflection - It is a good feeling to solve a problem without hint. First I got stuck when just write the code, then I try to solve the question by writing the algorithm down, it sudden feel much easier to think. Thus, I will make the habit the write down the algorithm first rather coding at first. Beside, I learn new way to shorten the if else statement Credit challenge found on edabit Discussion (0)
https://practicaldev-herokuapp-com.global.ssl.fastly.net/mathewchan/python-exercise-12-find-the-hidden-ball-4hkc
CC-MAIN-2022-33
en
refinedweb
Hello Nat The external reviews module has a number of config options that can be set in code, depending on your requirements. The options are documented here: What is your specific requirement so we can advise on the correct configuration of these options. David Hi David ideally we would like external reviewers not to have to log in to add reviews at all, but its looking like that is not possible? i had thought to simply add a authorization rule in to allow all access to the /EPiServer/advanced-cms.ExternalReviews/Views/ folder so that the required scripts and styles would load for everyone - because if you dont log in those 302 redirect to the login page which basically stops the review fucntionality working at all. But then we also have problems with the images being served up from the /Episerver/cms/Content/path-to-image path, which also dont load for unauthenticated viewers of the page. but guessing that is not how this is supposed to work? If we have all reviewers using a shared login, then I would expect that when the external reviewer tries to load the external link, it would kick to the login and then back to the page after a successful login Hello Nat You can use virtual roles to define a "ExternalReviewers" role which would allow anyone with the link to review and comment. They can then provide their real name when writing a review. It would mean anyone with the link could then review and comment so I'd advise trying to at least restrict this to internal users? The config I used to test is as follows (add to your <episerver.framework> config): <episerver.framework> <!--Other config--> <virtualRoles addClaims="true"> <providers> <!--Other config--> <add name="ExternalReviewers" type="EPiServer.Security.EveryoneRole, EPiServer.Framework" /> </providers> </virtualRoles> <!--Other config--> </episerver.framework> You can learn more about creating custom virtual roles here if you have a way of identifying users who you want to review and comment: David Ps the code to configure advanced reviews in relation to the above is below: using AdvancedExternalReviews; using EPiServer.Framework; using EPiServer.Framework.Initialization; using EPiServer.ServiceLocation; namespace Demo.Web { [InitializableModule] [ModuleDependency(typeof(FrameworkInitialization))] public class ExternalReviewInitialization : IConfigurableModule { public void ConfigureContainer(ServiceConfigurationContext context) { context.Services.Configure<ExternalReviewOptions>(options => { options.EditableLinksEnabled = true; }); } public void Initialize(InitializationEngine context) { } public void Uninitialize(InitializationEngine context) { } } } Hi David thanks for that, I had tried that config, but when accessing the link both the /EPiServer/advanced-cms.ExternalReviews/Views/external-review-component.js /EPiServer/advanced-cms.ExternalReviews/Views/reset.css both redirect to the main CMS login - so the review page does not function correctly - which I guess is why I was on this magical mystery tour in the first place. :( I dont know if this is standard but we have a <location path="EPiServer"> <system.web> <authorization> <allow roles="WebEditors, WebAdmins, Administrators" /> <deny users="*" /> </authorization> so guess that could well be causing issues - although it does seem to be set up like that in the Alloy template Hi Nat Did that issue still occur with the virtual role configuration I provied? I can see the same issue when I do not have the following line in my virtual providers section: <add name="ExternalReviewers" type="EPiServer.Security.EveryoneRole, EPiServer.Framework" /> David adding the virtual role doesnt seem to make any difference at all - both the js/css files are not loading. OK, so say I add a reviewer user, how do I get the externalReview pages to redirect to the login? as I guess I cant expect people to login via the standard CMS login, where they will complete the login form and then stay on that page - albeit without the 'login failed' error and then know to paste in the external link.. or is it easier to use the pin - in which the login does seem to come up, but then the submit of that login seems to 404 sorry, this is turning out to be a right pain.. really appreciate your help with this. Hi David, I had done just that, and it did work, although felt a little bit sledgehammer for a nut - and then noticed the web.config in the modules/_protected folder for the package and thought I might simply be able to change it there - as with my initial question. and I would also need to add a similar location rule for episerver/cms/content to allow the images to show up, as they seemed to be served from that url, and wasnt sure what the security concerns with that might be. Correct EveryoneRole just returns true so everyone should be in it. Virtual Roles have worked for years so I am curious as why they are not working for you in this instance. Can you check you don't have 'ExternalReviewers' already defined in admin mode? Then try and create some content and set the permissions to 'ExternalReviewers' only and see if you get access or not. If you are working in code you can just check IsInRole("ExternalReviewers"). I have only tested with ASP.net indentity so that is the only difference I can see with your configuration. If you want to debug you could create our own virutal role and see what gets executed. Here's the code for EveryoneRole: [ServiceConfiguration] public class EveryoneRole : VirtualRoleProviderBase { // Fields private static string _roleName; private const string DefaultRoleName = "Everyone"; // Methods public override bool IsInVirtualRole(IPrincipal principal, object context) => true; // Properties public override string Name { get => (base.Name ?? "Everyone"); set { base.Name = value; } } public static string RoleName { get => (_roleName ?? (_roleName = "Everyone")); set { _roleName = value; } } } Morning David So I am giving up on leaving it wide open, and think I will simply create a shared user for everyone. However, if I am adding a virtual role via the web.config - that doesnt seem to appear in the groups/roles list in the admin section, so difficult to assign a user to that group. Also, when I tried this yesterday - accessing the edit link generated by the external reviews package, didnt prompt the user to login. Is there any way of getting this to work? closest I got was to use the pin code instead, where the enter code did show - well the enter code box shoed, but on a completely blank page with no text to prompt the user at all. and then on submitting that, it 404'ed anyway. basically I am thinking there is something pretty messed up in the solution somewhere, so maybe I should spend some time looking for that. thanks again David I have added the role and user, but the only way I can get a login prompt, and get the scripts to work is by adding this to the config <location path="externalContentReviews"> <system.web> <authorization> <deny users="?" /> </authorization> </system.web> </location> <location path="EPiServer/advanced-cms.ExternalReviews/Views"> <system.web> <authorization> <allow users="*"/> </authorization> </system.web> </location> but at this point I am willing to accept that. 🤷♂️ think we should maybe look at changing the way users log in is the web.config packaged in the modules/_protected/add-on-folder-folder able to control access to the add on folder contents? it seems that most add on s come complete with a web.config with I have recently installed the advanced-cms.ExternalReviews package, but it does not allow non logged in users to access the external content reviews as the required styles/scripts from the addon redirect to the login. I thought I could control this by changing the packaged folder web.config to but it doesnt seem to have an effect. do I simply need to add the allow in the main web.config? thanks
https://world.optimizely.com/forum/developer-forum/Addons/Thread-Container/2021/5/web-config-in-the-modulesadd-on-name-folder/
CC-MAIN-2022-33
en
refinedweb
ID Token This page covers how you can subscribe and listen to the users ID Token and use it throughout your application. Subscribing to the ID Token If you require the users ID Token (e.g. for making external API requests), the useAuthIdToken provides a simple way to subscribe to latest token. import { useAuthIdToken } from "@react-query-firebase/auth"; import { auth } from "./firebase"; function App() { const tokenResult = useAuthIdToken(["token"], auth); if (tokenResult.isLoading) { return <div />; } if (tokenResult.data) { return <div>ID Token: {user.data.token}!</div>; } return <div>Not signed in.</div>; } Anytime your users state changes or the ID Token is refreshed, the hook will update either returning null (if there is no authenticated user) or an IDTokenResult containing your token and metadata to go with it. Force Refreshing By default, if the ID Token has not yet expired the same one will be returned. You can override this behaviour by providing the forceRefresh flag to the hook options: const tokenResult = useAuthIdToken(["token"], auth, { forceRefresh: true, }); Each time the query runs, your token will be fetched and update no matter the expiration time. Using the token Displaying the token isn't really that useful. Instead we're more likely to use it when performing API requests. Luckily React Query makes it easy to get the data of a query outside of the scope of a component, or within an extracted hook. Imagine we've got a custom useApiRequest hook within our application, using the same Query Key we can get the token before the request and attach it to the API request. import { useQueryClient, useQuery } from "react-query"; function useApiRequest(url) { const client = useQueryClient(); return useQuery(url, () => { return fetch(url, { headers: new Headers({ "X-ID-Token": client.getQueryData("token")?.token ?? "", }), }); }); } If the Query Key has a IDTokenResult stored, we'll extract the token from it and pass it as a header. Returning the token Sometimes you just want the actual ID Token, rather than the full IDTokenResult. Luckily React Query makes this simple by providing a data selector: useAuthIdToken(["token"], auth, { select(result) { if (result) { return result.token; } else { return ""; } }, }); Now the Query Key token will always ensure a string is value is returned. React Query options The hook also allows us to provide React Query options, supporting all of the options the useQuery supports! For example, we could handle side effects with ease: useAuthIdToken(["token"], auth, { onSuccess(result) { if (result) { localStorage.setItem("token", result.token); } else { localStorage.removeItem("token"); } }, });
https://react-query-firebase.invertase.dev/auth/id-token
CC-MAIN-2022-33
en
refinedweb
The DataFrame.axes attribute in Pandas is used to return a list of values representing the axes of a given DataFrame. It also returns the labels of the row and columns axis as the only member. DataFrame.axes attribute This attribute takes no parameter value. This attribute returns a list showing the labels of the row and column axis labels as the only members in that order. import pandas as pd # creating a dataframe df = pd.DataFrame({'AGE': [ 20, 29], 'HEIGHT': [94, 170], 'WEIGHT': [80, 115]}) # obtaining the list representing the axes of df print(df.axes) pandasmodule. df. DataFrame.axesattribute to obtain the list representing the axes of df. We print the result to the console. RELATED TAGS CONTRIBUTOR View all Courses
https://www.educative.io/answers/how-to-return-a-list-representing-the-axes-of-a-given-dataframe
CC-MAIN-2022-33
en
refinedweb
In Java, we have a default constructor that doesn't take any arguments. We typically use it to initialize fields to their default values. The default value for the variables of numeric types such as byte, int, short, and long is 0. The default value for float and double is 0.0. The default value for boolean is false. The default value for the reference variables is null. We use the below syntax for the default constructor: public class MyClass { public MyClass() { } } Below is an example of how we use a default constructor to initialize fields: public class MyClass { private int x; private String y; public MyClass() { this.x = 5; this.y = "Educative"; } public int getX() { return x; } public String getY() { return y; } public static void main(String[] args) { MyClass myClass = new MyClass(); System.out.println(myClass.getX()); System.out.println(myClass.getY()); } } In the example above, we'll initialize the fields x and y to their default values ( 5 and "Educative", respectively). What if you don't write a default constructor for a class? If you don't write a default constructor, the compiler will automatically generate one. However, this may not always be what you want. For example, if you have fields that need to be initialized to non-default values, you'll need to write your constructor. It's generally a good idea to write your own default constructor, even if you don't initialize any fields. This makes your code more precise and easier to understand. Let's differentiate between a default constructor and a no-argument constructor. The main difference between the two is that a default constructor will initialize fields to their default values. In contrast, a no-argument constructor will not initialize any fields. For example, consider the following class: public class MyClass { private int x; public MyClass() { } // no-argument constructor public int getX() { return x; } public static void main(String[] args) { MyClass myClass = new MyClass(); System.out.println(myClass.getX()); } } In the example above, the no-argument constructor doesn't initialize the field x. This means that if we create an object of this class, the field x will have whatever value was assigned to it by the default constructor (which, in this case, is 0). RELATED TAGS CONTRIBUTOR View all Courses
https://www.educative.io/answers/what-is-a-default-constructor-in-java
CC-MAIN-2022-33
en
refinedweb
table of contents NAME¶ posix_fallocate - allocate file space SYNOPSIS¶ #include <fcntl.h> int posix_fallocate(int fd, off_t offset, off_t len); posix_fallocate(): _POSIX_C_SOURCE >= 200112L DESCRIPTION¶ The pos. -¶ posix_fallocate() is available since glibc 2.1.94. ATTRIBUTES¶ For an explanation of the terms used in this section, see attributes(7). CONFORMING TO¶ POS. SEE ALSO¶ fallocate(1), fallocate(2), lseek(2), posix_fadvise(2) COLOPHON¶ This page is part of release 5.13 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at.
https://manpages.debian.org/testing/manpages-dev/posix_fallocate.3.en.html
CC-MAIN-2022-33
en
refinedweb
GREPPER SEARCH WRITEUPS DOCS INSTALL GREPPER All Languages >> Javascript >> file upload in angular 10 “file upload in angular 10” Code Answer’s upload files with angular javascript by Disturbed Dunlin on Oct 24 2021 Comment 1); }) } file upload in angular 10 typescript by Zany Zebra on Jun 23 2022 Comment 0 onFileChange(event) { const reader = new FileReader(); if (event.target.files && event.target.files.length) { const [file] = event.target.files; reader.readAsDataURL(file); reader.onload = () => { this.data.parentForm.patchValue({ tso: reader.result }); // need to run CD since file load runs outside of zone this.cd.markForCheck(); }; } } Source: stackoverflow.com Add a Grepper Answer Answers related to “file upload in angular 10” angular file upload upload file angular file upload in node js angular file upload app with django upload file in node angular button open file input file upload with angular material upload file angular rest api angualar image upload service image file upload in angular upload file to api angular angular file upload code anji Queries related to “file upload in angular 10” angular file upload image upload in angular file upload in angular how to upload file in angular file uploader angular upload file component angular file upload angular component angular form with file upload upload file using angular angular upload file to server how to upload file angular image file upload and load in angular button to upload file angular angular 9 file upload example file upload angular 10 file upload in angular 8 file upload component angular upload file angular 12 angularjs upload file uploading file in angular angular upload a file How To Upload File In Angular With ASP.NET how to upload to backend file using angular file uploader angular good upload files in angular demo upload file in angular forms image file upload in angular angularjs file-upload upload files with angular file upload form in angular upload in angular angular form file upload example angular file upload button upload a file to server in angular upload a file from angular 12 button file upload to client angular upload a file button angular form file upload with angularjs upload and download file angular file upload angular js upload document in angular angular 14 file upload angular 13 upload files file upload in angular 8 example upload a file angular10 how to upload file with angular upload files implement in angular how to upload file using angular and nodejs link to upload files in angular how to use FileUploader in angular image upload and display in angular file-upload update in angular 12 upload file and save in folder in angularjs how ro upload file in angular 8 show file upload angular how to upload a file angulkar opening uploaded file in angular file upload angular http angulare file uploader in .net upload file example angular browse,upload,display and save the file in same page in angular Any File type upload in angular uploader angular angulare file uploader Angular-upload file angular uploader uploading files in angular angular to upload file in node js angular upload file and seznd it to api file upload use nodejs angular different different file upload in angular upload file angunlar upload file angular using button Upload file in angular 7 new fieReader in angular to upload file upload document using angualr uploading files in angular forms upload file to backend angular example upload files in angular toturial input file upload in html angular upload file and save it in path angular upload file to backend angular latest angular file uploader upload file on button click in angular upload file button angular use button to upload file angular upload a file angularjs upload file angularjs upload file direct from angular upload file with data angular upload file from '@angular to MVC uploadig file c# and angular set angular file upload options show uploading file in angular upload file angular npm single file upload angular upload file angular form "angularjs" file upload angular js Upload.upload angular open file upload programmatically angular upload both data and file angular upload file and read it Angular upload file in folder angular input upload file button for upload file angular bundle upload files angular angular upload file WHEN click button angular uploadfile angular upload files and save document upload and management angular angular c# upload file angular 9 upload file angular file uplad angular 10 upload file angualr file uploader angular fileupload options angular file uploader tutorial angular file upload with forms angular file upload better angular file upload html angular file upload button example How we can upload file in angular 12 how to upload a file angular how to file upload store the file using angular How to upload an angular application file uploader angular 12 file upload with button angular file upload put angularjs How to upload Files using Angular? How to Upload Files in a angular 11 android how to upload document files in angular How to upload file in angula how to upload file and display by using locally in angular download file after upload using Angular file upload angular and .net file upload angular 12 file upload angular when click on button file upload + vien angular file angular upload File upload on button click Angular File upload in folder angular file upload in angular html file upload button in angular file upload functionality for button in angular file upload button in angular 12 angular upload file file upload angular upload file in angular file upload angularjs upload a file angular angular upload angular file upload component angular form file upload FileUpload angular file upload in angular 12 upload documents angular upload files in angular Angular 8 file upload file upload using angular angular file upload library angular file upload service angular 8 file upload example upload file angular 10 upload file in angular 12 angular upload file service angular fileupload how to upload files on clicking upload button in angular upload a file using angular angular file upload with preview upload files angular 12 Upload file in Angular 9 upload file in angular example upload angular angular 13 file upload file upload functionality in angular angular node upload file angular file upload php angular how to upload files file upload module angular file upload preview angular angluar file upload file upload using put method in angular file upload with preview in angular angular file uploader form data file upload angular php angular file upload .net upload a file with form submit angular file upload in angular 2 upload file with angular file uploader update upload file in angular 11 How to upload file in angular to api how to upload file to backend in angular angular file upload html button angualr file upload how to upload file in angular 10 how to file upload in angular update file using file upload in angular how to bundle upload files angular Angular 10 file upload example angular file upload to server opening upload file in angular angular files upload button upload file angular button file upload angular browse,upload and display the file in same page in angular AngularTS image upload and save inside Angular folder upload file trong angular angular input file upload uploading files through angular angular upload sample angular upload file with form angular upload file event c# angular upload file Angular how to upload file and store in local folder upload file angular service angular image file upload file upload and preview in angular upload file anguar on button click open uploader to upload file angularjs upload file in angular 8 open window to upload file angular upload file and save in folder angular upload file in angular with form data input upload file angular input uploadfile angular javascript angularjs upload file then view upload file on button click angular upload button angular html upload button angular upload a file angular6 upload a file button angular upload ong file angulaar upload file using nz-upload in angular upload a file in angular upload file angular php upload file from pc angular show an uploaded file on input angular upload file from angular to node js upload file angular button upload file from angular to java upload files angular node js input file upload in angular angular js file.upload angular to upload file angular upload file and download file angular upload file box angular upload file package angularjs upload file examples button for file upload in angular best file uploader in angular angulare upload files angular upload the file angular install upload-files angular 8 file upload bar angular button upload file angular 9 file uploader angular 12 upload file tutorial angular 10 handle file upload angular code to upload local file angular file upload package angular file uploader example angular file upload using service angular file upload input angular file upload formates create a file upload page in angular and mvc how can i button upload file using file in angular how to upload a file and save it in folder using angular how can i upload file using file in angular form upload file angular file uploader anguarl how to upload a file using angular with all details how to upload file with angularjs how to upload files into files io with angular how to upload file with form details in angular using nodejs how to upload file in angular js how to upload file form data in angular file upload using angular and nodejs file upload angaulr file upload angular 9 file upload angualr file upload and explorer in angular js file upload in node js and angular 8 file upload angular tutorials file upload in angular and node file upload in angularjs example file upload in angular formly file upload in angular 8 and node js file upload example in angular 11 upload file for all file type in angular upload file angular angular file uploader angular file upload example angular upload files upload files angular angular node js upload file file uploader in angular upload file with angular how to upload file using angular angular uploading files how to upload files angular image upload in angular and preview uploaded file angularjs upload file to server how to upload files in angular angularjs file upload angular file upload with form data angular 9 file upload upload file component with angular 12 angular upload file with form data upload file in angular folder upload angular html ng file upload how to upload a file with angular how to upload file on server in angular folder and file upload in angular angular upload file locally package for upload file angualr how to upload a file in angular file upload in angular12 angular upload file form data angular simple upload file angular upload button file upload in angularjs file upload on angular angular c# file upload file upload in angular13 file upload with + button angular angular file upload and preview upload file in project anglar file upload button angular Upload Angularjs file upload in angular 10 upload file with angular file ng2 File uploader angular file upload with image angular file upload ui upload files by angular angular file upload input for file upload angular upload files in angular tutorial angular file upload folder ng upload file form files upload in angularjs how can i upload file using filesaver in angular how to display uploaded file in angular how to make upload file button in angular angular file upload project open file upload on click angular file upload angular express implementing file upload in angular 12 upload file form angular browse and upload code in angular file upload angular example angularFire upload file angulare file upload in .net upload file in angualr upload file in angular 10 angular upload file to .net framework angular upload file button angularjs uploadfile custom upload file angular angular form upload file Angular 8 File Upload or Image Upload with Preview ng file upload angular 12 making a upload file button angular upload file to server angular upload.upload angularjs example upload code in angular uploading files to brower from server in angular upload file in angular to nodejs upload file in angular or node java file upload with angular Upload file and store in local in angular uploading documents from angular ow t sow upload file in html angular 8 upload file angular13 upload angular file upload file angular type upload a file with angular form upload a file from angular upload a file in angular js patch angular upload file upload a file in angularjs upload file with angular file fileuploader upload file to server angular 13 simple file upload angular upload file from angular to c# example Upload.upload Angularjs angular upload file programmatically angular nodejs upload file angular type file upload angular upload file and preview angular upload file component angular upload file to folder code to upload a file in angular button for file upload angular angularjs upload file example angular-upload file example angular upload files from folder angular form file upload dorper angular click upload file angular button to upload file angular 13 upload file angular 10 upload files angular 1.7 upload file Angular File Upload 2020 angular filer for file upload angular file upload without backend angular file upload module angular file upload html code angular file upload file preview document upload in angular how to upload a file in html angular how to handle file upload and preview in angular handling file uploads angular file uploading with angular file upload with preview angular how to upload document angular how to write a code to upload a document using angular how to upload files into file io with angular how to upload file in angular of type file how to upload file in angular form How to upload file and store in local folder angular file upload package angular file upload angular form file upload angular 13 file upload and preview angular file upload and download angular 12 file form upload in angular file upload as button angular file upload input angular File Upload in angular template File upload in Angular 9 file upload in anguarl file upload example in angular Browse Javascript Answers by Framework AngularJS jQuery Express Bootstrap React Vue Backbone Ember Next.js Node.js Ionic Flutter More “Kinda” Related Answers View All Javascript Answers » javascript get file extension exclude extension from filename javascript javascript remove extension from filename js fileinput get content javascript find file extension from string how to check if file upload is empty jquery javascript get filename from url jquery post upload file how to validate file extension in javascript how to append to file in js javascript reference file two folders up angular button open file input file origin does not match viewer's pdf.js how to get the extension from filename using javascript where to put js files in flask readonly javascript read only javascript multer save file with extension js check file exist file input disable open file picker javascript multer rename file jquery get value name uploaded file javascript set file input value to null blob to file javascript node file change event listener jquery get all file input elements how to get file extension in javascript last index javascript readfile validate file size in js javascript prompt for download location get uploaded file name in js get filename from url js file name without extension javascript read file javascript front javascript read text file from remote extract filename from content-disposition header js aws list all files in s3 bucket node js aws list all files in s3 bucket js save files js get file content from url move file from one folder to another in aws s3 nodejs javascript get file extension from string javascript download string as file Dynamically download different filetypes in JavaScript javascript save result to file share link to facebook javascript uploadgetfiletypefileextension get file extention js download file javascript cheerio load from file dropzone on success all files javascript - get the filename and extension from input type=file rename file in js Read text file in vanilla JS nextjs multer rename file download text file javascript javascript check if file exists on server javascript file drag and drop read file size javascript btoa javascript how to get file extension in javascript How to get input file using js download button html javascript angular get file from assets writing files in javascript javascript read server file file upload javascript convert file to blob javascript convert file to blob in angular javascript blob to file FileReader get filename new File in js js open file dialog js upload file dialog file upload in jquery uploading file with fetch in js uploading file with fetch react read multiple files with filereader js get file location how to get file type in javascript get current file name javascript multer express file upload how to read a file in javascript javascript file exists check download file from any url filereader reactjs set file upllaod via javascript to html input filereader check file type It isn't possible to write into a document from an asynchronously-loaded external script unless it is explicitly opened watch file in changes in webpack js read a ini file file extension name in js window.print filename accept only video in input type file below size get blob from file javascript jquery serialize with file upload Odoo Plain Javascript files dropzone add download button addedfile get file extension file upload control in javascript File Upload Button and display file javascript javascript read text file from url upload files with angular js local file read to blob variable unzip file electronjs get downloadable link to s3 bucket object js javascript equivalent of CTRL+F5 javascript upload file button javascript download file on click make word file download in bootstrap button javascript input file callback js download file from webserver make property read-only javascript upload file angular js send file as form input jszip create zip file js get the filename you uploaded download file on button click in angular 8 JavaScript - How to get the extension of a filename use filereader javascript make file from array save js get filenem js file_get_contents in javascript Upload different files in different folders using Multer in NodeJs extract string from text file javascript file upload nest Upload a file using ExpressJS+Multer javascript download file download word file from my page javascript js get files input type file limit size save file javascript synchronous file read dropzone upload on one file javascript auto save input import file in chrome extension upload file on node js azure function create a download file from blob url get field type file js and loop p5.js how to display a text from string iis express gzip get file extension of path extendscript laravel amazon s3 file upload how to create a filelist object in javascript how to edit a fil with vanilla js js pass data between pages upload bloob javascript download file from api response read files in javascript upload file from url javascript Using fetch to upload files angular file upload FTP download local file draft save using jquery filepond remove file after upload fiffo in javascript ajs access file text dfs javascript fetch file on server using jquery link the filename to the visible layer chrome extension how to save data to an alternative file html download zip file onclick Photoshop extendscript javascript save to text file a list of layers mozila readonly drupal 8 programmatically saving node doesn't save custom field values javascript file in chrome doesn't show changes immediately file path to blob javascript upload file javascript mdn save new jszip file bufer load content on user language in javascript javascript Rename in the module js watchFile how to update a function to accept a name and have it displayed in JavaScript 1update normalize-url general hardhat config js file code synchronous file reading define all jsdoc typedef in a seperate file Javascript - The file size is measured in bytes filepond remove uploaded file javascript activate file input Transfer file to server using rsync window.txt 10 pro white for file loaded js load inside div from file <script type="text/javascript">window.__initialDataLoaded(window._sharedData);</script> __filename not defined in mjs files use anchor element to open file saveAs method of file-saver in angular move_uploaded_file equivalent in js uppy count files from javascript createfileinput javascript creating a read stream from a large text file Edit src/App.js and save to reload. "gitpod" recover deleted files extendscript unzip file FTP upload local file javascript Rename in the import file fabic js save and render download print.js rtl all files website checker js browse file upload text file react js functional component how to change Mime type of a file express how to save file in folder using javascript Automatic update javascript fileversion how does URL.createObjectURl differ from fileReader nodejs: send html file to show in Browser controllare che ci sia un file in javascript how to enable button of upload after click on chosefile in jquery Save Function To Different Name btoa in js string only capacitorjs get zip code example restrict file input with react uploady how to extract java script elemet js file not show update content disposition attachment javascript fetch download "excel" ecmascript make file for one function get latest file from s3 bucket javascript submit file js sanitize html before storing to db in js convert File to multer file js ajax file upload input upload blob to server how to upload document cloddinary file upload with progress bar xhr.upload upload file to s3 using pre signed url javascript signed url to get file from s3 bucket import zenodo_upload from '@iomeg/zenodo-upload example access data from dofferent file in js how to save data in javascript saves javascript get extension from file name javascript write to text file stack overflow get file name with extension netsuite suitescript foramt file with jq javascript download save files in folder save file as get dimensions puppeteer js javascript read file file handling using javascript check if file exists javascript javascript load content from file file_get_contents in javascript js upload file size limit how to use input type file and show selected file on screen') angular cli angular lifecycle hooks check angular version how to update angular version update to angular 12 update angular cli 10 angular cli update angular date formats short date angular pipe angular event emitter angular [routerLink] check if substring in string typescript string contains javascirpt string contains in javascript angular string contains strstr javascript match substring js how to check whether a string contains a substring in typescript online js string contais onclick event in angular generate module with routing in angular redirect angular angular router navigate import angular flex layout how to install flexbox in angular how to run angular update formgroup value angular settimeout in angular angular input change event adding bootstrap to angular button disabled angular get current url angular get active url angular angular http request query params command to create custom pipe in angular 6 ngclass angular ngclass toaster for angular ngx toastr toast angular how to add toaster in angular 9 how to assign port in angular angular ng serve with custom port angular cors issue pipe of date angular how recharge la page angluar refresh page angular refresh page after delete angular how to refresh page angular angular npm angular material update angular cli angular local storage angular email validation create module with routing by angular componentdidupdate disable input angular hide and show in angular 8 angularjs cdn install ionic @angular/fire npm install @angular/fire firebase –save install angular fire angular firebase add firebase angular angular input press enter moment use in angular angular moment in select option how to make one default in angular bootstrap 4.6 npm angular add bootstrap how to install bootstrap in angular Your global Angular CLI version (11.0.2) is greater than your local version Your global Angular CLI version is greater than your local version downgrade angular version in project angular loop unistall angular cli désinstaller angular node js angular for loop how to put background image in angular 11 angular background image build angular project await in angular 8 start angular app server angular serve sweetalert angular 8 ngfor object angular for objetkeys pipe ngfor on keys of a dictionary angular angular for loop key value ng build prod how to remove angular package unistall react node package angular create guard angular command to create interceptor ternary operator angular template ternary operator in angular navigate to route and refresh angular 6 angular input value fontawesome angular angular 8 to 9 generate component with module angular 8 window resize dispatch emit resize event in angular js trigger window resize jquery window trigger resize disable formcontrol angular update angular materia; mouseover angular 6 angular mouseenter forloop angular typescript for loop copy to clipboard angular ionic lifecycle angular pipe first letter uppercase refresh current component angular restart component angular angular pipe for 2 decimal places angular elementref setinterval in angular 6 ng add angularFire2 ngular fire install firebase in angular ng remove @angular/fire ngchange angular 8 properly import mat icon angular 10 call a function whenever routerlink is clicke angular update angular to specific version angular build with configuration how to run angular application in visual studio code ERESOLVE could not resolve npm ERR! npm ERR! While resolving: @agm/core@1.1.0 npm ERR! Found: @angular/common@10.0.14 Unable to resolve dependency tree error when installing npm packages @angular/common@11.2.1 node_modules/@angular/common @angular/common@"11.2.1" from the root project ERESOLVE unable to resolve dependency tree mui 5 npm ERR! code ERESOLVE npm ERR! ERESOLVE unable to resolve dependency tree angular url parameter update angular onchange event in angular ngfor select angular uppercase angular pipe angular filter ngfor how to set form control value in angular ng new module w route install aos angular 10 aos animation angular angular add font what is ngmodel property binding how to fix cors in angular ng has unexpectedly closed (exit code 127). get value onChange from mat-select angular if condition in class angular 8 angular 11 on hover send event to child component angular open new tab with angular router angular adding delay javascript adding delay declare * angular jquery creare component in anglar If 'router-outlet' is an Angular component, then verify that it is part of this module. angular font awesome us phone number regex USA phone number validator angular gitignore for angular angular cli create component with module typescript filter array of objects angular filter array of objects angular x social login angular input date binding on variable update angular date input value angular dynamic class angular output bootstrap dropdown not working in angular 8 angular access form control value ngmodel change angular module with routing cli angular add object to array angular bootstrap not working angular int to string angular build aot vs jit scoll a div to bottom in angular how click button and redirect angular angular generate directive link in angular drop down listing in angular form angular 9 how to get previous state angular how to check previous route angular how to get previous state ionic 4 get previous route set route on button in angular add a route to a buttoin in angular angular 404 on refresh import file json angular 12 bootstrap not working in angular angular disabled condition based page reload button using angular skip import angular 6 angular httpclient query params not working submit a form on enter angular how to set disabled flag formgroup angular how to import all material module in angular angular 9 dockerfile label class text color in bs4 bootstrap change font color of text bootstrap text color bootstrap color class bootstrap text warning html background image auto resize background image size css array count items php length of array savve array to <List> java array to list jave arrayList to Array convert object array to list java 8 SQL how to use like in sql sql like query how to install mysql ubuntu alter table delete column how remove column in mysql# model in bootsrap 4 boostrap 4 modal bootstrap modal popup small modal popup bootstrap modal bootstrap style abril modal boostrap string to int c# c sharp split string string to date vb string to date installing bootstrap in angular 9 install ng bootstrap ngbmodal angular 9 yarn install .
https://www.codegrepper.com/code-examples/javascript/file+upload+in+angular+10
CC-MAIN-2022-33
en
refinedweb
® ™ Core Rulebook ® ™ Core Rulebook ® ™ Core Rulebook Credits Lead Designer: Jason Bulmahn Design Consultant: Monte Cook Additional Design: James Jacobs, Sean K Reynolds, and F. Wesley Schneider Additional Contributions: Tim Connors, Elizabeth Courts, Adam Daigle, David A. Eitelbach, Greg Oppedisano, and Hank Woon Cover Artist: Wayne Reynolds Interior Artists: Abrar Ajmal, Concept Art House, Vincent Dutrait, Jason Engle, Andrew Hou, Imaginary Friends, Steve Prescott, Wayne Reynolds, Sarah Stone, Franz Vohwinkel, Tyler Walpole, Eva Widermann, Ben Wootten, Svetlin Velinov, Kevin Yan, Kieran Yanner, and Serdar Yildiz Creative Director: James Jacobs Editing and Development: Christopher Carey, Erik Mona, Sean K Reynolds, Lisa Stevens, James L. Sutter, and Vic Wertz Editorial Assistance: Jeffrey Alvarez and F. Wesley Schneider Editorial Interns: David A. Eitelbach and Hank Woon Art Director: Sarah E. Robinson Senior Art Director: James Davis Special Thanks: The Paizo Customer Service and Warehouse Teams, Ryan Dancey, Clark Peterson, and the proud participants of the Open Gaming Movement. This game is dedicated to Gary Gygax and Dave Arnes. This game would not be possible without the passion and dedication of the thousands of gamers who helped playtest and develop it. Thank you for all of your time and effort. are not included in this declaration.) Open Content: Except for material designated as Product Identity (see above), the game mechanics of this Paizo Publishing game product are Open Game Content, as defined in the Open Gaming License version 1.0a Section 1(d). No portion of this work other than the material designated as Open Game Content may be reproduced in any form without written permission. Pathfinder Roleplaying Game Core Rulebook is published by Paizo Publishing, LLC under the Open Game License version 1.0a Copyright 2000 Wizards of the Coast, Inc. Paizo Publishing, LLC, the Paizo golem logo, Pathfinder, and GameMastery are registered trademarks of Paizo Publishing, LLC; Pathfinder Roleplaying Game, Pathfinder Society, Pathfinder Chronicles, Pathfinder Modules, and Pathfinder Companion are trademarks of Paizo Publishing, LLC. © 2009 Paizo Publishing. Sixth printing May 2013. Printed in China. TABLE OF CONTENTS Chapter 1: Getting Started 8 Chapter 7: Additional Rules 166 Chapter 13: Environment 410 Using This Book 9 Alignment 166 Dungeons 410 Common Terms 11 Traps 416 Example of Play 13 Vital Statistics 168 Sample Traps 420 Generating a Character 14 Wilderness 424 Ability Scores 15 Movement 170 Urban Adventures 433 Weather 437 Exploration 172 The Planes 440 Environmental Rules 442 Chapter 2: Races 20 Chapter 8: Combat 178 Chapter 14: Creating NPCs 448 Dwarves 21 How Combat Works 178 Elves 22 Combat Statistics 178 Adept 448 Gnomes 23 Actions in Combat 181 Half-Elves 24 Injury and Death 189 Aristocrat 449 Half-Orcs 25 Movement and Distance 192 Halflings 26 Combat Modifiers 195 Commoner 449 Humans 27 Special Attacks 197 Special Initiative Actions 202 Expert 450 Warrior 450 Chapter 3: Classes 30 Chapter 9: Magic 206 Creating NPCs 450 Character Advancement 30 Casting Spells 206 Chapter 15: Magic Items 458 Barbarian 31 Spell Descriptions 209 Bard 34 Arcane Spells 218 Using Items 458 Cleric 38 Divine Spells 220 Magic Items on the Body 459 Druid 48 Damaging Magic Items 459 Fighter 55 Chapter 10: Spells 224 Purchasing Magic Items 460 Monk 56 Magic Item Descriptions 460 Paladin 60 Spell Lists 224 Armor 461 Ranger 64 Weapons 467 Rogue 67 Spell Descriptions 239 Potions 477 Sorcerer 70 Rings 478 Wizard 77 Chapter 11: Prestige Classes 374 Rods 484 Scrolls 490 Arcane Archer 374 Staves 491 Wands 496 Arcane Trickster 376 Wondrous Items 496 Intelligent Items 532 Chapter 4: Skills 86 Assassin 378 Cursed Items 536 Artifacts 543 Acquiring Skills 86 Dragon Disciple 380 Magic Item Creation 548 Skill Descriptions 87 Duelist 382 Eldritch Knight 384 Loremaster 385 Chapter 5: Feats 112 Mystic Theurge 387 Prerequisites 112 Pathfinder Chronicler 388 Types of Feats 112 Feat Descriptions 113 Shadowdancer 391 Chapter 12: Gamemastering 396 Appendix 1: Special Abilities 554 Chapter 6: Equipment 140 Starting a Campaign 396 Appendix 2: Conditions 565 Wealth and Money 140 Building an Adventure 396 Appendix 3: Inspiring Reading 568 Weapons 140 Armors 149 Preparing for the Game 401 Appendix 4: Game Aids 569 Special Materials 154 Goods and Services 155 During the Game 402 Open Game License 569 Campaign Tips 404 Character Sheet 570 Ending the Campaign 406 Index 572 I t started in early 1997. Steve Winter, Creative Director at One of the best things about the Pathfinder RPG is that TSR, told a few of us designers and editors that we should it really necessitates no “conversion” of your existing books start thinking about a new edition of the world’s most and magazines. That shelf you have full of great adventures popular roleplaying game. For almost three years, a team and sourcebooks (many of them very likely from Paizo)? of us worked on developing a new rules set that built upon You can still use everything on it with the Pathfinder RPG. the foundation of the 25 years prior. Released in 2000, 3rd In fact, that was what convinced me to come on board the Edition started a new era. A few years later, a different set of Pathfinder RPG ship. I didn’t want to see all the great stuff designers made updates to the game in the form of 3.5. that had been produced thus far swept under the rug. Today, the Pathfinder Roleplaying Game carries on that Now, my role as “design consultant” was a relatively same tradition as the next step in the progression. Now, small one. Make no mistake: the Pathfinder RPG is Jason’s that might seem inappropriate, controversial, or even a baby. While my role was to read over material and give little blasphemous, but it’s still true. The Pathfinder RPG feedback, mostly I just chatted with Jason, relating old uses the foundations of the game’s long history to offer 3rd Edition design process stories. Jason felt it valuable to something new and fresh. It’s loyal to its roots, even if know why things were done the way they were. What was those roots are—in a fashion—borrowed. the thinking behind the magic item creation feats? Had we ever considered doing experience points a different way? The game’s designer, Jason Bulmahn, did an amazing How did the Treasure Value per Encounter chart evolve? job creating innovative new mechanics for the game, but And so on. he started with the premise that he already had a pretty good game to build upon. He didn’t wipe the slate clean It was an interesting time. Although I sometimes feel and start over. Jason had no desire to alienate the countless I have gone on at length about every facet of 3rd Edition fans who had invested equally countless hours playing the design in forums, in interviews, and at conventions, Jason game for the last 35 years. Rather, he wanted to empower managed to ask questions I’d never been asked before. them with the ability to build on what they’d already Together, we really probed the ins and outs of the game, created, played, and read. He didn’t want to take anything which I think is important to do before you start making away from them—only to give them even more. changes. You’ve got to know where you’ve been before you 4 Introduction can figure out where you’re going. This is particularly series of house rules for the 3.5 version of the world’s oldest true when you start messing around with a game as robust roleplaying game. In the fall of 2007, with a new edition of and tightly woven as 3rd Edition. The game’s design is an that game on the horizon, it seemed only natural that some intricate enough matrix that once you change one thing, gamers would prefer to stick with the rules they already other aspects of the game that you never even suspected owned. It also made sense that those same gamers would were related suddenly change as well. By the time we were like some updates to their rules, to make the game easier done hashing things out, we’d really put the original to use and more fun to play. When design of this game first system through its paces and conceived of some interesting began, compatibility with existing products was one of my new ideas. Jason used that as a springboard and then went primary goals, but I also wanted to make sure that all of the and did all the hard work while I sat back and watched with classes, races, and other elements were balanced and fun to a mix of awe and excitement as the various playtest and play. In other words, I endeavored to keep all of the great, preview versions of the game came out. iconic parts of the game, while fixing up the clunky rules that slowed down play and caused more than one heated The Pathfinder RPG offers cool new options for argument at the game table. characters. Rogues have talents. Sorcerers have bloodline powers. It fixes a few areas that proved troublesome over As the rules grew in size, it became apparent that the the last few years. Spells that turn you into something else changes were growing beyond a simple update into a full- are restructured. Grappling is simplified and rebalanced. f ledged rules system. So while the Pathfinder RPG is But it’s also still the game that you love, and have loved for compatible with the 3.5 rules, it can be used without any other so long, even if it was called by a different name. books. In the coming months, you can expect to see a number of brand-new products, made specifically to work with this I trust the gang at Paizo to bear the game’s torch well. version of the rules, from Paizo and a host of other publishers They respect the game’s past as much as its future. They through the Pathfinder Roleplaying Game Compatibility understand its traditions. It was my very distinct and License. This license allows publishers to use a special logo to sincere pleasure to play a small role in the Pathfinder RPG’s indicate that their product works with the rules in this book. development. You hold in your hands a truly great game that I’ve no doubt will provide you with hours and hours of fun. Making an already successful game system better is not a simple task. To accomplish this lofty goal, we turned to fans Enjoy! of the 3.5 rules, some of whom had been playing the game for over eight years. Since the spring of 2008, these rules Monte Cook have undergone some of the most stringent and extensive playtesting in gaming history. More than 50,000 gamers Adventure Awaits! have downloaded and used these rules. Moving through a number of playtest drafts, the final game that you now hold Welcome to a world where noble warriors battle mighty in your hands slowly started to come together. There were dragons and powerful wizards explore long-forgotten plenty of missteps, and more than one angry debate, but I tombs. This is a world of fantasy, populated by mysterious believe that we ended up with a better game as a result. This elves and savage orcs, wise dwarves and wily gnomes. In would not be the game you now hold without the passion this game, your character can become a master swordsman and inspiration of our playtesters. Thank you. who has never lost a duel, or a skilled thief capable of stealing the crown from atop the king’s head. You can play In closing, this game belongs to you and all the fans of a pious cleric wielding the power of the gods, or unravel fantasy gaming. I hope that you find this system to be fun the mysteries of magic as an enigmatic sorcerer. The world and simple to use, while still providing the same sort of is here for you to explore, and your actions will have a depth and variety of options you’ve come to expect from a profound inf luence in shaping its history. Who will rescue fantasy roleplaying game. the king from the clutches of a powerful vampire? Who will thwart the vengeful giants who have come from the There is a world of adventure waiting for you to explore. mountains to enslave the common folk? These stories wait It’s a world that needs brave and powerful heroes. Countless for your character to take center stage. With this rulebook, others have come before, but their time is over. Now it’s a few friends, and a handful of dice, you can begin your your turn. epic quest. Jason Bulmahn The Pathfinder Roleplaying Game did not start out Lead Designer as a standalone game. The first draft was designed as a 5 1 Getting Started The dragon roared in triumph as Valeros collapsed into the snow, blood spurting from the terrible wound in his belly. Kyra rushed to his side, praying that she wasn’t too late to save his life. “I’ll hold the beast off!” Seoni cried as she stepped up to the dragon, her staff f laring with defensive fire. Merisiel looked to the hulking dragon, then at the delicate sorcerer, and shook her head sadly. The adventure had just barely begun, and judging by this fight alone, they weren’t getting paid enough for the job. T he Pathfinder Roleplaying Game is a tabletop behind. (Although you should be honest about the results fantasy game in which the players take on the roles of your dice rolls, sometimes the results are not evident, of heroes who form a group (or party) to set out on and openly rolling the dice might give away too much dangerous adventures. Helping them tell this story is the information.) Combat in the Pathf inder RPG can be Game Master (or GM), who decides what threats the player resolved in one of two ways: you can describe the situation characters (or PCs) face and what sorts of rewards they earn to the characters and allow them to interact based on the for succeeding at their quest. Think of it as a cooperative description you provide, or you can draw the situation on storytelling game, where the players play the protagonists a piece of paper or a specially made battle mat and allow and the Game Master acts as the narrator, controlling the the characters to move their miniatures around to more rest of the world. accurately represent their position during the battle. While both ways have their advantages, if you choose If you are a player, you make all of the decisions for your the latter, you will need a mat to draw on, such as Paizo’s character, from what abilities your character has to the line of GameMastery Flip-Mats, as well as miniatures to type of weapon he carries. Playing a character, however, is represent the monsters or other adversaries. These can more than just following the rules in this book. You also also be found at your local game shop, or at paizo.com. decide your character’s personality. Is he a noble knight, set on vanquishing a powerful evil, or is he a conniving Playing the Game: While playing the Pathfinder RPG, rogue who cares more about gold than glory? The choice the Game Master describes the events that occur in the is up to you. game world, and the players take turns describing what their characters do in response to those events. Unlike If you are a Game Master, you control the world that the storytelling, however, the actions of the players and the players explore. Your job is to bring the setting to life and to characters controlled by the Game Master (frequently present the characters with challenges that are both fair and called non-player characters, or NPCs) are not certain. Most exciting. From the local merchant prince to the rampaging actions require dice rolls to determine success, with some dragon, you control all of the characters that are not being tasks being more difficult than others. Each character is played by the players. Paizo’s Pathfinder Adventure Path series, better at some things than he is at other things, granting Pathfinder Modules, and Pathfinder Chronicles world him bonuses based on his skills and abilities. guides provide everything you need to run a game, or you can invent your own, using the rules in this book as well as Whenever a roll is required, the roll is noted as “d#,” the monsters found in the Pathfinder RPG Bestiary. with the “#” representing the number of sides on the die. If you need to roll multiple dice of the same type, What You Need: In addition to this book, you will there will be a number before the “d.” For example, if you need a number of special dice to play the Pathfinder are required to roll 4d6, you should roll four six-sided Roleplaying Game. The dice that come with most board dice and add the results together. Sometimes there will games have six sides, but the Pathfinder Roleplaying be a + or – after the notation, meaning that you add that Game uses dice with four sides, six sides, eight sides, ten number to, or subtract it from, the total results of the dice sides, twelve sides, and twenty sides. Dice of this sort can (not to each individual die rolled). Most die rolls in the be found at your local game store or online at paizo.com. game use a d20 with a number of modifiers based on the character’s skills, his or her abilities, and the situation. In addition to dice, if you are a player, you will need Generally speaking, rolling high is better than rolling a character sheet (which can be photocopied from the low. Percentile rolls are a special case, indicated as rolling back of this book) and, if the Game Master uses a map d%. You can generate a random number in this range by to represent the adventure, a small figurine to represent rolling two differently colored ten-sided dice (2d10). Pick your character. These figurines, or miniatures, can one color to represent the tens digit, then roll both dice. also be found at most game stores. They come in a wide If the die chosen to be the tens digit rolls a “4” and the variety of styles, so you can probably f ind a miniature that other d10 rolls a “2,” then you’ve generated a 42. A zero relatively accurately depicts your character. on the tens digit die indicates a result from 1 to 9, or 100 if both dice result in a zero. Some d10s are printed with If you are the Game Master, you will need a copy of the “10,” “20,” “30,” and so on in order to make reading d% Pathfinder RPG Bestiary, which contains the rules for a rolls easier. Unless otherwise noted, whenever you must whole spectrum of monsters, from the mighty dragon to round a number, always round down. the lowly goblin. While many of these monsters can be used to fight against the players, others might provide As your character goes on adventures, he earns useful information or become powerful allies. Some gold, magic items, and experience points. Gold can be might even join the group, with one of the players taking used to purchase better equipment, while magic items on the role of a monstrous character. In addition, you possess powerful abilities that enhance your character. should have your own set of dice and some sort of screen you can use to hide your notes, maps, and dice rolls 8 Getting Started 1 Experience points are awarded for overcoming challenges have a number of “house rules” that they use in their games. and completing major storylines. When your character The Game Master and players should always discuss any has earned enough experience points, he increases his rules changes to make sure that everyone understands how character level by one, granting him new powers and the game will be played. Although the Game Master is the abilities that allow him to take on even greater challenges. final arbiter of the rules, the Pathfinder RPG is a shared While a 1st-level character might be up to saving a farmer’s experience, and all of the players should contribute their daughter from rampaging goblins, defeating a terrifying thoughts when the rules are in doubt. red dragon might require the powers of a 20th-level hero. It is the Game Master’s duty to provide challenges for your Using this book character that are engaging, but not so deadly as to leave you with no hope of success. For more information on the This book is divided into 15 chapters, along with a host duties of being a Game Master, see Chapter 12. of appendices. Chapters 1 through 11 cover all of the rules needed by players to create characters and play the game. Above all, have fun. Playing the Pathfinder RPG is Chapters 12 through 15 contain information intended supposed to be exciting and rewarding for both the Game to help a Game Master run the game and adjudicate the Master and the players. Adventure awaits! world. Generally speaking, if you are a player, you do not need to know the information in these later chapters, but The Most Important Rule you might be asked to reference them occasionally. The following synopses are presented to give you a broad The rules in this book are here to help you breathe life into overview of the rules encompassed within this book. your characters and the world they explore. While they are designed to make your game easy and exciting, you might Chapter 1 (Getting Started): This chapter covers the find that some of them do not suit the style of play that your basics of the Pathfinder RPG, including information on gaming group enjoys. Remember that these rules are yours. how to reference the rest of the book, rules for generating You can change them to fit your needs. Most Game Masters player characters (PCs), and rules for determining a 9 character’s ability scores. Ability scores are the most basic deals with how much weight your character can carry attributes possessed by a character, describing his raw without being hindered. Movement describes the distance potential and ability. your character can travel in a minute, hour, or day, depending upon his race and the environment. Visibility Chapter 2 (Races): The Pathfinder RPG contains seven deals with how far your character can see, based on race core races that represent the most common races in the and the prevailing light conditions. game world. They are dwarves, elves, gnomes, half-elves, half-orcs, half lings, and humans. This chapter covers all Chapter 8 (Combat): All characters eventually end up of the rules needed to play a member of one of these races. in life-or-death struggles against fearsome monsters When creating a PC, you should choose one of the races and dangerous villains. This chapter covers how to deal from this chapter. with combat in the Pathfinder RPG. During combat, each character acts in turn (determined by initiative), with Chapter 3 (Classes): There are 11 core classes in the the order repeating itself until one side has perished or Pathfinder RPG. Classes represent a character’s basic is otherwise defeated. In this chapter, you will find rules profession, and each one grants a host of special abilities. for taking a turn in combat, covering all of the various A character’s class also determines a wide variety of other actions that you can perform. This chapter also includes statistics used by the character, including hit points, saving rules for adjudicating special combat maneuvers (such throw bonuses, weapon and armor proficiencies, and skill as attempting to trip your enemy or trying to disarm his ranks. This chapter also covers the rules for advancing your weapon) and character injury and death. character as he grows in power (gaining levels). Gaining additional levels in a class grants additional abilities and Chapter 9 (Magic): A number of classes (and some increases other statistics. When creating a PC, you should monsters) can cast spells, which can do nearly anything, choose one class from this chapter and put one level into from bringing the dead back to life to roasting your that class (for example, if you choose your starting class to enemies with a ball of fire. This chapter deals with the rules be wizard, you would be a 1st-level wizard). for casting spells and learning new spells to cast. If your character can cast spells, you should become familiar with Chapter 4 (Skills): This chapter covers skills and how these rules. to use them during the game. Skills represent a wide variety of simple tasks that a character can perform, from Chapter 10 (Spells): Whereas the magic chapter describes climbing a wall to sneaking past a guard. Each character how to cast a spell, this chapter deals with the individual receives a number of skill ranks, which can be used to spells themselves, starting with the lists of which spells make the character better at using some skills. As a are available to characters based on their classes. This is character gains levels, he receives additional skill ranks, followed up by an extensive listing of every spell in the which can be used to improve existing skills possessed game, including its effects, range, duration, and other by the character or to become proficient in the use of important variables. A character that can cast spells should new skills. A character’s class determines how many skill read up on all the spells that are available to him. ranks a character can spend. Chapter 11 (Prestige Classes): Although the core classes Chapter 5 (Feats): Each character possesses a number of in Chapter 3 allow for a wide variety of character types, feats, which allow the character to perform some special prestige classes allow a character to become a master of one action or grant some other capability that would otherwise select theme. These advanced classes grant a specialized not be allowed. Each character begins play with at least list of abilities that make a character very powerful in one feat, and new feat choices are awarded as a character one area. A character must meet specific prerequisites advances in level. before deciding to take levels in a prestige class. These prerequisites vary depending upon the prestige class. If Chapter 6 (Equipment): This chapter covers the basic you plan on taking levels in a prestige class, you should gear and equipment that can be purchased, from armor familiarize yourself with the prerequisites to ensure that and weapons to torches and backpacks. Here you will also your character can eventually meet them. find listed the cost for common services, such as staying in an inn or booking passage on a boat. Starting characters Chapter 12 (Gamemastering): This chapter covers receive an amount of gold based on their respective classes the basics of running the Pathfinder RPG. It includes which they can spend on equipment at 1st level. guidelines for creating a game, using a published adventure, adjudicating matters at the table, and awarding Chapter 7 (Additional Rules): The rules in this chapter experience points and treasure. If you are the GM, you cover several miscellaneous rules that are important should become familiar with the concepts presented in to playing the Pathfinder RPG, including alignment, this chapter. encumbrance, movement, and visibility. Alignment tells you whether your character is an irredeemable villain, a Chapter 13 (Environment): Aside from fighting against virtuous hero, or anywhere in between. Encumbrance monsters, a host of other dangers and challenges await 10 Getting Started 1 the PCs as they play the Pathfinder RPG. This chapter are usually abbreviated using the first letter of each covers the rules for adjudicating the environment, from alignment component, such as LN for lawful neutral or cunning traps to bubbling lava, and is broken down CE for chaotic evil. Creatures that are neutral in both by environment type, including dungeons, deserts, components are denoted by a single “N.” mountains, forests, swamps, aquatic, urban, and other dimensions and planes beyond reality. Finally, this Armor Class (AC): All creatures in the game have an chapter also includes information on weather and its Armor Class. This score represents how hard it is to hit a effects on the game. creature in combat. As with other scores, higher is better. Chapter 14 (Creating NPCs): In addition to characters Base Attack Bonus (BAB): Each creature has a base and monsters, the world is populated by countless attack bonus and it represents its skill in combat. As a nonplayer characters (NPCs). These characters are created character gains levels or Hit Dice, his base attack bonus and controlled by the GM and represent every other person improves. When a creature’s base attack bonus reaches that exists in the game world, from the local shopkeep to +6, +11, or +16, he receives an additional attack in combat the greedy king. This chapter includes simple classes used when he takes a full-attack action (which is one type of by most NPCs (although some can possess levels in the core full-round action—s ee Chapter 8). classes and prestige classes) and a system for generating an NPC’s statistics quickly. Bonus: Bonuses are numerical values that are added to checks and statistical scores. Most bonuses have a type, Chapter 15 (Magic Items): As a character goes on and as a general rule, bonuses of the same type are not adventures, he often finds magic items to help him in his cumulative (do not “stack”)—only the greater bonus struggles. This chapter covers these magic items in detail, granted applies. including weapons, armor, potions, rings, rods, scrolls, staves, and wondrous items (a generic category that covers Caster Level (CL): Caster level represents a creature’s everything else). In addition, you will find cursed items power and ability when casting spells. When a creature (which hinder those who wield them), intelligent items, casts a spell, it often contains a number of variables, such artifacts (items of incredible power), and the rules for as range or damage, that are based on the caster’s level. creating new magic items in this chapter. Class: Classes represent chosen professions taken by Appendices: The appendices at the back of the book characters and some other creatures. Classes give a host gather a number of individual rules concerning special of bonuses and allow characters to take actions that they abilities and conditions. This section also includes a list of otherwise could not, such as casting spells or changing recommended reading and a discussion of other tools and shape. As a creature gains levels in a given class, it gains products that you can use for a more enjoyable Pathfinder new, more powerful abilities. Most PCs gain levels in the RPG experience. core classes or prestige classes, since these are the most powerful (see Chapters 3 and 11). Most NPCs gain levels in Common Terms NPC classes, which are less powerful (see Chapter 14). The Pathfinder RPG uses a number of terms, abbreviations, Check: A check is a d20 roll which may or may not be and definitions in presenting the rules of the game. The modified by another value. The most common types are following are among the most common. attack rolls, skill checks, ability checks, and saving throws. Ability Score: Each creature has six ability scores: Combat Maneuver: This is an action taken in combat Strength, Dexterity, Constitution, Intelligence, Wisdom, that does not directly cause harm to your opponent, such and Charisma. These scores represent a creature’s most as attempting to trip him, disarm him, or grapple with basic attributes. The higher the score, the more raw him (see Chapter 8). potential and talent your character possesses. Combat Maneuver Bonus (CMB): This value represents Action: An action is a discrete measurement of time how skilled a creature is at performing a combat maneuver. during a round of combat. Using abilities, casting spells, When attempting to perform a combat maneuver, this and making attacks all require actions to perform. There value is added to the character’s d20 roll. are a number of different kinds of actions, such as a standard action, move action, swift action, free action, and Combat Maneuver Defense (CMD): This score full-round action (see Chapter 8). represents how hard it is to perform a combat maneuver against this creature. A creature’s CMD is used as the Alignment: Alignment represents a creature’s difficulty class when performing a maneuver against basic moral and ethical attitude. Alignment has two that creature. components: one describing whether a creature is lawful, neutral, or chaotic, followed by another that describes Concentration Check: When a creature is casting a whether a character is good, neutral, or evil. Alignments spell, but is disrupted during the casting, he must make a concentration check or fail to cast the spell (see Chapter 9). Creature: A creature is an active participant in the story or world. This includes PCs, NPCs, and monsters. 11 Damage Reduction (DR): Creatures that are resistant Initiative: Whenever combat begins, all creatures to harm typically have damage reduction. This amount is involved in the battle must make an initiative check to subtracted from any damage dealt to them from a physical determine the order in which creatures act during combat. source. Most types of DR can be bypassed by certain The higher the result of the check, the earlier a creature types of weapons. This is denoted by a “/” followed by the gets to act. type, such as “10/cold iron.” Some types of DR apply to all physical attacks. Such DR is denoted by the “—” symbol. Level: A character’s level represents his overall ability See Appendix 1 for more information. and power. There are three types of levels. Class level is the number of levels of a specific class possessed by a character. Difficulty Class (DC): Whenever a creature attempts Character level is the sum of all of the levels possessed by to perform an action whose success is not guaranteed, he a character in all of his classes. In addition, spells have a must make some sort of check (usually a skill check). The level associated with them numbered from 0 to 9. This level result of that check must meet or exceed the Difficulty indicates the general power of the spell. As a spellcaster Class of the action that the creature is attempting to gains levels, he learns to cast spells of a higher level. perform in order for the action to be successful. Monster: Monsters are creatures that rely on racial Hit Extraordinary Abilities (Ex): Extraordinary abilities are Dice instead of class levels for their powers and abilities unusual abilities that do not rely on magic to function. (although some possess class levels as well). PCs are usually not monsters. Experience Points (XP): As a character overcomes challenges, defeats monsters, and completes quests, he Multiplying: When you are asked to apply more than gains experience points. These points accumulate over one multiplier to a roll, the multipliers are not multiplied time, and when they reach or surpass a specific value, the by one another. Instead, you combine them into a single character gains a level. multiplier, with each extra multiple adding 1 less than its value to the first multiple. For example, if you are asked to Feat: A feat is an ability a creature has mastered. Feats apply a ×2 multiplier twice, the result would be ×3, not ×4. often allow creatures to circumvent rules or restrictions. Creatures receive a number of feats based off their Hit Dice, Nonplayer Character (NPC): These are characters but some classes and other abilities grant bonus feats. controlled by the GM. Game Master (GM): A Game Master is the person who Penalty: Penalties are numerical values that are adjudicates the rules and controls all of the elements of the subtracted from a check or statistical score. Penalties do story and world that the players explore. A GM’s duty is to not have a type and most penalties stack with one another. provide a fair and fun game. Player Character (Character, PC): These are the Hit Dice (HD): Hit Dice represent a creature’s general characters portrayed by the players. level of power and skill. As a creature gains levels, it gains additional Hit Dice. Monsters, on the other hand, gain Round: Combat is measured in rounds. During an racial Hit Dice, which represent the monster’s general individual round, all creatures have a chance to take a turn prowess and ability. Hit Dice are represented by the to act, in order of initiative. A round represents 6 seconds number the creature possesses followed by a type of die, in the game world. such as “3d8.” This value is used to determine a creature’s total hit points. In this example, the creature has 3 Hit Rounding: Occasionally the rules ask you to round a Dice. When rolling for this creature’s hit points, you would result or value. Unless otherwise stated, always round roll a d8 three times and add the results together, along down. For example, if you are asked to take half of 7, the with other modifiers. result would be 3. Hit Points (hp): Hit points are an abstraction signifying Saving Throw: When a creature is the subject of a how robust and healthy a creature is at the current dangerous spell or effect, it often receives a saving throw to moment. To determine a creature’s hit points, roll the mitigate the damage or result. Saving throws are passive, dice indicated by its Hit Dice. A creature gains maximum meaning that a character does not need to take an action to hit points if its first Hit Die roll is for a character class make a saving throw—they are made automatically. There level. Creatures whose first Hit Die comes from an NPC are three types of saving throws: Fortitude (used to resist class or from his race roll their first Hit Die normally. poisons, diseases, and other bodily ailments), Ref lex (used Wounds subtract hit points, while healing (both natural to avoid effects that target an entire area, such as fireball), and magical) restores hit points. Some abilities and spells and Will (used to resist mental attacks and spells). grant temporary hit points that disappear after a specific duration. When a creature’s hit points drop below 0, it Skill: A skill represents a creature’s ability to perform an becomes unconscious. When a creature’s hit points reach a ordinary task, such as climb a wall, sneak down a hallway, negative total equal to its Constitution score, it dies. or spot an intruder. The number of ranks possessed by a creature in a given skill represents its proficiency in that skill. As a creature gains Hit Dice, it also gains additional skill ranks that can be added to its skills. 12 Getting Started 1 Spell: Spells can perform a wide variety of tasks, from The GM consults his notes about this part of the adventure and harming enemies to bringing the dead back to life. Spells realizes that there are indeed some monsters nearby, and that the specify what they can target, what their effects are, and PCs have walked into their trap. how they can be resisted or negated. GM: Lem, could you roll a Perception check? Spell-Like Abilities (Sp): Spell-like abilities function just Lem rolls a d20 and gets a 12. He then consults his character like spells, but are granted through a special racial ability sheet to find his bonus on Perception skill checks, which turns out or by a specific class ability (as opposed to spells, which are to be a +6. gained by spellcasting classes as a character gains levels). Lem: I got an 18. What do I see? GM: As you turn around, you spot six dark shapes moving Spell Resistance (SR): Some creatures are resistant to up behind you. As they enter the light from Ezren’s spell, magic and gain spell resistance. When a creature with you can tell that they’re skeletons, marching onto the bridge spell resistance is targeted by a spell, the caster of the spell wearing rusting armor and waving ancient swords. must make a caster level check to see if the spell affects the Lem: Guys, I think we have a problem. target. The DC of this check is equal to the target creature’s GM: You do indeed. Can I get everyone to roll initiative? SR (some spells do not allow SR checks). To determine the order of combat, each one of the players rolls a d20 and adds his or her initiative bonus. The GM rolls once Stacking: Stacking refers to the act of adding together for the skeletons and one additional time for their hidden leader. bonuses or penalties that apply to one particular check or Seelah gets an 18, Harsk a 16, Ezren a 12, and Lem a 5. The statistic. Generally speaking, most bonuses of the same skeletons get an 11, and their leader rolled an 8. type do not stack. Instead, only the highest bonus applies. GM: Seelah, you have the highest initiative. It’s your turn. Most penalties do stack, meaning that their values are Seelah: Since they’re skeletons, I’m going to attempt to added together. Penalties and bonuses generally stack with destroy them using the power of my goddess Iomedae. I one another, meaning that the penalties might negate or channel positive energy. exceed part or all of the bonuses, and vice versa. Seelah rolls 2d6 and gets a 7. Seelah: The skeletons take 7 points of damage, but they Supernatural Abilities (Su): Supernatural abilities are get to make a DC 15 Will save to only take half damage. magical attacks, defenses, and qualities. These abilities The GM rolls the Will saving throws for the skeletons and gets can be always active or they can require a specific action an 18, two 17s, a 15, an 8, and a 3. Since four of the skeletons made to utilize. The supernatural ability’s description includes their saving throws, they only take half damage (3 points), while information on how it is used and its effects. the other two take the full 7 points of damage. GM: Two of the skeletons burst into f lames and crumble Turn: In a round, a creature receives one turn, during as the power of your deity washes over them. The other which it can perform a wide variety of actions. Generally in four continue their advance. Harsk, it’s your turn. the course of one turn, a character can perform one standard Harsk: Great. I’m going to fire my crossbow at the action, one move action, one swift action, and a number of nearest skeleton. free actions. Less-common combinations of actions are Harsk rolls a d20 and gets a 13. He adds that to his bonus on permissible as well, see Chapter 8 for more details. attack rolls with his crossbow and announces a total of 22. The GM checks the skeleton’s armor class, which is only a 14. Example of Play GM: That’s a hit. Roll for damage. Harsk rolls a d10 and gets an 8. The GM realizes that the The GM is running a group of four players through their skeletons have damage reduction that can only be overcome latest adventure. They are playing Seelah (a human paladin), by bludgeoning weapons. Since crossbow bolts deal piercing Ezren (a human wizard), Harsk (a dwarf ranger) and Lem damage, the skeleton’s damage reduction reduces the damage (a half ling bard). The four adventurers are exploring the from 8 to 3, but this is still enough to reduce that skeleton’s hit ruins of an ancient keep, after hearing rumors that there points to below 0. are great treasures to be found in its musty vaults. As the GM: Although the crossbow bolt seemed to do less adventurers make their way toward the crumbling edifice, damage against the skeleton’s ancient bones, the hit was they cross an ancient stone bridge. After describing the hard enough to cause that skeleton to break apart. Ezren, scene, the GM asks the players what they want to do. it’s your turn. Ezren: I’m going to cast magic missile at the skeleton Harsk: Let’s keep moving. I don’t like the look of this that’s closest to me. place. I draw my crossbow and load it. Magic missile creates a number of glowing darts that always hit their target. Ezren rolls 1d4+1 for each missile and gets a Seelah: Agreed. I draw my sword, just in case. Ezren: I’m going to cast light so that we can see where we’re going. GM: Alright, a f lickering glow springs up from your hand, illuminating the area. Lem: I’d like to keep a lookout, just to make sure there are no monsters nearby. 13 total of 6. Since this is magic, it automatically bypasses the When generating a character, start with your character’s skeleton’s DR, causing another one to fall. concept. Do you want a character who goes toe-to-toe with terrible monsters, matching sword and shield GM: There are only two skeletons left, and it’s their against claws and fangs? Or do you want a mystical seer turn. One of them charges up to Seelah and takes a swing who draws his powers from the great beyond to further at her, while the other moves up to Harsk and attacks. his own ends? Nearly anything is possible. The GM rolls a d20 for both attacks. The attack against Seelah Once you have a general concept worked out, use the is only an 8, which is not equal to or higher than her AC of 18. The following steps to bring your idea to life, recording the attack against Harsk is a 17, which beats his AC of 16. The GM resulting information and statistics on your Pathfinder rolls damage for the skeleton’s attack. RPG character sheet, which can be found at the back of this book and photocopied for your convenience. GM: The skeleton hits you, Harsk, leaving a nasty cut on your upper arm. Take 7 points of damage. Step 1—Determine Ability Scores: Start by generating your character’s ability scores (see page 15). These six Harsk: Ouch. I have 22 hit points left. scores determine your character’s most basic attributes GM: That’s not all. Charging out of the fog onto the and are used to decide a wide variety of details and bridge is a skeleton dressed like a knight, riding the bones statistics. Some class selections require you to have better of a long-dead horse. The heads of the warrior’s previous than average scores for some of your abilities. victims are mounted atop its deadly lance. Lem, it’s your turn. What do you do? Step 2—Pick Your Race: Next, pick your character’s Lem: Run! race, noting any modifiers to your ability scores The combat continues in order, starting over with Seelah, until and any other racial traits (see Chapter 2). There are one side or the other is defeated. If the PCs survive the fight, they seven basic races to choose from, although your GM can continue on to the ancient castle to see what treasures and might have others to add to the list. Each race lists the perils lie within. languages your character automatically knows, as well as a number of bonus languages. A character knows a Generating a Character number of additional bonus languages equal to his or her Intelligence modif ier (see page 17). From the sly rogue to the stalwart paladin, the Pathfinder RPG allows you to make the character you want to play. 14 Getting Started 1 Step 3—Pick Your Class: A character’s class represents Assign these totals to your ability scores as you see fit. a profession, such as fighter or wizard. If this is a new This method is less random than Classic and tends to character, he starts at 1st level in his chosen class. As he create characters with above-average ability scores. gains experience points (XP) for defeating monsters, he goes up in level, granting him new powers and abilities. Classic: Roll 3d6 and add the dice together. Record this total and repeat the process until you generate six numbers. Step 4—Pick Skills and Select Feats: Determine the Assign these results to your ability scores as you see fit. This number of skill ranks possessed by your character, based on method is quite random, and some characters will have his class and Intelligence modifier (and any other bonuses, clearly superior abilities. This randomness can be taken one such as the bonus received by humans). Then spend step further, with the totals applied to specific ability scores these ranks on skills, but remember that you cannot have in the order they are rolled. Characters generated using this more ranks than your level in any one skill (for a starting method are difficult to fit to predetermined concepts, as character, this is usually one). After skills, determine how their scores might not support given classes or personalities, many feats your character receives, based on his class and and instead are best designed around their ability scores. level, and select them from those presented in Chapter 5. Heroic: Roll 2d6 and add 6 to the sum of the dice. Record Step 5—Buy Equipment: Each new character begins this total and repeat the process until six numbers are the game with an amount of gold, based on his class, that generated. Assign these totals to your ability scores as you can be spent on a wide range of equipment and gear, from see fit. This is less random than the Standard method and chainmail armor to leather backpacks. This gear helps generates characters with mostly above-average scores. your character survive while adventuring. Generally speaking, you cannot use this starting money to buy Dice Pool: Each character has a pool of 24d6 to assign magic items without the consent of your GM. to his statistics. Before the dice are rolled, the player selects the number of dice to roll for each score, with a Step 6—Finishing Details: Finally, you need to minimum of 3d6 for each ability. Once the dice have been determine all of a character’s details, including his assigned, the player rolls each group and totals the result starting hit points (hp), Armor Class (AC), saving throws, of the three highest dice. For more high-powered games, initiative modifier, and attack values. All of these numbers the GM should increase the total number of dice to 28. are determined by the decisions made in previous steps. A This method generates characters of a similar power to level 1 character begins with maximum hit points for its the Standard method. Hit Die roll. Aside from these, you need to decide on your character’s name, alignment, and physical appearance. It Purchase: Each character receives a number of points is best to jot down a few personality traits as well, to help to spend on increasing his basic attributes. In this you play the character during the game. Additional rules method, all attributes start at a base of 10. A character (like age and alignment) are described in Chapter 7. can increase an individual score by spending some of his points. Likewise, he can gain more points to Ability Scores spend on other scores by decreasing one or more of his ability scores. No score can be reduced below 7 or raised Each character has six ability scores that represent his above 18 using this method. See Table 1–1 on the next page character’s most basic attributes. They are his raw talent for the costs of each score. After all the points are spent, and prowess. While a character rarely rolls an ability check apply any racial modifiers the character might have. (using just an ability score), these scores, and the modifiers they create, affect nearly every aspect of a character’s skills The number of points you have to spend using the and abilities. Each ability score generally ranges from 3 to purchase method depends on the type of campaign you 18, although racial bonuses and penalties can alter this; an are playing. The standard value for a character is 15 points. average ability score is 10. Average nonplayer characters (NPCs) are typically built using as few as 3 points. See Table 1–2 on the next page for Generating Ability Scores a number of possible point values depending on the style of campaign. The purchase method emphasizes player There are a number of different methods used to generate choice and creates equally balanced characters. This ability scores. Each of these methods gives a different level system is typically used for organized play events, such as of f lexibility and randomness to character generation. the Pathfinder Society (visit paizo.com/pathfinderSociety Racial modif iers (adjustments made to your ability for more details on this exciting campaign). scores due to your character’s race—see Chapter 2) are applied after the scores are generated. Determine Bonuses Standard: Roll 4d6, discard the lowest die result, and Each ability, after changes made because of race, has add the three remaining results together. Record this total a modifier ranging from –5 to +5. Table 1–3 shows the and repeat the process until six numbers are generated. modifier for each score. The modifier is the number 15 you apply to the die roll when your character tries to do two-handed attacks receive 1–1/2 times the Strength something related to that ability. You also use the modifier bonus. A Strength penalty, but not a bonus, applies to with some numbers that aren’t die rolls. A positive attacks made with a bow that is not a composite bow.) modifier is called a bonus, and a negative modifier is • Climb and Swim checks. called a penalty. The table also shows bonus spells, which • Strength checks (for breaking down doors and the like). you’ll need to know about if your character is a spellcaster. Dexterity (Dex) Abilities and Spellcasters Dexterity measures agility, ref lexes, and balance. This The ability that governs bonus spells depends on what type ability is the most important one for rogues, but it’s also of spellcaster your character is: Intelligence for wizards; useful for characters who wear light or medium armor or Wisdom for clerics, druids, and rangers; and Charisma for no armor at all. This ability is vital for characters seeking bards, paladins, and sorcerers. In addition to having a high to excel with ranged weapons, such as the bow or sling. A ability score, a spellcaster must be of a high enough class character with a Dexterity score of 0 is incapable of moving level to be able to cast spells or use spell slots of a given spell and is effectively immobile (but not unconscious). level. See the class descriptions in Chapter 3 for details. You apply your character’s Dexterity modifier to: The Abilities • Ranged attack rolls, including those for attacks made Each ability partially describes your character and affects with bows, crossbows, throwing axes, and many ranged some of his actions. spell attacks like scorching ray or searing light. • Armor Class (AC), provided that the character can react Strength (Str) to the attack. • Ref lex saving throws, for avoiding fireballs and other Strength measures muscle and physical power. This ability attacks that you can escape by moving quickly. is important for those who engage in hand-to-hand (or • Acrobatics, Disable Device, Escape Artist, Fly, Ride, “melee”) combat, such as fighters, monks, paladins, and Sleight of Hand, and Stealth checks. some rangers. Strength also sets the maximum amount of weight your character can carry. A character with a Constitution (Con) Strength score of 0 is too weak to move in any way and is unconscious. Some creatures do not possess a Strength Constitution represents your character’s health and score and have no modifier at all to Strength-based skills stamina. A Constitution bonus increases a character’s or checks. hit points, so the ability is important for all classes. Some creatures, such as undead and constructs, do You apply your character’s Strength modifier to: not have a Constitution score. Their modifier is +0 • Melee attack rolls. for any Constitution-based checks. A character with a • Damage rolls when using a melee weapon or a thrown Constitution score of 0 is dead. weapon, including a sling. (Exceptions: Off-hand attacks You apply your character’s Constitution modifier to: receive only half the character’s Strength bonus, while • Each roll of a Hit Die (though a penalty can never drop a Table 1–1: Ability Score Costs result below 1—that is, a character always gains at least 1 hit point each time he advances in level). Score Points Score Points • Fortitude saving throws, for resisting poison, disease, 3 and similar threats. 7 –4 13 5 If a character’s Constitution score changes enough to 7 alter his or her Constitution modifier, the character’s hit 8 –2 14 10 points also increase or decrease accordingly. 13 9 –1 15 17 Intelligence (Int) 10 0 16 Intelligence determines how well your character learns and reasons. This ability is important for wizards because 11 1 17 it affects their spellcasting ability in many ways. Creatures of animal-level instinct have Intelligence scores of 1 or 2. 12 2 18 Any creature capable of understanding speech has a score of at least 3. A character with an Intelligence score of 0 is Table 1–2: Ability Score Points comatose. Some creatures do not possess an Intelligence score. Their modifier is +0 for any Intelligence-based Campaign Type Points skills or checks. Low Fantasy 10 Standard Fantasy 15 High Fantasy 20 Epic Fantasy 25 16 Getting Started 1 Table 1–3: Ability Modifiers and Bonus Spells Ability Bonus Spells per Day (by Spell Level) Score Modifier 0 1st 2nd 3rd 4th 5th 6th 7th 8th 9th 1 –5 Can’t cast spells tied to this ability 2–3 –4 Can’t cast spells tied to this ability 4–5 –3 Can’t cast spells tied to this ability 6–7 –2 Can’t cast spells tied to this ability 8–9 –1 Can’t cast spells tied to this ability 10–11 0 — — — — — — — — — — 12–13 +1 — 1 — — — — — — — — 14–15 +2 — 1 1 — — — — — — — 16–17 +3 — 1 1 1 — — — — — — 18–19 +4 — 1 1 1 1 — — — — — 20–21 +5 — 2 1 1 1 1 — — — — 22–23 +6 — 2 2 11 11 — — — 24–25 +7 — 2 2 2 1 1 1 1 — — 26–27 +8 — 2 2 2 2 1 1 1 1 — 28–29 +9 — 3 2 2 2 2 1 1 1 1 30–31 +10 — 3 3 22 22 1 11 32–33 +11 — 3 3 3 2 2 2 2 1 1 34–35 +12 — 3 3 3 3 2 2 2 2 1 36–37 +13 — 4 3 3 3 3 2 2 2 2 38–39 +14 — 4 4 33 33 2 22 40–41 +15 — 4 4 4 3 3 3 3 2 2 42–43 +16 — 4 4 4 4 3 3 3 3 2 44–45 +17 — 5 4 4 4 4 3 3 3 3 etc. … You apply your character’s Intelligence modifier to: • Heal, Perception, Profession, Sense Motive, and • The number of bonus languages your character knows at Survival checks. Clerics, druids, and rangers get bonus spells based on the start of the game. These are in addition to any starting racial languages and Common. If you have a penalty, you their Wisdom scores. The minimum Wisdom score needed can still read and speak your racial languages unless your to cast a cleric, druid, or ranger spell is 10 + the spell’s level. Intelligence is lower than 3. • The number of skill points gained each level, though Charisma (Cha) your character always gets at least 1 skill point per level. • Appraise, Craft, Knowledge, Linguistics, and Spellcraft Charisma measures a character’s personality, personal checks. magnetism, ability to lead, and appearance. It is the most A wizard gains bonus spells based on his Intelligence important ability for paladins, sorcerers, and bards. It score. The minimum Intelligence score needed to cast a is also important for clerics, since it affects their ability wizard spell is 10 + the spell’s level. to channel energy. For undead creatures, Charisma is a measure of their unnatural “lifeforce.” Every creature has Wisdom (Wis) a Charisma score. A character with a Charisma score of 0 is not able to exert himself in any way and is unconscious. Wisdom describes a character’s willpower, common sense, awareness, and intuition. Wisdom is the most important You apply your character’s Charisma modifier to: ability for clerics and druids, and it is also important for • Bluff, Diplomacy, Disguise, Handle Animal, Intimidate, paladins and rangers. If you want your character to have acute senses, put a high score in Wisdom. Every creature Perform, and Use Magic Device checks. has a Wisdom score. A character with a Wisdom score of 0 • Checks that represent attempts to inf luence others. is incapable of rational thought and is unconscious. • Channel energy DCs for clerics and paladins attempting You apply your character’s Wisdom modifier to: to harm undead foes. • Will saving throws (for negating the effects of charm Bards, paladins, and sorcerers gain a number of bonus spells based on their Charisma scores. The minimum person and other spells). Charisma score needed to cast a bard, paladin, or sorcerer spell is 10 + the spell’s level. 17 2 Races With a small army of merciless dark elf warriors fast on their heels, escape from the drow city seemed unlikely. While the human Sajan and gnome Lini might survive as slaves, Seltyiel knew the drow would only keep him, a half-elf, alive long enough to boast over while they tortured him. Cursing his mixed blood again, he sneered and turned abruptly, instantly summoning to mind the words of his most devastating arcane fire. “Come on, you fungus-eating freaks!” he shouted at the relentless drow. “Let me show you how elves from the surface world dance!” 7ft 6ft 5ft 4ft 3ft 2ft 1ft 0ft Elf Human Gnome Half-orc Half-elf Dwarf Halfling From the stout dwarf to the noble elf, the races of doesn’t force you to choose one religion or alignment over the Pathfinder Roleplaying Game are a diverse mix another, the typical choices for each race are mentioned. of cultures, sizes, attitudes, and appearances. After Next is a discussion of why a member of the race in you’ve generated your character’s basic ability scores, question might decide to take on the peril-filled life of an the next step in the character creation process is to adventurer. Finally, we list a few sample names for males select your character’s race; this chapter presents seven and females of each race. different options from which to choose. These seven races comprise the most commonly encountered civilized Each of the seven races also has a suite of special abilities, races in the Pathfinder RPG. bonuses, and other adjustments that apply to all members of that race. These are your character’s “racial traits.” Choosing your character’s race is one of the more important decisions you’ll need to make. As your Each race also has ability score modifiers that are character grows more powerful, you’ll be able to diversify applied after you’ve generated your ability scores, as his or her abilities by selecting different classes, skills, described in the previous chapter. These modifiers can and feats, but you only get to pick your race once (unless raise an ability score above 18 or reduce a score below some unusual magic, like reincarnation, comes into 3—although having such a low score in any of your play). Of course, each race is best suited to a specific abilities is something you should avoid, as there’s no type of role—dwarves make better fighters than they surer route to character death than a low Constitution, do sorcerers, while half lings aren’t as good as half-orcs and no swifter route to frustration than a PC who can’t at being barbarians. Keep each race’s advantages and talk since his Intelligence is lower than 3. You should disadvantages in mind when making your choice. While seek your GM’s approval before playing a character with it can be fun to play a race against its assumed role, it’s not any ability score of less than 3. as fun to get three levels into a character before realizing that the character you wanted to play would have been The seven races presented in this chapter have wildly better off as a different race entirely. different abilities, personalities, and societies, but at the same time, all seven races are quite similar—none of the Each of the seven races in this chapter is presented in races here deviate too far from humanity, and all of their the same format, starting with a generalized description abilities are roughly equal and balanced. Other races, more of the race’s role in the world. This is followed by a physical powerful and more exotic, exist in the game world as well, description of an average member of that race, a brief but the Pathfinder RPG is built and balanced with the overview of the race’s society, and a few words about the expectation that all players start on roughly equal footing. race’s relations with the other six. Although your race Rules and guidelines for playing more powerful or more unusual races can be found in Chapter 12. 20 Races 2 Dwarves their races. Dwarves generally distrust and shun half- orcs. They find half lings, elves, and gnomes to be too Dwarves are a stoic but stern race, ensconced in cities frail, f lighty, or “pretty” to be worthy of proper respect. carved from the hearts of mountains and fiercely It is with humans that dwarves share the strongest link, determined to repel the depredations of savage races like for humans’ industrious nature and hearty appetites orcs and goblins. More than any other race, the dwarves come closest to matching those of the dwarven ideal. have acquired a reputation as dour and humorless craftsmen of the earth. It could be said that dwarven Alignment and Religion: Dwarves are driven by honor history shapes the dark disposition of many dwarves, and tradition, and while they are often satirized as for they reside in high mountains and dangerous realms standoffish, they have a strong sense of friendship and below the earth, constantly at war with giants, goblins, justice, and those who win their trust understand that, and other such horrors. while they work hard, they play even harder—especially when good ale is involved. Most dwarves are lawful good. Physical Description: Dwarves are a short and stocky They prefer to worship deities whose tenets match these race, and stand about a foot shorter than most humans, traits, and Torag is a favorite among dwarves, though with wide, compact bodies that account for their burly Abadar and Gorum are common choices as well. appearance. Male and female dwarves pride themselves on the length of their hair, and men often decorate their Adventurers: Although dwarven adventurers are rare beards with a variety of clasps and intricate braids. A clean- compared to humans, they can be found in most regions shaven male dwarf is a sure sign of madness, or worse—no of the world. Dwarves often leave the confines of their one familiar with their race trusts a beardless dwarf. redoubts to seek glory for their clans, to find wealth with which to enrich the fortress-homes of their birth, or to Society: The great distances between their mountain reclaim fallen dwarven citadels from racial enemies. citadels account for many of the cultural differences Dwarven warfare is often characterized by tunnel that exist within dwarven society. Despite these schisms, fighting and melee combat, and as such most dwarves dwarves throughout the world are characterized by their tend toward classes such as fighters and barbarians. love of stonework, their passion for stone- and metal-based craftsmanship and architecture, and a fierce hatred of Male Names: Dolgrin, Grunyar, Harsk, Kazmuk, giants, orcs, and goblinoids. Morgrym, Rogar. Relations: Dwarves and orcs have long dwelt in Female Names: Agna, Bodill, Ingra, Kotri, proximity, theirs a history of violence as old as both Rusilka, Yangrit.. 21 Elves world. Most, however, find manipulating earth and stone to be distasteful, and prefer instead to indulge in the finer The long-lived elves are children of the natural world, arts, with their inborn patience making them particularly similar in many superficial ways to fey creatures, yet suited to wizardry. different as well. Elves value their privacy and traditions, and while they are often slow to make friends, at both the Relations: Elves are prone to dismissing other races, personal and national levels, once an outsider is accepted writing them off as rash and impulsive, yet they are as a comrade, such alliances can last for generations. excellent judges of character. An elf might not want a Elves have a curious attachment to their surroundings, dwarf neighbor, but would be the first to acknowledge that perhaps as a result of their incredibly long lifespans or dwarf ’s skill at smithing. They regard gnomes as strange some deeper, more mystical reason. Elves who dwell in (and sometimes dangerous) curiosities, and half lings with a region for long find themselves physically adapting to a measure of pity, for these small folk seem to the elves to be adrift, without a traditional home. Elves are fascinated match their surroundings, most noticeably with humans, as evidenced by the number of half-elves taking on coloration ref lecting the local in the world, even if they usually disown such offspring. environment. Those elves that spend They regard half-orcs with distrust and suspicion. their lives among the short-lived races, on the other hand, often develop Alignment and Religion: Elves are emotional and a skewed perception of mortality and capricious, yet value kindness and beauty. Most elves are become morose, the result of watching chaotic good. They prefer deities that share their love of wave after wave of companions age and the mystic qualities of the world—Desna and Nethys are die before their eyes. particular favorites, the former for her wonder and love of Physical Description: Although the wild places, and the latter for his mastery of magic. generally taller than humans, Calistria is perhaps the most notorious of elven deities, for elves possess a graceful, fragile she represents elven ideals taken to an extreme. physique that is accentuated by their long, pointed ears. Their eyes Adventurers: Many elves embark on adventures out of a are wide and almond-shaped, and desire to explore the world, leaving their secluded forest filled with large, vibrantly colored realms to reclaim forgotten elven magic or search out lost pupils. While elven clothing often kingdoms established millennia ago by their forefathers. plays off the beauty of the natural For those raised among humans, the ephemeral and world, those elves that live in unfettered life of an adventurer holds natural appeal. Elves cities tend to bedeck themselves generally eschew melee because of their frailty, preferring in the latest fashion. instead to pursue classes such as wizards and rangers. Society: Many elves feel a bond with nature and strive to Male Names: Caladrel, Heldalel, Lanliss, Meirdrarel, live in harmony with the natural Seldlon, Talathel, Variel, Zordlon. Female Names: Amrunelara, Dardlara, Faunra, Jathal, Merisiel, Oparal, Soumral, Tessara, Yalandlara. Chapter 7.. 22 Races 2 Gnomes Relations: Gnomes have difficulty interacting with the Gnomes trace their lineage back to the mysterious realm other races, on both emotional and physical levels. Gnome of the fey, a place where colors are brighter, the wildlands wilder, and emotions more primal. Unknown forces drove humor is hard to translate and often comes across as the ancient gnomes from that realm long ago, forcing them to seek refuge in this world; despite this, the gnomes have malicious or senseless to other races, while gnomes in turn never completely abandoned their fey roots or adapted to mortal culture. As a result, gnomes are widely regarded by tend to think of the taller races as dull and lumbering giants. the other races as alien and strange. They get along well with half lings and humans, but are Physical Description: Gnomes are one of the smallest of the common races, generally standing just over 3 feet overly fond of playing jokes on dwarves and half-orcs, whom in height. Their hair tends toward vibrant colors such as the fiery orange of autumn leaves, the verdant green most gnomes feel need to lighten up. They respect elves, but of forests at springtime, or the deep reds and purples of wildf lowers in bloom. Similarly, their f lesh tones range often grow frustrated with the comparatively slow pace at from earthy browns to f loral pinks, frequently with little regard for heredity. Gnomes possess highly mutable which members of the long-lived race make decisions. To facial characteristics, and many have overly large mouths and eyes, an effect which can be both disturbing and the gnomes, action is always better than inaction, and many stunning, depending on the individual. gnomes carry several highly involved projects with them at all Society: Unlike most races, gnomes do not generally organize themselves within classic societal structures. times to keep themselves entertained during rest periods. Whimsical creatures at heart, they typically travel alone or with temporary companions, ever seeking new and Alignment and Religion: Although gnomes are impulsive more exciting experiences. They rarely form enduring relationships among themselves or with members of tricksters, with sometimes inscrutable motives and equally other races, instead pursuing crafts, professions, or collections with a passion that borders on zealotry. Male confusing methods, their hearts are generally in the right gnomes have a strange fondness for unusual hats and headgear, while females often proudly wear elaborate place. They are prone to powerful fits of emotion, and find and eccentric hairstyles. themselves most at peace within the natural world. Gnomes are usually neutral good, and prefer to worship deities who value individuality and nature, such as Shelyn, Gozreh, Desna, and increasingly Cayden Cailean.. Male Names: Abroshtor, Bastargre, Halungalom, Krolmnite, Poshment, Zarzuket, Zatqualmie. Female Names: Besh, Fijit, Lini, Neji, Majet, Pai, Queck, Trig. Chapter 7. Defensive Training: Gnomes get a +4 dodge bonus to AC against monsters of the giant subtype.. 23 half-elves Society: The lack of a unified homeland and culture forces half-elves to remain versatile, able to conform to Elves have long drawn the covetous gazes of other races. nearly any environment. While often attractive to both Their generous life spans, magical aff inity, and inherent races for the same reasons as their parents, half-elves grace each contribute to the admiration or bitter envy rarely fit in with either humans or elves, as both races of their neighbors. Of all their traits, however, none so see too much evidence of the other in them. This lack of entrance their human associates as their beauty. Since acceptance weighs heavily on many half-elves, yet others the two races first came into contact with each other, are bolstered by their unique status, seeing in their lack the humans have held up elves as models of physical of a formalized culture the ultimate freedom. As a result, perfection, seeing in the fair folk idealized versions half-elves are incredibly adaptable, capable of adjusting of themselves. For their part, many elves find humans their mindsets and talents to whatever societies they find attractive despite their comparatively barbaric ways, themselves in. drawn to the passion and impetuosity with which members of the younger race play out their brief lives. Relations: A half-elf understands loneliness, and knows that character is often less a product of race than Sometimes this mutual infatuation leads of life experience. As such, half-elves are often open to to romantic relationships. Though usually friendships and alliances with other races, and less likely to rely on first impressions when forming opinions of short-lived, even by human standards, new acquaintances. these trysts commonly lead to the birth of half-elves, a race descended of two Alignment and Religion: Half-elves’ isolation strongly cultures yet inheritor of neither. Half- inf luences their characters and philosophies. Cruelty elves can breed with one another, but does not come naturally to them, nor does blending in even these “pureblood” half-elves tend and bending to societal convention—as a result, most to be viewed as bastards by humans half-elves are chaotic good. Half-elves’ lack of a unified and elves alike. culture makes them less likely to turn to religion, but those who do generally follow the common faiths of Physical Description: Half-elves their homeland. stand taller than humans but shorter than elves. They inherit the lean Adventurers: Half-elves tend to be itinerants, build and comely features of their wandering the lands in search of a place they might elven lineage, but their skin color is finally call home. The desire to prove oneself to the dictated by their human side. While community and establish a personal identity—or half-elves retain the pointed ears even a legacy—drives many half-elf adventurers to lives of elves, theirs are more rounded of bravery. and less pronounced. A half-elf ’s Male Names: Calathes, Encinal, Kyras, Narciso, Quiray, human-like eyes tend to range a Satinder, Seltyiel, Zirul. spectrum of exotic colors running from amber or violet to emerald Female Names: Cathran, Elsbeth, Iandoli, Kieyanna, green and deep blue. Lialda, Maddela, Reda, Tamarie. Chapter 7. Chapter 3 for more information about favored classes. Languages: Half-elves begin play speaking Common and Elven. Half-elves with high Intelligence scores can choose any languages they want (except secret languages, such as Druidic). 24 Races 2 Half-orcs tend to be the most accommodating, and there half-orcs make natural mercenaries and enforcers. Half-orcs are monstrosities, their tragic births the result of perversion and violence—or at least, that’s how other Alignment & Religion: Forced to live either among races see them. It’s true that half-orcs are rarely the result brutish orcs or as lonely outcasts in civilized lands, most of loving unions, and as such are usually forced to grow half-orcs are bitter, violent, and reclusive. Evil comes easily up hard and fast, constantly fighting for protection or to to them, but they are not evil by nature—rather, most make names for themselves. Feared, distrusted, and spat half-orcs are chaotic neutral, having been taught by long upon, half-orcs still consistently manage to surprise their experience that there’s no point doing anything but that detractors with great deeds and unexpected wisdom— which directly benefits themselves. When they bother to though sometimes it’s easier just to crack a few skulls. worship the gods, they tend to favor deities who promote warfare or Physical Description: Both genders of half-orc stand individual strength, such between 6 and 7 feet tall, with powerful builds and as Gorum, Cayden Cailean, greenish or grayish skin. Their canines often grow long Lamashtu, and Rovagug. enough to protrude from their mouths, and these “tusks,” combined with heavy brows and slightly pointed ears, give Adventurers: Staunchly them their notoriously bestial appearance. While half-orcs independent, many half-orcs may be impressive, few ever describe them as beautiful. take to lives of adventure out of necessity, seeking Society: Unlike half-elves, where at least part of society’s to escape their painful discrimination is born out of jealousy or attraction, half- pasts or improve orcs get the worst of both worlds: physically weaker than their lot through their orc kin, they also tend to be feared or attacked outright force of arms. Others, by the legions of humans who don’t bother making the more optimistic distinction between full orcs and half bloods. Still, while or desperate for not exactly accepted, half-orcs in civilized societies tend to acceptance, take up the be valued for their martial prowess, and orc leaders have mantle of crusaders in actually been known to spawn them intentionally, as the order to prove their half breeds regularly make up for their lack of physical worth to the world. strength with increased cunning and aggression, making them natural chieftains and strategic advisors. Male Names: Ausk, Davor, Hakak, Relations: A lifetime of persecution leaves the average Kizziar, Makoa, half-orc wary and quick to anger, yet those who break Nesteruk, Tsadok. through his savage exterior might find a well-hidden core of empathy. Elves and dwarves tend to be the least accepting Female Names: of half-orcs, seeing in them too great a resemblance to Canan, Drogheda, their racial enemies, but other races aren’t much more Goruza, Mazon, understanding. Human societies with few orc problems Shirish, Tevaga, Zeljka.. 25 Halflings Relations: A typical half ling prides himself on his ability to go unnoticed by other races—it is this trait that Optimistic and cheerful by nature, blessed with uncanny allows so many half lings to excel at thievery and trickery. luck and driven by a powerful wanderlust, half lings Most half lings, knowing full well the stereotyped view make up for their short stature with an abundance of other races take of them as a result, go out of their way bravado and curiosity. At once excitable and easy-going, to be forthcoming and friendly to the bigger races when half lings like to keep an even temper and a steady eye on they’re not trying to go unnoticed. They get along fairly opportunity, and are not as prone as some of the more well with gnomes, although most half lings regard these volatile races to violent or emotional outbursts. Even in eccentric creatures with a hefty dose of caution. Half lings the jaws of catastrophe, a half ling almost never loses his coexist well with humans as a general rule, but since some sense of humor. of the more aggressive human societies value half lings as slaves, half lings try not to grow too complacent when Half lings are inveterate opportunists. Unable to dealing with them. Half lings respect elves and dwarves, physically defend themselves from the rigors of the world, but these races generally live in remote regions far from they know when to bend with the wind and when to hide the comforts of civilization that half lings enjoy, thus away. Yet a half ling’s curiosity often overwhelms his good limiting opportunities for interaction. Only half-orcs sense, leading to poor decisions and narrow escapes. are generally shunned by half lings, for their great size and violent natures are a bit too intimidating for most Though their curiosity drives them to travel and seek half lings to cope with. new places and experiences, half lings possess a strong sense of house and home, often spending above their Alignment and Religion: Half lings are loyal to their means to enhance the comforts of home life. friends and families, but since they dwell in a world dominated by races twice as large as themselves, they’ve Physical Description: Half lings rise to a humble come to grips with the fact that sometimes they’ll need to height of 3 feet. They prefer to walk barefoot, leading to scrap and scrounge for survival. Most half lings are neutral the bottoms of their feet being roughly calloused. Tufts as a result. Half lings favor gods that encourage small, of thick, curly hair warm the tops of their broad, tanned tight-knit communities, be they for good (like Erastil) or feet. Their skin tends toward a rich almond color and evil (like Norgorber). their hair toward light shades of brown. A half ling’s ears are pointed, but proportionately not much larger than Adventurers: Their inherent luck coupled with their those of a human. insatiable wanderlust makes half lings ideal for lives of adventure. Other such vagabonds tend to put up with the Society: Half lings claim no cultural homeland and curious race in hopes that some of their mystical luck will control no settlements larger than rural assemblies of free rub off. towns. Far more often, they dwell at the knees of their human cousins in human cities, eking out livings as they can from Male Names: Antal, Boram, Evan, Jamir, Kaleb, Lem, the scraps of larger societies. Many half lings lead perfectly Miro, Sumak. fulfilling lives in the shadow of their larger Female Names: Anafa, Bellis, Etune, Filiu, Lissa, Marra, neighbors, while some prefer more nomadic Rillka, Sistra, Yamyra. lives on the road, traveling the world and experiencing all it has to offer.. 26 Races 2 Hum ans members also makes humans quite adept at accepting others for what they are. Humans possess exceptional drive and a great capacity to endure and expand, and as such are currently the Alignment and Religion: Humanity is perhaps the most dominant race in the world. Their empires and nations are heterogeneous of all the common races, with a capacity for vast, sprawling things, and the citizens of these societies great evil and boundless good. Some assemble into vast carve names for themselves with the strength of their barbaric hordes, while others build sprawling cities that sword arms and the power of their spells. Humanity is cover miles. Taken as a whole, most humans are neutral, best characterized by its tumultuousness and diversity, yet they generally tend to congregate in nations and and human cultures run the gamut from savage but civilizations with specific alignments. Humans also have honorable tribes to decadent, devil-worshiping noble the widest range in gods and religion, lacking other races’ families in the most cosmopolitan cities. Human curiosity ties to tradition and eager to turn to anyone offering them and ambition often triumph over their predilection for a glory or protection. They have even adopted gods like Torag sedentary lifestyle, and many leave their homes to explore or Calistria, who for millennia were more identified with the innumerable forgotten corners of the world or lead older races, and as humanity continues to grow and prosper, mighty armies to conquer their neighbors, simply because new gods have begun emerging from their they can. ever-expanding legends. Physical Description: The physical characteristics Adventurers: Ambition alone of humans are as varied as the world’s climes. From the drives countless humans, and for dark-skinned tribesmen of the southern continents to the many, adventuring serves as a pale and barbaric raiders of the northern lands, humans means to an end, whether it be possess a wide variety of skin colors, body types, and facial wealth, acclaim, social status, features. Generally speaking, humans’ skin color assumes or arcane knowledge. A few a darker hue the closer to the equator they live. pursue adventuring careers simply for the thrill of danger. Society: Human society comprises a multitude of Humans hail from myriad governments, attitudes, and lifestyles. Though the oldest regions and backgrounds, and as human cultures trace their histories thousands of years into such can fill any role within an the past, when compared to the societies of common races adventuring party. like elves and dwarves, human society seems to be in a state of constant f lux as empires fragment and new kingdoms Names: Unlike other races, subsume the old. In general, humans are known for their who generally cleave to specific f lexibility, ingenuity, and ambition. traditions and shared histories, humanity’s diversity has Relations: Humans are fecund, and their drive and resulted in a near-infinite set numbers often spur them into contact with other races of names. The humans of a during bouts of territorial expansion and colonization. northern barbarian tribe In many cases, this leads to violence and war, yet humans have much different names are also swift to forgive and forge alliances with races than those hailing from a who do not try to match or exceed them in violence. subtropical nation of sailors Proud, sometimes to the point of arrogance, humans and tradesmen. Throughout most of might look upon dwarves as miserly drunkards, elves the world humans speak Common, yet as f lighty fops, half lings as craven thieves, gnomes their names are as varied as their beliefs as twisted maniacs, and half-elves and half-orcs as and appearances. embarrassments—but the race’s diversity among its own 1st level and one additional rank whenever they gain a level. Languages: Humans begin play speaking Common. Humans with high Intelligence scores can choose any languages they want (except secret languages, such as Druidic). 27 3 Classes The crumbling walkway atop the ancient dam shook with the force of water cascading through skull-shaped f lumes, then shook more as the ogre barbarians strode forth. “Looks like we’ve got a few tons of ugly in the way,” Valeros roared. “With faces like that, it’s no wonder they’re afraid to come out in the light.” The lead ogre’s eyes bulged as it realized it had been insulted, and it shrieked in anger as it f lew into a battle rage. Seoni cursed under her breath. One of these days, Valeros’s bravery was going to get them all killed. Acharacter’s class is one of his most defining features. Character Advancement It’s the source of most of his abilities, and gives him a specific role in any adventuring party. The following As player characters overcome challenges, they gain eleven classes represent the core classes of the game. experience points. As these points accumulate, PCs advance in level and power. The rate of this advancement Barbarian: The barbarian is a brutal berserker from depends on the type of game that your group wants to beyond the edge of civilized lands. play. Some prefer a fast-paced game, where characters gain levels every few sessions, while others prefer a game Bard: The bard uses skill and spell alike to bolster his where advancement occurs less frequently. In the end, allies, confound his enemies, and build upon his fame. it is up to your group to decide what rate fits you best. Characters advance in level according to Table 3–1. Cleric: A devout follower of a deity, the cleric can heal wounds, raise the dead, and call down the wrath of the gods. Advancing Your Character Druid: The druid is a worshiper of all things natural—a A character advances in level as soon as he earns enough spellcaster, a friend to animals, and a skilled shapechanger. experience points to do so—typically, this occurs at the end of a game session, when your GM hands out that Fighter: Brave and stalwart, the fighter is a master of all session’s experience point awards. manner of arms and armor. The process of advancing a character works in much the Monk: A student of martial arts, the monk trains his same way as generating a character, except that your ability body to be his greatest weapon and defense. scores, race, and previous choices concerning class, skills, and feats cannot be changed. Adding a level generally gives Paladin: The paladin is the knight in shining armor, a you new abilities, additional skill points to spend, more hit devoted follower of law and good. points, possibly a permanent +1 increase to one ability score of your choice, or an additional feat (see Table 3–1). Over time, Ranger: A tracker and hunter, the ranger is a creature of as your character rises to higher levels, he becomes a truly the wild and of tracking down his favored foes. powerful force in the game world, capable of ruling nations or bringing them to their knees. Rogue: The rogue is a thief and a scout, an opportunist capable of delivering brutal strikes against unwary foes. When adding new levels of an existing class or adding levels of a new class (see Multiclassing, below), make sure Sorcerer: The spellcasting sorcerer is born with an to take the following steps in order. First, select your new innate knack for magic and has strange, eldritch powers. class level. You must be able to qualify for this level before any of the following adjustments are made. Second, apply Wizard: The wizard masters magic through constant any ability score increases due to gaining a level. Third, study that gives him incredible magical power. integrate all of the level’s class abilities and then roll for additional hit points. Finally, add new skills and feats. Table 3–1: Character Advancement For more information on when you gain new feats and and Level-Dependent Bonuses ability score increases, see Table 3–1. Character Experience Point Total Ability Multiclassing Level Slow Medium Fast Feats Score 1st — — — 1st — Instead of gaining the abilities granted by the next level in 2nd 3,000 2,000 1,300 — — your character’s current class, he can instead gain the 1st- 3rd 7,500 5,000 3,300 2nd — level abilities of a new class, adding all of those abilities 4th 14,000 9,000 6,000 — 1st to his existing ones. This is known as “multiclassing.” 5th 23,000 15,000 10,000 3rd — 6th 35,000 23,000 15,000 — — For example, let’s say a 5th-level fighter decides to dabble 7th 53,000 35,000 23,000 4th — in the arcane arts, and adds one level of wizard when he 8th 77,000 51,000 34,000 — 2nd advances to 6th level. Such a character would have the 9th 115,000 75,000 50,000 5th — powers and abilities of both a 5th-level fighter and a 1st-level 10th 160,000 105,000 71,000 — — wizard, but would still be considered a 6th-level character. 11th 235,000 155,000 105,000 6th — (His class levels would be 5th and 1st, but his total character 12th 330,000 220,000 145,000 — 3rd level is 6th.) He keeps all of his bonus feats gained from 5 13th 475,000 315,000 210,000 7th — levels of fighter, but can now also cast 1st-level spells and 14th 665,000 445,000 295,000 — — picks an arcane school. He adds all of the hit points, base 15th 955,000 635,000 425,000 8th — attack bonuses, and saving throw bonuses from a 1st-level 16th 1,350,000 890,000 600,000 — 4th wizard on top of those gained from being a 5th-level fighter. 17th 1,900,000 1,300,000 850,000 9th — 18th 2,700,000 1,800,000 1,200,000 — — 19th 3,850,000 2,550,000 1,700,000 10th — 20th 5,350,000 3,600,000 2,400,000 — 5th 30 Classes 3 Note that there are a number of effects and prerequisites Class Features that rely on a character’s level or Hit Dice. Such effects are always based on the total number of levels or Hit All of the following are class features of the barbarian. Dice a character possesses, not just those from one class. Weapon and Armor Proficiency: A barbarian is The exception to this is class abilities, most of which are based on the total number of class levels that a character proficient with all simple and martial weapons, light possesses of that particular class. armor, medium armor, and shields (except tower shields). Favored Class Fast Movement (Ex): A barbarian’s base speed is faster than the norm for her race by +10 feet. This benefit Each character begins play with a single favored class of applies only when she is wearing no armor, light armor, or his choosing—typically, this is the same class as the one medium armor, and not carrying a heavy load. Apply this he chooses at 1st level. Whenever a character gains a level bonus before modifying the barbarian’s speed because of in his favored class, he receives either + 1 hit point or + 1 any load carried or armor worn. This bonus stacks with skill rank. The choice of favored class cannot be changed any other bonuses to the barbarian’s base speed. once the character is created, and the choice of gaining a hit point or a skill rank each time a character gains a level (including his f irst level) cannot be changed once made for a particular level. Prestige classes (see Chapter 11) can never be a favored class. Barbarian For some, there is only rage. In the ways of their people, in the fury of their passion, in the howl of battle, conf lict. 31 Table 3–2: Barbarian Base Attack Fort Ref Will Level Bonus Save Save Save Special Fast movement, rage 1st +1 +2 +0 +0 Rage power, uncanny dodge Trap sense +1 2nd +2 +3 +0 +0 Rage power Improved uncanny dodge 3rd +3 +3 +1 +1 Rage power, Trap sense +2 Damage reduction 1/— 4th +4 +4 +1 +1 Rage power Trap sense +3 5th +5 +4 +1 +1 Damage reduction 2/—, Rage power Greater rage 6th +6/+1 +5 +2 +2 Rage power, Trap sense +4 Damage reduction 3/— 7th +7/+2 +5 +2 +2 Indomitable will, Rage power Trap sense +5 8th +8/+3 +6 +2 +2 Damage reduction 4/—, Rage power Tireless rage 9th +9/+4 +6 +3 +3 Rage power, Trap sense +6 Damage reduction 5/— 10th +10/+5 +7 +3 +3 Mighty rage, Rage power Rage (Ex): A barbarian can call upon inner reserves of encounter or combat. If a barbarian falls unconscious, her strength and ferocity, granting her additional combat rage immediately ends, placing her in peril of death. prowess. Starting at 1st level, a barbarian can rage for a number of rounds per day equal to 4 + her Constitution Rage Powers (Ex): As a barbarian gains levels, she modifier. At each level after 1st, she can rage for 2 learns to use her rage in new ways. Starting at 2nd level, a additional rounds. Temporary increases to Constitution, barbarian gains a rage power. She gains another rage power such as those gained from rage and spells like bear’s for every two levels of barbarian attained after 2nd level. endurance, do not increase the total number of rounds A barbarian gains the benefits of rage powers only while that a barbarian can rage per day. A barbarian can enter raging, and some of these powers require the barbarian to rage as a free action. The total number of rounds of rage take an action first. Unless otherwise noted, a barbarian per day is renewed after resting for 8 hours, although cannot select an individual power more than once. these hours do not need to be consecutive. Animal Fury (Ex): While raging, the barbarian gains a bite While in rage, a barbarian gains a +4 morale bonus attack. If used as part of a full attack action, the bite attack is to her Strength and Constitution, as well as a +2 morale made at the barbarian’s full base attack bonus –5. If the bite bonus on Will saves. In addition, she takes a –2 penalty hits, it deals 1d4 points of damage (assuming the barbarian to Armor Class. The increase to Constitution grants the is Medium; 1d3 points of damage if Small) plus half the barbarian 2 hit points per Hit Dice, but these disappear barbarian’s Strength modifier. A barbarian can make a bite when the rage ends and are not lost first like temporary attack as part of the action to maintain or break free from a hit points. While in rage, a barbarian cannot use any grapple. This attack is resolved before the grapple check is Charisma-, Dexterity-, or Intelligence-based skills made. If the bite attack hits, any grapple checks made by the (except Acrobatics, Fly, Intimidate, and Ride) or any ability barbarian against the target this round are at a +2 bonus. that requires patience or concentration. Clear Mind (Ex): A barbarian may reroll a Will save. This A barbarian can end her rage as a free action and is power is used as an immediate action after the first save fatigued after rage for a number of rounds equal to 2 is attempted, but before the results are revealed by the times the number of rounds spent in the rage. A barbarian GM. The barbarian must take the second result, even if cannot enter a new rage while fatigued or exhausted but it is worse. A barbarian must be at least 8th level before can otherwise enter rage multiple times during a single selecting this power. This power can only be used once per rage. 32 Classes 3 Fearless Rage (Ex): While raging, the barbarian is immune Powerful Blow (Ex): The barbarian gains a +1 bonus on a to the shaken and frightened conditions. A barbarian must single damage roll. This bonus increases by +1 for every 4 be at least 12th level before selecting this rage power. levels the barbarian has attained. This power is used as a swift action before the roll to hit is made. This power can Guarded Stance (Ex): The barbarian gains a +1 dodge bonus only be used once per rage. to her Armor Class against melee attacks for a number of rounds equal to the barbarian’s current Constitution Quick Ref lexes (Ex): While raging, the barbarian can modifier (minimum 1). This bonus increases by +1 for every make one additional attack of opportunity per round. 6 levels the barbarian has attained. Activating this ability is a move action that does not provoke an attack of opportunity. Raging Climber (Ex): When raging, the barbarian adds her level as an enhancement bonus on all Climb skill checks. Increased Damage Reduction (Ex): The barbarian’s damage reduction increases by 1/—. This increase is always active Raging Leaper (Ex): When raging, the barbarian adds while the barbarian is raging. A barbarian can select this her level as an enhancement bonus on all Acrobatics skill rage power up to three times. Its effects stack. A barbarian checks made to jump. When making a jump in this way, must be at least 8th level before selecting this rage power. the barbarian is always considered to have a running start. Internal Fortitude (Ex): While raging, the barbarian is immune Raging Swimmer (Ex): When raging, the barbarian adds to the sickened and nauseated conditions. A barbarian must be her level as an enhancement bonus on all Swim skill checks. at least 8th level before selecting this rage power. Renewed Vigor (Ex): As a standard action, the barbarian Intimidating Glare (Ex): The barbarian can make an heals 1d8 points of damage + her Constitution modifier. Intimidate check against one adjacent foe as a move action. For every four levels the barbarian has attained above If the barbarian successfully demoralizes her opponent, 4th, this amount of damage healed increases by 1d8, to a the foe is shaken for 1d4 rounds + 1 round for every 5 points maximum of 5d8 at 20th level. A barbarian must be at least by which the barbarian’s check exceeds the DC. 4th level before selecting this power. This power can be used only once per day and only while raging. Knockback (Ex): Once per round, the barbarian can make a bull rush attempt against one target in place of a melee Rolling Dodge (Ex): The barbarian gains a +1 dodge bonus attack. If successful, the target takes damage equal to the to her Armor Class against ranged attacks for a number barbarian’s Strength modifier and is moved back as normal. of rounds equal to the barbarian’s current Constitution The barbarian does not need to move with the target if modif ier (minimum 1). This bonus increases by +1 for successful. This does not provoke an attack of opportunity. every 6 levels the barbarian has attained. Activating this ability is a move action that does not provoke an attack Low-Light Vision (Ex): The barbarian’s senses sharpen of opportunity. and she gains low-light vision while raging. Roused Anger (Ex): The barbarian may enter a rage even Mighty Swing (Ex): The barbarian automatically confirms if fatigued. While raging after using this ability, the a critical hit. This power is used as an immediate action barbarian is immune to the fatigued condition. Once this once a critical threat has been determined. A barbarian rage ends, the barbarian is exhausted for 10 minutes per must be at least 12th level before selecting this power. This round spent raging. power can only be used once per rage. Scent (Ex): The barbarian gains the scent ability while Moment of Clarity (Ex): The barbarian does not gain any raging and can use this ability to locate unseen foes (see benefits or take any of the penalties from rage for 1 round. Appendix 1 for rules on the scent ability). Activating this power is a swift action. This includes the penalty to Armor Class and the restriction on what actions Strength Surge (Ex): The barbarian adds her barbarian level can be performed. This round still counts against her total on one Strength check or combat maneuver check, or to her number of rounds of rage per day. This power can only be Combat Maneuver Defense when an opponent attempts a used once per rage. maneuver against her. This power is used as an immediate action. This power can only be used once per rage. Night Vision (Ex): The barbarian’s senses grow incredibly sharp while raging and she gains darkvision 60 feet. A Superstition (Ex): The barbarian gains a +2 morale bonus on barbarian must have low-light vision as a rage power or a saving throws made to resist spells, supernatural abilities, racial trait to select this rage power. and spell-like abilities. This bonus increases by +1 for every 4 levels the barbarian has attained. While raging, the barbarian No Escape (Ex): The barbarian can move up to double her cannot be a willing target of any spell and must make saving base speed as an immediate action but she can only use throws to resist all spells, even those cast by allies. this ability when an adjacent foe uses a withdraw action to move away from her. She must end her movement adjacent Surprise Accuracy (Ex): The barbarian gains a +1 morale to the enemy that used the withdraw action. The barbarian bonus on one attack roll. This bonus increases by +1 for provokes attacks of opportunity as normal during this every 4 levels the barbarian has attained. This power is movement. This power can only be used once per rage. used as a swift action before the roll to hit is made. This power can only be used once per rage. 33 Swift Foot (Ex): The barbarian gains a 5-foot enhancement increases to +6 and the morale bonus on her Will saves bonus to her base speed. This increase is always active increases to +3. while the barbarian is raging. A barbarian can select this rage power up to three times. Its effects stack. Indomitable Will (Ex): While in rage, a barbarian of 14th level or higher gains a +4 bonus on Will saves to resist Terrifying Howl (Ex): The barbarian unleashes a terrifying enchantment spells. This bonus stacks with all other howl as a standard action. All shaken enemies within 30 feet modifiers, including the morale bonus on Will saves she must make a Will save (DC equal to 10 + 1/2 the barbarian’s also receives during her rage. level + the barbarian’s Strength modifier) or be panicked for 1d4+1 rounds. Once an enemy has made a save versus Tireless Rage (Ex): Starting at 17th level, a barbarian no terrifying howl (successful or not), it is immune to this power longer becomes fatigued at the end of her rage. for 24 hours. A barbarian must have the intimidating glare rage power to select this rage power. A barbarian must be at Mighty Rage (Ex): At 20th level, when a barbarian enters least 8th level before selecting this power. rage, the morale bonus to her Strength and Constitution increases to +8 and the morale bonus on her Will saves Unexpected Strike (Ex): The barbarian can make an attack increases to +4. of opportunity against a foe that moves into any square threatened by the barbarian, regardless of whether or not that Ex-Barbarians movement would normally provoke an attack of opportunity. This power can only be used once per rage. A barbarian must A barbarian who becomes lawful loses the ability to rage be at least 8th level before selecting this power. and cannot gain more levels in barbarian. She retains all other benefits of the class. Uncanny Dodge (Ex): At 2nd level, a barbarian gains the ability to react to danger before her senses would normally Bard allow her to do so. She cannot be caught f lat-footed, nor does she lose her Dex bonus to AC if the attacker is invisible. She still Untold wonders and secrets exist for those skillful loses her Dexterity bonus to AC if immobilized. A barbarian enough to discover them. Through cleverness, talent, and with this ability can still lose her Dexterity bonus to AC if an magic, these cunning few unravel the wiles of the world, opponent successfully uses the feint action against her. becoming adept in the arts of persuasion, manipulation, and inspiration. Typically masters of one or many forms If a barbarian already has uncanny dodge from a different of artistry, bards possess an uncanny ability to know class, she automatically gains improved uncanny dodge (see more than they should and use what they learn to keep below) instead. themselves and their allies ever one step ahead of danger. Bards are quick-witted and captivating, and their skills Trap Sense (Ex): At 3rd level, a barbarian gains a +1 might lead them down many paths, be they gamblers or bonus on Ref lex saves made to avoid traps and a +1 dodge jacks-of-all-trades, scholars or performers, leaders or bonus to AC against attacks made by traps. These bonuses scoundrels, or even all of the above. For bards, every day increase by +1 every three barbarian levels thereafter (6th, brings its own opportunities, adventures, and challenges, 9th, 12th, 15th, and 18th level). Trap sense bonuses gained and only by bucking the odds, knowing the most, and from multiple classes stack. being the best might they claim the treasures of each. Improved Uncanny Dodge (Ex): At 5th level and higher, Role: Bards capably confuse and confound their foes a barbarian can no longer be f lanked. This defense while inspiring their allies to ever-greater daring. While denies a rogue the ability to sneak attack the barbarian accomplished with both weapons and magic, the true by f lanking her, unless the attacker has at least four strength of bards lies outside melee, where they can more rogue levels than the target has barbarian levels. support their companions and undermine their foes without fear of interruptions to their performances. If a character already has uncanny dodge (see above) from another class, the levels from the classes that grant Alignment: Any. uncanny dodge stack to determine the minimum rogue Hit Die: d8. level required to f lank the character. Class Skills Damage Reduction (Ex): At 7th level, a barbarian gains damage reduction. Subtract 1 from the damage the barbarian The bard’s class skills are Acrobatics (Dex), Appraise takes each time she is dealt damage from a weapon or a (Int), Bluff (Cha), Climb (Str), Craft (Int), Diplomacy (Cha), natural attack. At 10th level, and every three barbarian levels Disguise (Cha), Escape Artist (Dex), Intimidate (Cha), thereafter (13th, 16th, and 19th level), this damage reduction Knowledge (all) (Int), Linguistics (Int), Perception (Wis), rises by 1 point. Damage reduction can reduce damage to 0 Perform (Cha), Profession (Wis), Sense Motive (Wis), but not below 0. Sleight of Hand (Dex), Spellcraft (Int), Stealth (Dex), and Use Magic Device (Cha). Greater Rage (Ex): At 11th level, when a barbarian enters rage, the morale bonus to her Strength and Constitution Skill Ranks per Level: 6 + Int modifier. 34 Classes 3 Class Features him, including himself if desired. He can use this ability for a number of rounds per day equal to 4 + his All of the following are class features of the bard. Charisma modifier. At each level after 1st a bard can Weapon and Armor Prof iciency: A bard is prof icient use bardic performance for 2 additional rounds per day. Each round, the bard can produce any one of the types of with all simple weapons, plus the longsword, rapier, bardic performance that he has mastered, as indicated by sap, short sword, shortbow, and whip. Bards are also his level. prof icient with light armor and shields (except tower shields). A bard can cast bard spells while wearing light Starting a bardic performance is a standard action, armor and use a shield without incurring the normal but it can be maintained each round as a free action. arcane spell failure chance. Like any other arcane Changing a bardic performance from one effect spellcaster, a bard wearing medium or heavy armor to another requires the bard to stop the previous incurs a chance of arcane spell failure if the spell in performance and start a new one as a standard action. question has a somatic component. A multiclass bard A bardic performance cannot be disrupted, but it ends still incurs the normal arcane spell failure chance for immediately if the bard is killed, paralyzed, stunned, arcane spells received from other classes. knocked unconscious, or otherwise prevented from taking a free action to maintain it each round. A bard Spells: A bard casts arcane spells drawn from the bard cannot have more than one bardic performance in effect spell list presented in Chapter 10. He can cast any spell at one time. 3–3. In addition, he receives bonus spells per day if he has a high Charisma score (see Table 1–3). The bard’s selection of spells is extremely limited. A bard begins play knowing four 0-level spells and two 1st- level spells of the bard’s choice. At each new bard level, he gains one or more new spells, as indicated on Table 3–4. (Unlike spells per day, the number of spells a bard knows is not affected by his Charisma score. The numbers on Table 3–4 35 Table 3–3: Bard Base Attack Fort Ref Will Spells per Day Level Bonus Save Save Save Special 1st 2nd 3rd 4th 5th 6th 1st +0 +0 +2 +2 Bardic knowledge, bardic performance, cantrips, 1 — — — — — countersong, distraction, fascinate, inspire courage +1 2nd +1 +0 +3 +3 Versatile performance, well-versed 2 — — — — — 3rd +2 +1 +3 +3 Inspire competence +2 3 — — — — — 4th +3 +1 +4 +4 3 1 — — — — 5th +3 +1 +4 +4 inspire courage +2, lore master 1/day 4 2 — — — — 6th +4 +2 +5 +5 Suggestion, Versatile performance 4 3 — — — — 7th +5 +2 +5 +5 Inspire competence +3 4 3 1 — — — 8th +6/+1 +2 +6 +6 Dirge of doom 4 4 2 — — — 9th +6/+1 +3 +6 +6 Inspire greatness 5 4 3 — — — 10th +7/+2 +3 +7 +7 Jack-of-all-trades, Versatile performance 5 4 3 1 — — 11th +8/+3 +3 +7 +7 Inspire competence +4, inspire courage +3, 5 4 4 2 — — lore master 2/day 12th +9/+4 +4 +8 +8 Soothing performance 5 5 4 3 — — 13th +9/+4 +4 +8 +8 5 5 4 3 1 — 14th +10/+5 +4 +9 +9 Frightening tune, Versatile performance 5 5 4 4 2 — 15th +11/+6/+1 +5 +9 +9 Inspire competence +5, inspire heroics 5 5 5 4 3 — 16th +12/+7/+2 +5 +10 +10 5 5 5 4 3 1 17th +12/+7/+2 +5 +10 +10 inspire courage +4, lore master 3/day 5 5 5 4 4 2 18th +13/+8/+3 +6 +11 +11 Mass suggestion, Versatile performance 5 5 5 5 4 3 19th +14/+9/+4 +6 +11 +11 Inspire competence +6 5 5 5 5 5 4 20th +15/+10/+5 +6 +12 +12 Deadly performance 5 5 5 5 5 5 At 7th level, a bard can start a bardic performance as a sing) skill check. Any creature within 30 feet of the bard move action instead of a standard action. At 13th level, a (including the bard himself ) that is affected by a sonic or bard can start a bardic performance as a swift action. language-dependent magical attack may use the bard’s Perform check result in place of its saving throw if, after Each bardic performance has audible components, the saving throw is rolled, the Perform check result proves visual components, or both. to be higher. If a creature within range of the countersong is already under the effect of a noninstantaneous sonic or If a bardic performance has audible components, the language-dependent magical attack, it gains another targets must be able to hear the bard for the performance to saving throw against the effect each round it hears the have any effect, and many such performances are language countersong, but it must use the bard’s Perform skill dependent (as noted in the description). A deaf bard has a check result for the save. Countersong does not work on 20% chance to fail when attempting to use a bardic effects that don’t allow saves. Countersong relies on performance with an audible component. If he fails this audible components. check, the attempt still counts against his daily limit. Deaf creatures are immune to bardic performances with audible Distraction (Su): At 1st level, a bard can use his components. performance to counter magic effects that depend on sight. Each round of the distraction, he makes a Perform (act, If a bardic performance has a visual component, comedy, dance, or oratory) skill check. Any creature within the targets must have line of sight to the bard for the 30 feet of the bard (including the bard himself ) that is performance to have any effect. A blind bard has a 50% affected by an illusion (pattern) or illusion (figment) magical chance to fail when attempting to use a bardic performance attack may use the bard’s Perform check result in place of with a visual component. If he fails this check, the attempt its saving throw if, after the saving throw is rolled, the still counts against his daily limit. Blind creatures are Perform skill check proves to be higher. If a creature within immune to bardic performances with visual components. range of the distraction is already under the effect of a noninstantaneous illusion (pattern) or illusion (figment) Countersong (Su): At 1st level, a bard learns to counter magical attack, it gains another saving throw against the magic effects that depend on sound (but not spells that effect each round it sees the distraction, but it have verbal components). Each round of the countersong he makes a Perform (keyboard, percussion, wind, string, or 36 Classes 3 must use the bard’s Perform skill check result for the save. Table 3–4: Bard Spells Known Distraction does not work on effects that don’t allow saves. Distraction relies on visual components. Spells Known Level 0 1st 2nd 3rd 4th 5th 6th Fascinate (Su): At 1st level, a bard can use his performance 1st 4 2 — — — — — to cause one or more creatures to become fascinated with 2nd 5 3 — — — — — him. Each creature to be fascinated must be within 90 3rd 6 4 — — — — — feet, able to see and hear the bard, and capable of paying 4th 6 4 2 — — — — attention to him. The bard must also be able to see the 5th 6 4 3 — — — — creatures affected. The distraction of a nearby combat 6th 6 4 4 — — — — or other dangers prevents this ability from working. For 7th 6 5 4 2 — — — every three levels the bard has attained beyond 1st, he can 8th 6 5 4 3 — — — target one additional creature with this ability. 9th 6 5 4 4 — — — 10th 6 5 5 4 2 — — Each creature within range receives a Will save (DC 10 11th 6 6 5 4 3 — — + 1/2 the bard’s level + the bard’s Cha modifier) to negate 12th 6 6 5 4 4 — — the effect. If a creature’s saving throw succeeds, the bard 13th 6 6 5 5 4 2 — cannot attempt to fascinate that creature again for 24 14th 6 6 6 5 4 3 — hours. If its saving throw fails, the creature sits quietly 15th 6 6 6 5 4 4 — and observes the performance for as long as the bard 16th 6 6 6 5 5 4 2 continues to maintain it. While fascinated, a target takes 17th 6 6 6 6 5 4 3 a –4 penalty on all skill checks made as reactions, such 18th 6 6 6 6 5 4 4 as Perception checks. Any potential threat to the target 19th 6 6 6 6 5 5 4 allows the target to make a new saving throw against the 20th 6 6 6 6 6 5 5 effect. Any obvious threat, such as someone drawing a weapon, casting a spell, or aiming a weapon at the target, Suggestion (Sp): A bard of 6th level or higher can use his automatically breaks the effect. performance to make a suggestion (as per the spell) to a creature he has already fascinated (see above). Using this Fascinate is an enchantment (compulsion), mind- ability does not disrupt the fascinate effect, but it does affecting ability. Fascinate relies on audible and visual require a standard action to activate (in addition to the components in order to function. free action to continue the fascinate effect). A bard can use this ability more than once against an individual creature Inspire Courage (Su): A 1st-level bard can use his during an individual performance. performance to inspire courage in his allies (including himself ), bolstering them against fear and improving Making a suggestion does not count against a bard’s total their combat abilities. To be affected, an ally must be rounds per day of bardic performance. A Will saving throw able to perceive the bard’s performance. An affected ally (DC 10 + 1/2 the bard’s level + the bard’s Cha modifier) receives a +1 morale bonus on saving throws against charm negates the effect. This ability affects only a single and fear effects and a +1 competence bonus on attack and creature. Suggestion is an enchantment (compulsion), weapon damage rolls. At 5th level, and every six bard levels mind affecting, language-dependent ability and relies on thereafter, this bonus increases by +1, to a maximum of +4 audible components. at 17th level. Inspire courage is a mind-affecting ability. Inspire courage can use audible or visual components. The Dirge of Doom (Su): A bard of 8th level or higher can use his bard must choose which component to use when starting performance to foster a sense of growing dread in his enemies, his performance. causing them to become shaken. To be affected, an enemy must be within 30 feet and able to see and hear the bard’s Inspire Competence (Su): A bard of 3rd level or higher performance. The effect persists for as long as the enemy is can use his performance to help an ally succeed at a task. within 30 feet and the bard continues his performance. This That ally must be within 30 feet and be able to hear the performance cannot cause a creature to become frightened or bard. The ally gets a +2 competence bonus on skill checks panicked, even if the targets are already shaken from another with a particular skill as long as she continues to hear effect. Dirge of doom is a mind-affecting fear effect, and it the bard’s performance. This bonus increases by +1 for relies on audible and visual components. every four levels the bard has attained beyond 3rd (+3 at 7th, +4 at 11th, +5 at 15th, and +6 at 19th). Certain uses of Inspire Greatness (Su): A bard of 9th level or higher can this ability are infeasible, such as Stealth, and may be use his performance to inspire greatness in himself or a disallowed at the GM’s discretion. A bard can’t inspire single willing ally within 30 feet, granting extra fighting competence in himself. Inspire competence relies on audible components. 37 capability. For every three levels the bard attains beyond and hear the bard perform for 1 full round and be within 30 9th, he can target an additional ally while using this feet. The target receives a Will save (DC 10 + 1/2 the bard’s performance (up to a maximum of four targets at 18th level + the bard’s Cha modifier) to negate the effect. If a level). To inspire greatness, all of the targets must be able creature’s saving throw succeeds, the target is staggered for to see and hear the bard. A creature inspired with greatness 1d4 rounds, and the bard cannot use deadly performance on gains 2 bonus Hit Dice (d10s), the commensurate number that creature again for 24 hours. If a creature’s saving throw of temporary hit points (apply the target’s Constitution fails, it dies. Deadly performance is a mind-affecting death modifier, if any, to these bonus Hit Dice), a +2 competence effect that relies on audible and visual components. bonus on attack rolls, and a +1 competence bonus on Fortitude saves. The bonus Hit Dice count as regular Hit Cantrips: Bards learn a number of cantrips, or 0-level Dice for determining the effect of spells that are Hit Dice spells, as noted on Table 3–4 under “Spells Known.” These dependent. Inspire greatness is a mind-affecting ability spells are cast like any other spell, but they do not consume and it relies on audible and visual components. any slots and may be used again. Soothing Performance (Su): A bard of 12th level or higher Versatile Performance (Ex): At 2nd level, a bard can can use his performance to create an effect equivalent to a choose one type of Perform skill. He can use his bonus in mass cure serious wounds, using the bard’s level as the caster that skill in place of his bonus in associated skills. When level. In addition, this performance removes the fatigued, substituting in this way, the bard uses his total Perform sickened, and shaken conditions from all those affected. skill bonus, including class skill bonus, in place of its Using this ability requires 4 rounds of continuous associated skill’s bonus, whether or not he has ranks in performance, and the targets must be able to see and that skill or if it is a class skill. At 6th level, and every 4 hear the bard throughout the performance. Soothing levels thereafter, the bard can select an additional type of performance affects all targets that remain within 30 Perform to substitute. feet throughout the performance. Soothing performance relies on audible and visual components. The types of Perform and their associated skills are: Act (Bluff, Disguise), Comedy (Bluff, Intimidate), Dance Frightening Tune (Sp): A bard of 14th level or higher can (Acrobatics, Fly), Keyboard Instruments (Diplomacy, use his performance to cause fear in his enemies. To be Intimidate), Oratory (Diplomacy, Sense Motive), affected, an enemy must be able to hear the bard perform Percussion (Handle Animal, Intimidate), Sing (Bluff, and be within 30 feet. Each enemy within range receives Sense Motive), String (Bluff, Diplomacy), and Wind a Will save (DC 10 + 1/2 the bard’s level + the bard’s Cha (Diplomacy, Handle Animal). modifier) to negate the effect. If the save succeeds, the creature is immune to this ability for 24 hours. If the save Well-Versed (Ex): At 2nd level, the bard becomes fails, the target becomes frightened and f lees for as long resistant to the bardic performance of others, and to sonic as the target can hear the bard’s performance. Frightening effects in general. The bard gains a +4 bonus on saving tune relies on audible components. throws made against bardic performance, sonic, and language-dependent effects. Inspire Heroics (Su): A bard of 15th level or higher can inspire tremendous heroism in himself or a single ally Lore Master (Ex): At 5th level, the bard becomes a master within 30 feet. For every three bard levels the character of lore and can take 10 on any Knowledge skill check that attains beyond 15th, he can inspire heroics in an additional he has ranks in. A bard can choose not to take 10 and can creature. To inspire heroics, all of the targets must be able instead roll normally. In addition, once per day, the bard to see and hear the bard. Inspired creatures gain a +4 morale can take 20 on any Knowledge skill check as a standard bonus on saving throws and a +4 dodge bonus to AC. This action. He can use this ability one additional time per day effect lasts for as long as the targets are able to witness the for every six levels he possesses beyond 5th, to a maximum performance. Inspire heroics is a mind-affecting ability of three times per day at 17th level. that relies on audible and visual components. Jack-of-All-Trades (Ex): At 10th level, the bard can use Mass Suggestion (Sp): This ability functions just like any skill, even if the skill normally requires him to be suggestion, but allows a bard of 18th level or higher to make trained. At 16th level, the bard considers all skills to be a suggestion simultaneously to any number of creatures class skills. At 19th level, the bard can take 10 on any skill that he has already fascinated. Mass suggestion is an check, even if it is not normally allowed. enchantment (compulsion), mind-affecting, language- dependent ability that relies on audible components. Cleric Deadly Performance (Su): A bard of 20th level or higher In faith and the miracles of the divine, many find a greater can use his performance to cause one enemy to die from purpose. Called to serve powers beyond most mortal joy or sorrow. To be affected, the target must be able to see understanding, all priests preach wonders and provide for the spiritual needs of their people. Clerics are more than mere priests, though; these emissaries of the divine work 38 Classes 3 the will of their deities through strength of arms and the evil, good, and lawful spells on page 41. A cleric must magic of their gods. Devoted to the tenets of the religions choose and prepare her spells in advance. and philosophies that inspire them, these ecclesiastics quest to spread the knowledge and inf luence of their To prepare or cast a spell, a cleric must have a Wisdom faith. Yet while they might share similar abilities, clerics score equal to at least 10 + the spell level. The Difficulty prove as different from one another as the divinities they Class for a saving throw against a cleric’s spell is 10 + the serve, with some offering healing and redemption, others spell level + the cleric’s Wisdom modifier. judging law and truth, and still others spreading conf lict and corruption. The ways of the cleric are varied, yet all Like other spellcasters, a cleric can cast only a certain who tread these paths walk with the mightiest of allies and number of spells of each spell level per day. Her base daily bear the arms of the gods themselves. spell allotment is given on Table 3–5. In addition, she receives bonus spells per day if she has a high Wisdom score Role: More than capable of upholding the honor of their (see Table 1–3). inf luenced by their faith, all clerics must focus their worship upon a divine source. While the vast majority of clerics revere a specif ic deity, a small number dedicate themselves to a divine concept worthy of devotion—such as battle, death, justice, or knowledge— free of a deif ic abstraction. (Work with your GM if you prefer this path to selecting a specif ic deity.) Alignment: A cleric’s alignment must be within one step of her deity’s, along either the law/chaos axis or the good/evil axis (see Chapter 7). Hit Die: d8. deity. Aura (Ex): A cleric of a chaotic, evil, good, or lawful deity has a particularly powerful aura corresponding to the deity’s alignment (see the detect evil spell for details). Spells: A cleric casts divine spells which are drawn from the cleric spell list presented in Chapter 10. Her alignment, however, may restrict her from casting certain spells opposed to her moral or ethical beliefs; see chaotic, 39 Table 3–5: Cleric Base Attack Fort Ref Will Spells per Day Level Bonus Save Save Save Special 0 1st 2nd 3rd 4th 5th 6th 7th 8th 9th 1st +0 +2 +0 +2 Aura, channel energy 1d6, 3 1+1 — — — — — — — — domains, orisons 2nd +1 +3 +0 +3 4 2+1 — — — — — — — — 3rd +2 +3 +1 +3 Channel energy 2d6 4 2+1 1+1 — — — — — — — 4th +3 +4 +1 +4 4 3+1 2+1 — — — — — — — 5th +3 +4 +1 +4 Channel energy 3d6 4 3+1 2+1 1+1 — — — — — — 6th +4 +5 +2 +5 4 3+1 3+1 2+1 — — — — — — 7th +5 +5 +2 +5 Channel energy 4d6 4 4+1 3+1 2+1 1+1 — — — — — 8th +6/+1 +6 +2 +6 4 4+1 3+1 3+1 2+1 — — — — — 9th +6/+1 +6 +3 +6 Channel energy 5d6 4 4+1 4+1 3+1 2+1 1+1 — — — — 10th +7/+2 +7 +3 +7 4 4+1 4+1 3+1 3+1 2+1 — — — — 11th +8/+3 +7 +3 +7 Channel energy 6d6 4 4+1 4+1 4+1 3+1 2+1 1+1 — — — 12th +9/+4 +8 +4 +8 4 4+1 4+1 4+1 3+1 3+1 2+1 — — — 13th +9/+4 +8 +4 +8 Channel energy 7d6 4 4+1 4+1 4+1 4+1 3+1 2+1 1+1 — — 14th +10/+5 +9 +4 +9 4 4+1 4+1 4+1 4+1 3+1 3+1 2+1 — — 15th +11/+6/+1 +9 +5 +9 Channel energy 8d6 4 4+1 4+1 4+1 4+1 4+1 3+1 2+1 1+1 — 16th +12/+7/+2 +10 +5 +10 4 4+1 4+1 4+1 4+1 4+1 3+1 3+1 2+1 — 17th +12/+7/+2 +10 +5 +10 Channel energy 9d6 4 4+1 4+1 4+1 4+1 4+1 4+1 3+1 2+1 1+1 18th +13/+8/+3 +11 +6 +11 4 4+1 4+1 4+1 4+1 4+1 4+1 3+1 3+1 2+1 19th +14/+9/+4 +11 +6 +11 Channel energy 10d6 4 4+1 4+1 4+1 4+1 4+1 4+1 4+1 3+1 3+1 20th +15/+10/+5 +12 +6 +12 4 4+1 4+1 4+1 4+1 4+1 4+1 4+1 4+1 4+1 Note: “+1” represents the domain spell slot Clerics meditate or pray for their spells. Each cleric or healed is equal to 1d6 points of damage plus 1d6 points must choose a time when she must spend 1 hour each day of damage for every two cleric levels beyond 1st (2d6 at 3rd, in quiet contemplation or supplication to regain her daily 3d6 at 5th, and so on). Creatures that take damage from allotment of spells. A cleric may prepare and cast any spell channeled energy receive a Will save to halve the damage. on the cleric spell list, provided that she can cast spells The DC of this save is equal to 10 + 1/2 the cleric’s level of that level, but she must choose which spells to prepare + the cleric’s Charisma modifier. Creatures healed by during her daily meditation. channeled energy cannot exceed their maximum hit point total—all excess healing is lost. A cleric may channel Channel Energy (Su): Regardless of alignment, any energy a number of times per day equal to 3 + her Charisma cleric can release a wave of energy by channeling the power modifier. This is a standard action that does not provoke of her faith through her holy (or unholy) symbol. This an attack of opportunity. A cleric can choose whether or energy can be used to cause or heal damage, depending on not to include herself in this effect. A cleric must be able to the type of energy channeled and the creatures targeted. present her holy symbol to use this ability. A good cleric (or one who worships a good deity) channels Domains: A cleric’s deity inf luences her alignment, what positive energy and can choose to deal damage to undead magic she can perform, her values, and how others see her. creatures or to heal living creatures. An evil cleric (or one A cleric chooses two domains from among those belonging who worships an evil deity) channels negative energy and can to her deity. A cleric can select an alignment domain choose to deal damage to living creatures or to heal undead (Chaos, Evil, Good, or Law) only if her alignment matches creatures. A neutral cleric who worships a neutral deity (or that domain. If a cleric is not devoted to a particular deity, one who is not devoted to a particular deity) must choose she still selects two domains to represent her spiritual whether she channels positive or negative energy. Once this inclinations and abilities (subject to GM approval). The choice is made, it cannot be reversed. This decision also restriction on alignment domains still applies. determines whether the cleric casts spontaneous cure or inf lict spells (see spontaneous casting). Each domain grants a number of domain powers, dependent upon the level of the cleric, as well as a Channeling energy causes a burst that affects all number of bonus spells. A cleric gains one domain spell creatures of one type (either undead or living) in a 30-foot slot for each level of cleric spell she can cast, from 1st on radius centered on the cleric. The amount of damage dealt 40 Classes 3 up. Each day, a cleric can prepare one of the spells from Air Domain her two domains in that slot. If a domain spell is not on the cleric spell list, a cleric can prepare it only in her Deities: Gozreh, Shelyn. domain spell slot. Domain spells cannot be used to cast Granted Powers: You can manipulate lightning, mist, spells spontaneously. and wind, traffic with air creatures, and are resistant to In addition, a cleric gains the listed powers from both electricity damage. of her domains, if she is of a high enough level. Unless otherwise noted, using a domain power is a standard action. Lightning Arc (Sp): As a standard action, you can unleash Cleric domains are listed at the end of this class entry. an arc of electricity targeting any foe within 30 feet as a ranged touch attack. This arc of electricity deals 1d6 points Orisons: Clerics can prepare a number of orisons, of electricity damage + 1 point for every two cleric levels or 0-level spells, each day, as noted on Table 3–5 under you possess. You can use this ability a number of times per “Spells per Day.” These spells are cast like any other day equal to 3 + your Wisdom modifier. spell, but they are not expended when cast and may be used again. Electricity Resistance (Ex): At 6th level, you gain resist electricity 10. This resistance increases to 20 at 12th level. Spontaneous Casting: A good cleric (or a neutral cleric of At 20th level, you gain immunity to electricity. a good deity) can channel stored spell energy into healing spells that she did not prepare ahead of time. The cleric can Domain Spells: 1st—obscuring mist, 2nd—wind wall, “lose” any prepared spell that is not an orison or domain 3rd—gaseous form, 4th—air walk, 5th—control winds, 6th— spell in order to cast any cure spell of the same spell level chain lightning, 7th—elemental body IV (air only), 8th— or lower (a cure spell is any spell with “cure” in its name). whirlwind, 9th—elemental swarm (air spell only). An evil cleric (or a neutral cleric who worships an evil Animal Domain deity) can’t convert prepared spells to cure spells but can convert them to inf lict spells (an inf lict spell is one with Deities: Erastil, Gozreh. “inf lict” in its name). Granted Powers: You can speak with and befriend A cleric who is neither good nor evil and whose deity animals with ease. In addition, you treat Knowledge is neither good nor evil can convert spells to either cure (nature) as a class skill. spells or inf lict spells (player’s choice). Once the player makes this choice, it cannot be reversed. This choice Speak with Animals (Sp): You can speak with animals, as per also determines whether the cleric channels positive or the spell, for a number of rounds per day equal to 3 + your negative energy (see Channel Energy). cleric level. Chaotic, Evil, Good, and Lawful Spells: A cleric can’t Animal Companion (Ex): At 4th level, you gain the service cast spells of an alignment opposed to her own or her of an animal companion. Your effective druid level for this deity’s (if she has one). Spells associated with particular animal companion is equal to your cleric level – 3. (Druids alignments are indicated by the chaotic, evil, good, and who take this ability through their nature bond class lawful descriptors in their spell descriptions. feature use their druid level – 3 to determine the abilities of their animal companions). Bonus Languages: A cleric’s bonus language options include Celestial, Abyssal, and Infernal (the languages of Domain Spells: 1st—calm animals, 2nd—hold animal, good, chaotic evil, and lawful evil outsiders, respectively). 3rd—dominate animal, 4th—summon nature’s ally IV These choices are in addition to the bonus languages (animals only), 5th—beast shape III (animals only), 6th— available to the character because of her race. antilife shell, 7th—animal shapes, 8th—summon nature’s ally VIII (animals only), 9th—shapechange. Ex-Clerics Artifice Domain A cleric who grossly violates the code of conduct required by her god loses all spells and class features, except for Deity: Torag. armor and shield proficiencies and proficiency with Granted Powers: You can repair damage to objects, simple weapons. She cannot thereafter gain levels as a cleric of that god until she atones for her deeds (see the animate objects with life, and create objects from nothing. atonement spell description). Artificer’s Touch (Sp): You can cast mending at will, using Domains your cleric level as the caster level to repair damaged objects. In addition, you can cause damage to objects Clerics may select any two of the domains granted by their and construct creatures by striking them with a melee deity. Clerics without a deity may select any two domains touch attack. Objects and constructs take 1d6 points (choice are subject to GM approval). of damage +1 for every two cleric levels you possess. This attack bypasses an amount of damage reduction and hardness equal to your cleric level. You can use this ability a number of times per day equal to 3 + your Wisdom modif ier. 41 Dancing Weapons (Su): At 8th level, you can give a weapon Community Domain touched the dancing special weapon quality for 4 rounds. You can use this ability once per day at 8th level, and an Deity: Erastil. additional time per day for every four levels beyond 8th. Granted Powers: Your touch can heal wounds, and your Domain Spells: 1st—animate rope, 2nd—wood shape, presence instills unity and strengthens emotional bonds. 3rd—stone shape, 4th—minor creation, 5th—fabricate, 6th— Calming Touch (Sp): You can touch a creature as a standard major creation, 7th—wall of iron, 8th—statue, 9th— prismatic sphere. action to heal it of 1d6 points of nonlethal damage + 1 point per cleric level. This touch also removes the fatigued, Chaos Domain shaken, and sickened conditions (but has no effect on more severe conditions). You can use this ability a number Deities: Calistria, Cayden Cailean, Desna, Gorum, of times per day equal to 3 + your Wisdom modifier. Lamashtu, Rovagug. Unity (Su): At 8th level, whenever a spell or effect targets Granted Powers: Your touch infuses life and weapons you and one or more allies within 30 feet, you can use this with chaos, and you revel in all things anarchic. ability to allow your allies to use your saving throw against the effect in place of their own. Each ally must decide Touch of Chaos (Sp): You can imbue a target with chaos individually before the rolls are made. Using this ability is as a melee touch attack. For the next round, anytime the an immediate action. You can use this ability once per day target rolls a d20, he must roll twice and take the less at 8th level, and one additional time per day for every four favorable result. You can use this ability a number of times cleric levels beyond 8th. per day equal to 3 + your Wisdom modifier. Domain Spells: 1st—bless, 2nd—shield other, 3rd—prayer, Chaos Blade (Su): At 8th level, you can give a weapon 4th—imbue with spell ability, 5th—telepathic bond, 6th—heroes’ touched the anarchic special weapon quality for a number feast, 7th—refuge, 8th—mass cure critical wounds, 9th—miracle. of rounds equal to 1/2 your cleric level. You can use this ability once per day at 8th level, and an additional time per Darkness Domain day for every four levels beyond 8th. Deity: Zon-Kuthon. Domain Spells: 1st—protection from law, 2nd—align Granted Power: You manipulate shadows and darkness. weapon (chaos only), 3rd—magic circle against law, 4th— chaos hammer, 5th—dispel law, 6th—animate objects, 7th— In addition, you receive Blind-Fight as a bonus feat. word of chaos, 8th—cloak of chaos, 9th—summon monster IX Touch of Darkness (Sp): As a melee touch attack, you can (chaos spell only). cause a creature’s vision to be fraught with shadows and Charm Domain darkness. The creature touched treats all other creatures as if they had concealment, suffering a 20% miss chance on all Deities: Calistria, Cayden Cailean, Norgorber, Shelyn. attack rolls. This effect lasts for a number of rounds equal to Granted Powers: You can baff le and befuddle foes with 1/2 your cleric level (minimum 1). You can use this ability a number of times per day equal to 3 + your Wisdom modifier. a touch or a smile, and your beauty and grace are divine. Dazing Touch (Sp): You can cause a living creature Eyes of Darkness (Su): At 8th level, your vision is not impaired by lighting conditions, even in absolute darkness to become dazed for 1 round as a melee touch attack. and magic darkness. You can use this ability for a number Creatures with more Hit Dice than your cleric level are of rounds per day equal to 1/2 your cleric level. These unaffected. You can use this ability a number of times per rounds do not need to be consecutive. day equal to 3 + your Wisdom modifier. Domain Spells: 1st—obscuring mist, 2nd—blindness/ Charming Smile (Sp): At 8th level, you can cast charm deafness (only to cause blindness), 3rd—deeper darkness, person as a swift action, with a DC of 10 + 1/2 your cleric 4th—shadow conjuration, 5th—summon monster V (summons level + your Wisdom modif ier. You can only have one 1d3 shadows), 6th—shadow walk, 7th—power word blind, creature charmed in this way at a time. The total number 8th—greater shadow evocation, 9th—shades. of rounds of this effect per day is equal to your cleric level. The rounds do not need to be consecutive, and Death Domain you can dismiss the charm at any time as a free action. Each attempt to use this ability consumes 1 round of its Deities: Norgorber, Pharasma, Urgathoa, Zon-Kuthon. duration, whether or not the creature succeeds on its save Granted Powers: You can cause the living to bleed at a to resist the effect. touch, and find comfort in the presence of the dead. Domain Spells: 1st—charm person, 2nd—calm Bleeding Touch (Sp): As a melee touch attack, you can emotions, 3rd—suggestion, 4th—heroism, 5th—charm monster, 6th—geas/quest, 7th—insanity, 8th—demand, cause a living creature to take 1d6 points of damage per 9th—dominate monster. round. This effect persists for a number of rounds equal to 1/2 your cleric level (minimum 1) or until stopped with a DC 15 Heal check or any spell or effect that heals damage. 42 Classes 3 Table 3–6: Deities of the Pathfinder Chronicles Deity AL Portfolios Domains Favored Weapon Animal, Community, Good, Law, Plant longbow Erastil LG God of farming, hunting, trade, family Glory, Good, Law, Sun, War longsword Artifice, Earth, Good, Law, Protection warhammer Iomedae LG Goddess of valor, rulership, justice, honor Fire, Glory, Good, Healing, Sun scimitar Torag LG God of the forge, protection, strategy Air, Charm, Good, Luck, Protection glaive Chaos, Good, Liberation, Luck, Travel starknife Sarenrae NG Goddess of the sun, redemption, Chaos, Charm, Good, Strength, Travel rapier Earth, Law, Nobility, Protection, Travel light crossbow honesty, healing Healing, Knowledge, Law, Rune, Strength unarmed strike Air, Animal, Plant, Water, Weather trident Shelyn NG Goddess of beauty, art, love, music Death, Healing, Knowledge, Repose, Water dagger Destruction, Knowledge, Magic, quarterstaff Desna CG Goddess of dreams, stars, travelers, luck Protection, Rune Chaos, Destruction, Glory, Strength, War greatsword Cayden Cailean CG God of freedom, ale, wine, bravery Chaos, Charm, Knowledge, Luck, Trickery whip Evil, Fire, Law, Magic, Trickery mace Abadar LN God of cities, wealth, merchants, law Darkness, Death, Destruction, Evil, Law spiked chain Death, Evil, Magic, Strength, War scythe Irori LN God of history, knowledge, self-perfection Charm, Death, Evil, Knowledge, Trickery short sword Chaos, Evil, Madness, Strength, Trickery falchion Gozreh N Deity of nature, weather, the sea Chaos, Destruction, Evil, War, Weather greataxe Pharasma N Goddess of fate, death, prophecy, birth Nethys N God of magic Gorum CN God of strength, battle, weapons Calistria CN Goddess of trickery, lust, revenge Asmodeus LE God of tyranny, slavery, pride, contracts Zon-Kuthon LE God of envy, pain, darkness, loss Urgathoa NE Goddess of gluttony, disease, undeath Norgorber NE God of greed, secrets, poison, murder Lamashtu CE Goddess of madness, monsters, nightmares Rovagug CE God of wrath, disaster, destruction You can use this ability a number of times per day equal to automatically confirmed. These rounds do not need to be 3 + your Wisdom modifier. consecutive. Death’s Embrace (Ex): At 8th level, you heal damage Domain Spells: 1st—true strike, 2nd—shatter, 3rd—rage, instead of taking damage from channeled negative energy. 4th—inf lict critical wounds, 5th—shout, 6th—harm, 7th— If the channeled negative energy targets undead, you heal disintegrate, 8th—earthquake, 9th—implosion. hit points just like undead in the area. Earth Domain Domain Spells: 1st—cause fear, 2nd—death knell, 3rd— animate dead, 4th—death ward, 5th—slay living, 6th—create Deities: Abadar, Torag. undead, 7th—destruction, 8th—create greater undead, 9th— Granted Powers: You have mastery over earth, metal, and wail of the banshee. stone, can fire darts of acid, and command earth creatures. Destruction Domain Acid Dart (Sp): As a standard action, you can unleash an Deities: Gorum, Nethys, Rovagug, Zon-Kuthon. acid dart targeting any foe within 30 feet as a ranged touch Granted Powers: You revel in ruin and devastation, and attack. This acid dart deals 1d6 points of acid damage + 1 point for every two cleric levels you possess. You can use can deliver particularly destructive attacks. this ability a number of times per day equal to 3 + your Destructive Smite (Su): You gain the destructive smite Wisdom modifier. power: the supernatural ability to make a single melee Acid Resistance (Ex): At 6th level, you gain resist acid 10. attack with a morale bonus on damage rolls equal to This resistance increases to 20 at 12th level. At 20th level, 1/2 your cleric level (minimum 1). You must declare the you gain immunity to acid. destructive smite before making the attack. You can use this ability a number of times per day equal to 3 + your Domain Spells: 1st—magic stone, 2nd—soften earth and Wisdom modifier. stone, 3rd—stone shape, 4th—spike stones, 5th—wall of stone, 6th—stoneskin, 7th—elemental body IV (earth only), 8th— Destructive Aura (Su): At 8th level, you can emit a 30-foot earthquake, 9th—elemental swarm (earth spell only). aura of destruction for a number of rounds per day equal to your cleric level. All attacks made against creatures in Evil Domain this aura (including you) gain a morale bonus on damage equal to 1/2 your cleric level and all critical threats are Deities: Asmodeus, Lamashtu, Norgorber, Rovagug, Urgathoa, Zon-Kuthon. 43 Granted Powers: You are sinister and cruel, and have Divine Presence (Su): At 8th level, you can emit a 30-foot wholly pledged your soul to the cause of evil. aura of divine presence for a number of rounds per day equal to your cleric level. All allies within this aura are Touch of Evil (Sp): You can cause a creature to become treated as if under the effects of a sanctuary spell with a DC sickened as a melee touch attack. Creatures sickened by equal to 10 + 1/2 your cleric level + your Wisdom modifier. your touch count as good for the purposes of spells with These rounds do not need to be consecutive. Activating the evil descriptor. This ability lasts for a number of this ability is a standard action. If an ally leaves the area or rounds equal to 1/2 your cleric level (minimum 1). You makes an attack, the effect ends for that ally. If you make can use this ability a number of times per day equal to 3 + an attack, the effect ends for you and your allies. your Wisdom modifier. Domain Spells: 1st—shield of faith, 2nd—bless weapon, Scythe of Evil (Su): At 8th level, you can give a weapon 3rd—searing light, 4th—holy smite, 5th—righteous might, 6th— touched the unholy special weapon quality for a number undeath to death, 7th—holy sword, 8th—holy aura, 9th—gate. of rounds equal to 1/2 your cleric level. You can use this ability once per day at 8th level, and an additional time per Good Domain day for every four levels beyond 8th. Deities: Cayden Cailean, Desna, Erastil, Iomedae, Domain Spells: 1st—protection from good, 2nd—align Sarenrae, Shelyn, Torag. weapon (evil only), 3rd—magic circle against good, 4th—unholy blight, 5th—dispel good, 6th—create undead, 7th—blasphemy, Granted Powers: You have pledged your life and soul to 8th—unholy aura, 9th—summon monster IX (evil spell only). goodness and purity. Fire Domain Touch of Good (Sp): You can touch a creature as a standard action, granting a sacred bonus on attack rolls, Deity: Asmodeus, Sarenrae. skill checks, ability checks, and saving throws equal to Granted Powers: You can call forth f ire, command half your cleric level (minimum 1) for 1 round. You can use this ability a number of times per day equal to 3 + creatures of the inferno, and your f lesh does not burn. your Wisdom modifier. Fire Bolt (Sp): As a standard action, you can unleash a Holy Lance (Su): At 8th level, you can give a weapon scorching bolt of divine fire from your outstretched you touch the holy special weapon quality for a number hand. You can target any single foe within 30 feet as a of rounds equal to 1/2 your cleric level. You can use this ranged touch attack with this bolt of fire. If you hit the ability once per day at 8th level, and an additional time per foe, the f ire bolt deals 1d6 points of f ire damage + 1 day for every four levels beyond 8th. point for every two cleric levels you possess. You can use this ability a number of times per day equal to 3 + your Domain Spells: 1st—protection from evil, 2nd—align Wisdom modifier. weapon (good only), 3rd—magic circle against evil, 4th—holy smite, 5th—dispel evil, 6th—blade barrier, 7th—holy word, Fire Resistance (Ex): At 6th level, you gain resist fire 10. 8th—holy aura, 9th—summon monster IX (good spell only). This resistance increases to 20 at 12th level. At 20th level, you gain immunity to fire. Healing Domain Domain Spells: 1st—burning hands, 2nd—produce f lame, Deities: Irori, Pharasma, Sarenrae. 3rd—fireball, 4th—wall of fire, 5th—fire shield, 6th—fire Granted Powers: Your touch staves off pain and death, seeds, 7th—elemental body IV (fire only), 8th—incendiary cloud, 9th—elemental swarm (fire spell only). and your healing magic is particularly vital and potent. Rebuke Death (Sp): You can touch a living creature as a Glory Domain standard action, healing it for 1d4 points of damage plus Deities: Gorum, Iomedae, Sarenrae. 1 for every two cleric levels you possess. You can only use Granted Powers: You are infused with the glory of the this ability on a creature that is below 0 hit points. You can use this ability a number of times per day equal to 3 + your divine, and are a true foe of the undead. In addition, when Wisdom modifier. you channel positive energy to harm undead creatures, the save DC to halve the damage is increased by 2. Healer’s Blessing (Su): At 6th level, all of your cure spells are treated as if they were empowered, increasing the Touch of Glory (Sp): You can cause your hand to shimmer amount of damage healed by half (+50%). This does not with divine radiance, allowing you to touch a creature as apply to damage dealt to undead with a cure spell. This a standard action and give it a bonus equal to your cleric does not stack with the Empower Spell metamagic feat. level on a single Charisma-based skill check or Charisma ability check. This ability lasts for 1 hour or until the Domain Spells: 1st—cure light wounds, 2nd—cure creature touched elects to apply the bonus to a roll. You moderate wounds, 3rd—cure serious wounds, 4th—cure critical can use this ability to grant the bonus a number of times wounds, 5th—breath of life, 6th—heal, 7th—regenerate, 8th— per day equal to 3 + your Wisdom modifier. mass cure critical wounds, 9th—mass heal. 44 Classes 3 Knowledge Domain your cleric level. Allies within this aura are not affected by the confused, grappled, frightened, panicked, paralyzed, Deities: Calistria, Irori, Nethys, Norgorber, Pharasma. pinned, or shaken conditions. This aura only suppresses Granted Powers: You are a scholar and a sage of legends. these effects, and they return once a creature leaves the aura or when the aura ends, if applicable. These rounds do In addition, you treat all Knowledge skills as class skills. not need to be consecutive. Lore Keeper (Sp): You can touch a creature to learn about Domain Spells: 1st—remove fear, 2nd—remove paralysis, its abilities and weaknesses. With a successful touch attack, 3rd—remove curse, 4th—freedom of movement, 5th—break you gain information as if you made the appropriate enchantment, 6th—greater dispel magic, 7th—refuge, 8th— Knowledge skill check with a result equal to 15 + your cleric mind blank, 9th—freedom. level + your Wisdom modifier. Luck Domain Remote Viewing (Sp): Starting at 6th level, you can use clairvoyance/clairaudience as a spell-like ability using your Deities: Calistria, Desna, Shelyn. cleric level as the caster level. You can use this ability for Granted Powers: You are infused with luck, and your a number of rounds per day equal to your cleric level. These rounds do not need to be consecutive. mere presence can spread good fortune. Bit of Luck (Sp): You can touch a willing creature as a Domain Spells: 1st—comprehend languages, 2nd—detect thoughts, 3rd—speak with dead, 4th—divination, 5th—true standard action, giving it a bit of luck. For the next round, seeing, 6th—find the path, 7th—legend lore, 8th—discern any time the target rolls a d20, he may roll twice and take location, 9th—foresight. the more favorable result. You can use this ability a number of times per day equal to 3 + your Wisdom modifier. Law Domain Good Fortune (Ex): At 6th level, as an immediate action, Deities: Abadar, Asmodeus, Erastil, Iomedae, Irori, Torag, you can reroll any one d20 roll you have just made before Zon-Kuthon. the results of the roll are revealed. You must take the result of the reroll, even if it’s worse than the original roll. You can Granted Powers: You follow a strict and ordered code use this ability once per day at 6th level, and one additional of laws, and in so doing, achieve enlightenment. time per day for every six cleric levels beyond 6th. Touch of Law (Sp): You can touch a willing creature as a Domain Spells: 1st—true strike, 2nd—aid, 3rd— standard action, infusing it with the power of divine order protection from energy, 4th—freedom of movement, 5th—break and allowing it to treat all attack rolls, skill checks, ability enchantment, 6th—mislead, 7th—spell turning, 8th—moment checks, and saving throws for 1 round as if the natural d20 of prescience, 9th—miracle. roll resulted in an 11. You can use this ability a number of times per day equal to 3 + your Wisdom modifier. Madness Domain Staff of Order (Su): At 8th level, you can give a weapon Deity: Lamashtu. touched the axiomatic special weapon quality for a number Granted Powers: You embrace the madness that lurks of rounds equal to 1/2 your cleric level. You can use this ability once per day at 8th level, and an additional time per deep in your heart, and can unleash it to drive your foes day for every four levels beyond 8th. insane or to sacrifice certain abilities to hone others. Domain Spells: 1st—protection from chaos, 2nd—align Vision of Madness (Sp): You can give a creature a vision weapon (law only), 3rd—magic circle against chaos, 4th—order’s of madness as a melee touch attack. Choose one of the wrath, 5th—dispel chaos, 6th—hold monster, 7th—dictum, 8th— following: attack rolls, saving throws, or skill checks. The shield of law, 9th—summon monster IX (law spell only). target receives a bonus to the chosen rolls equal to 1/2 your cleric level (minimum +1) and a penalty to the other two Liberation Domain types of rolls equal to 1/2 your cleric level (minimum –1). This effect fades after 3 rounds. You can use this ability a Deity: Desna. number of times per day equal to 3 + your Wisdom modifier. Granted Powers: You are a spirit of freedom and a Aura of Madness (Su): At 8th level, you can emit a 30-foot staunch foe against all who would enslave and oppress. aura of madness for a number of rounds per day equal to Liberation (Su): You have the ability to ignore your cleric level. Enemies within this aura are affected by confusion unless they make a Will save with a DC equal impediments to your mobility. For a number of rounds to 10 + 1/2 your cleric level + your Wisdom modifier. The per day equal to your cleric level, you can move normally confusion effect ends immediately when the creature leaves regardless of magical effects that impede movement, as the area or the aura expires. Creatures that succeed on if you were affected by freedom of movement. This effect their saving throw are immune to this aura for 24 hours. occurs automatically as soon as it applies. These rounds do These rounds do not need to be consecutive. not need to be consecutive. Freedom’s Call (Su): At 8th level, you can emit a 30-foot aura of freedom for a number of rounds per day equal to 45 Domain Spells: 1st—lesser confusion, 2nd—touch of idiocy, damage rolls equal to 1/2 your cleric level (minimum +1). 3rd—rage, 4th—confusion, 5th—nightmare, 6th—phantasmal You can use this ability for a number of rounds per day killer, 7th—insanity, 8th—scintillating pattern, 9th—weird. equal to 3 + your Wisdom modifier. These rounds do not need to be consecutive. Magic Domain Bramble Armor (Su): At 6th level, you can cause a host of Deities: Asmodeus, Nethys, Urgathoa. wooden thorns to burst from your skin as a free action. Granted Powers: You are a true student of all things While bramble armor is in effect, any foe striking you with an unarmed strike or a melee weapon without mystical, and see divinity in the purity of magic. reach takes 1d6 points of piercing damage + 1 point per Hand of the Acolyte (Su): You can cause your melee weapon two cleric levels you possess. You can use this ability for a number of rounds per day equal to your cleric level. to f ly from your grasp and strike a foe before instantly These rounds do not need to be consecutive. returning. As a standard action, you can make a single attack using a melee weapon at a range of 30 feet. This Domain Spells: 1st—entangle, 2nd—barkskin, 3rd—plant attack is treated as a ranged attack with a thrown weapon, growth, 4th—command plants, 5th—wall of thorns, 6th—repel except that you add your Wisdom modifier to the attack wood, 7th—animate plants, 8th—control plants, 9th—shambler. roll instead of your Dexterity modifier (damage still relies on Strength). This ability cannot be used to perform a Protection Domain combat maneuver. You can use this ability a number of times per day equal to 3 + your Wisdom modifier. Deities: Abadar, Nethys, Shelyn, Torag. Granted Powers: Your faith is your greatest source of Dispelling Touch (Sp): At 8th level, you can use a targeted dispel magic effect as a melee touch attack. You can use this protection, and you can use that faith to defend others. In ability once per day at 8th level and one additional time per addition, you receive a +1 resistance bonus on saving throws. day for every four cleric levels beyond 8th. This bonus increases by 1 for every 5 levels you possess. Domain Spells: 1st—identify, 2nd—magic mouth, 3rd— Resistant Touch (Sp): As a standard action, you can touch dispel magic, 4th—imbue with spell ability, 5th—spell resistance, an ally to grant him your resistance bonus for 1 minute. 6th—antimagic field, 7th—spell turning, 8th—protection from When you use this ability, you lose your resistance bonus spells, 9th—mage’s disjunction. granted by the Protection domain for 1 minute. You can use this ability a number of times per day equal to 3 + Nobility Domain your Wisdom modifier. Deity: Abadar. Aura of Protection (Su): At 8th level, you can emit a 30-foot Granted Powers: You are a great leader, an inspiration to aura of protection for a number of rounds per day equal to your cleric level. You and your allies within this aura all who follow the teachings of your faith. gain a +1 def lection bonus to AC and resistance 5 against Inspiring Word (Sp): As a standard action, you can speak an all elements (acid, cold, electricity, fire, and sonic). The def lection bonus increases by +1 for every four cleric inspiring word to a creature within 30 feet. That creature levels you possess beyond 8th. At 14th level, the resistance receives a +2 morale bonus on attack rolls, skill checks, ability against all elements increases to 10. These rounds do not checks, and saving throws for a number of rounds equal to need to be consecutive. 1/2 your cleric level (minimum 1). You can use this power a number of times per day equal to 3 + your Wisdom modifier. Domain Spells: 1st—sanctuary, 2nd—shield other, 3rd—protection from energy, 4th—spell immunity, 5th—spell Leadership (Ex): At 8th level, you receive Leadership as resistance, 6th—antimagic field, 7th—repulsion, 8th—mind a bonus feat. In addition, you gain a +2 bonus on your blank, 9th—prismatic sphere. leadership score as long as you uphold the tenets of your deity (or divine concept if you do not venerate a deity). Repose Domain Domain Spells: 1st—divine favor, 2nd—enthrall, 3rd—magic Deity: Pharasma. vestment, 4th—discern lies, 5th—greater command, 6th—geas/ Granted Powers: You see death not as something to be quest, 7th—repulsion, 8th—demand, 9th—storm of vengeance. feared, but as a final rest and reward for a life well spent. Plant Domain The taint of undeath is a mockery of what you hold dear. Deities: Erastil, Gozreh. Gentle Rest (Sp): Your touch can f ill a creature with Granted Powers: You f ind solace in the green, can grow lethargy, causing a living creature to become staggered for 1 round as a melee touch attack. If you touch a defensive thorns, and can communicate with plants. staggered living creature, that creature falls asleep for 1 Wooden Fist (Su): As a free action, your hands can become round instead. Undead creatures touched are staggered for a number of rounds equal to your Wisdom modifier. as hard as wood, covered in tiny thorns. While you have wooden fists, your unarmed strikes do not provoke attacks of opportunity, deal lethal damage, and gain a bonus on 46 Classes 3 You can use this ability a number of times per day equal to that rely on Strength, Strength-based skills, and Strength 3 + your Wisdom modifier. checks. You can use this ability a number of times per day equal to 3 + your Wisdom modifier. Ward Against Death (Su): At 8th level, you can emit a 30- foot aura that wards against death for a number of rounds Might of the Gods (Su): At 8th level, you can add your per day equal to your cleric level. Living creatures in this cleric level as an enhancement bonus to your Strength area are immune to all death effects, energy drain, and score for a number of rounds per day equal to your cleric effects that cause negative levels. This ward does not remove level. This bonus only applies on Strength checks and negative levels that a creature has already gained, but the Strength-based skill checks. These rounds do not need negative levels have no effect while the creature is inside the to be consecutive. warded area. These rounds do not need to be consecutive. Domain Spells: 1st—enlarge person, 2nd—bull’s strength, Domain Spells: 1st—deathwatch, 2nd—gentle repose, 3rd—magic vestment, 4th—spell immunity, 5th—righteous 3rd—speak with dead, 4th—death ward, 5th—slay living, might, 6th—stoneskin, 7th—grasping hand, 8th—clenched fist, 6th—undeath to death, 7th—destruction, 8th—waves of 9th—crushing hand. exhaustion, 9th—wail of the banshee. Sun Domain Rune Domain Deities: Iomedae, Sarenrae. Deities: Irori, Nethys. Granted Powers: You see truth in the pure and Granted Powers: In strange and eldritch runes you find burning light of the sun, and can call upon its blessing potent magic. You gain Scribe Scroll as a bonus feat. or wrath to work great deeds. Blast Rune (Sp): As a standard action, you can create a Sun’s Blessing (Su): Whenever you channel positive energy blast rune in any adjacent square. Any creature entering to harm undead creatures, add your cleric level to the this square takes 1d6 points of damage + 1 point for every damage dealt. Undead do not add their channel resistance two cleric levels you possess. This rune deals either acid, to their saves when you channel positive energy. cold, electricity, or fire damage, decided when you create the rune. The rune is invisible and lasts a number of Nimbus of Light (Su): At 8th level, you can emit a 30-foot rounds equal to your cleric level or until discharged. You nimbus of light for a number of rounds per day equal to cannot create a blast rune in a square occupied by another your cleric level. This acts as a daylight spell. In addition, creature. This rune counts as a 1st-level spell for the undead within this radius take an amount of damage equal purposes of dispelling. It can be discovered with a DC 26 to your cleric level each round that they remain inside the Perception skill check and disarmed with a DC 26 Disable nimbus. Spells and spell-like abilities with the darkness Device skill check. You can use this ability a number of descriptor are automatically dispelled if brought inside times per day equal to 3 + your Wisdom modifier. this nimbus. These rounds do not need to be consecutive. Spell Rune (Sp): At 8th level, you can attach another spell Domain Spells: 1st—endure elements, 2nd—heat metal, that you cast to one of your blast runes, causing that spell 3rd—searing light, 4th—fire shield, 5th—f lame strike, 6th—fire to affect the creature that triggers the rune, in addition to seeds, 7th—sunbeam, 8th—sunburst, 9th—prismatic sphere. the damage. This spell must be of at least one level lower than the highest-level cleric spell you can cast and it must Travel Domain target one or more creatures. Regardless of the number of targets the spell can normally affect, it only affects the Deities: Abadar, Cayden Cailean, Desna. creature that triggers the rune. Granted Powers: You are an explorer and find Domain Spells: 1st—erase, 2nd—secret page, 3rd—glyph enlightenment in the simple joy of travel, be it by foot or of warding, 4th—explosive runes, 5th—lesser planar binding, conveyance or magic. Increase your base speed by 10 feet. 6th—greater glyph of warding, 7th—instant summons, 8th— symbol of death, 9th—teleportation circle. Agile Feet (Su): As a free action, you can gain increased mobility for 1 round. For the next round, you ignore all Strength Domain difficult terrain and do not take any penalties for moving through it. You can use this ability a number of times per Deities: Cayden Cailean, Gorum, Irori, Lamashtu, Urgathoa. day equal to 3 + your Wisdom modifier. Granted Powers: In strength and brawn there is truth— Dimensional Hop (Sp): At 8th level, you can teleport up your faith gives you incredible might and power. to 10 feet per cleric level per day as a move action. This Strength Surge (Sp): As a standard action, you can touch a teleportation must be used in 5-foot increments and such movement does not provoke attacks of opportunity. You creature to give it great strength. For 1 round, the target must have line of sight to your destination to use this gains an enhancement bonus equal to 1/2 your cleric level ability. You can bring other willing creatures with you, (minimum +1) to melee attacks, combat maneuver checks but you must expend an equal amount of distance for each creature brought. 47 Domain Spells: 1st—longstrider, 2nd—locate object, 3rd— Cold Resistance (Ex): At 6th level, you gain resist cold 10. f ly, 4th—dimension door, 5th—teleport, 6th—find the path, This resistance increases to 20 at 12th level. At 20th level, 7th—greater teleport, 8th—phase door, 9th—astral projection. you gain immunity to cold. Trickery Domain Domain Spells: 1st—obscuring mist, 2nd—fog cloud, 3rd— water breathing, 4th—control water, 5th—ice storm, 6th—cone Deities: Asmodeus, Calistria, Lamashtu, Norgorber. of cold, 7th—elemental body IV (water only), 8th—horrid Granted Powers: You are a master of illusions and wilting, 9th—elemental swarm (water spell only). deceptions. Bluff, Disguise, and Stealth are class skills. Weather Domain Copycat (Sp): You can create an illusory double of yourself as Deities: Gozreh, Rovagug. a move action. This double functions as a single mirror image Granted Powers: With power over storm and sky, you and lasts for a number of rounds equal to your cleric level, or until the illusory duplicate is dispelled or destroyed. You can can call down the wrath of the gods upon the world below. have no more than one copycat at a time. This ability does Storm Burst (Sp): As a standard action, you can create a not stack with the mirror image spell. You can use this ability a number of times per day equal to 3 + your Wisdom modifier. storm burst targeting any foe within 30 feet as a ranged touch attack. The storm burst deals 1d6 points of nonlethal Master’s Illusion (Sp): At 8th level, you can create an illusion damage + 1 point for every two cleric levels you possess. In that hides the appearance of yourself and any number of allies addition, the target is buffeted by winds and rain, causing within 30 feet for 1 round per cleric level. The save DC to it to take a –2 penalty on attack rolls for 1 round. You can disbelieve this effect is equal to 10 + 1/2 your cleric level + your use this ability a number of times per day equal to 3 + your Wisdom modifier. This ability otherwise functions like the Wisdom modifier. spell veil. The rounds do not need to be consecutive. Lightning Lord (Sp): At 8th level, you can call down a Domain Spells: 1st—disguise self, 2nd—invisibility, 3rd— number of bolts of lightning per day equal to your cleric nondetection, 4th—confusion, 5th—false vision, 6th—mislead, level. You can call down as many bolts as you want with a 7th—screen, 8th—mass invisibility, 9th—time stop. single standard action, but no creature can be the target of more than one bolt and no two targets can be more than 30 War Domain feet apart. This ability otherwise functions as call lightning. Deities: Gorum, Iomedae, Rovagug, Urgathoa. Domain Spells: 1st—obscuring mist, 2nd—fog cloud, Granted Powers: You are a crusader for your god, 3rd—call lightning, 4th—sleet storm, 5th—ice storm, 6th— control winds, 7th—control weather, 8th—whirlwind, 9th— always ready and willing to fight to defend your faith. storm of vengeance. Battle Rage (Sp): You can touch a creature as a standard Druid action to give it a bonus on melee damage rolls equal to 1/2 your cleric level (minimum +1) for 1 round. You can do so a Within the purity of the elements and the order of the number of times per day equal to 3 + your Wisdom modifier. wilds lingers a power beyond the marvels of civilization. Furtive yet undeniable, these primal magics are guarded Weapon Master (Su): At 8th level, as a swift action, you gain the over by servants of philosophical balance known as druids. use of one combat feat for a number of rounds per day equal to Allies to beasts and manipulators of nature, these often your cleric level. These rounds do not need to be consecutive misunderstood protectors of the wild strive to shield their and you can change the feat chosen each time you use this lands from all who would threaten them and prove the ability. You must meet the prerequisites to use this feat. might of the wilds to those who lock themselves behind city walls. Rewarded for their devotion with incredible Domain Spells: 1st—magic weapon, 2nd—spiritual powers, druids gain unparalleled shape-shifting abilities, weapon, 3rd—magic vestment, 4th—divine power, 5th—f lame the companionship of mighty beasts, and the power to call strike, 6th—blade barrier, 7th—power word blind, 8th—power upon nature’s wrath. The mightiest temper powers akin to word stun, 9th—power word kill. storms, earthquakes, and volcanoes with primeval wisdom long abandoned and forgotten by civilization. Water Domain Role: While some druids might keep to the fringe of Deities: Gozreh, Pharasma. battle, allowing companions and summoned creatures Granted Powers: You can manipulate water and mist to fight while they confound foes with the powers of nature, others transform into deadly beasts and savagely and ice, conjure creatures of water, and resist cold. wade into combat. Druids worship personifications of Icicle (Sp): As a standard action, you can fire an icicle elemental forces, natural powers, or nature itself. Typically this means devotion to a nature deity, though druids are. 48 Classes 3 just as likely to revere vague spirits, animalistic demigods, cast spells of that level, but she must choose which spells or even specific awe-inspiring natural wonders. to prepare during her daily meditation. Alignment: Any neutral. Spontaneous Casting: A druid can channel stored spell Hit Die: d8. energy into summoning spells that she hasn’t prepared ahead of time. She can “lose” a prepared spell in order to Class Skills cast any summon nature’s ally spell of the same level or lower. The druid’s class skills are Climb (Str), Craft (Int), Fly (Dex), Chaotic, Evil, Good, and Lawful Spells: A druid can’t Handle Animal (Cha), Heal (Wis), Knowledge (geography) (Int), cast spells of an alignment opposed to her own or her Knowledge (nature) (Int), Perception (Wis), Profession (Wis), deity’s (if she has one). Spells associated with particular Ride (Dex), Spellcraft (Int), Survival (Wis), and Swim (Str). alignments are indicated by the chaos, evil, good, and law descriptors in their spell descriptions. Skill Ranks per Level: 4 + Int modifier. Orisons: Druids can prepare a number of orisons, or Class Features 0-level spells, each day, as noted on Table 3–7 under “Spells per Day.” These spells are cast like any other spell, but they All of the following are class features of the druid. are not expended when cast and may be used again. those crafted from wood. A druid who wears prohibited armor or uses a prohibited shield is unable to cast druid spells or use any of her supernatural or spell-like class abilities while doing so and for 24 hours thereafter. Spells: A druid casts divine spells which are drawn from the druid spell list presented in Chapter 10. Diff iculty Class for a saving throw against a druid’s spell is 10 + the spell level + the druid’s Wisdom modif ier. Like other spellcasters, a druid can cast only a certain number of spells of each spell level per day. Her base daily spell allotment is given on Table 3–7. In addition, she receives bonus spells per day if she has a high Wisdom score (see Table 1–3). A druid must spend 1 hour each day in a trance-like meditation on the mysteries of nature to regain her daily allotment of spells. A druid may prepare and cast any spell on the druid spell list, provided that she can 49
https://anyflip.com/zibyj/kzdy/basic
CC-MAIN-2022-33
en
refinedweb
Hi all, this is my first post. I’m just starting to learn three.js and JS (I’ve been coding C# in Unity for 10 years). I have a project coming up that’s going to have a lot of text objects in the 3D space so I’m trying out troika-three-text to see if it’ll work for my needs. So far so good, I can easily create text in my scene… Now I’d like to preload the font using the method provided by the maker but I can’t figure out how to reference the font after it has been loaded. Here’s what I’ve tried so far. Thanks for any help/guidance on this. -t import { Text } from 'troika-three-text'; import { preloadFont } from 'troika-three-text'; preloadFont( { font: '', characters: 'abcdefghijklmnopqrstuvwxyz', }, () => { console.log('preload font complete'); }, ); // then later after I know the font has been loaded... function createText() { const myText = new Text(); myText.text = 'hello!!'; myText.font = // ???? how do reference the preloaded font? myText.sync(); return myText; } export { createText };
https://discourse.threejs.org/t/how-to-preloadfont-in-troika-three-text/22500
CC-MAIN-2022-33
en
refinedweb
Pandas how to get a cell value and update it Accessing a single value or setting up the value of single row is sometime required when we doesn’t want to create a new Dataframe for just updating that single cell value. There are indexing and slicing methods available but to access a single cell values there are Pandas in-built functions at and iat. Since indexing with [] must handle a lot of cases (single-label access, slicing, boolean indexing, etc.), it has a bit of overhead in order to figure out what you’re asking for. If you only want to access a scalar value, the fastest way is to use the at and iat methods, which are implemented on all of the data structures. Similarly to loc, at provides label based scalar lookups, while, iat provides integer based lookups analogously to iloc Found a very Good explanation in one of the StackOverflow Answers which I wanted to Quote here: There are two primary ways that pandas makes selections from a DataFrame. By Label By Integer Location There are three primary indexers for pandas. We have the indexing operator itself (the brackets []), .loc, and .iloc. Let’s summarize them: [] - Primarily selects subsets of columns, but can select rows as well. Cannot simultaneously select rows and columns. .loc - selects subsets of rows and columns by label only .iloc - selects subsets of rows and columns by integer location only Never used .at or .iat as they add no additional functionality and with just a small performance increase. I would discourage their use unless you have a very time-sensitive application. Regardless, we have their summary: .at selects a single scalar value in the DataFrame by label only .iat selects a single scalar value in the DataFrame by integer location only In addition to selection by label and integer location, boolean selection also known as boolean indexing exists. Dataframe cell value by Column Label at - Access a single value for a row/column label pair Use at if you only need to get or set a single value in a DataFrame or Series. Let’s create a Dataframe first import pandas as pd df = pd.DataFrame([[30, 20, 'Hello'], [None, 50, 'foo'], [10, 30, 'poo']], columns=['A', 'B', 'C']) df Let’s access cell value of (2,1) i.e index 2 and Column B df.at[2,'B'] 30 Value 30 is the output when you execute the above line of code Now let’s update the only NaN value in this dataframe to 50 , which is located at cell 1,1 i,e Index 1 and Column A df.at[1,'A']=50 df So you have seen how we have updated the cell value without actually creating a new Dataframe here Let’s see how do you access the cell value using loc and at df.loc[1].B OR df.loc[1].at['B'] Output: 50 Dataframe cell value by Integer position From the above dataframe, Let’s access the cell value of 1,2 i.e Index 1 and Column 2 i.e Col C iat - Access a single value for a row/column pair by integer position. Use iat if you only need to get or set a single value in a DataFrame or Series. df.iat[1, 2] Ouput foo Let’s setup the cell value with the integer position, So we will update the same cell value with NaN i.e. cell(1,0) df.iat[1, 0] = 100 Select rows in a MultiIndex Dataframe Pandas xs Extract a particular cross section from a Series/DataFrame. This method takes a key argument to select data at a particular level of a MultiIndex. Let’s create a multiindex dataframe first #xs import itertools import pandas as pd import numpy as np a = ('A', 'B') i = (0, 1, 2) b = (True, False) idx = pd.MultiIndex.from_tuples(list(itertools.product(a, i, b)), names=('Alpha', 'Int', 'Bool')) df = pd.DataFrame(np.random.randn(len(idx), 7), index=idx, columns=('I', 'II', 'III', 'IV', 'V', 'VI', 'VII')) Access Alpha = B df.xs(('B',), level='Alpha') Access Alpha = ‘B’ and Bool == False df.xs(('B', False), level=('Alpha', 'Bool')) Access Alpha = ‘B’ and Bool == False and Column III df.xs(('B', False), level=('Alpha', 'Bool'))['III'] Conclusion. at Works very similar to loc for scalar indexers. Cannot operate on array indexers.Advantage over loc is that this is faster. Similarly, iat Works similarly to iloc but both of them only selects a single scalar value. Further to this you can read this blog on how to update the row and column values based on conditions.
https://kanoki.org/2019/04/12/pandas-how-to-get-a-cell-value-and-update-it/
CC-MAIN-2022-33
en
refinedweb
4.6. What is the Spectrum object ?¶ Normally Users should not be bother by the classes used. For instance if you use the pburg class to compute a PSD estimate base on the Burg method, you just nee to use pburg. Indeed, the normal usage to estimate a PSD is to use the PSD estimate starting with the letter p such as parma, pminvar, pburg, (exception: use Periodogram instead of pPeriodogram). Yet, it may be useful for some advanced users and developers to know that all PSD estimates are based upon the Spectrum class (used by specialised classes such as FourierSpectrum and ParametricSpectrum). The following example shows how to use Spectrum. First, let us create a Spectrum instance (first argument is the time series/data): from spectrum import * p = Spectrum(data_cosine(), sampling=1024) Some information are stored and can be retrieved later on: p.N p.sampling However, for now it contains no information about the PSD estimation method. For instance, if you type: p.psd it should return a warning message telling you that the PSD has not yet been computed. You can compute it either independantly, and set the psd attribute manually: psd = speriodogram(p.data) or you can associate a function to the method attribute: p.method = minvar and then call the function with the proper optional arguments: p(15, NFFT=4096) In both cases, the PSD is now saved in the psd attribute. Of course, if you already know the method you want to use, then it is much simpler to call the appropriate class directly as shown in previous sections and examples: p = pminvar(data_cosine(), 15) p() p.plot()
http://www.thomas-cokelaer.info/software/spectrum/html/user/tutorial_psd.html
CC-MAIN-2018-39
en
refinedweb
Understanding the Command Line The command line is a text-based environment that some users never even see. You type a command and the computer follows it — nothing could be simpler. In fact, early PCs relied on the command line exclusively (even earlier systems didn’t even have a console and instead relied on punched tape, magnetic tape, punched cards, or other means for input, but let’s not go that far back). Some people are amazed at the number of commands that they can enter at the command line and the usefulness of those commands even today. A few administrators still live at the command line because they’re used to working with it. The following sections give you a better understanding of the command line and how it functions. Newer versions of Windows (such as Vista and Windows 7) display a command prompt with reduced privileges as a security precaution. Many command line utilities require administrator privileges to work properly. To open an administrator command prompt when working with a newer version of Windows, right-click the Command Prompt icon in the Start menu and choose Run As Administrator from the context menu. You may have to provide a password to complete the command. When the command prompt opens, you have full administrator privileges, which let you execute any of the command line applications. Understanding the Need for Command Line Applications Many administrators today work with graphical tools. However, the graphical tools sometimes have problems — perhaps they’re slow or they don’t offer a flexible means of accomplishing a task. For this reason, good administrators also know how to work at the command line. A command line application can accomplish with one well-constructed command what a graphical application may require hundreds of mouse clicks to do — for example, the FindStr utility that lets you find any string in any file. Using FindStr is significantly faster than any Windows graphical search application and always provides completely accurate results. In addition, there’s that option of searching any file — many search applications skip executables and other binary files. Give it a try right now. Open a command prompt, change directories to the root directory (CD ), and type FindStr /M /S “Your Name“ and press Enter. You’ll find every file on the hard drive that contains your name. In some cases, the administrator must work at the command line. If you’ve taken a look at Windows Server 2008 Server Core edition, you know that it doesn’t include much in the way of a graphical interface. In fact, this version of Windows immediately opens a command processor when you start it. There’s no desktop, no icons, nothing that looks even remotely like a graphical interface. In fact, many graphical applications simply don’t work in Server Core because it lacks the required DLLs. When faced with this environment, you must know how to use command line applications. Don’t get the idea that command line applications are a panacea for every application ailment or every administrator need. Command line applications share some common issues that prompted the development of graphical applications in the first place. Here are the issues you should consider when creating a command line application of your own: - Isn’t intuitive or easy to learn. - Requires the user to learn arcane input arguments. - Relies on the user to open a separate command prompt. - Is error prone. - Output results can simply disappear when starting the application without opening a separate command prompt. Of course, you wouldn’t even be reading this chapter if command line applications didn’t also provide some benefits. In fact, command line applications are the only answer for certain application needs. Here are the benefits of using a command line application. - Fast, no GUI to slow things down - Efficient, single command versus multiple mouse clicks - Usable in automation, such as batch files - Less development time, no GUI code to write - Invisible when executed in the background Command line applications can have other benefits. For example, a properly written, general command line application can execute just fine on more than one platform. Even if you use .NET-specific functionality, there’s a very good chance that you can use an alternative, such as Mono (), to run your application on other platforms. Adding a GUI always complicates matters and makes your application less easy to move. Reading Data from the Command Line You have a multitude of options when working with data from the command line. Precisely which method you use depends on what you’re trying to achieve. If you merely want to see what the command line contains, you should use the Python approach because it’s fast and easy. However, Python doesn’t provide the widest range of command line processing features — it tends to focus on Unix methodologies. If you want additional flexibility in working with the command line options, you might use the .NET approach instead. The following sections describe both techniques. Using the Python Method Most programming languages provide some means of reading input from the command line and Python is no exception. As an IronPython developer, you also have full access to the Python method of working with the command line. While you’re experimenting, you may simply want to read the command line arguments. Listing 10-1 shows how to perform this task. Listin g 10-1: Displaying the command line arguments [code] # Perform the required imports. import sys # Obtain the number of command line arguments. print ‘The command line has’, len(sys.argv), ‘arguments.n’ # List the command line arguments. for arg in sys.argv: print arg # Pause after the debug session. raw_input(‘nPress any key to continue…’) [/code] Developers who have worked with C or C++ know that the main() function can include the argc (argument count) and argv (argument vector — a type of array) arguments. Python includes the argv argument as part of the sys module. To obtain the argc argument, you use the len(sys.argv) function call. The example relies on a simple for loop to display each of the arguments, as shown in Figure 10-1. Of course, you’ll want to expand beyond simply listing the command line arguments into doing something with them. Listing 10-2 shows an example of how you could parse command line arguments for the typical Windows user. Listin g 10-2: Using the Python approach to parse command line arguments [code] # Perform the required imports. import sys import getopt # Obtain the command line arguments. def main(argv): try: # Obtain the options and arguments. opts, args = getopt.getopt(argv, ‘Dh?g:s’, [‘help’, ‘Greet=’, ‘Hello’]) # Parse the command line options. for opt, arg in opts: # Display help when requested. if opt in (‘-h’, ‘-?’, ‘–help’): usage() sys.exit() # Tell the user we’re in debug mode. if opt in (‘-D’): print ‘Application in Debug mode.’ # Display a user greeting. if opt in (‘-g’, ‘–Greet’): print ‘Good to see you’, arg.strip(‘:’) # Say hello to the user. if opt in (‘-s’, ‘–Hello’): print ‘Hello!’ # Parse the command line arguments. for arg in args: # Display help when requested. if arg.upper() in (‘/?’, ‘/HELP’): usage() sys.exit() # Tell the user we’re in Debug mode. elif arg in (‘/D’): print ‘Application in Debug mode.’ # Display a user greeting. elif ‘/GREET’ in arg.upper() or ‘/G’ in arg.upper(): print ‘Good to see you’, arg.split(‘:’)[1] # Say hello to the user. elif arg.upper() in (‘/S’, ‘/HELLO’): print ‘Hello!’ # User has provided bad input. else: raise getopt.GetoptError(‘Error in input.’, arg) # The user supplied command line contains illegal arguments. except getopt.GetoptError: # Display the usage information. usage() # exit with an error code. sys.exit(2) # Call main() with only the relevant arguments. if __name__ == “__main__“: main(sys.argv[1:]) # Pause after the debug session. raw_input(‘nPress any key to continue…’) This example actually begins at the bottom of the listing with an if statement: if __name__ == “__main__“: main(sys.argv[1:]) [/code] Many of your IronPython applications will use this technique to pass just the command line arguments to the main() function. As shown in Figure 10-1, the first command line argument is the name of the script and you don’t want to attempt processing it. Python assumes that everyone works with Linux or some other form of Unix. Consequently, it only supports the short dash (–) directly for command line options. An option is an input that you can parse without too much trouble because Python does most of the work for you. Options use a single dash for a single letter (short option) or a double dash for phrases (long option). Anything that doesn’t begin with a dash, such as something that begins with a slash (/) is an argument. Unfortunately, most of your Windows users will be familiar with arguments, not options, so your application should process both. The code begins by separating options and arguments that you’ve defined. The getopt.getopt() method requires three arguments: - The list of options and arguments to process - A list of short options - A list of long options In this example, argv contains the list of options and arguments contained in the command line, except for the script name. Each option and argument is separated by a space in the original string. The list of short options is ‘Dh?g:s‘. Notice that you don’t include a dash between each of the options — Python includes them for you automatically. Each of the entries is a different command line switch, except for the colon. So, this application accepts –D, –h, –?, –g:, and –s as command line switches. The command line switches are case sensitive. The colon after –g signifies that the user must also provide a value as part of the command line switch. The list of long options includes [‘help‘, ‘Greet=‘, ‘Hello‘]. Notice that you don’t include the double dash at the beginning of each long option. As with the short versions of the command line switch, these command line switches are case sensitive. The command line switches for this example are: - –D: Debug mode - –h, –?, and ––help: Help - –g:Username and ––Greet:Username: Greeting that includes the user’s name - –s and ––Hello: Says hello to the user without using a name At this point, the code can begin processing opts and args. In both cases, the code relies on a for loop to perform the task. However, notice that opts relies on two arguments, opt and arg, while args relies on a single argument arg. That’s because opts and args are stored differently. The opts version of -g:John appears as [(‘–g‘, ‘:John‘)], while the args version appears as [‘/g:John‘]. Notice that opts automatically separates the command line switch from the value for you. Processing opts takes the same course in every case. The code uses an if statement such as if opt in (‘–h‘, ‘–?‘, ‘––help‘) to determine whether the string appears in opt. In most cases, the code simply prints out a value for this example. The help routine calls on usage(), which is explained in the “Providing Command Line Help” section of the chapter. Calling sys.exit() automatically ends the application. If the application detects any command line options that don’t appear in your list of command line options to process, it raises the getopt.GetoptError() exception. Standard practice for Python applications is to display usage information using usage() and then exit with an error code (of 2 in this case by calling sys.exit(2)). Now look at the args processing and you see something different. Python doesn’t provide nearly as much automation in this case. In addition, your user will likely expect / command line switches to behave like those for most Windows applications (case insensitive). The example handles this issue by using a different if statement, such as if arg.upper() in (‘/?‘, ‘/HELP‘). Notice that the options use a slash, not a dash. Argument processing relies on a single if statement, rather than individual if statements. Consequently, the second through the last command line switches actually rely on an elif clause. Python won’t automatically detect errors in / command line switches. Therefore, your code also requires an else clause that raises the getopt.GetoptError() event manually. Remember that arguments are single strings, not command line switch and value pairs. You need some method to split the command line switch from the value. The code handles this case using elif ‘/GREET‘ in arg.upper() or ‘/G‘ in arg.upper() where it compares each command line switch individually. In addition, it relies on arg.split(‘:‘)[1] to display the value. The argument processing routine shows that you can accommodate both Linux and Windows users quite easily with your application. It’s time to test the example. Figure 10-2 shows the output of using IPY CmdLine2 .py –D –s –g:John /Hello /g:John. Using the .NET Method The .NET method of working with command line arguments is similar to the Python method, but there are distinct differences. When you design your application, you should use one technique of parsing the command line or the other because mixing the two will almost certainly result in application errors. Listing 10-3 shows a simple example of the .NET method. Listin g 10-3: Using the .NET approach to list command line arguments [code] # Perform the required imports. import System # Obtain the number of command line arguments. print ‘The command line has’, print len(System.Environment.GetCommandLineArgs()), print ‘arguments.n’ # List the command line arguments. for arg in System.Environment.GetCommandLineArgs(): print arg # Pause after the debug session. raw_input(‘nPress any key to continue…’) [/code] This example also relies on len() to obtain the number of command line arguments contained in System.Environment.GetCommandLineArgs(). As before, the code relies on a for loop to process the command line arguments. You might expect that the results would also be the same, but look at Figure 10-3 and compare it to Figure 10-1. Notice that the .NET method outputs not only the script name, but also the name of the script processor and its location on the hard drive. Using the .NET method can have benefits if you need to verify the location of IPY.EXE on the user’s system. It’s time to see how you might parse a command line using the .NET method. Many of the techniques are similar, but there are significant differences because .NET lacks any concept of options versus arguments. In short, you use a single technique to process both in .NET. Listing 10-4 shows how to parse a command line using the .NET method. Listin g 10-4: Using the .NET approach to parse command line arguments [code] # Perform the required imports. from System import ArgumentException, Array, String from System.Environment import GetCommandLineArgs import sys print ‘.NET Version Outputn’ try: # Obtain the number of command line arguments. Size = GetCommandLineArgs().Count # Check the number of arguments. if Size < 3: # Raise an exception if there aren’t any arguments. raise ArgumentException(‘Invalid Argument’, arg) else: # Create an array that has just command line arguments in it. Arguments = Array.CreateInstance(String, Size – 2) Array.Copy(GetCommandLineArgs(), 2, Arguments, 0, Size – 2) # Parse the command line options. for arg in Arguments: # Display help when requested. if arg in (‘-h’, ‘-?’, ‘/?’, ‘–help’) or arg.upper() in (‘/H’, ‘/HELP’): usage() sys.exit() # Tell the user we’re in Debug mode. elif arg in (‘-D’, ‘/D’): print ‘Application in Debug mode.’ # Display a user greeting. elif ‘-g’ in arg or ‘–Greet’ in arg or ‘/G’ in arg.upper() or ‘/GREET’ in arg.upper(): print ‘Good to see you’, arg.split(‘:’)[1] # Say hello to the user. elif arg in (‘-s’, ‘–Hello’) or arg.upper() in (‘/S’, ‘/HELLO’): print ‘Hello!’ else: raise ArgumentException(‘Invalid Argument’, arg) except ArgumentException: usage() sys.exit(2) # Pause after the debug session. raw_input(‘nPress any key to continue…’) [/code] The .NET implementation is a little simpler than the Python implementation — at least if you want to use both kinds of command line switches. This example begins by importing the required .NET assemblies. The example also relies on sys to provide the exit() function. The code begins by checking the number of arguments. When using .NET parsing, you must have at least three command line arguments to receive any input. The example uses the ArgumentException() method to raise an exception should the user not provide any inputs. In the IronPython example, the code uses a special technique to get rid of the script name. The .NET method also gets rid of the application name and the script name. In this case, the code creates a new array, Arguments, to hold the command line arguments. You must make Arguments large enough to hold all of the command line arguments, so the code uses the Array .CreateInstance() method to create an Array object with two fewer elements than the original array provided by GetCommandLineArgs(). The Array.CreateInstance() method requires two inputs: the array data type and the array length. The Array.Copy() method moves just the command line arguments to Arguments. The Array.Copy() method requires five inputs: source array, source array starting element, destination array, destination array starting element, and the number of elements to copy. At this point, the code can begin parsing the input arguments. Notice that unlike the Python method, you can parse all the permutations in a single line of code using the .NET method. The example provides the same processing as the Python method example, so that you can compare the two techniques. As with the Python method, the .NET method raises an exception when the user doesn’t provide correct input. The result is that the example displays usage instructions for the application. Figure 10-4 shows the output from this example. Providing Command Line Help Your command line application won’t have a user interface — just a command. While some people can figure out graphical applications by pointing here and clicking there, figuring out a command line application without help is nearly impossible. The methods used to understand an undocumented command line application are exotic and usually require advanced debugging techniques, time spent in the registry, lots of research online, and more than a little luck. If you seriously expect someone to use your command line application, you must provide help. Unlike a graphical application, you won’t need tons of text and screenshots to document most command line applications. All you really need is a little text that’s organized in a certain manner. Most command line applications use the same help format, which makes them easier to understand and use. However, not all command line applications provide all the help they really need. In order to provide your command line application with superior help, you need to consider the five following elements: - Application description - Application calling syntax - Command line switch summary and description - Usage examples - (Optional) Other elements The following sections describe all these elements and help you understand why they’re important. Of course, every command line application is different, so you’ll want to customize the suggestions in the following sections to meet your particular needs. The point is, you must provide the user with help of some kind. Creating an Application Description Many of the command line applications you see lack this essential feature. You ask for help and the application provides you with syntax and a quick overview of the command line switches. At the outset, you have little idea of what the application actually does and how the developer envisioned your using it. After a little experimentation, you might still be confused and have a damaged system as well. An application description doesn’t have to be long. In fact, you can make it a single sentence. If you can’t describe your command line application in a single sentence, it might actually be too big — characteristically, command line applications are small and agile. Of course, there are exceptions and you may very well need an entire paragraph to describe your application. The big mistake is writing a huge tome. Most people using your application have worked with computers for a long time, so a shorter description normally works fine. As a minimum, your application description should include the application name so the user can look for additional information online. The description should tell the user what the application does and why you created it. Describing the Application Calling Syntax Applications have a calling syntax — a protocol used to interact with the application. Unfortunately, you won’t have access to any formatting when writing your application help screen. Developers have come up with some methods to show certain elements over the years and you should use these methods for your command line syntax. Consider the following command line: [code] MyApp <Filename> [-S] [-s] [-D [-U[:<Name>]]] [-X | -Y | -Z | <Delta>] [-?] [/code] Believe it or not, all these strange looking symbols do have a meaning and you need to consider them for your application. Any item that appears in square brackets ([]), such as [–S], is optional. The user doesn’t need to provide it to use the application. Anything that appears in angle brackets (<>), such as <Filename>, is a variable. The user replaces this value with some other value. Normally, you provide a descriptive name for the variable. For example, when you see <Filename>, you know that you need to provide the name of a file. In this case, <Filename> isn’t optional — the user must provide it unless asking for help. It’s understood that requesting help, normally –? or /?, doesn’t require any other input. Command line switches within other command line switches are dependent on that command line switch. For example, you can use –D alone. However, if you want to use –U, you must also provide –D. In this case, –U is dependent on –D. Notice that you can use –U alone or you can include a <Name> variable with it. When you use the <Name> variable, the command line switch sequence must appear as –D –U:<Name>. Sometimes a command line switch is mutually exclusive with other command line switches or even variables. For example, the [–X | –Y | –Z | <Delta>] sequence says that you can provide –X or –Y or –Z or <Delta>, but you can’t provide more than one of them. Most Windows command line applications are case insensitive. However, there are notable exceptions to this rule. If you find that you must make your application case sensitive, be sure to use the correct case for the command line syntax. For example, –S isn’t the same as –s for this application and the command line syntax shows that. You should also note that the application is case sensitive in other areas of your help screen because some users won’t notice the difference in case. Some developers will simply use [Options] for the command line syntax if you can use any of the command line switches at any time, or simply ignore them completely. There isn’t anything wrong with this approach, especially when your application defaults to showing the help screen when the user doesn’t provide any command line switches. However, make absolutely certain that your application truly doesn’t have a unique calling syntax before you use this approach. Documenting the Command Line Switches No matter how simple or complex the application, you need to document every command line switch. Most application writers use anywhere from one to three sentences to document the command line switch unless it’s truly complex. The command line switch documentation should focus on the purpose of the command line switch. Save any examples you want to provide for the usage examples portion of the help screen. You must document every command line switch or the user won’t know it exists. Placing alternative command line switches together is a good idea because it reduces the complexity of the help screen. The order in which you place the command line switches depends on the purpose and complexity of your application. However, most developers use one of the following ordering techniques: - Alphabetical: Useful for longer lists of command line switches because alphabetical order can make it easier to find a particular command line switch in the list. - Syntactical: Developers especially like to see the command line switches in syntactical order. After viewing the syntax, the developer can find the associated command line switch description quickly. - Order of potential usage: Placing the command line switches in order of popularity means that the user doesn’t have to search the entire list to find a particular command line switch description. This approach is less useful on long or complex lists because you really don’t know how the user will work with the application. - Order of required use: In some cases, an application requires that a user place the command line switches in a particular order. For example, when creating a storyboard effect with a command line application, you want the user to know which command line switch to use first. Some command line switch lists become quite long. In this case, you might want to group like command line switches together and place them in groups on the help screen. For example, you might have a set of command line switches that affects input and another that affects output. You could create two groups, one for each task, on your help screen to make finding a particular command line switch easier. Showing Usage Examples Most users won’t really understand your command line application unless you provide some usage examples. A usage example should show the command line and its result — if you do this, then you get that as output. Precisely how you put the examples together depends on your application and its intended audience. An application designed for advanced users can probably get by with fewer examples, while a complex application requires more examples. The usage examples should be non-trivial. You should try to show common ways in which you expect the user to work with your application. Putting Everything Together Now that you have a basic understanding of the required help screen elements, it’s time to look at an example. Listing 10-5 shows a typical usage() function. It displays help information to users who need it, using simple print() statements. Listin g 10-5: Creating a help screen for your application [code] # Create a usage() function. def usage(): print ‘Welcome to the command line example.’ print ‘This application shows basic command line argument reading.’ print ‘nUsage:’ print ‘tIPY CmdLine2.py [Options]‘ print ‘nOptions:’ print ‘t-D: Places application in debug mode.’ print ‘t-h or -? or –help: Displays this help message.’ print ‘t-g:<Name> or –Greet:<Name>: Displays a simple greeting.’ print ‘t-s or –Hello: Displays a simple hello message.’ print ‘nExamples:’ print ‘tIPY CmdLine2.py -s outputs Hello!’ print ‘tIPY CmdLine2.py -g:John outputs Good to see you John’ print ‘tYou can use either the – or / as command line switches.’ print ‘tFor example, IPY CmdLine2.py /s outputs Hello!’ [/code] Notice the use of formatting in the code. The code places section titles at the left and an extra space below the previous section. Section content is indented so it appears as part of the section. Figure 10-5 shows the output from this code. Even though this help screen is quite simple, it provides everything needed for someone to use the example application to test command line switches. Including Other Elements Some command line application help screens become enormous and hard to use. In fact, some of Microsoft’s own utilities have help that’s several layers deep. Just try drilling into the Net utility sometime and you’ll discover just how cumbersome the help can become. Of course, you do want to document everything for the user. As an alternative, some command line application developers will provide an overview as part of the application, and then include a URL for detailed material online. It’s not a perfect solution because you can’t always count on the user having an Internet connection, but it does work most of the time. You don’t have to stop with simple information redirection as part of your help. Some utilities include a phone number (just in case the user really is lacking that Internet connection). E‑mail addresses aren’t unusual, and some developers get creative in providing other helpful tips. It’s also important to take ownership of your application by including a company or developer name. If copyright is important, then you should provide a copyright notice as well. The thing is to make it easy for someone to identify your command line application without cluttering up the help screens too much. To break the help screens up, you might want to include layered help. Typing MyApp /? might display an overview, while MyApp /MySwitch /? provides detailed information. Microsoft uses this approach with several of its utilities. If you use layered help, make sure you mention it on the overview help screen, or most users will think that the overview is all they get in the way of useful information. Special settings require a section as well. For example, IPY.EXE provides access to some application features through environment variables. These environment variables appear in a separate section of the help screen. Applications that could damage application data or the system as a whole in some way require warnings. Too few command line applications provide warnings, so command line applications have gotten a reputation for being dangerous — only experts need apply. The fact is that many of these applications would be quite easy to use with the proper warning information. However, don’t go too far in protecting the user by providing messages that request the user confirm a particular task. Using confirmations would reduce the ability of developers to use the command line applications for batch processing and automation needs. Given that your application might inadvertently damage something when the user misuses it, you might also want to include fixes and workarounds as part of your help. Unfortunately, it’s the nature of command line utilities that the actions they perform are one-way — once done, you can’t undo them. Interacting with the Environment The application environment consists of a number of elements. Of course, you need to consider whether the application uses a character mode interface or a graphical interface. The platform on which the application runs is also a consideration. Depending on the application’s purpose, you may need to consider background task management as part of the picture. Most developers understand that these elements, and more, affect the operation of the application. However, some developers miss out on a special environmental feature, the environment variable. Using environment variables makes it possible to communicate settings to your application at a number of different levels in a way that command line switches can’t. In fact, you may not even realize it, but there are several different levels of environment variables with which you can control an application, making the variables quite flexible. The following sections describe environment variables and their use in IronPython. Understanding Environment Variables Environment variables are simply a kind of storage location managed by the operating system. When you open a command prompt, you can see a list of environment variables by typing Set and pressing Enter. Figure 10-6 shows the environment variables on my system. The environment variables (or at least their values) will differ on your machine, so you should take a look at them. If you want to see the value of a particular environment variable, type Set VariableName (such as Set USERNAME) and press Enter. To remove an environment variable, simply type Set VariableName= (with no value) and press Enter. (Never remove environment variables you didn’t create because some of your applications could, or more likely will, stop working.) As you can see from Figure 10-6, environment variables appear as a name/value pair. An environment variable with a specific name has a certain value. Some environment variables in this list are common to all Windows machines. For example, the system wouldn’t be able to find applications without the Path environment variable. Environment variables such as COMPUTERNAME and USERNAME can prove helpful for your applications. You can also discover facts such as the processor type and system drive using environment variables. It’s possible to create environment variables using a number of techniques. However, the method used to create the environment variable determines its scope (personal or global), visibility (command prompt only or command prompt and Windows application), and longevity (session or permanent). For example, if you type Set MyVar=Hello (notice that there are no quotes for the value) and press Enter, you create a personal environment variable that lasts for the current session and is visible only in the command prompt window. You can see any environment variable by typing Echo %VarName% and pressing Enter. Try it out with MyVar. Type Echo %MyVar% and press Enter to see the output shown in Figure 10-7. The most common way to set a permanent environment variable is to click Environment Variables on the Advanced tab of the System Properties dialog box. You see the Environment Variables dialog box shown in Figure 10-8. This dialog box has two environment variable settings areas. The upper area manages personal settings that affect just one person — the current user. The lower area manages environment variables that affect everyone who uses the system. To create a new environment variable, simply click New. You see the New User Variable (shown in Figure 10-9) or the New System Variable dialog box. In both cases, you type an environment variable name in the Variable Name field and an environment variable value in the Variable Value field. Click OK and you see the environment variable added to the appropriate list. Editing an environment variable is just as easy. Simply highlight the environment variable you want to change in the list and click Edit. You’ll see a dialog box similar to the one shown in Figure 10-9 where you can change the environment variable value. To remove an environment variable, simply highlight its entry in the list and click Delete. Any changes you make to environment variables won’t show up until you close and reopen any command prompt windows. Windows provides the current set of environment variables to every command prompt window when it opens the window, but it doesn’t perform updates. The interesting thing about environment variables you set using the Environment Variables dialog box is that they are also available to Windows applications. You can read these environment variables just as easily in a graphical application as you can in a character mode application You may find that you want to create environment variables for just the command prompt. Of course, you can always use the Set command approach described earlier in this section. However, most developers will want something a little more automated. If you need to set command line–only environment variables for the entire machine, then you need to modify the AutoExec.NT file found in the WINDOWSsystem32 folder of your system. Figure 10-10 shows a typical view of this file. Simply open the file using a text editor, such as Notepad (don’t use WordPad), and add a Set command to it. Every time someone opens a command prompt, Windows reads this file and uses the settings in it to configure the command prompt window. Many people forget that the AutoExec.NT file even exists, but it’s a valuable way to add Set commands in certain cases. It’s also possible to set individualized command prompt environment variables for a specific application. In this case, create a batch (.BAT) file using a text editor. Add Set commands to it for the application, and then add a line to start the application, such as IPY MyApp.py. In short, you can make environment variables appear whenever and wherever you want by simply using the correct method to create them. Using the Python Method Python provides operating system–generic methods of reading and writing variables. As with many things in IronPython, the Python techniques work great across platforms, but probably won’t provide the greatest flexibility. The following sections describe the techniques you use to read and set environment variables using the Python method. Reading the Environment Variables Using Python This example looks at a new Python module, os, which contains a number of interesting classes. In this case, you use the environ class, which provides access to the environment variables and lets you manipulate them in various ways, as shown in Listing 10-6. Listin g 10-6: Displaying the environment variables using the Python method [code] # Import the required Python modules. import os # Obtain the environment variable keys. Variables = os.environ.keys() # Sort the keys in alphabetic order. Variables.sort() # Display the keys and their associated values. for Var in Variables: print ‘%30s %s’ % (Var,os.environ[Var]) # Pause after the debug session. raw_input(‘nPress any key to continue…’) [/code] The code begins by importing the required modules, as normal. It then places the list of environment variable keys, the names, in Variables using os.environ.keys(). In most cases, you want to view the environment variables in sorted order because there are too many of them to simply peruse a list, so the code sorts the list using Variables.sort(). At this point, the code is ready to display the list. It uses a simple for loop to perform the task. Notice the use of formatting to make the output more readable. Remember that the values don’t appear in the Variables list, so you must obtain them using os.environ[Var]. Figure 10-11 shows typical output from this example. Setting the Environment Variables Using Python Python makes it relatively easy to set environment variables. However, the environment variables you create using IronPython affect only the current command prompt session and the current user. Consequently, if you start another application in the current session (see the section “Starting Other Command Line Applications” later in the chapter for details), it can see the environment variable, but if you start an application in a different session or start a graphical application, the environment variable isn’t defined. In addition, changes you make to existing environment variables affect only the current session. Nothing is permanent. Listing 10-7 shows how to modify environment variables using the Python method. Listin g 10-7: Setting an environment variable using the Python method [code] # Import the required Python modules. import os # Create a new environment variable. os.environ.__setitem__(‘MyVar’, ‘Hello’) # Display its value on screen. print ‘MyVar =’, os.environ[‘MyVar’] # Change the environment variable and show the results. os.environ.__setitem__(‘MyVar’, ‘Goodbye’) print ‘MyVar =’, os.environ[‘MyVar’] # Delete the variable, and then try to show it. try: os.environ.__delitem__(‘MyVar’) print ‘MyVar =’, os.environ[‘MyVar’] except KeyError as (KeyName): print ‘Can‘t display’, KeyName # Pause after the debug session. raw_input(‘nPress any key to continue…’) [/code] Setting and changing an environment variable use the same method, os.environ.__setitem__(). In both cases, you supply a name/value pair (MyVar/Hello). When you want to see the value of the environment variable, you request the value by supplying the name, such as os.environ[‘MyVar‘] for this example. Deleting an environment variable requires use of os.environ.__delitem__(). In this case, you supply only the name of the environment variable you want to remove. If you try to display an environment variable that doesn’t exist, the interpreter raises a KeyError exception. The example shows the result of trying to print MyVar after you remove it using os.environ.__delitem__(). Figure 10-12 shows the output from this example. Using the .NET Method Working with environment variables using the .NET method isn’t nearly as easy as working with them using the Python method. Then again, you can make permanent environment variable changes using .NET. In fact, .NET provides support for three levels of environment variables. - Process: Affects only the current process and any processes that the current process starts - User: Affects only the current user - Machine: Affects all users of the host system An important difference between the Python and .NET methods is that any change you make using the .NET method affects both command line and graphical applications. You have significant control over precisely how and where an environment variable change appears because you specify precisely what level the environment variable should affect. The following sections provide more information on reading and setting environment variables using the .NET method. Reading the Environment Variables Using .NET As previously mentioned, the .NET method is more flexible than the Python method, but also requires a little extra work on your part. Some of the extra work comes in the form of flexibility. The .NET method provides several ways to obtain environment variable data. - Use one of the Environment class properties to obtain a standard environment variable value. You can find a list of these properties at system.environment_properties.aspx. - Check a specific environment variable using GetEnvironmentVariable(). - Obtain all the environment variables for a particular level using GetEnvironmentVariables() with an EnvironmentVariableTarget enumeration value. - Obtain all the environment variables regardless of level using GetEnvironmentVariables(). It’s important to note that these techniques let you answer questions such as whether a particular environment variable is a standard or custom setting. You can also determine whether the environment variable affects the process, user, or machine as a whole. In short, you obtain more information using the .NET method, but at the cost of additional complexity. Listing 10-8 shows how to read environment variables using each of the .NET methods. Listin g 10-8: Displaying the environment variables using the .NET method [code] # Obtain access to Environment class properties. from System import Environment # Obtain all of the Environment class methods. from System.Environment import * # Import the EnvironmentVariableTarget enumeration. from System import EnvironmentVariableTarget # Display specific, standard environment variables. print ‘Standard Environment Variables:’ print ‘tCurrent Directory:’, Environment.CurrentDirectory print ‘tOS Version:’, Environment.OSVersion print ‘tUser Name:’, Environment.UserName # Display any single environment variable. print ‘nSpecific Environment Variables:’ print ‘tIronPython Path:’, GetEnvironmentVariable(‘IronPythonPath’) print ‘tSession Name:’, GetEnvironmentVariable(‘SessionName’) # Display a particular kind of environment variable. print ‘nUser Level Environment Variables:’ for Var in GetEnvironmentVariables(EnvironmentVariableTarget.User): print ‘t%s: %s’ % (Var.Key, Var.Value) # Display all of the environment variables in alphabetical order. print ‘nAll of the environment variables.’ # Create a list to hold the variable names. Keys = GetEnvironmentVariables().Keys Variables = [] for Item in Keys: Variables.Add(Item) # Sort the resulting list. Variables.sort() # Display the result. for Var in Variables: print ‘t%s: %s’ % (Var, GetEnvironmentVariable(Var)) # Pause after the debug session. raw_input(‘nPress any key to continue…’) [/code] The code begins by importing some .NET assemblies. Notice that the example reduces clutter by importing only what the code actually needs. As mentioned earlier, you can obtain standard environment variable values by using the correct property value from the System.Environment class. In this case, the code retrieves the current directory, operating system version, and the user name, as shown in Figure 10-13. The next code segment in Listing 10-8 shows how to obtain a single environment variable. All you need is the GetEnvironmentVariable() with a variable name, such as, IronPythonPath. If you want to work with the environment variables found at a particular level, you use GetEnvironmentVariables() with an EnvironmentVariableTarget enumeration value, as shown in the next code segment in Listing 10-8. Unless you create a custom environment variable, you won’t see any output at the EnvironmentVariableTarget.Process level. You might remember from Listing 10-6 the ease of sorting the environment variables when using the Python method. Sorting the environment variables when using the .NET method isn’t nearly as easy because the .NET method relies on a System.Collections.Hashtable object for the output of the GetEnvironmentVariables() method call. The easiest method to sort the environment variables is to obtain a list of the keys using GetEnvironmentVariables() .Keys, the Keys object; place them in a list object, Variables; and then sort as normal using Variables.sort(). Now that the code has a sorted list, it uses a for loop to enumerate each environment variable using GetEnvironmentVariable(). Figure 10-13 does show the entire list, but when you try the example, you’ll see that the list is indeed sorted. There are definitely times where .NET objects will cause problems for your IronPython application and this is one of them. Setting the Environment Variables Using .NET The .NET method provides some additional setting capabilities when compared to the Python method. For one thing, you can make the environment variable settings permanent. The reason for this difference is that the .NET method lets you write the settings directly to the registry. You won’t manipulate the registry directly, but the writing does take place in the background, just as it would if you used the Environment Variables dialog box. You do have some limitations. For example, you can’t change an Environment class property value. This restriction makes sense because you don’t want to change an environment variable that a number of applications might need. Listing 10-9 shows how to set environment variables as needed. Listin g 10-9: Setting an environment variable using the .NET method [code] # Obtain access to Environment class properties. from System import Environment # Obtain all of the Environment class methods. from System.Environment import * # Import the EnvironmentVariableTarget enumeration. from System import EnvironmentVariableTarget # Create a temporary process environment variable. SetEnvironmentVariable(‘MyVar’, ‘Hello’) print ‘MyVar =’, GetEnvironmentVariable(‘MyVar’) # Create a permanent user environment variable. SetEnvironmentVariable(‘Var2’, ‘Goodbye’, EnvironmentVariableTarget.User) print ‘Var2 =’, GetEnvironmentVariable(‘Var2’) print ‘Var2 =’, GetEnvironmentVariable(‘Var2’, EnvironmentVariableTarget.User) raw_input(‘nOpen the Environment Variables dialog box…’) # Delete the temporary and permanent variables. print ‘nDeleting the variables…’ SetEnvironmentVariable(‘MyVar’, None) SetEnvironmentVariable(‘Var2’, None, EnvironmentVariableTarget.User) print ‘MyVar =’, GetEnvironmentVariable(‘MyVar’) print ‘Var2 =’, GetEnvironmentVariable(‘Var2’, EnvironmentVariableTarget.User) # Pause after the debug session. raw_input(‘nPress any key to continue…’) [/code] The example begins with the usual assembly imports. It then creates a new environment variable using the SetEnvironmentVariable() method. If you call SetEnvironmentVariable() without specifying a particular level, then the .NET Framework creates a temporary process environment variable that only lasts for the current session. The next step creates a permanent user environment variable. In this case, you must supply an EnvironmentVariableTarget enumeration value as the third argument. This portion of the example also demonstrates something interesting. If you create a new permanent environment variable in a process, the .NET Framework won’t update that process (or any other process for that matter). Consequently, the first call to GetEnvironmentVariable() fails, as shown in Figure 10-14. To see the environment variable, you must either restart the process or you must call GetEnvironmentVariable() with an EnvironmentVariableTarget enumeration value. As a result, the second call succeeds. At this point, the example pauses so you can open the Environment Variables dialog box and see for yourself that the environment variable actually does exist as a permanent value. Deleting an environment variable is as simple as setting it to None using the SetEnvironmentVariable() method. However, you need to delete permanent environment variables by including the EnvironmentVariableTarget enumeration value, or the .NET Framework won’t delete it. Unlike the Python method, you won’t get an error when checking for environment variables that don’t exist using the .NET method. Instead, you’ll get a value of None, as shown in Figure 10-14. Environment Variable Considerations Some developers don’t think too hard about how the changes they make to the environment will affect other applications. One application, which will remain nameless, actually changed the path environment variable and caused other applications to stop working. Users won’t tolerate such behavior because it impedes their ability to perform useful work. In addition, companies lose a lot of money when administrators have to devote time to fixing such problems. The standard rules for using environment variables is that you should only read environment variables created by others. You may find a situation where you need to change a non-standard environment variable, but proceed with extreme caution. It’s never allowed to change a standard environment variable, such as USERNAME, created by the operating system because doing so can cause a host of problems. If you want to have an environment variable you can change, create a custom environment variable specifically for your application. Even if you have to copy the value of another environment variable into this custom environment variable, you can be sure you won’t cause problems for other applications if you always use custom environment variables for your application. Starting Other Command Line Applications You can start other applications using IronPython. In fact, Python provides a number of techniques for performing this task. If you’ve worked with a .NET language for a while, you know that the .NET Framework also provides several methods of starting applications. However, most developers want to do something simple with the applications they start as subprocesses. For example, you might want to get the operating system to perform a task that IronPython won’t perform for you directly. IronPython sports a plethora of methods to execute external applications. However, the simplest of these methods is os.popen(). Using this method, you can quickly open an external application, obtain any output it provides, and work with that output in your application. These three steps are all that many developers need. Listing 10-10 shows how to use os.popen() to execute an external application. Listin g 10-10: Starting applications directly in IronPython [/code] # Import the required module. import os # Open a copy of Notepad. os.popen(‘Notepad C:/Test.TXT’) # Use the Dir command to get a directory listing and display it. Listing = os.popen(‘Dir C:\ /OG /ON’) for File in Listing.readlines(): print File, # Pause after the debug session. raw_input(‘nPress any key to continue…’) [/code] This example begins by opening a copy of Notepad with C:/Test.TXT. Notice that the command uses a slash, not a backslash. In many cases, you can use a standard slash to avoid having to use a double backslash (\) in your command. When this command executes, you see a copy of Notepad open with the file loaded. Of course, you need to create C:/Test.TXT before you execute the example to actually see the file loaded into Notepad. In some cases, you need to read the output from a command after it executes. For example, you might want to obtain a directory listing using particular command line switches. The second part of the example shows how to perform this task. When the Dir command returns, Listing 10-10 has a directory listing in it similar to the one shown in Figure 10-15. In this case, you must provide the double backslash because, for some reason, Dir won’t work with the / when called from IronPython. If you really need high-powered application management when working with IronPython, then you want to use the subprocess module, which contains a single method, Popen(). This approach is for those few who really need extreme control over the applications they execute. You can read about this module at. The os module also has a number of popen() versions, ranging from popen() to popen4(). Generally, if popen() won’t meet your needs, it’s probably a good idea to use the subprocess .Popen() method because it provides better support for advanced functionality. Providing Status Information Your administrative application often performs tasks without much user interaction, which means that the user might not even be aware of errors that occur. Consequently, you need to provide some means of reporting status information. The following sections provide a quick overview of some techniques you can use to report status information to the user. Reporting Directly to the User The time honored method of reporting status information to the user is to display it directly onscreen. In fact, most of the applications in this book use this approach. If you know that the user will be watching the display or at least checking it from time-to-time, it’s probably a good idea to provide direct information. Make sure you provide all the details, including error numbers and strings as appropriate. Depending on the skill of the user, you’ll want to provide messages that are both friendly and easy to understand. Otherwise, less-skilled users are apt to do something rash because they don’t understand what the message is telling them. If you know that less skilled users will rely on your application, you should provide a secondary method of reporting status information such as an event log. Log files are also helpful, but can prove troublesome for the administrator to access from a remote location. The Microsoft Management Console (MMC) provides easy methods for administrators to gain access to remote event logs as necessary. You can probably provide a remote paging system or similar contact techniques for the administrator as well. However, such methods are somewhat complex and not directly supported by IronPython through the Python libraries. The implementation of these techniques is outside the scope of this book. However, you’ll probably want to use a .NET Framework methodology, such as the one described at, to perform this task. Creating Log Files At one time, administrators relied on text log files to store information from applications. However, most applications today output complex information that’s hard to read within a text file. If you plan to create log files for your application, you probably want to store them in XML format to make them easy to ready and easy to import into a database. Using the Event Log Many applications rely on the event log as a means to output data to the administrator. Of all of the methods that Microsoft has created for outputting error and status information, the event log has been around the longest and is the most successful. Fortunately, for the IronPython developer, using the event log is extremely easy and it’s the method that you should use most often. Listing 10-11 shows just how easy it is to write an event log entry. Listin g 10-11: Writing an event log entry [code] # Import the required assemblies. from System.Diagnostics import EventLog, EventLogEntryType # Create the event log entry. ThisEntry = EventLog(‘Application’, ‘Main’, ‘SampleApp’) # Write data to the entry. ThisEntry.WriteEntry(‘This is a test!’, EventLogEntryType.Information) # Pause after the debug session. raw_input(‘Event log entry written…’) [/code] The EventLog() constructor accepts a number of different inputs. The form shown in the example defines the log name, machine name, and the application name. In most cases, this is all the information you need to start writing event log entries. After you create ThisEntry, you can use it to begin writing event log entries as needed using the WriteEntry() method. The WriteEntry() is overloaded to accept a number of information formats — the example shows what you’ll commonly use for simple entries. You can see other forms of the WriteEntry() method at .eventlog.writeentry.aspx. In this case, the WriteEntry() provides a message and defines the kind of event log entry to create. You can also create warning, error, success audit, and failure audit messages. Figure 10-16 shows the results of running this example.
https://www.blograby.com/developer/using-ironpython-for-administration-tasks.html
CC-MAIN-2018-39
en
refinedweb
filedb.filestore module¶ Base class¶ - class whoosh.filedb.filestore. Storage¶ Abstract base class for storage objects. A storage object is a virtual flat filesystem, allowing the creation and retrieval of file-like objects ( StructFileobjects). The default implementation ( FileStorage) uses actual files in a directory. All access to files in Whoosh goes through this object. This allows more different forms of storage (for example, in RAM, in a database, in a single file) to be used transparently. For example, to create a FileStorageobject: # Create a storage object st = FileStorage("indexdir") # Create the directory if it doesn't already exist st.create() The Storage.create()method makes it slightly easier to swap storage implementations. The create()method handles set-up of the storage object. For example, FileStorage.create()creates the directory. A database implementation might create tables. This is designed to let you avoid putting implementation-specific setup code in your application. Closes any resources opened by this storage object. For some storage implementations this will be a no-op, but for others it is necessary to release locks and/or prevent leaks, so it’s a good idea to call it when you’re done with a storage object. create()¶ Creates any required implementation-specific resources. For example, a filesystem-based implementation might create a directory, while a database implementation might create tables. For example: from whoosh.filedb.filestore import FileStorage # Create a storage object st = FileStorage("indexdir") # Create any necessary resources st.create() This method returns selfso you can also say: st = FileStorage("indexdir").create() Storage implementations should be written so that calling create() a second time on the same storage create_index(schema, indexname='MAIN', indexclass=None)¶ Creates a new index in this storage. >>> from whoosh import fields >>> from whoosh.filedb.filestore import FileStorage >>> schema = fields.Schema(content=fields.TEXT) >>> # Create the storage directory >>> st = FileStorage.create("indexdir") >>> # Create an index in the storage >>> ix = st.create_index(schema) destroy(*args, **kwargs)¶ Removes any implementation-specific resources related to this storage object. For example, a filesystem-based implementation might delete a directory, and a database implementation might drop tables. The arguments are implementation-specific. file_modified(name)¶ Returns the last-modified time of the given file in this storage (as a “ctime” UNIX timestamp). lock(name)¶ Return a named lock object (implementing .acquire()and .release()methods). Different storage implementations may use different lock types with different guarantees. For example, the RamStorage object uses Python thread locks, while the FileStorage object uses filesystem-based locks that are valid across different processes. open_index(indexname='MAIN', schema=None, indexclass=None)¶ Opens an existing index (created using create_index()) in this storage. >>> from whoosh.filedb.filestore import FileStorage >>> st = FileStorage("indexdir") >>> # Open an index in the storage >>> ix = st.open_index() optimize()¶ Optimizes the storage object. The meaning and cost of “optimizing” will vary by implementation. For example, a database implementation might run a garbage collection procedure on the underlying database. temp_storage(name=None)¶ Creates a new storage object for temporary files. You can call Storage.destroy()on the new storage when you’re finished with it. Implementation classes¶ - class whoosh.filedb.filestore. FileStorage(path, supports_mmap=True, readonly=False, debug=False)¶ Storage object that stores the index as files in a directory on disk. Prior to version 3, the initializer would raise an IOError if the directory did not exist. As of version 3, the object does not check if the directory exists at initialization. This change is to support using the FileStorage.create()method. Helper functions¶ whoosh.filedb.filestore. copy_storage(sourcestore, deststore)¶ Copies the files from the source storage object to the destination storage object using shutil.copyfileobj.
https://whoosh.readthedocs.io/en/latest/api/filedb/filestore.html
CC-MAIN-2018-39
en
refinedweb
What I would like to do, is train a first model $f_{1}(underline{x})$, where $underline{x}$ is a set of features, fix what model 1 has learned, and then train a second model $f_{2}(underline{y})$ where $underline{y}$ is a second set of features. (it’s not really the emphasis of this post, but in case you’re curious as to why I want to do this, see the bottom of the post) My target variable is binary, and I want to maximise (binary) cross-entropy rather than accuracy. While there is nothing intrinsic to this problem that dictates I should use XGBoost, XGBoost is performing favourably compared to other models on the problem of predicting the target when using only the external variables, so I would like to find a way of getting XGBoost to do this. It seems to me, that this will require using a custom cost function, which requires using the generic booster class rather than xgb.XGBClassifier . When using the booster class, it outputs a real number, with no constraint of being between $[0,1]$, so one needs to define $P(z_{i}=1|x_{i})=frac{1}{1+e^{-f_{1}(x_{i})}}$ and then implement binary cross-entropy accordingly (because I’ve got two sets of features denoted by x and y, I’ve somewhat criminally used z to refer to the target) I then train a second custom booster, $f_{2}(underline{y})$ in which $P(z_{i}=1|x_{i}, y_{i})=frac{1}{1+e^{-(f_{1}(x_{i})+f_{2}(y_{i}))}}$ This requires a new custom implemented cost function (it’s a bit hacky, as I don’t think xgboost allows the custom cost function to be passed any arguments other than preds and dmatrix, so after training the first classifier, I save the train predictions in a global variable which I then call in the custom cost function of the second classifier. Not the main point of this post, but if anybody knows a way around this, I’d be super grateful) What happens when I do this, is a little odd (I’m using a validation set and verbose progress printing, as well as early stopping, so I can watch my classifier “get better” as it iterates). Note that for debugging purposes, I am not using a set of external and controllable features, I’m just using a set of features, X, which I artificially subdivide into (x,y), in which I know that x is a suboptimal set of features (as in, a classifier trained on only x performs somewhat worse than a classifier trained on the full X). I thus would expect the combination of classifiers to perform better than the first one (although not necessarily as well as a single classifier trained on the full X) Let’s say Classifier 1 finishes and has a cross-entropy of K. Classifier 2 starts training, and its validation cross-entropy does actually drop for a substantial number of iterations before exiting. But, the problem is that when classifier 2 starts training, on the zeroth iteration, the validation loss is substantially lower than it was when classifier 1 exited. I have a hypothesis as to why this happens, which is that in xgboost, the way the first tree is generated is special. For example one possible way to do xgboost regression, is for the first learner in the ensemble to simply output the mean of the target variable, and then each subsequent learner to learn a correction to this. In general, the theory behind xgboost assumes that corrections to the output will be small, and thus a second order taylor expansion is valid, so for this to be true, the first learner needs to be relatively good. Subsequent trees are multiplied by a “learning rate” to ensure that they only make small corrections, but the first tree is not. This hypothesis is backed up by the fact, that changing the learning rate to something ridiculously small basically doesn’t change the amount by which the loss decreases between the final iteration of classifier 1 and the zeroth iteration of classifier 2. My Actual Question(s) Is there a good way around this? I have two ideas but do not know how to implement them/whether it’s possible. 1: Can I force XGBoost to also multiply the first tree in the ensemble by the learning rate? 2: The XGBoost booster class can take xgb_model as an argument, and boost an already existing classifier (allegedly, this doesn’t have to be an XGB model, so I hear, but I’ve only played around with doing this with an xgb model). The problem here is, that I think the features the two models take need to be the same, as under the hood, XGBoost will be calling model_1.predict(dmatrix_train) and then using that same dmatrix to train model_2, but of course I want to do this with different dmatrices. If you’ve made it this far, thanks for reading, and an even bigger thanks if you can help. Below I’ll provide the actual maths/code details Cost Functions The first classifier $f_{1}(x)$ takes a set of features $underline{x}$ and maps to a real number. We associate this with a probability as discussed above. The corresponding cost function is: $C = sum_{i}z_{i}lnfrac{1}{1+e^{-f_{1}(x_{i})}}+(1-z_{i})ln left(1-frac{1}{1+e^{-f_{1}(x_{i})}}right)$ which rearranges to $sum_{i}f_{1}(x_{i})(z_{i}-1)-ln (1+e^{-f_{1}(x_{i})})$ Similarly, the second classifier takes a set of features $underline{y}$ and maps them to a real number $f_{2}(underline{y})$. The cost associated with the output of this second classifier is given by: $sum_{i}(f_{1}(x_{i})+f_{2}(y_{i}))(z_{i}-1)-ln(1+e^{-(f_{1}(x_{i})+f_{2}(y_{i}))})$ XGboost doesn’t need to be passed the actual cost functions, it needs to be passed the first and second derivatives, and as vectors, i.e. if $C=sum_{i}c_{i}$, XGBoost requires $frac{partial c_{i}}{partial f_{1}(x_{i})}$ and $frac{partial ^{2} c_{i}}{partial ^{2}f_{1}(x_{i})}$ for the first cost function and $frac{partial c_{i}}{partial f_{2}(y_{i})}$ and $frac{partial ^{2} c_{i}}{partial ^{2}f_{2}(y_{i})}$ for the second cost function. I calculate that $frac{partial c_{i}}{partial f_{1}(x_{i})}=z_{i} -1 +frac{1}{1+e^{f_{1}(x_{i})}}$ and $frac{partial ^{2}c_{i}}{partial ^{2}f_{1}(x_{i})}=-frac{e^{f_{1}(x_{i})}}{(1+e^{f_{1}(x_{i})})^{2}}$ and similar expressions for the second cost function. Code The code for my first cost function looks like this: def binary_cross_entropy(preds, dmat): labels = dmat.get_label() f_exp = np.exp(preds) grad = 1 - labels - 1/(1+f_exp) hess = f_exp/np.power(1+f_exp, 2) return grad, hess (note, grad and hess have been multiplied by -1, as I think XGBoost is trying to minimise loss rather than maximise) The cost function for the second classifier looks like: def boosted_binary_cross_entropy(preds, dmat): labels = dmat.get_label() # model_1_preds_train is a global variable exponent = labels + model_1_preds_train f_exp = np.exp(exponent) grad = 1 - labels - 1/(1+f_exp) hess = f_exp/np.power(1+f_exp, 2) return grad, hess (note the hack of requiring the global variable). Similarly, in order to watch your classifier’s performance on an evaluation set, a rather odd quirk of xgboost is that you need to re-implement the cost function so that it outputs the cost as a scalar (rather than its gradients as a vector). I’ll provide these here for completeness (another weird quirk is that you need to give the metric a name, which is what the strings are about): def custom_metric(preds, dmat): labels = dmat.get_label() return 'custom_cross_ent',np.mean(-1*np.log(1+np.exp(-1*preds)) + preds*(labels-1)) and def boosted_custom_metric(preds, dmat): labels = dmat.get_label() boosted_preds = preds + model_1_preds_eval return 'boosted_cross_entropy',np.mean(boosted_preds*(labels-1) - np.log(1 + np.exp(-1*boosted_preds))) Then the code which actually trains the first model (there are three decision matrices, dtrain, deval and dtest): params = {'max_depth': 7, 'n_jobs':-1, 'learning_rate':0.1, 'reg_lambda':1} bst_to_boost = xgboost.train(params, dtrain, obj=binary_cross_entropy, feval=custom_metric, num_boost_round=3000, early_stopping_rounds=5, evals=[(deval, 'eval')], maximize=True) After training the first model, I assign model_1_preds_train and model_1_preds_eval, as required by boosted_binary_cross_entropy and boosted_custom_metric respectively: model_1_preds_train = bst_to_boost.predict(dtrain) model_1_preds_eval = bst_to_boost.predict(deval) Next, I re-assign dtrain, deval and dtest (I won’t include this code, it’s just data manipluation), maybe this is sloppy I should call them dtrain_2 etc, but I haven’t… and the code which trains the second model boosted_model = xgboost.train(params, dtrain, obj=boosted_binary_cross_entropy, feval=boosted_custom_metric, num_boost_round=3000, early_stopping_rounds=5, evals=[(deval, 'eval')], maximize=True) I think that’s about all of the (useful) code I can provide. Why am I interested in this I’m looking to learn how a set of actions in the past have changed the outcome (e.g. marketing attribution). One way of doing this which is what I’m investigating here, is to train a model to learn how the target variable is related to external factors which I have no control over, denoted by $underline{x}$, and only when all of this dependence has been learned, do I train a second model which learns how factors which I can control/interventions, $underline{y}$ have affected the outcome. Implicit is the assumption that the external factors have a (much) larger effect than the ones I have control over, and that in the historical data, $underline{x}$ and $underline{y}$ might be inter-correlated (interventions have not been independent of environmental factors). If you train one model with all the features at once, you can’t control to what extent the model chooses to learn information contained within both $underline{x}$ and $underline{y}$ (because they are correlated) from each. Of course you want the model to learn as much from $underline{x}$ as possible, as you don’t have any influence on external factors.
https://alltopicall.com/tag/another/
CC-MAIN-2018-39
en
refinedweb
On Sat, Sep 16, 2000 at 11:39:45PM +0200, Henner Eisen wrote:> int netif_would_drop(dev)> {> return (queue->input_pkt_queue.qlen > netdev_max_backlog)> || ( (queue->input_pkt_queue.qlen) && (queue->throttle) )> }> > would fulfil those requirements.It would just be racy. You test, get a not drop and then another differentinterrupt would deliver another packet before you can and fill the queue.Jamal's extended netif_rx probably makes more sense, because it can be atomic.-Andi-To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgPlease read the FAQ at
https://lkml.org/lkml/2000/9/16/12
CC-MAIN-2018-39
en
refinedweb
Questions about decorators, *args, and **kwargs come up often, but don’t always have concise and comprehensive answers. Official documentation tends to explain things in terms of textbook definitions and proper syntax, with an example or two if you’re lucky. If you already know a programming language or two that may be fine, but a newcomer will want to know why we use these things along with some specific examples. I put together two demo scripts that embellishes some Stackexchange answers for this reason. Knowing these patterns will add to your programming maturity and help you start to understand the machinery of this high level language. Let’s begin with decorators. Flask app in 5 lines Decorators These are for when you need to alter the function of some existing object or class (functions are first class objects in Python). A decorator is accomplished in Python by placing an “@somefunction” line above the original function you wish to modify. Here is an example decorator script. You might want to use this pattern if you have a function to call, but also want to ensure other things or done immediately before or after (and you don’t want to rewrite it with a slight change – DRY!). For example, many web apps will use a “@require_login” decorator to ensure users are logged in before directing them to a page. You can also use a decorator to benchmark your functions if you are concerned about performance. def power_decorator(original_function): def wrapper(passed_in_input): print('--- in the wrapper function ---') print('loop on number', passed_in_input) return 2 ** original_function(passed_in_input) return wrapper # the power_decorator function requires a return to carry back the "wrapped up" print_number call @power_decorator def print_number') if __name__ == '__main__': demo() This example was inspired by a Real Python post that has both simpler and more complex snippets. I changed the structure and variables to make it more clear what’s happening in the code every step of the way. Paste this example into your text editor or whatever environment you’re using and play around with it. Or just run it. You should get output like this: *I took out the “about to return the result” print statement. I think this example makes it more clear where the code flows step by step. *args and **kwargs So you might know by now that *args is used when you’re not sure how many parameters will be passed into the function you’re creating (many data science libraries make ample use of optional parameters since they need to satisfy so many conventions). And you might have even gleaned that **kwargs is used for named arguments in the form of a key:value dictionary mapping (and that it can be used when calling the function or when defining it). But how and why are they used? And how are they used alongside other arguments in such a way that the interpreter knows what goes where? Try running the script below and see the output, it speaks for itself. # a collection of examples found on the internet # adapted from: def print_everything(*args): for count, thing in enumerate(args): print('{0}. {1}'.format(count, thing)) print('\n') def table_things(**kwargs): for name, value in kwargs.items(): print( '{0} = {1}'.format(name, value)) print('\n') # adapted from: def demo_func(formalarg, b='default string value', *args, key1, key2): for x in formalarg, b, args, key1, key2: print(x) print('\n') def demo_func_2(formalarg, b='default string value', *args, **kwargs): for x in formalarg, b, args: print(x) for key in kwargs: print("key: %s, value: " % key, kwargs[key]) print('\n') def demo(): print('demo functions that use *args and *kwargs' + '\n') # 1 print_everything('all', 'the', 'things') # 2 table_things(drink='red wine', entre='spaghetti', side=None) # 3 test_dict = {"key1": "value1", "key2": "value2"} demo_func("this is a regular parameter/argument", 'b_override', '*args argument 1', '*args argument 2', **test_dict) # 4 demo_func_2("this is a regular parameter/argument", 'b_override', '*args argument 1', '*args argument 2', hobby='Python', name='John Doe', bonustip='kwargs is a dictionary, and dictionaries are unordered. so dont rely on the order of your kwargs!') # --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- # print('todo: extend this concept to classes and their subclasses') # # --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- # if __name__ == '__main__': demo() You can see how each function processes the input arguments by simply running the .py file. In short, here’s what’s happening: - *args is the only parameter, but three strings are passed in and able to be handled separately with enumerate (creating an integer for each string). - We don’t pass in a dictionary, but rather a series of named values. Once inside the function, the variable **kwargs can be treated like a dictionary with .items() to access the key and value pairs. - Ok now we have a mix of regular parameters with *args and **kwargs. The first two are nothing special here, I just put them there so it resembles a real function you might see. The first two arguments slot into the first two defined parameters. That leaves 2 unaccounted parameters with 3 arguments (‘*args argument 1’, ‘*args argument 2’, **test_dict) being passed in. **kwargs is used with the function call, so any parameter names that match the dictionary key will be evaluated as the value to the key. The remaining arguments go to *args since it can accept a variable length of inputs. - The first two arguments are matched up with the first two slots in the function just like in the previous example, leaving just *args and **kwargs to be matched up next. **kwargs all have assigned values, so it takes any inputs with assignment (x=’some string’). That leaves any danglers to *args. Notes: - Default arguments should come after the “regular” arguments or formal arguments, and be before *args and **kwargs. Happy coding. Update 7.23.18: I’m adding another version of this demo decorator for my own sake and for the sake of perhaps explaining it a better way. This is more or less the same code, but the wrapper function returns a tuple so you can see the variables side by side. Finally, I call a semantically identical function without a decorator to show what the output would look like otherwise. def power_decorator(original_function): def wrapper(passed_in_input): print('--- in the wrapper function ---') print('loop on number', passed_in_input) # assign a variable to the output of the original function, or the function that was decorated n = original_function(passed_in_input) # returns a tuple of the exponent, n, and the result of 2^n return (n, 2 ** n) return wrapper # the power_decorator function requires a return to carry back the "wrapped up" print_number call @power_decorator def print_number(input_number): print('--- back in the original function ---') return input_number # no decorator here anymore def print_number_plain') print("Now, without a decorator. Using the last iteration of the for loop (8). \n") print(print_number_plain(num)) if __name__ == '__main__': demo()
http://www.adamantine.me/2017/12/14/python-decorators-args-and-kwargs-explained/
CC-MAIN-2018-39
en
refinedweb
I would like to store a character (in order to compare it with other characters). If I declare the variable like this : char c = 'é'; everything works well, but I get these warnings : warning: multi-character character constant [-Wmultichar] char c = 'é'; ^ ii.c:12:3: warning: overflow in implicit constant conversion [-Woverflow] char c = 'é'; I think I understand why there is these warnings, but I wonder why does it still work? And should I define it like this : int d = 'é'; although it takes more space in memory? Moreover, I also get the warning below with this declaration : warning: multi-character character constant [-Wmultichar] int d = 'é'; Do I miss something? Thanks ;) é has the Unicode code point 0xE9, the UTF-8 encoding is "\xc3\xa9". I assume your source file is encoded in UTF-8, so char c = 'é'; is (roughly) equivalent to char c = '\xc3\xa9'; How such character constants are treated is implementation-defined. For GCC:'). Hence, 'é' has the value 0xC3A9, which fits into an int (at least for 32-bit int), but not into an (8-bit) char, so the conversion to char is again implementation-defined: For conversion to a type of width N, the value is reduced modulo 2N to be within range of the type; no signal is raised. This gives (with signed char) #include <stdio.h> int main(void) { printf("%d %d\n", 'é', (char)'é'); if((char)'é' == (char)'©') puts("(char)'é' == (char)'©'"); } Output: 50089 -87 (char)'é' == (char)'©' 50089 is 0xC3A9, 87 is 0xA9. So you lose information when storing é into a char (there are characters like © which compare equal to é). You can wchar_t, an implementation-dependent wide character type which is 4 byte on Linux holding UTF-32: wchar_t c = L'é';. You can convert them to the locale-specific multibyte-encoding (probably UTF-8, but you'll need to set the locale before, see setlocale; note, that changing the locale may change the behaviour of functions like isalphaor printf) by wcrtombor use them directly and also use wide strings (use the Lprefix to get wide character string literals) const char *c = "é";or const char *c = "\u00e9";or const char *c = "\xc3\xa9;", with possibly different semantics; for C11, perhaps also look for UTF-8 string literals and the u8prefix) Note, that file streams have an orientation (cf. fwide). HTH Try using wchar_t rather than char. char is a single byte, which is appropriate for ASCII but not for multi-byte character sets such as UTF-8. Also, flag your character literal as being a wide character rather than a narrow character: #include <wchar.h> ... wchar_t c = L'é';
http://www.dlxedu.com/askdetail/3/2754d688f398f4de4608ceca53ad7a91.html
CC-MAIN-2018-39
en
refinedweb
Creating a Blogging Platform in Java: Part 2 In the second part of his series about creating a blogging platform in Java, Grzegorz Ziemonski covers his vision, the start of his project, and building. Join the DZone community and get the full member experience.Join For Free This week I'll continue the topic of creating a Blogging Platform. If you're not familiar with it, you can read the first part here. I didn't have too much time to work on the project, but I got the very basic setup working, so we can look deeper into that. Vision Changes I thought about the general vision a bit more and even discussed it with a friend, which led me to two important changes in my short-term project vision: - I'm not going to avoid libraries and frameworks. Firstly, because these will seriously boost my productivity. Secondly, because I want to create something similar to the projects you can see in your work environment. Doing anything else would be selfish. - I think I won't need two elements that I had put in my initial design — "content keeper" and "content watch." The strive for simplicity suggests to start with something more straightforward and see how it works. Starting the Project As I'm pretty proficient with Spring Boot and Spring in general, I decided to start the project using Spring Boot via Spring Initializr. After generating the project, the obvious first step was to create the Application class: @SpringBootApplication public class Application { public static void main(String[] args) { SpringApplication.run(Application.class, args); } } I also added a HomeController, rendering a very basic view to make sure that Spring Boot and Freemarker are configured correctly: @Controller public class HomeController { @RequestMapping("/") public String hello(Model model) { model.addAttribute("name", "World"); return "home"; } } This gives me something similar to the concept of a walking skeleton. That view is not an end-to-end function, but enough to say that things are working. Build Process As I've written before, I'm using Gradle as the build tool. I'm kind of new to the tool, so any tips are welcome. Fortunately, I got the basic configuration generated by Spring Initializr. For various tests, I'll be using Spock. Configuring that requires just adding the groovy plugin and some dependencies: apply plugin: 'groovy' ... dependencies { ... // spock compile 'org.codehaus.groovy:groovy-all:2.4.1' testCompile 'org.spockframework:spock-core:1.0-groovy-2.4' testRuntime 'cglib:cglib-nodep:3.1' testRuntime 'org.objenesis:objenesis:2.1' } I created a simple test to make sure that Gradle actually executes it: class HomeControllerSpec extends Specification { def "test"() { given: println("Test running again!") expect: true } } The last thing to do with the build was to set up Travis CI. Travis gives free CI to all open-source projects on GitHub and it's very easy to set up. All you have to do is create a .travis.yml file with contents similar to these: language: java jdk: - oraclejdk8 before_cache: - rm -f $HOME/.gradle/caches/modules-2/modules-2.lock - rm -fr $HOME/.gradle/caches/*/plugin-resolution/ cache: directories: - $HOME/.gradle/caches/ - $HOME/.gradle/wrapper/ The first three lines are obvious. The before_cache and cache parts are optional, but docs suggested to use them. CI from the very beginning might not be absolutely necessary as long as I'm the only person working on the project, but it would be in any other case. Even in my case, it helps by proving that the build passes on other computers than mine. Cloud Deployment Every project has to be deployed somewhere and this one is no exception. I chose Heroku as it has nice GitHub-Travis integration. I just add an app using my GitHub account, create a Procfile and I have Continous Delivery with a server for free. The Procfile looks like this: web: java $JAVA_OPTS -Dserver.port=$PORT -jar build/libs/blogging-platform.jar I simplified the vision and decided to use frameworks and libraries freely. I generated a Gradle configuration with Spring Boot dependencies using Spring Initializr. To see that the configuration is correct, I created a simple controller with a view. Tests will be written in Spock, executed regularly by Travis CI. The application will be continously deployed to Heroku. You can see it working here (it might take a while to load, because it's on a free dyno). Published at DZone with permission of Grzegorz Ziemoński, DZone MVB. See the original article here. Opinions expressed by DZone contributors are their own.
https://dzone.com/articles/creating-a-blogging-platform-in-java-2
CC-MAIN-2022-33
en
refinedweb
Web Developer Welcome to the first part of my JSWorld Conference 2022 summary series, in which I share a summary of all the talks in four parts with you. After this part which contains the first three talks, you can read the second part here,: Colin Ihrin - Engineer at Deno Deno is a simple, modern, and secure runtime for JavaScript and TypeScript similar to Node.js that uses V8 and is built in Rust. It is created by Ryan Dahl, the original inventor of Node.js, who talked about JSConf EU in 2018, where he talks about 10 Things he regrets about node.js. Some of the topics he talked about were module resolution through node_modules, the lack of attention to security, and various deviations from how browser’s worked. and he set out to fix all these mistakes in Deno. In this talk, Colin will explore Deno’s tech stack that allows the project to move fast, why Deno is betting on browser compatibility, and how they intend to provide compatibility with the existing JavaScript ecosystem. Deno - A modern runtime for JavaScript and TypeScript Node.js has been around since 2009, predates much of modern JavaScript like CommonJS vs. ECMAScript Modules and Callbacks vs. Promises, and does have a huge ecosystem with a lot of legacy code that can slow progress and standards compliance. The goal of Deno was to address some of the shortcomings of Node. June 2018: Deno introduced at JSConf EU August 2018: Deno v0.1.0 released, rewritten in Rust — previously written in Golang. Having to garbage collected languages inside the same process might not be the best thing. May 2020: Deno v1.0.0 released March 2021: The Deno company was announced June 2021: Deno Deploy announced Q3 2022: Deno Deploy reaches GA Deno has caught up with node rapidly in terms of Github starts: Deno is built from Rust crates. deno_core includes rusty_v8 and deno_ops.V8 is a C++ project, so they came up with a layer surrounding that called rusty_v8, and now everything outside of v8 itself is Rust code. There is also a layer called deno_ops which provides an API for performing operations with rusty-v8. deno_runtime includes deno_core mentioned above plus deno_console, deno_crypto, deno_fetch, deno_web, Tokio, etc. Deno CLI Includes deno_runtime and also an integrated toolchain. This is what you download and run — on Linux, macOS, and Windows — and it is distributed as a single executable file. Everything you need to run Deno programs is included. The idea is that Deno is pretty much battery included compare to historically Node.js which was not. Deno - A modern runtime for JavaScript and TypeScript Modules maintained by the core team for fs, http, streams, uuid, wasi, etc. It is similar to Core modules in Node.js, and it’s guaranteed to be maintained and work with Deno. It is recommended to pin a version because there may be breaking changes with new versions. Example: import { copy } from "[email protected]/fs/mod.ts"; await copy("source.txt", "destination.txt"); It works like in a browser, there is no CommonJS and it’s ESM, and there is no package.json, node_modules, or index.js. Runtime fetches, caches, and compiles automatically, so there is no separate npm install or things like that. deno.land/x is a hosting service for Deno scripts. import { serve } from "[email protected]/http/server.ts"; function handler(_req: Request): Response { return new Response("Hello, Wolrd!"); } serve(handler); // deno run --allow-net server.ts Deno prefers web platform APIs when possible. A number of the APIs that are supported: URL, FormData, fetch, Blob, console, File, TextEncoder, WebSocket, WebGPU, Local Storage, etc. Deno does have these tools out of the box in comparison to Node.js that doesn't have, like Linter, Formatter, Version Manager, Documentation Gen., Task Runner, Bundler, Packager, and TypeScript. You can see all subcommands with deno -help Deno does support TypeScript out of the box, caches transpired files for future use and The TypeScript compiler is snapshotted onto Deno. SWC is used for transpilation and bundling which is: 20x faster in single threaded app execution and up to 70x faster in multi threaded than babel. Deno cannot access the outside world by default and Permission must be explicitly granted. Some of Permissions CLI Flags: —alow-env, —allow-hrtime, —allow-net, —allow-ffi, —allow-read, —allow-run, —allow-write, —allow-all/ -A, etc. createRequire(...) is provided to create a require function for loading CJS modules. It also sets supported globals. import { createRequire } from "[email protected]/node/module.ts"; const require = createRequire(import.meta.url); // Loads native module polyfill. const path = require("path"); // Loads extensionless module. const cjsModule = require("./my_mod"); // Visits node_modules. const leftPad = require("left-pad"); // deno run —compat —unstable —alow-read ./node-code.mjs up to date V8, Web crypto, reportError API, Mocking utilities, Snapshot testing, etc. Announcing the Web-interoperable Runtimes Community Group There is a new W3C Community Group called Web-interoperable Runtimes Community Group also known as WinterCG, which is a Collaboration between non-browser JavaScript runtimes. This group is not aiming to create new APIs, but give a voice to server side runtimes at the table where all the discussions about specifications happen. Deno Deploy is essentially a globally distributed JavaScript VM, basically a V8 isolate cloud. It’s something similar to lambda functions but at the Edge (We will talk about Edge) and is currently available in over 30 regions globally and still growing. It is Built on the same open-source web APIs but with some tweaks for the cloud, also is integrated with netlify functions and supabase functions. Negar Jamalifard - Software Developer at Lightspeed Commerce CSS Houdini is one of the most recent changes in the CSS world that could change a lot of our old methods for updating CSS.. —[MDN] In this talk, she started with the basics of how browsers normally render CSS and then got to the point that how it would be different with the upcoming changes in CSS. CSS Houdini - CSS: Cascading Style Sheets | MDN One of the important parts of the browsers are rendering engines, for example: Blink in Chromium-based browsers Gecko in Firefox Webkit in Safari All of These browsers go through the same set of flow from the point that they receive an initial CSS file to the point that they can actually render something on the screen. Parsing Whenever the rendering engine encounters a link to a CSS file in an HTML file, it will download the CSS file in the background and then go through this process: Bytes → Characters → Tokens → Nodes → Object Model and it will generate two Object Models: DOM (Document Object Model) CSSOM (CSS Object Model) At this stage, the browser has all the information about the data structure of the page and the styling of the page, but to be able to render something on the screen, it needs to merge these two pieces of information. That’s what happens in the next step: Render tree Render tree is another tree-like structure of all the visible elements that suppose to be rendered on the screen. Layout At this stage, the browser tries to find the geometry and coordination of each element within the viewport and basically draws some sort of map for itself to know where the elements should go. Paint It paints Backgrounds, border colors, etc. and at this point, we see something on the screen. Within all of these Phases, there are only two points that developers have access to an API: DOM and some parts of CSSOM. A couple of years ago a group of people from W3C and other web communities agreed on a manifesto called Extensible Web Manifesto. The main idea of this manifesto is to move the focus from creating high-level APIs to providing low-level APIs and exposing the underlying layers. As result, a lot of changes and new APIs came onto the web, and the name of all these new APIs in the CSS world is CSS Houdini and it is going to enable developers to have access to every phase of the rendering process and extend its behavior. Typed OM is an upgraded enhanced version of the CSSOM. It will add types to CSS values in the form of JavaScript objects. On top of that, it will provide a set of semantic APIs that makes working with CSS in JavaScript more pleasant. const fontSize = getComputedStyle(box).getPropertyValue("font-size"); // 16px (string) const fontSize = box.computedStyleMap().get("font-size") const x = CSS.percent(-50); const y = CSS.percent(-50); const translation = new CSSTranslate(x, y); box.attributeStyleMap.set( "transform", new CSSTransformValue([translation]) ); Custom property or also known as CSS Variables was a cool feature, but it had some downsides. For example, the browser didn’t know how to animate them, because it didn’t have enough information about these properties to animate them. With all the information that we can provide to the browsers through this API, they can now apply transitions and animations to these custom properties. CSS.registerProperty({ name: '--my-color', syntax: '<color>', inherits: false, initialValue: 'pink', }); @property --my-color { syntax: '<color>'; inherits: false; initialValue: 'pink'; } We can now write scripts and pass them to the rendering engine and the rendering engine run it on a specific rendering phase, but we can not write those script in the main javascript body for two reasons: These scripts should not have access to the DOM environment, because when a rendering engine is for example at the layout phase, it assumes that DOM is not going to change. The rendering engine should be able to run these scripts on other threads than the main thread. The Solution is a Woklet. Worklets (pretty much like web workers) are independent scripts that run during the rendering process. They are run by the rendering engine and are thread-agnostic. Houdini has introduced three worklets: Paint Worklet or Paint API Animation Worklet or Animation API Layout Worklet or Layout API and that’s how we use it: registerPaint CSS.paintWorklet.addModule('worklet.js') A paint Worklet: class ImagePainter { static get inputProperties() { return ["--myVariable"]; } static get inputArguments() { return ["<color>"]; } static get contextOptions() { return {alpha: true}; } // The only mandatory method paint( ctx, // CanvasRenderingContext2D size, // The geometry of the painting area properties, // List of custom propertes args // List of arguments passed to paint(..) ) { // Painting logic } } // Register our class under a specific name registerPaint('my-image', ImagePainter); Writing a paint Worklet is very much like writing canvas, but what makes it super interesting is that from now on the creative developers can build these pain Worklets and then share them as npm packages and then lazy developers like me can just go ahead and install them in our apps and as easy as that you have new possibilities in your css, its like having plugins in your css. Useful Links: Dexter (Alexander Essleink) - Senior Frontend Engineer at Passionate People As a developer with a passion for those sweet fresh stacks who likes to play around new technologies, Dexter goes over his "perfect" stack, using graphQL, SvelteKit, Docker, and Github. He likes to control the things that he makes to be able to understand all the parts to know how to fix and change them. I've been honing this stack for many projects, and it's a dream. In the developing I like to be in the flow state when you not really get in to the details. It allows me to keep creating new things and to keep playing around with those things. A simple VPS dunning Debian. I do some things with the clouds, but mostly i just like to run on my own server with the versions i control and i got to choose who accesses my systems. He uses containers to separate parts and docker-compose to control the bits. He believes this is: The one database to rule them all, its Powerful, “scaleable” and functional. to manage the data in Postgres: Svelte I like how easy it is to get started and to make some components and to get growing with it. Gitlab or Github CI Then he shows some example projects and how these Tech stack works, which is hard to summarize in this article, but let me know if you want me to dive deeper into it in another post. In the end, he talks about serverless and the reason why he is not a fan of that: Because you are not in control of where your software is running, your file system calls, what api limit you have or what access control there is. He believes often it’s not necessary to worry about the scalability that serverless gives you because “You are not Google”. The most of the projects we build are not necessary to be in google scale. Its just simple things for 100 or 1000 people, a simple website with less traffic, And then if you are being successful its always easier to scale than to scale primilarly, to learn about all the details and limits.. If you control all the parts or make all the parts really simple it would be much easier and you can be much more creative. I hope you enjoyed this part and it can be as valuable to you as it was to me. You can read the second part here, the third part here, and the last part here, where I summarized the rest of the talks which are about: Encode, Stream, and Manage Videos With One Simple Platform
https://hackernoon.com/jsworld-conference-2022-part-i?ref=hackernoon.com
CC-MAIN-2022-33
en
refinedweb
Summary Applies the results of a least squares adjustment to parcel fabric feature classes. Least squares adjustment results stored in the AdjustmentLines and AdjustmentPoints feature classes are applied to the corresponding parcel line, connection line, and parcel fabric point feature classes. Use the Analyze Parcels By Least Squares Adjustment tool to run a least-squares adjustment on parcels and store the results in adjustment feature classes. Usage The tool uses the Point ID field in the AdjustmentPoints feature class to locate the corresponding points to update in the parcel fabric points feature class. Parcel fabric points are moved to the locations of the adjustment points if the distance between the points (coordinate shift) is more than the specified Movement Tolerance parameter value. The tool uses the Line ID and Source fields in the AdjustmentLines feature class to locate the corresponding lines in the parcel type line or connection line feature classes in the parcel fabric. If the endpoints of the lines were updated with the locations from the AdjustmentPoints feature class, the geometries of the parcel fabric lines are updated to lie between the updated points. Note: The COGO dimensions of the lines do not change. If the Update Source field in the AdjustmentPoints feature class is set to No, the corresponding point in the parcel fabric Points feature class will not be updated. This tool does not honor selections in the map. Syntax arcpy.parcel.ApplyParcelLeastSquaresAdjustment(in_parcel_fabric, {movement_tolerance}, {update_attributes}) Derived Output Code sample The following Python window script demonstrates how to use the ApplyParcelLeastSquaresAdjustment function to apply the results of a least squares analysis to the parcel fabric in immediate mode: import arcpy arcpy.parcel.ApplyParcelLeastSquaresAdjustment('c:/Parcels/Database.gdb/CountyParcels/CountyFabric', 0.05, 'NO_UPDATE_ATTRIBUTES') Environments Licensing information - Basic: No - Standard: Yes - Advanced: Yes
https://pro.arcgis.com/en/pro-app/2.7/tool-reference/parcel/applyparcelleastsquaresadjustment.htm
CC-MAIN-2022-33
en
refinedweb
). So the basic idea is to embed the expression to be compiled into a C# class and to compile that class. The compiled class will be called each time the expression must be evaluated. To increase performance, some pre-compilation of Linq Expressions will be applied. Here we go… Embed the expression into C# code As stated before, an expression has the form “row.Price * row.Quantity”. We embedded that code into the following C# code: using System; public class CompiledExpression { public static object Run(dynamic row2) { Func<dynamic, object> func = row => row.Price * row.Quantity; return func(row2); } } I guess that every C# developer already sees what we can do with this construct. It allows us to support even more complex expressions when they are surrounded with curly brackets. For example, we could rewrite the expression as following: { var price = row.Price; var qty = row.Quantity; var result = price * qty; return result; } Compile the generated C# code Now that we have generated C# code containing the expression, we can compile it. We will compile it using Roslyn. public static Assembly Compile(params string[] sources) { var assemblyFileName = "gen" + Guid.NewGuid().ToString().Replace("-", "") + ".dll"; var compilation = CSharpCompilation.Create(assemblyFileName, options: new CSharpCompilationOptions(OutputKind.DynamicallyLinkedLibrary), syntaxTrees: from source in sources select CSharpSyntaxTree.ParseText(source), references: new[] { new MetadataFileReference(typeof(object).Assembly.Location), new MetadataFileReference(typeof(RuntimeBinderException).Assembly.Location), new MetadataFileReference(typeof(System.Runtime.CompilerServices.DynamicAttribute).Assembly.Location) }); EmitResult emitResult; using (var ms = new MemoryStream()) { emitResult = compilation.Emit(ms); if (emitResult.Success) { var assembly = Assembly.Load(ms.GetBuffer()); return assembly; } } var message = string.Join("\r\n", emitResult.Diagnostics); throw new ApplicationException(message); } The code above is fairly easy to understand, I think, so I will skip further explanations. The good news is that when this method is successfully executed, it returns an assembly with one single class named “CompiledExpression” taht has on single static method named “Run”. So we can easily use this assembly right now: object row = ... // this object contains the runtime data string code = ... // this is the generated C# code var assembly = Compile(code); var type = assembly.GetType("CompiledExpression"); var method = type.GetMethod("Run"); var result = method.Invoke(null, new object[] { row }); Console.WriteLine("Hurray, the expression returned " + result); Apply pre-compiled Linq Expressions The code above relies heavily on Reflection. Depending on the performance requirements for your application, you might want to speed things up a little using pre-compilation: var assembly = Compile(code); var type = assembly.GetType("CompiledExpression"); var method = type.GetMethod("Run"); var func = (Func<DataRow, object>)method.CreateDelegate(typeof(Func<DataRow, object>)); var row = ... // get the runtime data var result = func(row); Of course, the whole pre-compilation thing only makes sense if the resulting delegate gets cached, e.g. in a static dictionary. In this post I have described a quick and dirty method to implement C# expressions. Well, it isn’t that dirty, actually. For real-world scenarios, the generated C# code could be more sophisticated, but this method is reliable and fast. You could alter the compiler error reporting and re-calculate line and column numbers so that they refer to the actual expression and not to the generated C# code. If you need more separation between the expression and your AppDomain, you can of course load the genearted assembly in a new AppDomain and call it there. Please see this post for details and remember that we have not saved the generated assembly to disk. So instead of loading the assembly from file as shown in that post, you must first load the assembly bytes using the AppDomain.Load method, then load the desired type by specifying the assembly name instead of the assembly location by using CreateInstanceAndUnwrap instead of AppDomain.CreateInstanceFromAndUnwrap. Thanks for perfect tutorial. I’m trying to load created assembly into new AppDomain. That not works: newAppDomain.Load(byte[]) throws FileNotFoundException. Has you tried that way? Based on your comment, I added a new post. I hope it helps. Please look here:.
https://softwareproduction.eu/2014/05/23/roslyn-compile-c-expressions-without-using-the-scripting-api/
CC-MAIN-2022-33
en
refinedweb
Write data to a stream at the given offset. #include <zircon/syscalls.h> zx_status_t zx_stream_writev_at(zx_handle_t handle, uint32_t options, zx_off_t offset, const zx_iovec_t* vector, size_t num_vector, size_t* actual); zx_stream_writev_at() attempts to write bytes to the stream, starting at the given offset, from the buffers specified by vector and num_vector. If successful, the number of bytes actually written are return via actual. If the write operation would write beyond the end of the stream, the function will attempt to increase the content size of the stream in order to receive the given data, filling any new, unwritten content with zero bytes. If the resize operation fails after some amount of data was written to the stream, the function will return successfully. If no bytes were written to stream, the operation will return ZX_ERR_FILE_BIG or ZX_ERR_NO_SPACE, as appropriate._WRITE. zx_stream_writev_at() returns ZX_OK on success, and writes into actual (if non-NULL) the exact number of bytes written. ZX_ERR_BAD_HANDLE handle is not a valid handle. ZX_ERR_WRONG_TYPE handle is not a stream handle. ZX_ERR_ACCESS_DENIED handle does not have the ZX_RIGHT_WRITE right. ZX_ERR_INVALID_ARGS vector is an invalid zx_iovec_t or options has an unsupported bit set to 1. ZX_ERR_NOT_FOUND the vector address, or an address specified within vector does not map to address in address space. ZX_ERR_BAD_STATE the underlying data source cannot be written. ZX_ERR_FILE_BIG the stream has exceeded a predefined maximum size limit. ZX_ERR_NO_SPACE the underlying storage medium does not have sufficient space. zx_stream_create() zx_stream_readv() zx_stream_readv_at() zx_stream_seek() zx_stream_writev()
https://fuchsia.googlesource.com/fuchsia/+/refs/heads/releases/f1r/docs/reference/syscalls/stream_writev_at.md
CC-MAIN-2022-33
en
refinedweb
replaced the glibc-2.1.3 version of fnmatch with the tar-1.1.13 version for SunOS 4.1 portability better configuration for fnmatch CODE_ADDRESS for SPARC can now deal with primitives in direct threading 1: /* Copyright (C) 1991, 1992, 1993 Free Software Foundation, Inc. 2: 3: NOTE: The canonical source of this file is maintained with the GNU C Library. 4: Bugs can be reported to bug-glibc@prep.ai.mit.edu. 5: 6: This program is free software; you can redistribute it and/or modify it 7: under the terms of the GNU General Public License as published by the 8: Free Software Foundation; either version 2, or (at your option) any 9: Foundation, 18: Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. */ 19: 20: #ifndef _FNMATCH_H 21: 22: #define _FNMATCH_H 1 23: 24: #ifdef __cplusplus 25: extern "C" { 26: #endif 27: 28: #if defined (__cplusplus) || (defined (__STDC__) && __STDC__) 29: #undef __P 30: #define __P(protos) protos 31: #else /* Not C++ or ANSI C. */ 32: #undef __P 33: #define __P(protos) () 34: /* We can get away without defining `const' here only because in this file 35: it is used only inside the prototype for `fnmatch', which is elided in 36: non-ANSI C where `const' is problematical. */ 37: #endif /* C++ or ANSI C. */ 38: 39: 40: /* We #undef these before defining them because some losing systems 41: (HP-UX A.08.07 for example) define these in <unistd.h>. */ 42: #undef FNM_PATHNAME 43: #undef FNM_NOESCAPE 44: #undef FNM_PERIOD 45: 46: /* Bits set in the FLAGS argument to `fnmatch'. */ 47: #define FNM_PATHNAME (1 << 0) /* No wildcard can ever match `/'. */ 48: #define FNM_NOESCAPE (1 << 1) /* Backslashes don't quote special chars. */ 49: #define FNM_PERIOD (1 << 2) /* Leading `.' is matched only explicitly. */ 50: 51: #if !defined (_POSIX_C_SOURCE) || _POSIX_C_SOURCE < 2 || defined (_GNU_SOURCE) 52: #define FNM_FILE_NAME FNM_PATHNAME /* Preferred GNU name. */ 53: #define FNM_LEADING_DIR (1 << 3) /* Ignore `/...' after a match. */ 54: #define FNM_CASEFOLD (1 << 4) /* Compare without regard to case. */ 55: #endif 56: 57: /* Value returned by `fnmatch' if STRING does not match PATTERN. */ 58: #define FNM_NOMATCH 1 59: 60: /* Match STRING against the filename pattern PATTERN, 61: returning zero if it matches, FNM_NOMATCH if not. */ 62: extern int fnmatch __P ((const char *__pattern, const char *__string, 63: int __flags)); 64: 65: #ifdef __cplusplus 66: } 67: #endif 68: 69: #endif /* fnmatch.h */
https://www.complang.tuwien.ac.at/cvsweb/cgi-bin/cvsweb/gforth/engine/fnmatch.h?rev=1.2;sortby=log;f=h;only_with_tag=v0-6-0;ln=1
CC-MAIN-2022-33
en
refinedweb
table of contents NAME¶ stdarg, va_start, va_arg, va_end, va_copy - variable argument lists SYNOPSIS¶ #include <stdarg.h> void va_start(va_list ap, last); type va_arg(va_list ap, type); void va_end(va_list ap); void va_copy(va_list dest, va_list src); DESCRIPTION¶()¶ The va_start() macro initializes ap for subsequent use by va_arg() and va_end(), and must be called first. The argument last()¶ The Each. An obvious implementation would have a va_list be instead, since that was the name used in the draft proposal. ATTRIBUTES¶ For an explanation of the terms used in this section, see attributes(7). CONFORMING TO¶ The va_start(), va_arg(), and va_end() macros conform to C89. C99 defines the va_copy() macro. BUGS¶ Unlike the historical). EXAMPLES¶ The function foo takes a string of format characters and prints out the argument associated with each format character based on the type. #include <stdio.h> #include <stdarg.h> void foo(char *fmt, ...) /* '...' is C syntax for a variadic function */ { va_list ap; int d; char c; char ); } SEE ALSO¶ vprintf(3), vscanf(3), vsyslog(3) COLOPHON¶ This page is part of release 5.13 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at.
https://manpages.debian.org/unstable/manpages-dev/va_copy.3.en.html
CC-MAIN-2022-33
en
refinedweb
Thanks. on Stackoverflow answered me. All the needed references are automatically added when you create Silverlight Windows Phone application, because Microsoft.Devices.Radio namespace is found at Windows.Phone library. So, i don't need other assemplies for work with devices in Windows phone silverlight 8.1 app Thank you for sharing the solution. --James We are trying to better understand customer views on social support experience, so your participation in this interview project would be greatly appreciated if you have time. Thanks for helping make community forums a great place. Click HERE to participate the survey.
https://social.msdn.microsoft.com/Forums/en-US/37ba11bb-3611-4076-99c0-3fca151f98fe/fm-radio?forum=winappswithcsharp
CC-MAIN-2022-33
en
refinedweb
Working Group within WG21. (February, 2012): Change 4.1 [conv.lval] paragraphs 1 and 2 as follows (including changing the running text of paragraph 2 into bullets): T implementation-defined. -. An unsigned char object with indeterminate value allocated to a register might trap. . (February, 2012): This issue is resolved by the resolution of issue 6.)... (February, 2012): from the array extends the lifetime of the array exactly like binding a reference to a temporary. [Example:typedef std::complex<double> cmplx; std::vector<cmplx> v1 = { 1, 2, 3 }; void f() { std::vector<cmplx> v2{ 1, 2, 3 }; std::initializer_list<int> i3 = { 1, 2, 3 }; } struct A { std::initializer_list array of N elements of type const E, where N is the number of elements in the initializer list. Each element of that array is copy-initialized with the corresponding element of the initializer list, and the std::initializer_list<E> object is constructed to refer to that array. If a narrowing conversion is required to initialize any of the elements, the program is ill-formed. [Example:struct X { X(std::initializer_list<double> v); }; X x{ 1,2,3 }; The initialization will be implemented in a way roughly equivalent to this:const double __a[3] = {double{1}, double{2}, double{3}}; X x(std::initializer_list<double>(__a, __a+3)); assuming that the implementation can construct an initializer_list object with a pair of pointers. —end example] According to 8.5.4 [dcl.init.list] paragraph 7, an implicit conversion from an integer type or unscoped enumeration type to an integer type that cannot represent all the values of the original type, except where the source is a constant expression and the actual value after conversion will fit into the target type and will produce the original value when converted back to the original type. } (February, [class.bit] Change: Bit-fields of type plain int are signed. Rationale: Leaving the choice of signedness to implementations could lead to inconsistent definitions of template specializations. For consistency, the implementation freedom was eliminated for non-dependent types, too. Effect on original feature: The choice is implementation-defined in C, but not so in C++. Difficulty of converting: Syntactic transformation. How widely used: Seldom.,() // redeclaration of h() uses the earlier lookup { return g(T()); } // ...although the lookup here does find g(int) int i = h<int>(); // template argument substitution fails;.:...]), possibly parenthesized,.: ...[Note: In particular, a global allocation function is not called to allocate storage for objects with static storage duration (3.7.1 [basic.stc.static]), for objects or references with thread storage duration (3.7.2 [basic.stc.thread]), for objects of type std::type_info (5.2.8 [expr.typeid]), or for]) a temporary object, called the exception] A destructor is used to destroy objects of its class type. A destructor (9.3.2 ]).(char (&)[1]); // #1 template<typename T> void f(T&&); // #2 void g() { f(""); //calls #2, should call #1 }]. When a similar question was raised in issue 413, the resolution was to remove the use of the term. The resolution of issue 1359 has now reintroduced the concept of an “empty” union, so there is once again the need to define it.. A member function with no ref-qualifier can be called for a class prvalue, as can a non-member function whose first parameter is an rvalue reference to that class type. However, 14.5.6.2 [temp.func.order] does not handle this. For an exception-specification of a function template specialization or specialization of a member function of a class template, if the exception-specification is implicitly instantiated because it is needed by another template specialization and the context that requires it depends on a template parameter, the point of instantiation of the exception-specification is the point of instantiation of the specialization that requires it. Otherwise, the point of instantiation for such an exception-specification immediately follows the namespace scope declaration or definition that requires the exception-specification... Additional note (January, 2012): In addition to the items previously mentioned, access declarations were removed from C++11 but are not mentioned in C.2 [diff.cpp03]. Proposed. current definition of constant expressions appears to make unevaluated operands constant expressions; for example, new char[10] would seem to be a constant expression if it appears as the operand of sizeof. This seems wrong.)... The.
https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2012/n3367.html
CC-MAIN-2022-33
en
refinedweb
Best practice for natural Ordering Scala has scala.math.Ordering, Java has java.lang.Comparable. These are interfaces used for defining a natural order, which can then be used to sort lists of elements, or in data structures implemented via binary search trees, such as SortedSet. This is what it looks like: trait Ordering[A] { def compare(x: A, y: A): Int } That function is supposed to define a “total order”, by returning a number that’s: - strictly negative if x < y - strictly positive if x > y 0in case x == y What’s often overlooked is that — ordering has to be consistent with equals, as a law, or in other words: compare(x, y) == 0 <-> x == y Example: import scala.math.Ordering final case class Contact( lastName: String, firstName: String, phoneNumber: String ) object Contact { // WRONG — never do this! implicit val ordering: Ordering[Contact] = (x: Contact, y: Contact) => x.lastName.compareTo(y.lastName) } This might seem reasonable if you were building some sort of contacts agenda, but isn’t. Some reasons: - data-structures backed by binary-search trees can use just comparewhen searching for keys - the result of sorting a list should not depend on its initial ordering Example of what can happen when sorting lists: val agenda1 = List( Contact("Nedelcu", "Alexandru", "0738293904"), Contact("Nedelcu", "Amelia", "0745029304"), ) agenda1.sorted //=> List(Contact(Nedelcu,Alexandru,0738293904), Contact(Nedelcu,Amelia,0745029304)) val agenda2 = List( Contact("Nedelcu", "Amelia", "0745029304"), Contact("Nedelcu", "Alexandru", "0738293904"), ) agenda2.sorted //=> List(Contact(Nedelcu,Amelia,0745029304), Contact(Nedelcu,Alexandru,0738293904)) agenda1.sorted == agenda2.sorted //=> false Example of what happens when using SortedSet: import scala.collection.immutable.SortedSet val set1 = SortedSet(agenda1:_*) //=> TreeSet(Contact(Nedelcu,Alexandru,0738293904)) val set2 = SortedSet(agenda2:_*) //=> TreeSet(Contact(Nedelcu,Amelia,0745029304)) SortedSet, being a Set implementation, is not duplicating items, and as you can see here, it basically eliminates all contacts with the same lastName, except for the first one it saw. If you’ve ever defined your ordering like this, don’t feel bad, as it happens to many of us. I was bitten by this just last week, in spite of knowing what I was doing … I defined a private ordering and figured that it won’t get used improperly. Except that after a refactoring, by yours truly, it did end up being used improperly. Which is why definitions have to always be correct, as their correctness has to survive refactorings, even when it’s just you operating on that codebase. The correct definition for our Ordering would be: object Contact { implicit val ordering: Ordering[Contact] = (x: Contact, y: Contact) => { x.lastName.compareTo(y.lastName) <||> x.firstName.compareTo(y.firstName) <||> x.phoneNumber.compareTo(y.phoneNumber) } // Just to help us out, this time private implicit class IntExtensions(val num: Int) extends AnyVal { def `<||>`(other: => Int): Int = if (num == 0) other else num } } Don’t take shortcuts, don’t do anything less than this, even if you think that you currently don’t need it. Because one of your colleagues, or even your future self, might reuse this definition without knowing that it’s broken.
https://alexn.org/blog/2020/11/17/best-practice-for-ordering-comparable/
CC-MAIN-2022-33
en
refinedweb
libssh2_channel_eof man page libssh2_channel_eof — check a channel's EOF status Synopsis #include <libssh2.h> int libssh2_channel_eof(LIBSSH2_CHANNEL *channel); Description channel - active channel stream to set closed status on. Check if the remote host has sent an EOF status for the selected stream. Return Value Returns 1 if the remote host has sent EOF, otherwise 0. Negative on failure. See Also libssh2_channel_close(3) Referenced By libssh2_channel_send_eof(3), libssh2_channel_wait_closed(3), libssh2_channel_wait_eof(3). 1 Jun 2007 libssh2 0.15 libssh2 manual
https://www.mankier.com/3/libssh2_channel_eof
CC-MAIN-2017-22
en
refinedweb
Characteristics of Windows Controls Introduction A Windows control, also called a control, is an object you provide to the user as part of your application. This object allows the user to interact with the computer. As various applications address different issues, there are various types of controls. There are some characteristics that all or most controls share. A characteristic of a control is also referred to as, or called, a property of the control. There are some characteristics that are appropriate for a category of controls. There are some other characteristics so restrictively particular that only a few or just one control would use them. We will review properties shared by all or many controls. Characteristics that only one or a few controls use will be reviewed when studying those controlled. Controls Names Every object of your computer has a name. That is, except for the letters you see on a document, anything that has a "physical" presence on your application must have a name. The name allows you and the operating system to identify the object and refer to it appropriately when necessary. The first object created on a Windows Application is the form. It is named Form1. When you add a control to a form, it automatically receives a name. For example, the first button would be called button1. Every control is named after its class. The names automatically given to controls added to an application are incremental. That is, the first button is named button1. If you add another button, it would be named button2, etc. If an application is using various controls, and especially if it is using various copies of the same type of controls, such names become insignificant and can lead to some confusion. Therefore, you should make it a (good) habit to give appropriate names to your controls. For example, if a control deals with an issue related to the employees marital status, the name of the control should reflect it. When naming controls, you must follow some rules of naming variables. Besides these rules, you should also follow the .Net's suggestions that the name of a control starts in lowercase. Examples are address or age. If the name is a combination of words, the first part should be in lowercase. The first letter of the second part should start in uppercase. Examples are firstName, dateOfBirth. Control Location Once a control is part of your application, it must have "physical" presence. Every control has a parent. The parent gives presence to the control. The parent is also responsible for destroying the control when the parent is destroyed. When you create a Windows Application, it creates a default or first form. The parent of this main form is the Windows desktop. In fact any form that is not part of an MDI is located with regards to its parent the desktop. The location of an object whose parent is the desktop is based on a coordinate system whose origin is located on the top-left corner of the desktop: The axes of this coordinate system are oriented so the x axis moves from the origin to the right and the y axis moves from the origin down: The distance from the left side of the desktop to the left border of the form is called the Left property. The distance from the top section of the desktop to the top border of the form is referred to as the Top property. The distance from the left border of the desktop to the right border of the form is called the Right property. The distance from the top border of the desktop to the bottom border of the form is called the Bottom property. These characteristics can be illustrated as follows: The rectangular area that the desktop makes available to all objects you can see on your screen is called the client area because this area is reserved to the children, called clients, of the desktop. When a form has been created, we saw in the previous lesson that you can add Windows controls to it. These controls can be positioned only in a specific area, the body of the form. The body spans from the left border, excluding the border, of the form to the right border, excluding the border, of the form. It also spans from the top side, just under the title bar, to the bottom border, excluding the border, of the form. The area that the form makes available to the controls added to it is called the client area: If a control is positioned on a form, its location uses a coordinate system whose origin is positioned on the top-left section of the client area (just under the title bar). The x axis moves from the origin to the left. The y axes moves from the origin down: The distance from the left border of the client area to the left border of the control is the Left property. The distance from the top border of the client area to the top border of the control is the Top property. These can be illustrated as follows: To change the location of a control, at design time, click and drag it in the desired direction. Once you get to the necessary location, release the mouse. Alternatively, select it and, in the Properties window, click the + button of the Location field. Type the Left value on the X field and type the Top value in the Y field. To programmatically change the location of a control, call the Point constructor from the Drawing namespace of the System with the desired value. Here is an example: private void Form1_Load(object sender, System.EventArgs e) { Location = new System.Drawing.Point(200, 180); } To retrieve the location of a control, declare a Point variable and assign the Location property of the desired control to it. This would be done as follows: private void Form1_Load(object sender, System.EventArgs e) { System.Drawing.Point Pt = Location; } Control Size A control size is the amount of space it is occupying on the screen or on a form. It is expressed as its width and its height. The width of a control is the distance from its left to its right borders. The height is the distance from the top border of the control to its bottom border. When you create a form or add a control to a form, it assumes a default size, unless you draw the control while adding it. To set the size of a control, at design time, select it. Then, position the mouse on one of its handles and drag in the desired direction. To assist you with this, the mouse cursor changes depending on the handle you grab. The cursors used are: Alternatively, to change the size of a control, select it and, in the Properties window, click the + button of the Size field. Then type the desired value for the Width and the desired value for the Height. To programmatically change the size of a control, assign the desired values of either or both its Width and Height properties. Here is an example: public static void main(String[] args) { Form1 Fm = new Form1(); Fm.set_Size(new System.Drawing.Size(450, 320)); Application.Run(Fm); } To find out the size of a control, declare a Size variable and assign it the Size property of the control. The Bounding Rectangle of a Control When a control is positioned on the screen, we saw that it can be located by its Left and its Top properties. A control also occupies an area represented by its Width and its Height. These four values can be grouped in an entity called the bounding rectangle and represented by the Bounds property. Control Text or Caption Some controls must display text to indicate what they are used for. Such a text can also be referred to as a caption. Some controls may not need this text and some others should or must display it. For a form, this text displays on the title bar. For most other controls, it displays in the middle of their body. If you create a form, it automatically gets text on its title bar. If you add a control to a form, it automatically gets a default text if that control is supposed to display it. To change the text of a control, select it. Then, in the Properties window, click the Text field and type the desired value. At design time, you can only provide a static type of text to the control. If you want text that can be changed in response to another action, you must change it programmatically. To programmatically change the text of a control, assign a string to its Text property. Here is an example: public static void main(String[] args) { Form1 Fm = new Form1(); Fm.set_Text("Employment Application"); Fm.set_Size(new System.Drawing.Size(450, 320)); Application.Run(Fm); } The text can also be created from an expression or an operation. Control Visibility In order to use a control, the user must be able to see it. When a control is not visible, it is hidden. This aspect is controlled by the control's visibility. The visibility of a control is controlled by the Visible property, which is a Boolean type. To make a control visible, at design time, set its Visible property to True. To hide it, set this property to False. To programmatically display or hide a control, assign the true or false value to its Visible property. If you assign a true value to this property, if the control was hidden, it would become visible. If the control was already visible, nothing would happen. On the other hand, if you assign a false value to a visible control, it would become hidden. If it was already hidden, nothing would happen. To find out whether a control is visible or hidden at one time, check the state of its Visible property, whether it is true or false. Control Availability When a control is visible, in order to use it, it must allow it to the user. A control is referred to as enabled if it can receive input from the user. For example, if it is a control that can be clicked, at a certain time, it must be "clickable". If it is not, clicking it would not make any difference. This aspect is controlled by the Enabled property. To make a control available to the user, set its Enabled property to True. To prevent the user from interacting with a control, set its Enabled property to False. To programmatically enable or disable a control, assign a value of true or false to its Enabled property. To find out whether a control is enabled or disabled, check its Enabled state by comparing to a true or false value. A control that is enabled displays in its normal or regular appearance. A control that is disabled sometimes appears dimmed.. At design time, the participation to tab sequencing is controlled by the TabStop property on the Properties window. Fortunately, every control that can receive focus is also configured to have this Boolean property set to true. If you want to remove a control from this sequence, set its TabStop value to false. If a control has the TabOrder property set to true, to arrange the navigation order of controls, at design time, change the value of its TabIndex field. The value must be a positive short integer.
http://functionx.com/vjsharp/lesson03.htm
CC-MAIN-2017-22
en
refinedweb
12 July 2016 Simplifying .Net REST API development: Nancy, self-hosting and ASP.Net Core REST API development using ASP.Net WebAPI can seem so fussy compared to other ecosystems. There’s a big application server to deal with, dense XML-based configuration and a lot of code to write just to stand up a basic API. Compare this with simpler, DSL-style approaches such as Sinatra on Ruby. This lets you push out a self-hosted API with sensible defaults with a minimal amount of ceremony, i.e. require 'sinatra' get '/' do "Hello, World!" end Most ecosystems seem to offer “Hello world” examples that are pretty low friction compared to WebAPI. For example, Go just expects you to set up a server and configure a few routes while Node.js has its own Sinatra-inspired libraries such as Express. Why does this matter? When you’re developing a RESTful API in the full, HATEOAS-enabled sense then you need to focus on the design of the resources and methods that are being exposed. This requires a degree immediate clarity that can be easily obscured by boilerplate code. A more minimal approach to developing and deploying APIs also be useful in building finely-grained microservices. When you are pushing out numerous of small, self-contained services you don’t want the feature logic to be swamped by the demands of boilerplate, configuration and hosting. Nancy and “low ceremony” API development NancyFX gained traction in the .Net space by bringing a similar lightweight, “low ceremony” framework to the .Net space. The syntax is focused around routes and responses and it provides sensible defaults that do not require any configuration files. The build-in dependency resolution allows you to change any aspect of request pipeline by implementing interfaces. Note that “low ceremony” needs qualification as there’s always some boilerplate involved in API development. Four-line “hello world” samples aside, “low ceremony” generally means you have to accept somebody else’s idea of sensible defaults or start writing code. For example, the default error response in Nancy is an HTML page containing a grim looking embedded image and a jokey error message (“Oh noes!”). This is easy enough to override with an IStatusCodeHandler instance, but it is essential scaffolding you have to write to avoid leaking implementation detail. ASP.Net WebAPI has improved considerably since Nancy was released. For example, features such as IHttpActionResult and HttpRequestContext have improved the testability of controllers, while attribute routing has made it easier to wire up routes. Despite this, it continues to provides a weighty and opinionated framework in comparison to the lighter-touch, declarative approach of Nancy. Do we still need IIS? One of the nicer aspects of Nancy was that it was host agnostic. It ran on Mono. It could be deployed pretty much anywhere: via IIS, WCF, embedded in an executable or running as a Windows service. Best of all, it was not dependent on System.Web and implemented its own pipeline that could be customised through hooks and plugins. The release of OWIN freed things up for ASP.Net by providing a standard for decoupling servers and applications. Despite this growing flexibility, many developers in the .Net space are so accustomed to using IIS that they cannot conceive a solution without it. It is a mistake to regard solutions involving IIS as somehow more “enterprise-ready”. IIS exposes a complex processing pipeline containing many features that a REST API does not need. The static content handling and compression in IIS may be impressive, but irrelevant. Process recovery can be taken care of by running the executable as a service with a service manager like NSSM. Other features such as monitoring and logging should arguably be application-level concerns. This is not to say you should be exposing self-hosted applications directly to the internet. You’ll need some form of reverse proxy to expose a number of separate applications to the externally-opened HTTP ports (i.e. 80 and 443). Other concerns such as request validation and authentication also need to be taken care of somewhere. An API Gateway is often used to address many of these concerns in API ecosystems. This acts as a “gatekeeper” that separates public-facing end-points from the applications that execute requests. It is first and foremost a security measure that can take care of authentication and request validation, but it can also serve as a reverse proxy to expose numerous self-hosted end-points through the same public-facing port. Onwards with ASP.Net Core ASP.Net Core finally breaks the IIS dependency. An ASP.Net Core application runs as a stand-alone console application and uses a self-hosted web server (Kestrel) to process requests. This allows it to offer a container-friendly deployment pipeline that does not rely on any frameworks or servers. The example below is the minimal code required to stand up a simple API end-point: namespace CoreExample { // Bootstraps the application public class Program { public static void Main(string[] args) { var host = new WebHostBuilder() .UseKestrel() .UseStartup<Startup>() .Build(); host.Run(); } } // Used to inject in the MVC framework public class Startup { public void ConfigureServices(IServiceCollection services) { services.AddMvc(); } public void Configure(IApplicationBuilder app) { app.UseMvc(); } } // Example controller public class TestController : Controller { [HttpGet("api/helloworld")] public object HelloWorld() { return new { message = "Hello World" }; } } } You can still host Core applications in IIS, but they will run out of process using an HTTP module called AspNetCoreModule. This is responsible for loading the core application and ensuring that it stays loaded by detecting any crashes. This module bypasses much of the normal IIS request cycle and redirects traffic to the underlying core application. This style of application is similar to environments such as Ruby where you can bootstrap self-hosted applications using a single command line statement. They can be mounted behind an API gateway, web server or other proxy technology of your choice. Many of the remaining advantages that made Nancy look so attractive several years ago have been cancelled out by .Net Core. Dependency injection is now included out of the box. It is also host-agnostic and you can stand up a new end-point without any configuration files. The one thing Nancy still provides is options. Nancy represents a different approach that brings in great ideas from other ecosystems. For example, where ASP.Net WebAPI binds you into a model\controller pattern you have a lot more freedom over how to organise your Nancy code. This flexibility is something developers are accustomed to outside of the .Net ecosystem. Filed under API design, ASP.NET, Microservices, Net Framework, Web services.
http://www.ben-morris.com/simplifying-net-rest-api-development-nancy-self-hosting-and-asp-net-core/
CC-MAIN-2017-22
en
refinedweb
dark@xs4all.nl (Richard Braakman) wrote on 06.02.98 in <m0y0d4L-001NHbC@night>: > Santiago Vila wrote: > > On Thu, 5 Feb 1998, Richard Braakman wrote: > > > > > Overlap between cfs_1.3.3-1 (nonus,orphaned) and chris-cust_0.4: > > > usr/bin/i > > > Right. I wonder which is the more intuitive meaning for i. > > > Reported as #17841 to cfs and #17842 to christ-cust. > > > > This is namespace pollution again... > > We would need some policy preventing this. > > Such a policy would be very hard to define. Consider just the > one-character commands. Which of these are namespace pollution? All except for those that match the description for "Important", that is, stuff every experienced Unix person would expect. > (And most importantly, if it is to be policy, why?) Anything else should be available for users (for example, for aliases and convenience scripts). IMHO, of course. In these days of tab completion, there's not much reason left to keep command names excessively short and meaningless. There are too much short commands already. I wish we could use the same rule for two- and even three-character commands. > Command Package > i and o cfs > i and l chris-cust > w procps > B sam > R r-base > X xbase > [ shellutils Maybe w, certainly [. I see no reason why X shouldn't be named X, but I also don't see why it has to be in the $PATH. startx, xdm, openwin, stuff like that, but hardly X. None of the others. MfG Kai -- TO UNSUBSCRIBE FROM THIS MAILING LIST: e-mail the word "unsubscribe" to debian-devel-request@lists.debian.org . Trouble? e-mail to templin@bucknell.edu .
https://lists.debian.org/debian-devel/1998/02/msg00421.html
CC-MAIN-2017-22
en
refinedweb
STRSPN(3P) POSIX Programmer's Manual STRSPN(3P) This manual page is part of the POSIX Programmer's Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux. strspn — get length of a substring #include <string.h> size_t strspn(const char *s1, const char *s2); The functionality described on this reference page is aligned with the ISO C standard. Any conflict between the requirements described here and the ISO C standard is unintentional. This volume of POSIX.1‐2008 defers to the ISO C standard. The strspn() function shall compute the length (in bytes) of the maximum initial segment of the string pointed to by s1 which consists entirely of bytes from the string pointed to by s2. The strspn() function shall return the computed length; no return value is reserved to indicate an error. No errors are defined. The following sections are informative. None. None. None. None. strcspn(3p) The Base Definitions volume of POSIX.1‐2008, stringSPN(3P) Pages that refer to this page: string.h(0p), localeconv(3p), strcspn(3p)
http://man7.org/linux/man-pages/man3/strspn.3p.html
CC-MAIN-2017-22
en
refinedweb
#include <semaphore.h> sem_unlink() removes the named semaphore referred to by name. The semaphore name is removed immediately. The semaphore is destroyed once all other processes that have the semaphore open close it. On success sem_unlink() returns 0; on error, −1 is returned, with errno set to indicate the error. The caller does not have permission to unlink this semaphore. name was too long. There is no semaphore with the given name. For an explanation of the terms used in this section, see attributes(7). sem_getvalue(3), sem_open(3), sem_post(3), sem_wait(3), sem_overview(7)
http://manpages.courier-mta.org/htmlman3/sem_unlink.3.html
CC-MAIN-2017-22
en
refinedweb
view raw First a little setup. Last week I was having trouble implementing a specific methodology that I had constructed which would allow me to manage two unique fields associated with one db.Model object. Since this isn't possible, I created a parent entity class and a child entity class, each having the key_name assigned one of the unique values. You can find my previous question located here, which includes my sample code and a general explaination of my insertion process. On my original question, someone commented that my solution would not solve my problem of needing two unique fields associated with one db.Model object. My implementation tried to solve this problem by implementing a static method that creates a ParentEntity and it's key_name property is assigned to one of my unique values. In step two of my process I create a child entity and assign the parent entity to the parent parameter. Both of these steps are executed within a db transaction so I assumed that this would force the uniqueness contraint to work since both of my values were stored within two, separate key_name fields across two separate models. The commenter pointed out that this solution would not work because when you set a parent to a child entity, the key_name is no longer unique across the entire model but, instead, is unique across the parent-child entries. Bummer... I believe that I could solve this new problem by changing how these two models are associated with one another. First, I create a parent object as mentioned above. Next, I create a child entity and assign my second, unique value to it's key_name. The difference is that the second entity has a reference property to the parent model. My first entity is assigned to the reference property but not to the parent parameter. This does not force a one-to-one reference but it does keep both of my values unique and I can manage the one-to-one nature of these objects so long as I can control the insertion process from within a transaction. This new solution is still problematic. According to the GAE Datastore documentation you can not execute multiple db updates in one transaction if the various entities within the update are not of the same entity group. Since I no longer make my first entity a parent of the second, they are no longer part of the same entity group and can not be inserted within the same transaction. I'm back to square one. What can I do to solve this problem? Specifically, what can I do to enforce two, unique values associated with one Model entity. As you can see, I am willing to get a bit creative. Can this be done? I know this will involve an out-of-the-box solution but there has to be a way. Below is my original code from my question I posted last week. I've added a few comments and code changes to implement my second attempt at solving this problem. class ParentEntity(db.Model): str1_key = db.StringProperty() str2 = db.StringProperty() @staticmethod def InsertData(string1, string2, string3): try: def txn(): #create first entity prt = ParentEntity( key_name=string1, str1_key=string1, str2=string2) prt.put() #create User Account Entity child = ChildEntity( key_name=string2, #parent=prt, #My prt object was previously the parent of child parentEnt=prt, str1=string1, str2_key=string2, str3=string3,) child.put() return child #This should give me an error, b/c these two entities are no longer in the same entity group. :( db.run_in_transaction(txn) except Exception, e: raise e class ChildEntity(db.Model): #foreign and primary key values str1 = db.StringProperty() str2_key = db.StringProperty() #This is no longer a "parent" but a reference parentEnt = db.ReferenceProperty(reference_class=ParentEntity) #pertinent data below str3 = db.StringProperty() After scratching my head a bit, last night I decided to go with the following solution. I would assume that this still provides a bit of undesirable overhead for many scenarios, however, I think the overhead may be acceptable for my needs. The code posted below is a further modification of the code in my question. Most notably, I've created another Model class, called named EGEnforcer (which stands for Entity Group Enforcer.) The idea is simple. If a transaction can only update multiple records if they are associated with one entity group, I must find a way to associate each of my records that contains my unique values with the same entity group. To do this, I create an EGEnforcer entry when the application initially starts. Then, when the need arises to make a new entry into my models, I query the EGEnforcer for the record associated with my paired models. After I get my EGEnforcer record, I make it the parent of both records. Viola! My data is now all associated with the same entity group. Since the *key_name* parameter is unique only across the parent-key_name groups, this should inforce my uniqueness constraints because all of my FirstEntity (previously ParentEntity) entries will have the same parent. Likewise, my SecondEntity (previously ChildEntity) should also have a unique value stored as the key_name because the parent is also always the same. Since both entities also have the same parent, I can execute these entries within the same transaction. If one fails, they all fail. #My new class containing unique entries for each pair of models associated within one another. class EGEnforcer(db.Model): KEY_NAME_EXAMPLE = 'arbitrary unique value' @staticmethod setup(): ''' This only needs to be called once for the lifetime of the application. setup() inserts a record into EGEnforcer that will be used as a parent for FirstEntity and SecondEntity entries. ''' ege = EGEnforcer.get_or_insert(EGEnforcer.KEY_NAME_EXAMPLE) return ege class FirstEntity(db.Model): str1_key = db.StringProperty() str2 = db.StringProperty() @staticmethod def InsertData(string1, string2, string3): try: def txn(): ege = EGEnforcer.get_by_key_name(EGEnforcer.KEY_NAME_EXAMPLE) prt = FirstEntity( key_name=string1, parent=ege) #Our EGEnforcer record. prt.put() child = SecondEntity( key_name=string2, parent=ege, #Our EGEnforcer record. parentEnt=prt, str1=string1, str2_key=string2, str3=string3) child.put() return child #This works because our entities are now part of the same entity group db.run_in_transaction(txn) except Exception, e: raise e class SecondEntity(db.Model): #foreign and primary key values str1 = db.StringProperty() str2_key = db.StringProperty() #This is no longer a "parent" but a reference parentEnt = db.ReferenceProperty(reference_class=ParentEntity) #Other data... str3 = db.StringProperty() One quick note-- Nick Johnson pinned my need for this solution: This solution may be sufficient to your needs - for instance, if you need to enforce that every user has a unique email address, but this is not your primary identifier for a user, you can insert a record into an 'emails' table first, then if that succeeds, insert your primary record. This is exactly what I need but my solution is, obviously, a bit different than your suggestion. My method allows for the transaction to completely occur or completely fail. Specifically, when a user creates an account, they first login to their Google account. Next, they are forced to the account creation page if there is no entry associated with their Google account in SecondEntity (which is actually UserAccount form my actual scenario.) If the insertion process fails, they are redirected to the creation page with the reason for this failure. This could be because their ID is not unique or, potentially, a transactional timeout. If there is a timeout on the insertion of their new user account, I will want to know about it but I will implement some form of checks-and-balance in the near future. For now I simply want to go live, but this uniqueness constraint is an absolute necessity. Being that my approach is strictly for account creation, and my user account data will not change once created, I believe that this should work and scale well for quite a while. I'm open for comments if this is incorrect.
https://codedump.io/share/TrpP850Hlefl/1/how-can-i-create-two-unique-queriable-fields-for-a-gae-datastore-data-model
CC-MAIN-2017-22
en
refinedweb
Configure Force Tunneling for DirectAccess Clients Published: October 7, 2009 Updated: May 24, 2010 Applies To: Windows Server 2008 R2 Before configuring force tunneling settings for DirectAccess clients, you should have deployed and determined the Internet Protocol version 6 (IPv6) addresses of either your dual protocol (Internet Protocol version 4 [IPv4] and IPv6) proxy servers or your IPv6/IPv4 translator (NAT64) and IPv6/IPv4 DNS gateway (DNS64) devices that are in front of your IPv4-based proxy servers. For more information about these devices, see Choose Solutions for IPv4-only Intranet Resources. snap-in, open the appropriate forest and domain object, right-click the Group Policy object for DirectAccess clients, and then click Edit. In the console tree of the Group Policy Management Editor snap-in, open Computer Configuration\Policies\Administrative Templates\Network\Network Connections. In the details pane, double-click Route all traffic through the internal network. In the Route all traffic through the internal network dialog box, click Enabled, and then click OK. In the console tree of the Group Policy Management Editor snap-in, open Computer Configuration\Policies\Windows Settings\Name Resolution Policy. In the details pane, in To which part of the namespace does this rule apply?, click Any. Click the DNS Settings for Direct Access tab, and then click Enable DNS settings for Direct Access in this rule. In DNS servers (optional), click Add. In DNS server, type the IPv6 address of your dual protocol (IPv4 and IPv6) proxy server or your NAT64/DNS64 devices that are in front of your IPv4-based proxy server. Repeat this step if you have multiple IPv6 addresses. Click Create, and then click Apply. In the console tree of the Group Policy Management Editor snap-in, open Computer Configuration\Policies\Administrative Templates\Network\TCPIP Settings\IPv6 Transition Technologies. In the details pane, double-click 6to4 State. In the 6to4 State dialog box, click Enabled, click Disabled State in Select from the following states, click Apply, and then click OK. In the details pane, double-click Teredo State. In the Teredo State dialog box, click Enabled, click Disabled State in Select from the following states, click Apply, and then click OK. In the details pane, double-click IP-HTTPS State. In the IP-HTTPS State dialog box, click Enabled State in Select Interface state from the following options, click Apply, and then click OK. DirectAccess clients will apply these settings the next time they update their Computer Configuration Group Policy. If you arrived at this page by clicking a link in a checklist, use your browser’s Back button to return to the checklist.
https://technet.microsoft.com/en-us/library/ee649127(WS.10).aspx
CC-MAIN-2017-22
en
refinedweb