text
stringlengths
20
1.01M
url
stringlengths
14
1.25k
dump
stringlengths
9
15
lang
stringclasses
4 values
source
stringclasses
4 values
Constant::FromGlobal - declare constant(s) with value from global or environment variable package Foo; use Constant::FromGlobal qw(DEBUG); sub foo { # to enable debug, set $Foo::DEBUG=1 before loading Foo warn "lalala" if DEBUG: } This module lets you define constants that either take their values from global variables, or from environment variables. The constants are function-style constants, like those created using the constant pragma. Here's a minimal example showing how to set a constant from a global variable: our $DEBUG; BEGIN { $DEBUG = 1; } use Constant::FromGlobal qw/ DEBUG /; You might wonder why you might want to do that? A better example is where a module sets a constant from a global variable in its package, but you can set that variable before using the module. First, here's the module: package Foobar; use Constant::FromGlobal LOGLEVEL => { default => 0 }; Then elsewhere you can write something like this: BEGIN { $Foobar::LOGLEVEL = 3; } use Foobar; By default Constant::FromGlobal will only look at the relevant global variable. If you pass the env option, then it will also look for an appropriately named environment variable: use Constant::FromGlobal DEBUG => { env => 1, default => 0 }; Note that you can also set a default value, which will be used if neither a global variable nor environment variable was found. This routine takes an optional hash of options for all constants, followed by an option list (see Data::OptList) of constant names. For example: use Constant::FromGlobal { env => 1 }, "DSN", MAX_FOO => { int => 1, default => 3 }; is the same as use Constant::FromGlobal DSN => { env => 1 }, MAX_FOO => { int => 1, default => 3, env => 1 }; which will define two constants, DSN and MAX_FOO. DSN is a string and MAX_FOO is an integer. Both will take their values from $Foo::DSN if defined or $ENV{FOO_DSN} as a fallback. Note: if you define constants in the main namespace, version 0.01 of this module looked for environment variables prefixed with MAIN_. From version 0.02 onwards, you don't need the MAIN_ prefix. There are three types you can specify for constants: constant - core module for defining constants, and used by Constant::FromGlobal. constant::lexical - very similar to the constant pragma, but defines lexically-scoped constants. Const::Fast - CPAN module for defining immutable variables (scalars, hashes, and arrays). Adam's original post that inspired this module was on use.perl.org, and is no longer available online. constant modules - A review of all perl modules for defining constants, by Neil Bowers. This module was originally written by Yuval Kogman, inspired by a blog post by Adam Kennedy, describing the "Constant Global" pattern. The module is now being maintained by Neil Bowers <neilb@cpan.org>. This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
http://search.cpan.org/dist/Constant-FromGlobal/lib/Constant/FromGlobal.pm
CC-MAIN-2016-18
en
refinedweb
poepp - Interprets POE::Devel::Profiler output perl -MPOE::Devel::Profiler myPOEapp.pl poepp BasicSummary First release! Interprets and visualizes the data POE::Devel::Profiler produces This small program handles parsing the data and passing it to a Visualization module. Included in this distribution is one Visualizer, 'BasicSummary'. As I have time, more visualizers will be added, hopefully some nice graphs :) The desired Visualizer must be the first argument, and the rest of the arguments will be passed intact to the Visualizer for further processing. Okay, you want to code your own Visualizer! All you need to do is look at POE::Devel::Profiler::Visualizer::BasicSummary to get a general idea of what to do. The visualizer must reside in the POE::Devel::Profiler::Visualizer namespace The visualizer must define 2 subroutines: GET_ARGS and OUTPUT GET_ARGS will be called at the start, the Visualizer can grab arguments from @ARGV OUTPUT will be called with a pointer to the massive data structure :) For now, the source to POE::Devel::Profiler::Parser contains the entire data structure, play around with Devel::Dumper if necessary :( L<POE> L<POE::Devel::Profiler::Parser> L<POE::Devel::Profiler::Visualizer::BasicSummary> Apocalypse <apocal@cpan.org> This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
http://search.cpan.org/dist/POE-Devel-Profiler/bin/poepp
CC-MAIN-2016-18
en
refinedweb
Where does WinSCP store site's information or password? I can't find it under Documents and Settings... The configuration file is stored either in the Windows registry or, if you are using the portable version, in an INI file. (See the documentation.) The registry location is: HKEY_CURRENT_USER\Software\Martin Prikryl\WinSCP 2 You can always export the settings to an INI file by pressing Export in the preferences dialog. Export Note that your passwords are not stored in text, but encoded. Though difficult to decrypt, it is not impossible. This is a simplified version of Cesar's excellent answer and assumes your password still works in SCP. Cesar's Create a batch file called echo.cmd that contains the following: echo.cmd echo %* pause Place it on a suitable place, such as your desktop. Fire up WinSCP and connect to your site. Click on Options -> Preferences: Options -> Preferences On the Preferences dialog, go to Integration -> Applications. Replace what was previously in the PuTTY path with the path to your newly created echo.cmd batch file. Also select the option: Integration -> Applications Remember session password and pass it to PuTTY (SSH) Click OK. OK Now launch PuTTY from within WinSCP. Your previously stored password should now be displayed on the screen! Create a new C# console application, then type the following program: namespace ConsoleApplication8 { class Program { static void Main(string[] args) { foreach (var str in args) System.Console.WriteLine(str); System.Console.ReadLine(); } } } If you are not a C# programmer, you can do as easily on any other language you might be familiar with. The point is simply to print whatever values are passed as an argument to your program. You can even do it with a script, if that is more familiar to you. Now, compile your program, and grab your binary (such as ConsoleApplication8.exe). Place it on a suitable place, such as your desktop. Now, if your password still works, fire up WinSCP and connect to your site. Click on Options -> Preferences On the Preferences dialog, go to Integration -> Applications. Replace what was previously in the PuTTY path with the path to your newly created binary. Also select the option "Remember session password and pass it to PuTTY (SSH)". Click OK, and then try to launch PuTTY from within WinSCP. Your previously stored password should now be visible in your screen. I had a very weird use case, I needed to recover my password, but my system admins had locked down my environment so I couldn't run any non-whitelisted executable files, and that included .cmd files. My solution was to instead point the Putty command line to Notepad++. When it ran it said "File -pw" does not exist, should I create it, you say no. Then it says "File {the password shows up here} does not exist, should I create it" and again you say now, and bam, there was the password in clear text. ps You could use a tool like WireShark to "see" what goes on over the wire. What I mean is to have a packet capturing session running (in WireShark) and then login to your FTP server (using WinSCP, with NO encryption). Then, by looking at the registered session in WireShark, one could easily identify the "discussion" (filtering by the destination IP for example) and then identifying the Request: USER blabla, and then REQUEST: PASS blabla, at the FTP level of the "conversation". I came to this answer while researching a slightly different problem however this was helpful and I wanted to share what I did. My problem was that I was using WinSCP with passwords saved under Windows XP within an Active Directory domain which then changed. With the new Active Directory domain, my user profile also changed resulting in WinSCP showing no saved logon profiles. In order to recover the previous WinSCP logon profiles I did the following. Started up the regedit application and did a search for any keys that had a name of Martin Prikryl. After several false matches, I found the key with what looked to be the correct session data. regedit Martin Prikryl I then exported the WinSCP Session registry key using the regedit export command into a text file. Next I modified the exported text in the text file so that it used HKEY_CURRENT_USER as the beginning of the complete key in front of the Software sub-key HKEY_CURRENT_USER Next using regedit, I imported the data to modify the Windows Registry keys used by WinSCP for the current user. These actions did the following: (1) found the WinSCP logon Session data for the old user profile, (2) made a copy of that data, (3) modified the Windows Registry key to allow an import with regedit to modify the current user, (4) imported the data modifying the WinSCP registry entries for the current user profile. After doing this procedure I was able to access my web server with WinSCP. There are probably a couple of reasons why this was straightforward and worked. First of all this PC was used only by one person so was not shared reducing the false matches. Secondly I had Administrator privileges to the PC. Third this was Windows XP and not Windows 7/8. Try this method. login into your saved session go to session menu click on generate URL only check username and password options click on copy to clip board. it will contain username and password. thats it. By posting your answer, you agree to the privacy policy and terms of service. asked 6 years ago viewed 15515 times active 4 months ago
http://superuser.com/questions/100503/where-does-winscp-store-sites-password/100522
CC-MAIN-2016-18
en
refinedweb
, I am going to show how to call a published Web service inside a Web project. I use a test published Web service; Extentrix Web Services 2.0 Application Edition that Extentrix published for the developer community to help them in testing and developing. So I'll simply explain this website. You can find more samples, use this web service, and test it here. Knowledge in ASP.NET is preferred. After creating the Web Site project, it’s time to add a Web reference for our Web service. In the URL field, insert the URL for the Web service. In this tutorial, as I mentioned before, I'll use the test published Web services from Extentrix: “Extentrix Web Services 2.0 – Application Edition”. After clicking the Go button, you will see the Web services APIs. After successfully adding to the Web service, now we are ready to call the Web services APIs inside our project. ExtentrixWS using ExtentrixWS; ExtentrixWebServicesForCPS //define a Web service proxy object. private ExtentrixWS.ExtentrixWebServicesForCPS proxy; //define a Citrix Presentation Server Credentials object private Credentials credentials; Initialize the proxy and the credentials objects: //initialize objects proxy = new ExtentrixWebServicesForCPS(); credentials = new Credentials(); //set credentials //these values are according to Citrix testdrive presentation server //for which Extentrix published a web service for developers. It is as simple as calling any ordinary function. GetApplicationsByCredentialsEx I am not going to explain Extentrix Web services APIs, if you are interested, you can go here and look for it. This API returns an array of ApplicationItemEx. This class will be built for you once you add the Web reference. ApplicationItemEx This class contains the published application properties. I used this Web service to get all the published applications, and then I created an ImageButton for each application. ImageButton // a Web service is to launch the published application. In this example, in the event handler of the applications ImageButtons I launch the clicked application. ImageButtons I get the ICA file content by calling LaunchApplication Web service. Then I write the ICA file content to the response to launch the application. LaunchApplication)); Response.End(); } This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL) The URI prefix is not recognized and HTTP 404 error General News Suggestion Question Bug Answer Joke Praise Rant Admin Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages.
http://www.codeproject.com/Articles/22760/Calling-Web-Service-using-ASP-NET?fid=967268&df=90&mpp=10&noise=1&prof=True&sort=Position&view=Expanded&spc=None&select=4079133&fr=1
CC-MAIN-2016-18
en
refinedweb
Displaying | Jmeter | Questions? Server Section Servers index Mysql Date Index Mysql Date Index Mysql Date Index is used to create a index on specified table. Indexes... combination of columns in a database table. An Index is a database structure which creating index for xml files - XML creating index for xml files I would like to create an index file... the same structure. It would be like 100 to 200 xml files and each xml file has... after another and then retrieve each tag and create index to that file. In some multiply of 100 digits numbers multiply of 100 digits numbers multiplying 100 digits numbers to eachother Index Out of Bound Exception Index Out of Bound Exception Index Out of Bound Exception are the Unchecked Exception... passed to a method in a code. The java Compiler does not check the error during : What is Index? What is Index? What is Index Jmeter - Jmeter Tutorials Jmeter - Jmeter Tutorials Apache Jmeter is java application designed to load test the application. You can use Jmeter to test how much load your Web site can handle print 100 numbers using loops print 100 numbers using loops how to print from 1 to 100 using...{ public static void main(String[] args){ for(int i=1;i<=100;i...[] args){ for(int i=1;i<=100;i++){ System.out.println(i. Could you help me please on this point. Thank you in advance index of javaprogram index of javaprogram what is the step of learning java. i am not asking syllabus am i am asking the step of program to teach a pesonal student. To learn java, please visit the following link: Java Tutorial print the sum of even number from 1 to 100 print the sum of even number from 1 to 100 how to print the sum of even number from 1 to 100 using for loops? Thanks Generate random numbers from 1 to 100 Generate random numbers from 1 to 100 1)A class Called: RandomNumberGenerator that generate random numbers from 1 to 100 2)A class Test that tests the hierarchy in A) especially the getArea() and getVolume() methods. Use php error - WebSevices php error how can an error showing undefined index be resolved Search index Jmeter - Jmeter Tutorials SERVLET ERROR an internal error () that prevented it from fulfilling this request. exception javax.servlet.ServletException: Error instantiating servlet class ServletDemo...:100) org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:562 Error - Struts This is my index page Java error ArrayIndexOutofBoundsException Java error: ArrayIndexOutofBoundsException In this section you will get... when accessing to an illegal array index. The index is either greater... error public class ArrayIndex { public static void main(String args Using JMeter for a Simple Test Using JMeter for a Simple Test Lets see how to run JMeter now. We will conduct a simple test... with the test we need to have a test plan first which will help the Jmeter to perform plz check my codings are correct or not...There is an error..i cant find it.. plz check my codings are correct or not...There is an error..i cant find... void setMark1(int newMark1) { if(newMark1<0 || newMark1>100...) { if(newMark2<0 || newMark2>100) mark2=0; } public void setMark3(int Error in POI - Java Beginners Error in POI Hi friend, I am working with Excel using POI API.I need to read an excel file and do some modifications and save this file... workbook.removeSheetAt(index). This works fine without any exception. The problem now index - Java Beginners Error with KeyListeners - Java Beginners Error with KeyListeners Here tf1id is Jtextfield,when I implement following code Gives Error Source Code- tf1id.addKeyListener(new KeyAdapter... java.awt.Color(255, 100, 100)); tf1.setBackground(new java.awt.Color(255, 100, 100 Java arraylist index() Function Java arrayList has index for each added element. This index starts from 0. arrayList values can be retrieved by the get(index) method. Example of Java Arraylist Index() Function import Proogramming Error - JSP-Servlet { display:block; background-color:#C9C299; width:100%; margin:4px; float... ,BankName shoukd get validated but i tried its getting error sir can u please tell me Error - Java Beginners try to solve the error.but i dont know where to correct the error. i am new...; String ltr = ""; nq = numq / 100; numr = numq % 100; if (numr == 0) { ltr error error while iam compiling iam getting expected error connection database error for frame super(); setBounds(100, 100, 430, 430...;which type of error occurs? Specify it. Is NWIND is your dsn jsp error - JSP-Servlet jsp error how to remove below error:: exception org.apache.jasper.JasperException: java.lang.NullPointerException root cause java.lang.NullPointerException Hi Friend, It seems that something has Program Error - WebSevices . application/views/scripts/index/index.phtml Simple insert data Programming error - Java Beginners .") return false } //get the zero-based index of the "@" character java runtime error java runtime error hi friends i am trying to run the fallowing program but i am getting the error Exception in thread "main...(Unknown Source) at java.net.URLClassLoader.access$100(Unknown Source Java Error - Java Beginners =(prin*rate*no)/100; String s=Float.toString(interest); System.out.print Programming Error - JSP-Servlet ;} #title { display:block; background-color:#C9C299; width:100%; margin:4px compilation error - JSP-Servlet index out of range: -11243 java.lang.String.substring(Unknown Source Logic error? HELP PLEASE! :( Logic error? HELP PLEASE! :( Hello Guys! i have a huge problem. What...; <td width="100"> { int int_excess = (temp_excess - 500) / 100; finalPremium java compilation error - Java Beginners ; // index in the supply inventory of the office supply that is currently... index = 0; // GUI elements to display currently selected office supplies... actionPerformed(ActionEvent evt) { // proceed forward one office supply index including index in java regular expression including index in java regular expression Hi, I am using java regular expression to merge using underscore consecutive capatalized words e.g., "New York" (after merging "New_York") or words that has accented characters - Development process Error I want to make analysis for GPS data but there are some error.Why this code has error?? Public Sub playdate() Dim datFirst, datsecond... .HasTitle = True .ChartTitle.Character.Text = "AVERAGE S4 INDEX Reply to the mail(import files error) Reply to the mail(import files error) Hi! Thank you. That error has... as org.hibernate.MappingException: Error reading resource: contact.hbm.xml... at org.hibernate.cfg.Mappings.addImport(Mappings.java:100 Shopping Cart Index Page StringIndexOutOfBound error !!! plz give me the reason for this error ,plz......... StringIndexOutOfBound error !!! plz give me the reason for this error ,plz......... import java.util.Scanner; class Even { static int getEven(int..." java.lang.StringIndexOutOfBoundsException: String index out of range: 0 error Java runtime error - JSP-Servlet Java runtime error Following Error is showing when i way...:52) at org.apache.jsp.index_jsp._jspService(index_jsp.java:119... index=0; public SaneExample(String[] argv){ scanner=Scanner.getDevice Advertisements If you enjoyed this post then why not add us on Google+? Add us to your Circles
http://www.roseindia.net/tutorialhelp/comment/31308
CC-MAIN-2016-18
en
refinedweb
Why is rtmfp not working with these parameters and functions?Gabi_HUN Feb 2, 2013 3:46 PM I wrote some basic functions in ActionScript in order to use RTMFP: import flash.events.NetStatusEvent; import flash.net.NetConnection; import flash.net.NetStream; import flash.ui.Keyboard; private var serverAddress:String = "rtmfp://p2p.rtmfp.net/"; private var serverKey:String = "xxxxx"; private var netConnection:NetConnection; private var outgoingStream:NetStream; private var incomingStream:NetStream; private function initConnection():void { netConnection = new NetConnection(); netConnection.addEventListener(NetStatusEvent.NET_STATUS, netConnectionHandler); netConnection.connect(serverAddress + serverKey); } private function netConnectionHandler(event:NetStatusEvent):void { receivedMessages.text += "NC Status: " + event.info.code + "\n"; //Some status handling will be here, for now, just print the result out. switch (event.info.code) { case 'NetConnection.Connect.Success': receivedMessages.text += "My ID: " + netConnection.nearID + "\n"; break; } } private function sendInit():void { outgoingStream = new NetStream(netConnection, NetStream.DIRECT_CONNECTIONS); outgoingStream.addEventListener(NetStatusEvent.NET_STATUS, outgoingStreamHandler); outgoingStream.publish("media"); var sendStreamObject:Object = new Object(); sendStreamObject.onPeerConnect = function(sendStr:NetStream):Boolean { receivedMessages.text += "Peer Connected ID: " + sendStr.farID + "\n"; return true; } outgoingStream.client = sendStreamObject; } private function receiveInit():void { receivedMessages.text += "Initializing Receiving Stream: " + incomingID.text + "\n"; incomingStream = new NetStream(netConnection, incomingID.text); incomingStream.addEventListener(NetStatusEvent.NET_STATUS, incomingStreamHandler); incomingStream.play("media"); incomingStream.client = this; } public function receiveMessage(message:String):void { receivedMessages.text += "Received Message: " + message + "\n"; } private function outgoingStreamHandler(event:NetStatusEvent):void { receivedMessages.text += "Outgoing Stream: " + event.info.code + "\n"; } private function incomingStreamHandler(event:NetStatusEvent):void { receivedMessages.text += "Incoming Stream: " + event.info.code + "\n"; } private function sendMessage():void { outgoingStream.send("receiveMessage", toSendText.text); } private function disconnectNetConnection():void { netConnection.close(); netConnection.removeEventListener(NetStatusEvent.NET_STATUS, netConnectionHandler); netConnection = null; } private function disconnectOutgoing():void { outgoingStream.close(); outgoingStream.removeEventListener(NetStatusEvent.NET_STATUS, outgoingStreamHandler); outgoingStream = null; } private function disconnectIncoming():void { incomingStream.close(); incomingStream.removeEventListener(NetStatusEvent.NET_STATUS, incomingStreamHandler); incomingStream = null; } Notes: - initConnection initializes the NC, sendInit initializes the send stream - receiveInit initializes the incoming stream based on the far peer id which I copy paste intoincomingID.text - I print every results into receivedMessages.text - I do not have firewall, nor NAT. - The Adobe sample application () works perfectly. The procedure I follow: - Initialize NC on application 1. (sender) - Initialize NC on application 2. (receiver) - I initialize the sending stream (sendInit) on application 1. - I copy the far peer id to application 2. and run receveInit. After this, I execute sendMessage() which reads the message from toSendText.text. Does not working. Why? What is wrong with my codes? Thank you for every helpful answer in advance. 1. Re: Why is rtmfp not working with these parameters and functions?Gabi_HUN Feb 5, 2013 9:38 PM (in response to Gabi_HUN) Jesus. I found out. I used to test the application on the same computer I am doing the development on, its a Mac (OSX 10.8.2, Chrome 24.0.1312.57). Later, I put it on Windows, to see if everything is okay. It worked. My RTMFP works on Windows, but not on Mac. It's strange.
https://forums.adobe.com/thread/1146483
CC-MAIN-2016-18
en
refinedweb
Background After Friedemann has given an initial introduction about Qt’s Windows Runtime port, I would like to give some further insight about technical aspects and our ways of working on the port. When reading about Windows Runtime development (Windows 8 store applications and Windows runtime components) in connection with C++ you will find C++/CX again and again. Windows C++/CX are C++ language extensions which were developed by Microsoft to make software development for Windows Runtime easier by bringing it “as close as possible to modern C++” (Visual C++ Language Reference (C++/CX)). In some cases, these extensions look similar to C++/CLI constructs, but they might have other meanings or slightly different grammar. The first thing that catches someone’s eye when one has a look at C++/CX documentation or an example/application are lines like Foo ^foo = ref new Foo(); The ^ is basically a pointer, but gives the additional information that it is used on a ref-counted COM object so that memory management happens “automagically”. The “ref new” keyword means that the user wants to create a new “Ref class” (see Ref classes and structs in Type System (C++/CX)), which means that it is copied by reference and memory management happens by reference count. So there isn’t much magic involved in that line; it only tells the compiler that the object’s memory can be managed by reference count and the user does not have to delete it himself. Basically C++/CX is just what its name tells us it is – extensions to the C++ language. Everything ends up as native unmanaged code quite similar to the way Qt works. Some people might argue whether it is necessary to reinvent the wheel for the n-th time where a lot of the “problems” are actually solved in C++11 (by the way, auto foo also works in the example above), but that is what was decided for Windows Runtime development. Use of C++/CX inside Qt’s WinRT port Microsoft has said on different occasions (among others during the Build conference 2012) that everything that can be done using C++/CX can also be done without the extensions, as everything ends up as native code in either case. So we had to decide whether we want to use the new fancy stuff or take the cumbersome road of manual memory management etc. Theoretically there is nothing that keeps us from using C++/CX in Qt’s WinRT port, but there are some reasons why we try to avoid them. For one, these extensions might prevent developers who are willing to help with the Windows Runtime port from having a deeper look at the code. If you do not have any previous experience with this development environment, having new constructs and keywords (in addition to new code) might be enough for you to close the editor right away. While WinRT code which doesn’t use CX might not be especially beautiful, there are no non-default things which might obscure it even more. Another issue is that Qt Creator’s code model cannot handle these extensions (yet). You don’t get any auto completion for ^-pointers, for example. This can of course be fixed in Qt Creator and time will tell whether it will be, but at the moment the port and basic Qt Creator integration (building, debugging & deployment) are our first priorities. Due to these facts, we decided that we do not want to use the extensions. Though, if someone wants to help out with the port and is eager to use CX he/she might be able to persuade us to get the code in (after proper review of course 😉 ). Problems and challenges of not using C++/CX The main problem when it comes to development of Windows Runtime code without using C++/CX is the severe lack of documentation. While the MSDN documentation generally can be improved in certain areas, it almost completely lacks anything about this topic. But thanks to Andrew Knight, who gave me an initial overview how things are to be used and was always helpful whenever I had additional questions, I think I am getting the grip on things. In order to help others who want to join the efforts (and have all the things written down), I will cover some basic areas below. Namespaces The namespaces given in the documentation are always the same for the CX usage of the classes, just with “ABI” added as the root namespace. So for StreamSocket, Windows::Networking::Sockets becomes ABI::Windows::Networking::Sockets. Additionally, you probably need Microsoft::WRL (and also Microsoft::WRL::Wrappers). WRL stands for “Windows Runtime C++ Template Library” and is used for direct COM access in Windows Runtime applications – but you will also need its functionality when omitting CX (for creating instances for example). Creating instances When not using CX, most of the classes cannot be accessed directly. Instead, there are interface classes which need to be used. These interfaces are marked by an ‘I’ in front of the class name so that StreamSocket becomes IStreamSocket. As these interfaces are abstract classes, they cannot be instantiated directly. First of all, you have to create a string which represents the class’s classId. HStringReference classId(RuntimeClass_Windows_Networking_Sockets_StreamSockets); These RuntimeClass_Windows… constructs are defined in the related header files and expand to strings like “Windows.Networking.Sockets.StreamSocket” for example. The way objects can be instantiated depends on whether the class is default constructable or not. If it is, ActivateInstance can be used to obtain an object of the type you are after. IStreamSocket *streamSocket = 0; if (FAILED(ActivateInstance(classId.Get(), &streamSocket)) { // handle error } Unfortunately, the ActivateInstance convenience function fails for StreamSocket in that case as it expects a ComPtr as parameter. In order to avoid that failure one has to take the long way using RoActivateInstance IInspectable *inspectable = 0; if (FAILED(RoActivateInstance(classId.Get(), &inspectable)) { // handle error } if (FAILED(inspectable->QueryInterface(IID_PPV_ARGS(&streamSocket)))) { // handle error } If the class is not default constructable, it has to use a factory in order to create instances. These factories can be obtained by calling GetActivationFactory with the appropriate class Id. One example of a class like that would be HostName: IHostNameFactory *hostnameFactory; HStringReference classId(RuntimeClass_Windows_Networking_HostName); if (FAILED(GetActivationFactory(classId.Get(), &hostnameFactory))) { // handle error } IHostName *host; HStringReference hostNameRef(L""); hostnameFactory->CreateHostName(hostNameRef.Get(), &host); hostnameFactory->Release(); People who are used to Windows development probably have already noticed that all this is COM based. That means that all this has been around for ages and is loved everywhere. Calling static functions For classes which have static functions there is an extra interface for these functions. These interfaces are marked by “Statics” at the end of the “basic” interface name and can also be obtained by using GetActivationFactory. One example would be IDatagramSocketStatics which contains GetEndpointPairsAsync for example. IDatagramSocketStatics *datagramSocketStatics; GetActivationFactory(HString::MakeReference(RuntimeClass_Windows_Networking_Sockets_DatagramSocket).Get(), &datagramSocketStatics); IAsyncOperation<IVectorView *> *endpointpairoperation; HSTRING service; WindowsCreateString(L"0", 1, &service); datagramSocketStatics->GetEndpointPairsAsync(host, service, &endpointpairoperation); datagramSocketStatics->Release(); host->Release(); The endpointpairoperation defines the callback(s) for this asynchronous function, but that topic could be covered in another post. The interesting parts here are how the datagramSocketStatics pointer is filled by calling GetActivationFactory and the actual call to the static function by datagramSocketStatics->GetEndpointPairsAsync(...). ComPtr There is a way to use reference counted memory management even without using CX. It can be achieved by using the Microsoft-provided smart pointer ComPtr. So IStreamSocket *streamSocket would become ComPtr<IStreamSocket> streamSocket. When using these, we had some memory access errors we could not explain (but did not investigate much further). In addition to that, Qt Creator does not support code completion with “streamSocket->” but one would have to call “streamSocket.Get()->”. Thus we decided not to use ComPtr but keep using “normal” pointers. All you have to do is to remember to call “Release” as soon as you are done with the pointer. All in all, we try to avoid these extensions even though it might not make the code beautiful. If you want to contribute though and feel at home using these, feel free to create a patch containing CX code. If you have any further questions or advice, feel free to add them in the comments or join us in #qt-winrt on freenode. OMG, what have they done to C/C++ ?!?! I believe if I am ever going to return to Windows programming I will stick with Win32 C APIs. I once tried to port a small GUI app from Win32 to a more modern .NET UI and started looking into C++/CLI. I was totally horrified of the syntax and the new paradigms. So thanks Qt for giving me a clean and elegant C++ wrapper for Windows programming and hiding all these “improved” extensions from me. I’ve just felt the same. I was surprised at first by how many Windows-only applications are written in Qt. Now I know why. I think not using the C++/CX was a good decision. On the other hand, not using RAII/smart pointers/ComPtr is two steps backwards. The first thing you will forget is to call Release() in every code path. It is exactly the same problem what we have with new and delete. I agree. I’m assuming the reason code completion doesn’t work on the ComPtr class is the overloaded -> operator returning a hacked pointer cast to hide AddRef/Release, Qt could just roll their own that doesn’t do that. +1 Very, very, +1 Agreed. Not using ComPtr is just as silly as not using other C++ pointer types, such as QScopedPointer etc. Why not std::unique_ptr / std::shared_ptr with a custom deleter that calls Object::Release() on destruction? Just std::bind() the instance to the Release() member, store it in a std::function and wrap the whole thing in a nice qt::make_com() wrapper. That way you can have all the benefits of reference counting with non of the silly language extensions. That would be slow, and memory hungry, but yes, one can create a QComPtr easily if she/he has problems with ComPtr. “That would be slow, and memory hungry” Why? This isn’t garbage collection. The only cost is the reference count which is negligible next to the guarantee that resources are properly cleaned up. The std::bind creates an unnecessary object on the heap, the std::function will create another one. It’s at least two new, and two indirection. Totally fine when we need one quick solution for one place, not so good when we want to use it million times. Also the COM objects have an internal reference count mechanism (AddRef()/Release()) so no need to use another one. If we want reference counting/ComPtr/all that modern garbage (collection) we might as well use Java/.NET and be done (with Qt). Java/.NET don’t use reference counting, and reference counting is not garbage collection. It’s a deterministic lifetime model. Qt already uses reference counting, a prime example is the QSharedPointer class. And QVector, QString… That’s implicit sharing, not quite the same thing. It’s reference counting with Copy-On-Write semantics. C++ is complex enough without yet another proprietary extension to the language. And how someone at MS thought ‘^’ would naturally mean ‘reference counted pointer’ is beyond me! We’ve had reference counting in C and C++ for decades and it didn’t require extensions to the base language. Instead of ‘^’ they should have used ‘EEE’ for Embrace, Extend, Extinguish; C++/CX needs to die. It’s just as “natural” as “*” meaning “a pointer”. It’s a matter of convention. ^ should be familiar to those who dealt with Pascal, ref rings a bell to Algolites. Speaking many languages sometimes widens your horizons and makes you less surprised at stuff “… has been around for ages and is loved everywhere…” You almost made me spill coffee on my Retina keyboard with that sentence Your C++ notation sucks btw… the variable is a pointer to IStreamSocket, so you should write IStreamSocket* streamSocket = 0; instead of IStreamSocket *streamSocket = 0; even though the compiler accepts both. Read on “Coding Conventions” for C++. As a first start, here is for Qt: If you have two things on one line, the placement of the * can have a big impact on readability: IStreamSocket* foo, bar; looks like foo and bar will be the same type at a glance. But, the * only applies to foo. So, a lot of people consider it good practice to always put the * with the name instead of the type. Or don’t declare multiple variables on the same line. Then it’s a non-issue. LOL no. Thus we decided not to use ComPtr but keep using “normal” pointers. It’s better to write QComPtr for Qt. Step backward Please, don’t use raw pointers in 2013 (C++11 is already here)! Especially with COM/OLE (I personally like to use “Compiler COM Support Classes” with MSVC: _com_ptr_t, _bstr_t, etc). Wrapped those raw pointer by unique_ptr or just design a lightweight wrapper for them, it should not be too hard.But even in 2013, you could find c++ programmers with years experience refuse to rely on RAII, they keep telling me RAII is expensive blahblahblah, even after you show them the truth RAII is not as expensive as they though, that depend on what kind of smart pointer you are using, they keep telling you “abstraction will bite you one day, if you want safe codes, you should write bare bone codes, they are faster, safer, and easier to debug”. No way to change them, even c++11 is out, I still feel pessimistic about c++. Wrapped those raw pointer by unique_ptr or just design a lightweight wrapper for them, it should not be too hard. Of course, I’ll do so. But it’s better to Qt provides them in its code/interfaces/classes/etc by default. Ms keeps inventing new totally useless things to earn money on it and to discontinue after several releases and start some new totally useless thing. So you need to keep learning technologies that will be discontinued soon instead of adding up to your knowledge and doing something real with it. Now they started to play with C/C++. There is no need to use raw pointers if you don’t want to. There is: QSharedPointer, QScopedPointer… shared_ptr… And now in C++14: make_unique and make_shared Why don’t we just create bindings of Qt for C#? Creating .NET bindings for a heavily C++ API is hard. Also, WinRT is natively C++ and some stuff can’t be done entirely in C# yet so why would Qt go there? Yep, seems like Windows 8 really backs away from MSFT’s commitment to .NET. So we shouldn’t complain about C++/CX too much I don’t think. Actually there already are free and open source (LGPL) C#-bindings for Qt, check them out at. They wrap just Qt 4 for now but I’ve been already working for a week now on Qt 5 support, it will be complete in a month at most. Please have a look at an old library called comet () It wraps COM into a very nice set of c++ classes allowing you to use plain c++ to access COM object. Much nicer and easier to use than any Microsoft implementation. COM errors are automatically translated to exptions, and vice versa. Just to mention one of the nice features. A more comprehensive feature list at: @Mikael I don’t know too much about COM, but that comet library looks nice. It’s also liberally licensed. Could this help the WinRT Qt port? I believe so. You run a utility called tlb2h on any COM type library, and get a header file with all COM classes and interfaces wrapped into a set of easy to use c++ classes I don’t think this will work. WinRT comes with “newer version” of COM which differs from the old COM in several areas. For example component activation is different – you don’t registry your component in registry anymore. There is now COM inheritance and static function members (emulated by normal members in object factories). If comet didn’t get significant update there is a small chance it will just work. Just say no to MSFT proprietary bullshit. On the other hand, WinRT is a proprietary Windows platform. What’s wrong with using that platform’s native (and proprietary) tooling to create the Qt port? Seems odd _not_ to do that. Great decison not to embrace this new (and totally unnecessary) C++/CX that Microsoft invented for some obscure reasons… Great decision not to embrace this new (and totally unnecessary) C++/CX that Microsoft invented for some obscure reasons… ÇHonestly, and I think I speak for the majority of users: we’re not interestered in Win8 and WinRT is a total fiasco. Better use your time in other things. Indeed, if the rumor is true that Win8.1 a.k.a. Blue (to be released this summer) will allow booting into a classic desktop, that will be the first nail in WinRT’s coffin, I believe. Please note that a WinRT does have to a very large extent the same codebase to enable Windows Phone support. Meaning with the Qt/WinRT version we are aiming at both those distribution platforms. Keep up the very good work. Ironic really how many comments there are here complaining about non standard extensions to c++. Qt signals and slots anyone? This is why I don;t use either C++/CX or Qt. Neither of them are c++ Many people do not complain about the signal and slot of Qt because it is very easy to use and learn, clean and concise, they make the codes become easier to manage(The only draw back for me is unable to work with template). Extension of the langauges are not a bad thing, but something extremely awkward like microsoft did is another story, I would drop back to C API rather than using those unfriendly API designed by microsoft. Except Qt signals and slots are NOT an extension of the language itself – they’re macros that are picked up by the MOC compiler that then spits out standard C++ in a accompanying header. Qt signals and slots filled a major hole in C++98: the inability to easily bind arbitrarily member function callbacks to signals. Any conforming C++ compiler can build Qt and Qt apps but only Visual Studio can build C++/cx. This ‘^’ bullshit is just Microsoft’s latest attempt to lock developers into it’s platform.
http://blog.qt.io/blog/2013/04/19/qts-winrt-port-and-its-ccx-usage/
CC-MAIN-2016-18
en
refinedweb
Test::Most - Most commonly needed test functions and features. Version 0.34 Instead of this: use strict; use warnings; use Test::Exception 0.88; use Test::Differences 0.500; use Test::Deep 0.106; use Test::Warn 0.11; use Test::More tests => 42; You type this: use Test::Most tests => 42; Test::Most exists to reduce boilerplate and to make your testing life easier. We provide "one stop shopping" for most commonly used testing modules. In fact, we often require the latest versions so that you get bug fixes through Test::Most and don't have to keep upgrading these modules separately. This module provides you with the most commonly used testing functions, along with automatically turning on strict and warning and gives you a bit more fine-grained control over your test suite. use Test::Most tests => 4, 'die'; ok 1, 'Normal calls to ok() should succeed'; is 2, 2, '... as should all passing tests'; eq_or_diff [3], [4], '... but failing tests should die'; ok 4, '... will never get to here'; As you can see, the eq_or_diff test will fail. Because 'die' is in the import list, the test program will halt at that point. If you do not want strict and warnings enabled, you must explicitly disable them. Thus, you must be explicit about what you want and no longer need to worry about accidentally forgetting them. use Test::Most tests => 4; no strict; no warnings; All functions from the following modules will automatically be exported into your namespace: Functions which are optionally exported from any of those modules must be referred to by their fully-qualified name: Test::Deep::render_stack( $var, $stack ); Several other functions are also automatically exported: die_on_fail die_on_fail; is_deeply $foo, bar, '... we throw an exception if this fails'; This function, if called, will cause the test program to throw a Test::Most::Exception, effectively halting the test. bail_on_fail bail_on_fail; is_deeply $foo, bar, '... we bail out if this fails'; This function, if called, will cause the test suite to BAIL_OUT() if any tests fail after it. restore_fail die_on_fail; is_deeply $foo, bar, '... we throw an exception if this fails'; restore_fail; cmp_bag(\@got, \@bag, '... we will not throw an exception if this fails'; This restores the original test failure behavior, so subsequent tests will no longer throw an exception or BAIL_OUT(). set_failure_handler If you prefer other behavior to 'die_on_fail' or 'bail_on_fail', you can set your own failure handler: set_failure_handler( sub { my $builder = shift; if ( $builder && $builder->{Test_Results}[-1] =~ /critical/ ) { send_admin_email("critical failure in tests"); } } ); It receives the Test::Builder instance as its only argument. Important: Note that if the failing test is the very last test run, then the $builder will likely be undefined. This is an unfortunate side effect of how Test::Builder has been designed. explain Similar to note(), the output will only be seen by the user by using the -v switch with prove or reading the raw TAP. Unlike note(), any reference in the argument list is automatically expanded using Data::Dumper. Thus, instead of this: my $self = Some::Object->new($id); use Data::Dumper; explain 'I was just created', Dumper($self); You can now just do this: my $self = Some::Object->new($id); explain 'I was just created: ', $self; That output will look similar to: I was just created: bless( { 'id' => 2, 'stack' => [] }, 'Some::Object' ) Note that the "dumpered" output has the Data::Dumper variables $Indent, Sortkeys and Terse all set to the value of 1 (one). This allows for a much cleaner diagnostic output and at the present time cannot be overridden. Note that Test::More's explain acts differently. This explain is equivalent to note explain in Test::More. show Experimental. Just like explain, but also tries to show you the lexical variable names: my $var = 3; my @array = qw/ foo bar /; show $var, \@array; __END__ $var = 3; @array = [ 'foo', 'bar' ]; It will show $VAR1, $VAR2 ... $VAR_N for every variable it cannot figure out the variable name to: my @array = qw/ foo bar /; show @array; __END__ $VAR1 = 'foo'; $VAR2 = 'bar'; Note that this relies on Data::Dumper::Names version 0.03 or greater. If this is not present, it will warn and call explain instead. Also, it can only show the names for lexical variables. Globals such as %ENV or %@ are not accessed via PadWalker and thus cannot be shown. It would be nice to find a workaround for this. always_explainand always_show These are identical to explain and show, but like Test::More's diag function, these will always emit output, regardless of whether or not you're in verbose mode. all_done DEPRECATED. Use the new done_testing() (added in Test::More since 0.87_01). Instead. We're leaving this in here for a long deprecation cycle. After a while, we might even start warning. If the plan is specified as defer_plan, you may call &all_done at the end of the test with an optional test number. This lets you set the plan without knowing the plan before you run the tests. If you call it without a test number, the tests will still fail if you don't get to the end of the test. This is useful if you don't want to specify a plan but the tests exit unexpectedly. For example, the following would pass with no_plan but fails with all_done. use Test::More 'defer_plan'; ok 1; exit; ok 2; all_done; See "Deferred plans" for more information. The following will be exported only if requested: timeit Prototype: timeit(&;$) This function will warn if Time::HiRes is not installed. The test will still be run, but no timing information will be displayed. use Test::Most 'timeit'; timeit { is expensive_function(), $some_value, $message } "expensive_function()"; timeit { is expensive_function(), $some_value, $message }; timeit accepts a code reference and an optional message. After the test is run, will explain the time of the function using Time::HiRes. If a message is supplied, it will be formatted as: sprintf "$message: took %s seconds" => $time; Otherwise, it will be formatted as: sprintf "$filename line $line: took %s seconds" => $time; Sometimes you want your test suite to throw an exception or BAIL_OUT() if a test fails. In order to provide maximum flexibility, there are three ways to accomplish each of these. use Test::Most 'die', tests => 7; use Test::Most qw< no_plan bail >; If die or bail is anywhere in the import list, the test program/suite will throw a Test::Most::Exception or BAIL_OUT() as appropriate the first time a test fails. Calling restore_fail anywhere in the test program will restore the original behavior (not throwing an exception or bailing out). use Test::Most 'no_plan'; ok $bar, 'The test suite will continue if this passes'; die_on_fail; is_deeply $foo, bar, '... we throw an exception if this fails'; restore_fail; ok $baz, 'The test suite will continue if this passes'; The die_on_fail and bail_on_fail functions will automatically set the desired behavior at runtime. DIE_ON_FAIL=1 prove t/ BAIL_ON_FAIL=1 prove t/ If the DIE_ON_FAIL or BAIL_ON_FAIL environment variables are true, any tests which use Test::Most will throw an exception or call BAIL_OUT on test failure. It used to be that this module would produce a warning when used with Moose: Prototype mismatch: sub main::blessed ($) vs none This was because Test::Deep exported a blessed() function by default, but its prototype did not match the Moose version's prototype. We now exclude the Test::Deep version by default. If you need it, you can call the fully-qualified version or request it on the command line: use Test::Most 'blessed'; Note that as of version 0.34, reftype is also excluded from Test::Deep's import list. This was causing issues with people trying to use Scalar::Util's reftype function. Sometimes you want a exclude a particular test module. For example, Test::Deep, when used with Moose, produces the following warning: Prototype mismatch: sub main::blessed ($) vs none You can exclude this with by adding the module to the import list with a '-' symbol in front: use Test::Most tests => 42, '-Test::Deep'; See for more information. Sometimes you don't want to exclude an entire test module, but just a particular symbol that is causing issues You can exclude the symbol(s) in the standard way, by specifying the symbol in the import list with a '!' in front: use Test::Most tests => 42, '!throws_ok'; DEPRECATED and will be removed in some future release of this module. Using defer_plan will carp(). Use done_testing() from Test::More instead. use Test::Most qw<defer_plan>; use My::Tests; my $test_count = My::Tests->run; all_done($test_count); Sometimes it's difficult to know the plan up front, but you can calculate the plan as your tests run. As a result, you want to defer the plan until the end of the test. Typically, the best you can do is this: use Test::More 'no_plan'; use My::Tests; My::Tests->run; But when you do that, Test::Builder merely asserts that the number of tests you ran is the number of tests. Until now, there was no way of asserting that the number of tests you expected is the number of tests unless you do so before any tests have run. This fixes that problem. We generally require the latest stable versions of various test modules. Why? Because they have bug fixes and new features. You don't want to have to keep remembering them, so periodically we'll release new versions of Test::Most just for bug fixes. use ok We do not bundle Test::use::ok, though it's been requested. That's because use_ok is broken, but Test::use::ok is also subtly broken (and a touch harder to fix). See for more information. If you want to test if you can use a module, just use it. If it fails, the test will still fail and that's the desired result. People want more control over their test suites. Sometimes when you see hundreds of tests failing and whizzing by, you want the test suite to simply halt on the first failure. This module gives you that control. As for the reasons for the four test modules chosen, I ran code over a local copy of the CPAN to find the most commonly used testing modules. Here's the top twenty as of January 2010 (the numbers are different because we're now counting distributions which use a given module rather than simply the number of times a module is used). 1 Test::More 14111 2 Test 1736 3 Test::Exception 744 4 Test::Simple 331 5 Test::Pod 328 6 Test::Pod::Coverage 274 7 Test::Perl::Critic 248 8 Test::Base 228 9 Test::NoWarnings 155 10 Test::Distribution 142 11 Test::Kwalitee 138 12 Test::Deep 128 13 Test::Warn 127 14 Test::Differences 102 15 Test::Spelling 101 16 Test::MockObject 87 17 Test::Builder::Tester 84 18 Test::WWW::Mechanize::Catalyst 79 19 Test::UseAllModules 63 20 Test::YAML::Meta 61 Test::Most is number 24 on that list, if you're curious. See. The modules chosen seemed the best fit for what Test::Most is trying to do. As of 0.02, we've added Test::Warn by request. It's not in the top ten, but it's a great and useful module. Curtis Poe, <ovid at cpan.org> Please report any bugs or feature requests to bug-test-extended at rt.cpan.org, or through the web interface at. I will be notified, and then you'll automatically be notified of progress on your bug as I make changes. You can find documentation for this module with the perldoc command. perldoc Test::Most You can also look for information at: Sometimes you don't know the number of tests you will run when you use Test::More. The plan() function allows you to delay specifying the plan, but you must still call it before the tests are run. This is an error: use Test::More; my $tests = 0; foreach my $test ( my $count = run($test); # assumes tests are being run $tests += $count; } plan($tests); The way around this is typically to use 'no_plan' and when the tests are done, Test::Builder merely sets the plan to the number of tests run. We'd like for the programmer to specify this number instead of letting Test::Builder do it. However, Test::Builder internals are a bit difficult to work with, so we're delaying this feature. if ( $some_condition ) { skip $message, $num_tests; } else { # run those tests } That would be cleaner and I might add it if enough people want it. Because of how Perl handles arguments, and because diagnostics are not really part of the Test Anything Protocol, what actually happens internally is that we note that a test has failed and we throw an exception or bail out as soon as the next test is called (but before it runs). This means that its arguments are automatically evaluated before we can take action: use Test::Most qw<no_plan die>; ok $foo, 'Die if this fails'; ok factorial(123456), '... but wait a loooong time before you throw an exception'; Many thanks to perl-qa for arguing about this so much that I just went ahead and did it :) Thanks to Aristotle for suggesting a better way to die or bailout. Thanks to 'swillert' () for suggesting a better implementation of my "dumper explain" idea (). This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
http://search.cpan.org/~ovid/Test-Most/lib/Test/Most.pm
CC-MAIN-2016-18
en
refinedweb
I decided not to use an array and go with a so called easier way. here is the question again "write a program to read in 3 doubles then call 2 functions get_sum and get_big. The functions get passed the 3 doubles and return, respectively, the sum and the biggest, and print them out." Here is my code, please take a look at it and tell me what is wrong, what i need to change etc. I know it is sloppy, i'm getting there though. #include <stdio.h> double get_sum(double, double , double, double); void double get_big(double, double, double, double); main() { double x, y, z, biggest; printf ("please enter x y and z"\n) scanf ("%d %d %d", &x, &y, &z); get_sum(x, y, z, s); printf("the sum is %f", s); double get_big(x, y, z); } double get_sum(double a, double b, double sum) { double sum; sum=a + b +c return sum; } void double get_big(double c, double d, double e) { if (c > d & c > e) printf ("%f is the biggest\n",c); if (d > c & d> e) printf ("%f is the biggest\n",d); else printf("%d is the biggest\n",e); return; }
http://cboard.cprogramming.com/c-programming/18064-second-version-old-problem-help.html
CC-MAIN-2016-18
en
refinedweb
Reviving an old thread about the following construct. foo = mumble if COND foo = blurgle endif Tom> I've long thought that we should, eventually, support the Tom> latter use. It seems to have a clearly defined meaning. Tom> And it is even useful in some situations. For instance, Tom> suppose in a very large project you want to `include' some Tom> boilerplate. Then you might conditionally override some Tom> value or another in a particular Makefile.am. I think we have three choices: 1. Like with "standard" Makefile assignments, the second definition of `foo' overrides the first one. So `foo' is undefined when COND is false. 2. The second definition of `foo' _partially_ overrides the first one. Yielding a definition equivalent to if COND foo = blurgle else foo = mumble endif 3. This construct is ill-formed and should be diagnosed. #3 is what Automake 1.7 assumes; it seems you want #2; and #1 doesn't make any sense (just consider if COND a = A else a = B endif ). Let's consider another snippet. if COND1 foo = mumble else foo = mumble endif if COND2 foo = blurgle endif At a first glance, this looks equivalent to the previous construct. However Automake 1.7 isn't aware of this and produces the following Makefile fragment, without the slightest warning: @address@hidden = blurgle @address@hidden = mumble @address@hidden = mumble (The order of definitions in the output doesn't match the order of definitions in the input, because Automake doesn't keep track of this sort of things.) IMO this is a bug. Either we should diagnose an error (#3), or we should produce something sensible (#2). If we take road #2, then the third `foo' definition should partially override previous definitions of `foo', in the COND2 condition; as if the user had written if COND2 foo = blurgle else if COND1 foo = mumble else foo = mumble endif endif This seems to suggest a way to implement #2 easily. When defining a variable in condition `COND', append `!COND' to all previous definitions' conditions. This also works for things like foo = mumble foo = blurgle which would be interpreted as foo = blurgle if FALSE foo = mumble endif Something that I don't know how to handle is the tracking of location for variables whose conditions have been changed as a side effect of a redefinition. It will be confusing if Automake diagnoses something about a variable `defined in condition COND1_TRUE COND2_FALSE' and then point the user to the place where the variable was simply defined in COND1_TRUE (should we explain that COND2_FALSE was added because the variable was later redefined in COND2_TRUE? how?) Another question is when should this construct be allowed? I agree #2 is useful when the definitions and redefinitions are in different files. IMO #3 would be helpful when the (re)definitions are in the same file. I can't think of any reason why a variable would be redefined in the same file, except user's inattention. Opinions? -- Alexandre Duret-Lutz
http://lists.gnu.org/archive/html/automake/2002-10/msg00018.html
CC-MAIN-2016-18
en
refinedweb
csFixed24 Class Reference [Geometry utilities] Encapsulation of a 8.24 fixed-point number. More... #include <csgeom/fixed.h> Detailed Description Encapsulation of a 8.24 fixed-point number. Definition at line 120 of file fixed.h. Member Function Documentation Friends And Related Function Documentation The documentation for this class was generated from the following file: Generated for Crystal Space 2.1 by doxygen 1.6.1
http://www.crystalspace3d.org/docs/online/api/classcsFixed24.html
CC-MAIN-2016-18
en
refinedweb
i tables in oracle 10g tables in oracle 10g sir i have created a table in oracle 10g,i want to know where this table is stored and how can i move this table to another pc and insert values android connection to database oracle 10g android connection to database oracle 10g Hello, How i can connect my android application to my oracle 10g database Problem with url in oracle Problem with url in oracle hi i m having trouble with the following code. when i run it i get the error as invalid oracle url specified. i am using tomcat 6 and oracle 10g release2 10.2.0.3. heres the code <html> < Unable to connect servet, jsp to oracle 10g database.. Unable to retrieve data.. . If I use it also, unable to connect to backend oracle database. Let me...Unable to connect servet, jsp to oracle 10g database.. Unable to retrieve data...=oracle Let me know what is the duser and dpass.. ? I have created a new user why to use hibernet vs jdbc why to use hibernet vs jdbc plz send me the reply Hi... specific 2) Hibernate is a set of Objects so there is no use of sql whereas in JDBC there is a use of SQL. 3) As Hibernate Supports two level of cache, you error in uploading image from jsp to oracle 10g database error in uploading image from jsp to oracle 10g database java.sql.SQLException: ORA-01460: unimplemented or unreasonable conversion requested when i try to insert into the image into the database i got the above error Your hibernet tutorial is not working - Hibernate Your hibernet tutorial is not working Hi, I am learning hibernate from your tutorial. when i execute the 1st hibernate example at following location it working Insert or retrieve image into oracle 10g by using java Insert or retrieve image into oracle 10g by using java How can i insert or retrieve image into oracle10g using java plz i need it urgently,need guidance to do this plz Hibernet - Hibernate Hibernet Hi , m pratik 4rm gujarat. Currently my clg seminar on hibernet tech. So, wnt the syntax for insert, retrive, delet data in database through hibernet. Plz what is difference b/w oracle 8i, 9i and 10g what is difference b/w oracle 8i, 9i and 10g what are the difference between oracle 8i,9i and 10g? I am not considering the versions and their supported os, in the interview point of view Oracle Tutorial . These are as follows : Oracle 10g, Oracle 9i, Oracle 8i, Oracle 7, Oracle v6, Oracle V5...Oracle Tutorial In this section we will discuss about the Oracle Database. This tutorial will describe you about the Oracle Database. This section instruction to install oracle 10g in linux.... instruction to install oracle 10g in linux.... how to install oracle 10g in linux Error in jdeveloper 10G [Oracle Application Server Containers for J2EE 10g (10.1.2.0.0...Error in jdeveloper 10G 500 Internal Server Error...._jspService(Address.jsp:16) [SRC:/Address.jsp] at com.orionserver[Oracle hibernet code - Hibernate hibernet code is there any good tutorial site . so that hibernet cal learn easily with eclipse ? if u know that then send the link to my mail ID : keshab78@gmail.com i have problem with classnofounderror i have problem with classnofounderror import java.sql.*; public... = DriverManager.getConnection("jdbc:oracle:thin:@localhost:8080:oracle", "System", "manager... student"); while(rs.next()) System.out.println Oracle Books Edge: "I enjoyed this book from the beginning 'til the end. The Oracle Edge... Oracle Application Server 10g Administrator Exam Guide A powerful... on the new Oracle 10g Application Server administration exam. Certification why to use hibernet as a dataacces layer why to use hibernet as a dataacces layer plz give me the reply storing date from html form to oracle 10g using servlet storing date from html form to oracle 10g using servlet i have... this date month year from html form into oracle 10g database where i have...","ors"); // code for inserting date into oracle 10g in the format of oracle Hibernate- Oracle connection - Hibernate Hibernate- Oracle connection In Eclipse I tried Windows -->... on databaseconnection --> New Oracle Added ojdbc14 Jar file path UID... to make a connection to oracle DB from eclipse. Thanks index MySQL Database Tutorials Oracle Database Tutorials Structured oracle connectivity problem with netbeans oracle connectivity problem with netbeans sir I am using oracle.... for this after adding new driver(ojdbc6.jar) in services tab I got connectivity with oracle... of simple connectivity with oracle in this I am giving driver details i got an exception while accept to a jsp i got an exception while accept to a jsp type Exception report... in a file.later i changed it to ANSII problem is resolved...(index_jsp.java:74) org.apache.jasper.runtime.HttpJspBase.service Oracle 10g Express Edition - IDE Questions Oracle 10g Express Edition Do we need internet connection to work with oracle10g Express Edition ex. connect to Oracle - Java Beginners ex. connect to Oracle dear sir, I want to ask how to connect java to oracle, please give me a details tutorial with example code how to connect to oracle. what software i must to use? thank's Hi Problem with open connection - Hibernate Problem with open connection Hi Team, I am running one hibernate application and the database is ORACLE 10g.I am getting the below error.I... is: oracle.jdbc.driver.OracleDriver jdbc:oracle:thin://localhost:1522/xe> Hibernate @ManyToOne persisting problem - Hibernate (); (Constructor, getters, setters) because I use "hibernate.hbm2ddl.auto" all tables...Hibernate @ManyToOne persisting problem hello, In my apllication... is using a join table. I followed examples I found on internet and at books. So I code save word file in 10g database - SQL code save word file in 10g database I am not having any idea to save the whole word document in Oracle 10g. Please help me. Hi Friend... = DriverManager.getConnection ("jdbc:oracle:thin:@localhost:3306:test","root", "root Oracle Tutorial 10g Oracle 11g Oracle 12c - (June 2013) Resources: Oracle Tutorial...In this section we will discuss in detail about every aspect of Oracle... beginners and professionals learn different aspects of Oracle. Body: Oracle can i use big query in hibernate? can i use big query in hibernate? can i use big query in hibernate how do i solve this problem? how do i solve this problem? Define a class named Circle... the surface area of the Circle object. Use the constant PI value for the calculation... object is circumference. Use the constant PI value for the calculation. Write Oracle - SQL Oracle I have one .dmp file. I want to import this file into oracle 9i and oracle 10g. wat is procedure and stepts. thanks u Open... D:\oracle\ora90\BIN\imp type this command if your are using oracle 9i How to create a databas locally in our system using Oracle 10g???? How to create a databas locally in our system using Oracle 10g???? How to create a databas locally in our system using Oracle 10g Oracle Database connectivity probem Foundation\Tomcat 6.0\lib\servlet-api.jar;E:\oracle\ora81\jdbc\lib\classes111.zip I...Oracle Database connectivity probem hi Below is the code of oracle database connectivity, when i compile it, it will show the error jsp-oracle validation - JDBC . --------------------------------------- logoutaction.jsp --------------------------------------- oracle 10g...jsp-oracle validation Dear friends, my validation not takes place.... Pl. as this code is very urgent and I tried a lot but unable to find out Hibernate error - JDBC Oracle Database error String query11 = "SELECT product_code... = stmt.executeQuery(query11); while(rs11.next()){ product_code[j...); } rs11.close(); This is my code. I want to save Result set values in array wt are the components in the hibernet ; Hi Friend, Components of Hibernate a) Hibernate Core b) Hibernate Annotations c) Hibernate Entity Manager d) Hibernate Shards e) Hibernate Validator f) Hibernate Search facilitates For more information oracle oracle sir now am doing one project , my frond end is vb and backend is oracle. so 1> how can i store the image in my field 2> how can i back up the table into .txt file JSP-Oracle connectivity JSP-Oracle connectivity I have created a "dynamic web project" mainly with jsp files in eclipse and now, want to connect with oracle 10g, so how can I proceed for the database connection HOW TO I CHANGE THE SWITCH TO IF ELSE OR DO WHILE OR WHILE DO FOR THIS CODING HOW TO I CHANGE THE SWITCH TO IF ELSE OR DO WHILE OR WHILE DO... to the list"); System.out.println("| b. Add an element at specified index.... Replace the elements at a specified index"); System.out.println("| h Dialect in Hibernate Dialect in Hibernate In this article I will list down all the dialects... can use any supported database with the it. Hibernate is using Dialects... Oracle 9i org.hibernate.dialect.Oracle9iDialect Oracle 10g jsp page connectivity with oracle - SQL connectivity jsp with oracle. Please send the code for solving problem. thanks ...: a) If you are using oracle oci driver,you have to use: Connection connection... and password. b) If you are using oracle thin driver,you have to use: Connection Tutorial of various databases To use the different databases in hibernate... Oracle 9i org.hibernate.dialect.Oracle9iDialect Oracle 10g...Hibernate Tutorial This section contains the various aspects of Hibernate While and do-while condition is true. To write a while statement use the following form: while...; }while (i < 5); } }  ... While and do-while   While and do-while While and do-while  ... of nested classes i) Static classes ii) Inner classes ( Non-static... { ... } class InnerClass { ... } } i) Static Nested Classes Hibernate - Hibernate Hibernate Hai this is jagadhish while running a Hibernate application i got the exception like this what is the solution for this plz inform me... project. I think this will solve your problem. Thanks creating index for xml files - XML creating index for xml files I would like to create an index file... to use to get all files from a directory and read them and create index. And can you tell me if there any example or reference I can use. Sorry that my Hibernate - Hibernate Hibernate Hai this is jagadhish, while executing a program in Hibernate in Tomcat i got an error like this HTTP Status 500.... Hopefully this will solve your problem. Regards Deepak Kumar The while and do a particular condition is true. To write a while statement use the following form... + 1; }while (i < 5... While and do-while   Java I/O problem Java I/O problem Write a Java application that prompts the user.... The program should use the FileWriter class and an appropriate processing stream... then be saved to a file named studentData. The program should use the FileWriter: problem while hosting application - JSP-Servlet problem while hosting application hi , when i upload track.war file into my local tomcat server,it works properly whereas the same track .war file... privLabel = this.initPrivateLabelProperties(request); when i used my local tomcat Hibernate Search and modification of data: The search index in Hibernate is automatically updated when.... Direct use of Lucene API: Hibernate provides the facility to use Lucene API...: Hibernate search automatically updates and manages the Lucene Index by using Eclipse hibernate problem Eclipse hibernate problem hie..i've just started a basic hibernate... and created a pojo class. I have created HibernateUtil class but I'm getting...;Hibernate Eclipse Integration Hibernate Tutorials Java while coding - Java Beginners Java while coding Java loop and function coding question? How can I... * 19; I'd to use a method (function) to do the computation I'd like..., Code to help in solving the problem : import java.io.*; class Computation hibernate hibernate I want to learn how to use hibernate framework in web application . for storing database in our application How to get data from Oracle database using JSP How to get data from Oracle database using JSP hello i have a simple problem in jsp in the sense to get data from the database like oracle . I have... what i have done is that ( i have a simple problem in jsp in the sense to get How to save array of UserType in Oracle using Hibernate. How to save array of UserType in Oracle using Hibernate. Hi How to save array of UserType in Oracle using Hibernate. CREATE OR REPLACE TYPE...]; for(int i=0;i<objs.length;i++) { addr[i Advertisements If you enjoyed this post then why not add us on Google+? Add us to your Circles
http://www.roseindia.net/tutorialhelp/comment/44484
CC-MAIN-2016-18
en
refinedweb
Recently I received an email from Peru. An ADF developer from Peru was facing a challenge with ADF. In short: ‘the upload of a (large) file should be followed by a potentially long running job. Ideally, the browser would not freeze while the uploaded file is processed and on top of that it would be great to report the progress of the job to the user’. I like this kind of challenge, especially since I consider both asynchronous processing and server push two of my areas of interest. So I took on the challenge and tried a quickly put together an application that demonstrates this behavior. This article discusses how I used standard Java concurrency functionality to take the job off line (in a scheduled, background job) and how I leveraged Active Data Service in ADF Faces to have the background job report its progress through an active bean and server push to the browser. After the user kicks off the job by pushing a button: the user will be in control again (synchronous but background parrtial request completes) and and will also be informed on the job’s progress through the server push: In this example, the job progress in steps of 10% that take between 2 and 4 seconds. As soon as a step is completed, the client is updated and the user thus informed. The outline of the solution can be described as follows: when the user pushes a button, an action listener in a (request scope) managed bean is triggered (in a partial page request, although that does not really matter). This beans spawns a second thread that will do the background processing of the job. The actionListener is implemented like this: public void runBigJob(ActionEvent ae) { // start job in parallel thread // (the activeBean is available as the object to inform of the progress of the big job) ScheduledExecutorService ses = Executors.newScheduledThreadPool(1); ses.schedule ( this , 3 // let's wait one second before starting the job , TimeUnit.SECONDS ); // then complete the synchronous request } The managed bean jobCoordinator is injected with a reference to the activeBean. This activeBean implements the BaseActiveDataModel – a class dictated by ADF Active Data Service. Values passed to this bean can be pushed to the client. The bean configuration from the faces-config.xml file: <managed-bean> <managed-bean-name>activeBean</managed-bean-name> <managed-bean-class>nl.amis.hrm.view.ActiveBean</managed-bean-class> <managed-bean-scope>session</managed-bean-scope> </managed-bean> <managed-bean> <managed-bean-name>jobCoordinator</managed-bean-name> <managed-bean-class>nl.amis.hrm.view.LongRunningJobCoordinator</managed-bean-class> <managed-bean-scope>request</managed-bean-scope> <managed-property> <property-name>activeBean</property-name> <property-class>nl.amis.hrm.view.ActiveBean</property-class> <value>#{activeBean}</value> </managed-property> </managed-bean> The reference to the activeBean is passed from the jobCoordinator to the second thread that is scheduled to process the job in the background. This thread will be able to directly update activeBean (and indirectly push to the client). Here is the Java code that represents the background job and informs the activeBean: public void run() { activeBean.triggerDataUpdate("Start - 0 %"); // normally you would do the real work such as processing the big file here for (int i=0;i<10;i++) { // sleep between 2 and 4 seconds try { Thread.sleep(((Double)((2+2* Math.random())* 1000)).longValue()); } catch (InterruptedException e) { } activeBean.triggerDataUpdate((i+1)*10+" %"); } activeBean.triggerDataUpdate("Job Done - 100 %"); } When the thread has been scheduled, the synchronous HTTP request cycle that started with the button push in the browser is now complete. The user can continue to work ; the job is in progress (or soon will be) in a separate thread in the Application Server’s JVM. This thread has its own reference to the activeBean and can inform this bean of the progress it makes on the job. The activeBean is referenced from an activeOutputText component. <af:activeOutputText <af:clientListener </af:activeOutputText> Note: the clientListener is triggered whenever a new value is pushed to the activeOutputText. It invokes a JavaScript function that uses the pushed value to update various client side components. That is why the progress is reported in various locations in the client. Because the activeBean implements the ActiveDataModel that allows for push, the component can receive push messages from the bean. public class ActiveBean extends BaseActiveDataMode @PostConstruct public void setupActiveData() { ActiveModelContext context = ActiveModelContext.getActiveModelConte Object[] keyPath = new String[0]; context.addActiveModelInfo(this, keyPath, } public void triggerDataUpdate(String message) counter.incrementAndGet(); ActiveDataUpdateEvent event = ActiveDataEventUtil.buildActiveDataUpd ( ActiveDataEntry.ChangeType.UPDATE , counter.get () , new String[0], null , new String[] { "state" } , new Object[] { message } ); fireActiveDataUpdate(event); } ..... Any update sent from the background thread to the activeBean is passed onwards to the activeOutputText component. When the job is complete, a final update is sent to the activeBean, just before the background process completes. This final message is also passed to the browser through the Push mechanism. The page looks as follows after the job has completed. Remember: after pressing the button to start the job, the user did not have to do anything in order to get the status updates pushed to the browser. Resources Download JDeveloper 11.1.1.4 application with this server push example: ProgressIndicator. Another allied question… Problem#1 I have to upload a XML file (say a SAP purchase order) and populate UI with the details. To achieve this I have coded a file upload mechanism and a JAXB Data control derived out of the XSD (which the uploaded XML bases on) and did a drag drop of the DC on a JSPX page as a master detail form (read only). Now, I can create a jaxb context, I can marshal the input xml and cast it to the root JAXB object  (lets say OrderRequestDocument).. I can populate the entire hierarchy from the root  (OrderRequestDocument.Order, OrderRequestDocument.BillingAddress) .  At this point, how can I update the view just by populating the root (from a view/pageflow/request scoped managed bean)?  I can always use EL and update the UI elements one by one like          ValueExpression idBinding = exprFactory.createValueExpression(elctx,”#{bindings.orderId.inputValue}”,Object.class); idBinding.setValue(elctx, getOfferDetailsResponse.getOrder().getOrderId());   but considering the number of fields this approach doesn’t seem feasible. Problem#2 I have followed the example code given in this post and it indeed works like a charm. However, something  is amiss when I try to get this to work in conjunction with the file upload+JAXB processing (as the long running job).  I want the  progress counter to start as soon as the valueChangeEvent is fired from the file dialog and to stop when the form is populated (last line of some method)… here the progress monitor panel starts printing an integer i.e the STATE and not the MESSAGE..  i see on the page that the counter increments but the data in the ‘message’ variable doesn’t appear.  It would really help if you share your thought. Hi Adi, It should indeed be possible to use the progressIndicator component. Simply manipulate that component via JavaScript, or, if that fails, send a server event and have the backend ppr the progressIndicator. I think the upload itself – the process of reading data from the local client and posting it over HTTP to the server – is not a process that we can track the progress of. However, as soon as the upload is done, the processing of the file is a candidate for reporting progress. I may be mistaken in that perhaps a large file, while it is received bit by bit, is already available for processing; I am not sure about that.Using the file upload component from Apache Commons (for example) it is possible to access the stream containing the file content that is being uploaded; this would open the door for reporting progress on receiving the file; note that this may interfere with the ADF way of uploading files – I am not sure about that. See Lucas Awesome post Lucas. I was wondering if a file upload status can be monitored with this approach. My requirement is something like – I’ll upload a XML–> populate a ADF form/Table (JAXB data control) .  Also wanted to check if  a af:progressindicator component can be with this push pattern instead of the conventional poll mechanism Hi Adi, I have same requirement now. Can you share your thought how u have implemented it? thanks. HI Lucus, I am having one questions regarding ADS Flow. In your example your having one variable with ( state) what about multiple variables. suppose i want to use ADS with af:table let's consider Employee and department. In Employee as we have department as a contained object so when we drag drop employee as a table . The table will have both employee and department binding as we are dragging employee all attributes from employee are accessible at row first level as #{row.firstname} and for department attribute it will create entry as #{row.employee.binding.departments.fepartmentName} as a entry. My question how we can update such a contained object when ADS. How exactly we can fire event for contained object?   Nice share.  Just curious the adf processIndicator & poll components can do same thing. What’s the benefit for the ActiveDataModel? Hi Lucus, Nice showcase! I’ve got a remark/question about the scope of the activeBean, it’s set to session. What happens when the jobs is taking a very long time (lets say an hour) and the user doesn’t wait until its finished and closes the browser. It the ActiveBean going to be GC’ed after session timeout or does it finish it’s job? (lets assume it should finish the job). Or is application scope necessary to achieve this? Also I’m a bit currious how this works under the hood. Long Ajax request, polling Ajax requests or another technique?
https://technology.amis.nl/2011/10/19/adf-faces-handle-task-in-background-process-and-show-real-time-progress-indicator-for-asynchronous-job-using-server-push-in-adf/
CC-MAIN-2016-18
en
refinedweb
In this tutorial we will learn how to get started with webapp2_extras.i18n. This module provides a complete collection of tools to localize and internationalize apps. Using it you can create applications adapted for different locales and timezones and with internationalized date, time, numbers, currencies and more. If you don’t have a package installer in your system yet (like pip or easy_install), install one. See Installing packages. The i18n module depends on two libraries: babel and pytz (or gaepytz). So before we start you must add the babel and pytz packages to your application directory (for App Engine) or install it in your virtual environment (for other servers). For App Engine, download babel and pytz and add those libraries to your app directory: For other servers, install those libraries in your system using pip. App Engine users also need babel installed, as we use the command line utility provided py it to extract and update message catalogs. This assumes a *nix environment: $ sudo pip install babel $ sudo pip install gaepytz Or, if you don’t have pip but have easy_install: $ sudo easy_install babel $ sudo easy_install gaepytz We need a directory inside our app to store a messages catalog extracted from templates and Python files. Create a directory named locale for this. If you want, later you can rename this directory the way you prefer and adapt the commands we describe below accordingly. If you do so, you must change the default i18n configuration to point to the right directory. The configuration is passed when you create an application, like this: config = {} config['webapp2_extras.i18n'] = { 'translations_path': 'path/to/my/locale/directory', } app = webapp2.WSGIApplication(config=config) If you use the default locale directory name, no configuration is needed. For the purposes of this tutorial we will create a very simple app with a single message to be translated. So create a new app and save this as main.py: import webapp2 from webapp2_extras import i18n class HelloWorldHandler(webapp2.RequestHandler): def get(self): # Set the requested locale. locale = self.request.GET.get('locale', 'en_US') i18n.get_i18n().set_locale(locale) message = i18n.gettext('Hello, world!') self.response.write(message) app = webapp2.WSGIApplication([ ('/', HelloWorldHandler), ], debug=True) def main(): app.run() if __name__ == '__main__': main() Any string that should be localized in your code and templates must be wrapped by the function webapp2_extras.i18n.gettext() (or the shortcut _()). Translated strings defined in module globals or class definitions should use webapp2_extras.i18n.lazy_gettext() instead, because we want translations to be dynamic – if we call gettext() when the module is imported we’ll set the value to a static translation for a given locale, and this is not what we want. lazy_gettext() solves this making the translation to be evaluated lazily, only when the string is used. We use the babel command line interface to extract, initialize, compile and update translations. Refer to Babel’s manual for a complete description of the command options. The extract command can extract not only messages from several template engines but also gettext() (from gettext) and its variants from Python files. Access your project directory using the command line and follow this quick how-to: 1. Extract all translations. We pass the current app directory to be scanned. This will create a messages.pot file in the locale directory with all translatable strings that were found: $ pybabel extract -o ./locale/messages.pot ./ You can also provide a extraction mapping file that configures how messages are extracted. If the configuration file is saved as babel.cfg, we point to it when extracting the messages: $ pybabel extract -F ./babel.cfg -o ./locale/messages.pot ./ 2. Initialize the directory for each locale that your app will support. This is done only once per locale. It will use the messages.pot file created on step 1. Here we initialize three translations, en_US, es_ES and pt_BR: $ pybabel init -l en_US -d ./locale -i ./locale/messages.pot $ pybabel init -l es_ES -d ./locale -i ./locale/messages.pot $ pybabel init -l pt_BR -d ./locale -i ./locale/messages.pot 3. Now the translation catalogs are created in the locale directory. Open each .po file and translate it. For the example above, we have only one message to translate: our Hello, world!. Open /locale/es_ES/LC_MESSAGES/messages.po and translate it to ¡Hola, mundo!. Open /locale/pt_BR/LC_MESSAGES/messages.po and translate it to Olá, mundo!. 4. After all locales are translated, compile them with this command: $ pybabel compile -f -d ./locale That’s it. When translations change, first repeat step 1 above. It will create a new .pot file with updated messages. Then update each locales: $ pybabel update -l en_US -d ./locale/ -i ./locale/messages.pot $ pybabel update -l es_ES -d ./locale/ -i ./locale/messages.pot $ pybabel update -l pt_BR -d ./locale/ -i ./locale/messages.pot After you translate the new strings to each language, repeat step 4, compiling the translations again. Start the development server pointing to the application you created for this tutorial and access the default language: Then try the Spanish version: And finally, try the Portuguese version: Voilà! Our tiny app is now available in three languages. The webapp2_extras.i18n module provides several other functionalities besides localization. You can use it to internationalize dates, currencies and numbers, and there are helpers to set the locale or timezone automatically for each request. Explore the API documentation to learn more.
http://webapp-improved.appspot.com/tutorials/i18n.html
CC-MAIN-2016-18
en
refinedweb
This is my first article on CodeProject, so be kind to me . Did you ever wonder how those loot systems work in today's MMO's like WoW, Rift, and many others or H&S games like Diablo?! if (new Random().Next(1,10) < 5) .... In the first part of this article I want to go into the theory of what it means, to develop a RDS. The second part will then bring all this to life. So if you want to see what it can do for you first, maybe you want to start with Part II and then go into the details here on Part I. Depends on your personal preferences, but expect to find some terms and methods in Part II that you will not fully understand without knowledge from this first part. Lets take a look at what we see happen in the game and then break it down to what that means in technical terms. I will concentrate on two games who almost everybody knows (WoW and Diablo) for my examples, so the chances are good you have a picture in mind when I describe things. Examples from WoW: When a (normal outdoor) mob dies in WoW, the loot looks like this: If you are lucky, the Axe might be blue or even epic colored. Requirements that can be seen from this drop: When you kill a boss in an instantiated dungeon (no matter whether this is a raid or a 5-man-instance), you have some guaranteed loot and some random loot: Requirements seen from this drop: Let's put this into some code properties. We will need a class that holds the "table". Let's name it RDSTable. This is, what in Gamer's terms is called a LootTable. Such a table will contain a list of items (or better: objects) that can drop. Without too much detail, we know, that we want to allow the developer to put virtually any item in such a table, so we need an interface. We pick the name IRDSObject for this. The next logical step is, to declare the contents of our table as IENumerable<IRDSObject> rdsContents; in our class. Now we can put any number of objects in a list to be picked. Not really hard so far, right? RDSTable IRDSObject IENumerable<IRDSObject> rdsContents; Ok, what do we need to know about such an IRDSObject? What is a must-have here? We know, it will have a probability to drop. We know, there's a count involved. We know, it's possible to have it drop always. But when it drops always... as an opposite... isn't it a good idea, to include a switch, to make the item a unique drop? That it can be part of the result only once? Yes, this idea is good. We add that. And to add flexibility, we will add an enabled property too, so we can "turn off" parts of our table contents on demand, without modifying the table itself. probability count always unique enabled At the moment, our interface will then look like this (I removed the comments here to keep the code more compact. In the downloadable source, the code is, of course, fully documented): All properties have the name prefix rds to have them together in IntelliSense and to avoid naming conflicts, as "Count" and "Enabled" are quite common names in c#. Feel free to rename them or work with explicit interface implementation. I personally prefer grouping-by-prefix (as all my textboxes start with txt, my listboxes with lst, etc). public interface IRDSObject { double rdsProbability { get; set; } // The chance for this item to drop bool rdsUnique { get; set; } // Only drops once per query bool rdsAlways { get; set; } // Drops always bool rdsEnabled { get; set; } // Can it drop now? } Why is the probability a double? Because it will be easier to modify it dynamically with multiplications and divisions, as an example, if the player character has modifiers (like the allmighty MagicFind in Diablo), the drop probability for each item can be multiplied dynamically at runtime with the MagicFind Bonus of the character. double Probability is neither a percent value nor an absolute thing. It's a value that describes the chance of being hit in relation to the other values of the table. Let me give you a simple example: Item 1 - Probability 1Item 2 - Probability 1Item 3 - Probability 1All three items will have the same chance to drop.Item 1 - Probability 10Item 2 - Probability 5Item 3 - Probability 1.5The sum of all is 16.5 - If you calculate 16 drops from this table, you will likely have 10 times Item 1, 5 times Item 2 and maybe the 16th will be one single Item 3. You get it? The result will just take a random value and loop through the contents of a table until it hits the first value that is bigger than the random value. This is the item hit. I will explain the exact functionality (and recursion) of the Result method later in this article. Ok, then lets take a look at our RDSTable class. If we start with that as an interface too, we can make any class become a RDSTable in our game project. We do not want to put too many design rules on the developer's shoulders. If he needs some of his own base classes to be RDS-enabled, then he shall be able to do so. Beside the contents of our RDSTable, we will of course need a result set. As we have seen in the examples above, it's more than one IRDSObject we expect, so the Result will be an IEnumerable<IRDSObject>, too. RDSTable IRDSObject IEnumerable<IRDSObject> And now is one of the clou ideas: What if IRDSTable derives from IRDSObject? Great! Now each entry in the contents of a RDSTable, can be another (sub)table! That's one of the Jackpots we hit here - we make it recursive! This allows us, to design "Theme" tables, say, we put all epic world drops in one table (and each epic item in this table has its own probability), all our rares in a second table, all greens in a third and all white in a fourth table. We then set up a "Master Table" that contains those four tables as sub-tables, and each of those sub tables has it. IRDSTable So, the first shot of IRDSTable will look like this: public interface IRDSTable : IRDSObject { int rdsCount { get; set; } // How many items shall drop from this table? IEnumerable<IRDSObject> rdsContents { get; } // The contents of the table IEnumerable<IRDSObject> rdsResult { get; } // The Result set } The Count is part of the IRDSTable interface and not of the IRDSObject, because we want to ask "How many entries of this table shall drop?" and not "How often does this one item drop?". In the WoW example above (the silkweave clothes), we could assume, all "cloth" items are together in one table, their drop probabilities calculated dynamically based on a monster level formula (silk drops between levels 20 and 30, while mageweave drops only from 31++) that simply sets all probabilities to zero for cloth types this monster level can not drop. After we put some known things into the interfaces, we now have a base where we can start thinking about details. We are still missing tons of functionality, we can not tell the system to drop "0 to 3 green items", we have no control on the items when they drop (i.e. they are "hit" by the result evaluation) and we do not have any possibilities to modify probabilities immediately before a result-calculation occurs. And of course we do not have control over the result set after it has been calculated. Another thing we miss, is something like the Gold drops. We can only drop objects that implement IRDSObject. But we don't have values. Lets start with this first, as it is really easy. We want to drop a value of any kind. "Any kind"? Well, generics jump onto the stage now. We add an IRDSValue<T> interface to our model, that will derive from IRDSObject too and that adds a T Value property. This is, where we can store our Gold amount in a result. IRDSValue<T> T Value public interface IRDSValue<T> : IRDSObject { T rdsValue { get; } } Now we can add integers, doubles, strings or any other object as "values" to our tables. This step is very important. We need to expand the IRDSObject interface with some more goodies. We want to be able to run over the probabilities of all items in the table before a result is calculated, we want to know, when an item is "hit" by the result calculator and maybe, we even want to have a chance to check the entire result set before it is returned to the caller. To do this, we add some events to the IRDSObject interface, that give us control over these things. I leave the comments on the events in this code snippet, they explain very well, when each of those events will happen. /// <summary> /// Occurs before all the probabilities of all items of the current RDSTable are summed up together. /// This is the moment to modify any settings immediately before a result is calculated. /// </summary> event EventHandler rdsPreResultEvaluation; /// <summary> /// Occurs when this RDSObject has been hit by the Result procedure. /// (This means, this object will be part of the result set). /// </summary> event EventHandler rdsHit; /// <summary> /// Occurs after the result has been calculated and the result set is complete, but before /// the RDSTable's Result method exits. /// </summary> event ResultEventHandler rdsPostResultEvaluation; void OnRDSPreResultEvaluation(EventArgs e); void OnRDSHit(EventArgs e); void OnRDSPostResultEvaluation(ResultEventArgs e); Just a little bit more theory, then we will finally see the code of the result calculation which will put all the pieces together. The interface hierarchy for the library is very simple and looks like this. The library contains a full implementation of all interfaces. They are all named as their Interfaces without the leading "I", so the RDSObject class implements IRDSObject, RDSTable -> IRDSTable, and so on. Take a look at the attached source codes and the constructors I made.The implementations are easy to read and straightforward.Key class in the library is the RDSTable class which contains the Result calculation implementation that is used by RDS. We will take a very close look at this core functionality in the following chapters. RDSObject IRDSTable When you use the RDS you do not have to implement those interfaces, just make your baseclass for your game objects and monsters derive from RDSObject and you will be able to add each of them to any result set. One open issue we have, is the "0 to 3 green items" functionality. I do not want the Count property to be randomized, I choose a better approach. Null values! We just create a class called RDSNullValue : RDSObject that can be added to each loot table and have its own probability. With this, we can easily solve this issue. We create the table for the green drops with a Count of 3 and just add a RDSNullValue to the table with a given probability to just return "nothing". This is how "0 to 3" is implemented. RDSNullValue : RDSObject RDSNullValue For simplicity, the table could look like this: Null - Probability 1Green Item - Probability 2So, in theory every third drop is a null drop - but of course we will have queries where all three hit a green item and we will have drops, where null is hit twice or even three times. You can very easily increase/decrease the null-chance by modifying the probabilities of either the green item or the null value. The RDSNullValue class is very very simple but solves a lot of problems because it allows us to drop "nothing" when we need it. RDSNullValue /// <summary> /// This is the default class for a "null" entry in a RDSTable. /// It just contains a value that is null (if added to a table of RDSValue objects), /// but is a class as well and can be checked via a "if (obj is RDSNullValue)..." construct /// </summary> public class RDSNullValue : RDSValue<object> { public RDSNullValue(double probability) : base(null, probability, false, false, true) { } } Oh this is a topic where you can write books about it. Computers are not able to create "real" random numbers and all that stuff... I do not want to go into too much detail about that philosophical discussion. It's right yes, they are not real random, but they are more or less unpredictable. Anyway, I decided to put that decision away from me, and I created a static class, the RDSRandomizer. By default, it just uses .Net's Random class. If you want to use the RNGCryptoServiceProvider class from the System.Security.Cryptography namespace, you may well do it. The RDSRandomizer class allows to exchange the randomizer used via the SetRandomizer() method. The only question you should ask yourself is: "Do I really need it?". No one can tell anyway in your running game, why or how close to the "epic item" the drop of a monster was. As long as you don't deal with real money gambling (like Poker Software or Casino Software)... in a "normal fun game", a standard randomizer is... hmm... Random enough. RDSRandomizer RNGCryptoServiceProvider System.Security.Cryptography SetRandomizer() To allow the developer to change the Randomizer used, the method accepts any class derived from .Net's Random class. Almost all methods of Random are virtual, because Microsoft had the same idea: People will maybe want to change this. So feel free to create your own Randomizer, as long as it derives from Random, you are fine, and you can replace my default implementation with the SetRandomizer() method. Random class RDSRandom has some methods that are useful in most games at any point, here is a quick overview of the methods: RDSRandom public static double GetDoubleValue(double max) // From 0.0 (incl) to max (excl) public static double GetDoubleValue(double min, double max) // From min (incl) to max (excl) public static int GetIntValue(int max) // From 0 (incl) to max (excl) public static int GetIntValue(int min, int max) // From min (incl) to max (excl) // Rolls a given number of dice with a given number of sides per dice. // Result contains as first entry the sum of the roll // and then all the dice values // Example: RollDice(2,6) rolls 2 6-sided dice and the result will look like this // {9, 5, 4} ... 9 is the sum, one rolled a 5, the second one a 4 public static IEnumerable<int> RollDice(int dicecount, int sidesperdice) // A simple method to check for any percent chance. // The value must be between 0.0 and 1.0, so a 10% chance is NOT "10", it's "0.10" public static bool IsPercentHit(double percent) With those few simple methods you can easily do most of the random stuff in a game that does not depend on RDSTables, with the great addition, that you might have replaced the default .net Randomizer with your own. RDSTables I think it is a good time now to explain, how result calculation will work in the implementation, before confusion gets too high. So I will just show what the Result actually does to help you with your imagination. I implemented the Result in a getter, yes I know, some of you will say, this is bad, but honestly, I really like that . Feel free to convert it to a method, if that fits your style better. The code of the result method is well documented, but I will add further explanations after the code. // Any unique drops are added here when they are hit. // Anything contained here can not drop a second time. private List<IRDSObject> uniquedrops = new List<IRDSObject>(); // Calculate the result public virtual IEnumerable<IRDSObject> rdsResult { get { // The return value, a list of hit objects List<IRDSObject> rv = new List<IRDSObject>(); uniquedrops = new List<IRDSObject>(); // Do the PreEvaluation on all objects contained in the current table // This is the moment where those objects might disable themselves. foreach (IRDSObject o in mcontents) o.OnRDSPreResultEvaluation(EventArgs.Empty); // Add all the objects that are hit "Always" to the result // Those objects are really added always, no matter what "Count" // is set in the table! If there are 5 objects "always", those 5 will // drop, even if the count says only 3. foreach (IRDSObject o in mcontents.Where(e => e.rdsAlways && e.rdsEnabled)) AddToResult(rv, o); // Now calculate the real dropcount, this is the table's count minus the // number of Always-drops. // It is possible, that the remaining drops go below zero, in which case // no other objects will be added to the result here. int alwayscnt = mcontents.Count(e => e.rdsAlways && e.rdsEnabled); int realdropcnt = rdsCount - alwayscnt; // Continue only, if there is a Count left to be processed if (realdropcnt > 0) { for (int dropcount = 0; dropcount < realdropcnt; dropcount++) { // Find the objects, that can be hit now // This is all objects, that are Enabled and that have not already been added through the Always flag IEnumerable<IRDSObject> dropables = mcontents.Where(e => e.rdsEnabled && !e.rdsAlways); // This is the magic random number that will decide, which object is hit now double hitvalue = RDSRandom.GetDoubleValue(dropables.Sum(e => e.rdsProbability)); // Find out in a loop which object's probability hits the random value... double runningvalue = 0; foreach (IRDSObject o in dropables) { // Count up until we find the first item that exceeds the hitvalue... runningvalue += o.rdsProbability; if (hitvalue < runningvalue) { // ...and the oscar goes too... AddToResult(rv, o); break; } } } } // Now give all objects in the result set the chance to interact with // the other objects in the result set. ResultEventArgs rea = new ResultEventArgs(rv); foreach (IRDSObject o in rv) o.OnRDSPostResultEvaluation(rea); // Return the set now return rv; } } Step-by-step explanation: rdsUnique = true OnRDSPreResultEvaluation rdsAlways = true Count = 5 rdsAlways = true rdsEnabled = true runningvalue hitvalue OnRDSHit AddToResult OnRDSPostResultEvaluation AddToResult does some key action on all this: rdsUnique = true RDSCreateableObject private void AddToResult(List<IRDSObject> rv, IRDSObject o) { if (!o.rdsUnique || !uniquedrops.Contains(o)) { if (o.rdsUnique) uniquedrops.Add(o); if (!(o is RDSNullValue)) { if (o is IRDSTable) { rv.AddRange(((IRDSTable)o).rdsResult); } else { // INSTANCECHECK // Check if the object to add implements IRDSObjectCreator. // If it does, call the CreateInstance() method and add its return value // to the result set. If it does not, add the object o directly. IRDSObject adder = o; if (o is IRDSObjectCreator) adder = ((IRDSObjectCreator)o).rdsCreateInstance(); rv.Add(adder); o.OnRDSHit(EventArgs.Empty); } } else o.OnRDSHit(EventArgs.Empty); } } Step-by-step: if (!unique || !contained)... In the next chapter I explain the IRDSObjectCreator interface which is a very important of the system. IRDSObjectCreator As the RDSNullValue can be hit, I decided to fire the OnRDSHit event on the NullValue object too, even when in most cases, the default null value will be used, but it allows you to derive your own null value and even can react on it when hit. Think of disabling something in your game, when any dropchance xy results in a null-value, speak it as "react on something that does not happen". OnRDSHit NullValue This is one very important thing. You add references to your tables. So if you query a table multiple times, there are always the same references returned in the result set. This is nothing critical when you drop Gold or other dead things. But it is critical, when you drop something living, like a Monster or a Map segment. When all dropped monsters have the same reference, we make it easy for the hero of our game. If he kills one of them, they all die immediately . So we need a new instance of each of the objects when they drop. This is where this interface (or the RDSCreatableObject class which implements it) comes into play. RDSCreatableObject It offers only one single method: CreateInstance(). This method is of course virtual, so it can (and should) be overwritten. By default it just returns a new() of the default constructor of the type of the object it is. CreateInstance() new() Look at the code of RDSCreateableObject for a better understanding: /// <summary> /// This class is a special derived version of an RDSObject. /// It implements the IRDSObjectCreator interface, which can be used to create custom instances of classes /// when they are hit by the random engine. /// The RDSTable class checks for this interface before a result is added to the result set. /// If it is implemented, this object's CreateInstance method is called, and with this tweak it is possible /// to enter completely new instances into the result set at the moment they are hit. /// </summary> public class RDSCreatableObject : RDSObject, IRDSObjectCreator { /// <summary> /// Creates an instance of the object where this method is implemented in. /// Only paramaterless constructors are supported in the base implementation. /// Override (without calling base.CreateInstance()) to instanciate more complex constructors. /// </summary> /// <returns>A new instance of an object of the type where this method is implemented</returns> public virtual IRDSObject rdsCreateInstance() { return (IRDSObject)Activator.CreateInstance(this.GetType()); } } If you need anything else than the default constructor, you should override this method. Now you have seen all the classes and interfaces that are part of the RDS. The object model is very simple too, it looks like this: And now, read Part II of this article, which concentrates on using this library with some nice examples of random maps, monster spawns, item loots and even random events happening during the runtime of a game. We created a RDS that allows us to do these things: All we need to have fun while making and playing our games is there. The only thing you have not seen so far is, how that all comes to live. Fortunately there is a Part II, which will exactly do that! Check it out! (Editors, please be so kind to add a link to Part II here! Thank you!) This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL) This is my first article on CodeProject, so please be kind to me Smile | <img src= " src="" /> . General News Suggestion Question Bug Answer Joke Praise Rant Admin Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages.
http://www.codeproject.com/script/Articles/ArticleVersion.aspx?aid=420046&av=607426
CC-MAIN-2016-18
en
refinedweb
A Button Graphicsitem for use in Stellarium's graphic widgets. More... #include <StelGuiItems.hpp> A Button Graphicsitem for use in Stellarium's graphic widgets. Definition at line 57 of file StelGuiItems.hpp. Button states. Definition at line 86 of file StelGuiItems.hpp. Constructor. Constructor. Get the width of the button image. The width is based on pixOn. Definition at line 93 of file StelGuiItems.hpp. Emitted when the hover state change. Get whether the button is checked. Definition at line 89 of file StelGuiItems.hpp. Transform the pixmap so that it look red for night vision mode. Set the background pixmap of the button. A variant for night vision mode (pixBackgroundRed) is automatically generated from the new background. set whether the button is checked Set the button opacity. Definition at line 96 of file StelGuiItems.hpp. Activate red mode for this button, i.e. will reduce the non red color component of the icon. Definition at line 99 of file StelGuiItems.hpp. Triggered when the button state changes. Triggered when the button state changes.
http://stellarium.org/doc/0.12.0/classStelButton.html
CC-MAIN-2016-18
en
refinedweb
Combinatorics has many applications within computer science for solving complex problems. However, it is under-represented in libraries since there is little application of Combinatorics in business applications. Fortunately, the science behind it has been studied by mathematicians for centuries, and is well understood and well documented. However, mathematicians are focused on how many elements will exist within a Combinatorics problem, and have little interest in actually going through the work of creating those lists. Enter computer science to actually construct these massive collections. The C++ Standard Template Library (STL) brought us the very useful algorithm next_permutation, which generates permutations of lists using iterators. However, most languages, including C#, do not have built in libraries for the generation of these lists. To fill this gap, a lot of work has been done in different languages focused on permutations, including a host of articles at the CodeProject. Even though so much quality work has already been done, I still found myself wanting for additional capabilities. In particular, a single solution that fulfils all of the following self-imposed requirements: next_permutation foreach If you're new to Combinatorics, then many of these requirements may not make sense. The Background section below includes a complete overview of the combinatorial concepts, including samples and how to calculate the size of the output sets. The Using the Code section follows up with the classes provided for each collection, and describes the features provided with these classes; this section contains everything you need to know to use the enclosed classes. Next, the Algorithm and Performance section discusses some of the options for implementation that were considered, and explains some of the major design decisions. Finally, the Sample section explains the small sample application included to demonstrate the use and need for Variations. There are two common combinatorial concepts that are taught in every probability course. These are permutations and combinations. There is a lesser known collection known as a variation, which adapts features from both permutations and combinations. In addition, there are variants of each of these three which involve introducing repetition to the input or the output. These collections with repetition are also typically glossed over in introductory courses. So, the complete list of combinatorial collections is: Permutations deal with the ordering of a set of items, for example, how many ways a deck of 52 cards can be shuffled. Combinations deal subsets of a set of items, for example, how many 5 card poker hands can be dealt from a deck of 52 cards. In both cases, each card in the deck or in the hand is unique, so repetition is not a factor. However, problems do arise where repetition does occur in the input and/or output. For these cases, the repetition versions allow us more options in constructing our output sets. Variations are used when not only the subset provided by combinations is relevant but also the ordering within that subset. Each of these is covered below. Permutations are all possible orderings of a given input set. Each ordering of the input is called a permutation. When each item in the input set is different, there is only one way to generate the permutations. However, when two or more items in the set are the same, two different permutation sets are possible. These are called Permutations and Permutations with Repetition. Standard permutations simply provide every single ordering of the input set: Permutations of {A B C}: {A B C}, {A C B}, {B A C}, {B C A}, {C A B}, {C B A} The number of Permutations can be easily shown [2] to be P(n) = n!, where n is the number of items. In the above example, the input set contains 3 items, and the size is 3! = 6. This means that the number of permutations grows exponentially with n. Even a small n can create massive numbers of Permutations; for example, the number of ways to randomly shuffle a deck of cards is 52! or approximately 8.1E67. Permutations with Repetition sets give allowance for repetitive items in the input set that reduce the number of permutations: Permutations with Repetition of the set {A A B}: {A A B}, {A B A}, {B A A} The number of Permutations with Repetition is not as large, being reduced by the number and count of repetitive items in the input set. For each set of m identical items, the overall count is reduced by m!. In the above example, the input set contains 3 items with one subset of 2 identical items, the count is 3! / 2! = 6 / 2 = 3. The idea behind the count is easier than the formula since the formula requires the product of each repetitive set of size ri. The total size is Pr(n) = n! / Π(ri!) (where Π is the product operator). All of the collating and calculating is handled for us using the Permutation.Count property. Permutation.Count The code library accompanying this article will determine, based on the input set, which type of permutation to use. Alternately, a type may be supplied to determine which permutation type to use. Combinations are subsets of a given size taken from a given input set. The size of the set is known as the Upper Index (n) and the size of the subset is known as the Lower Index (k). When counting the number of combinations, the terminology is generally "n choose k", and is known as the Binomial Coefficient [3]. Unlike permutations, combinations do not have any order in the output set. Like permutations, they do have two generation methods based on the repeating of output items. These are called Combinations and Combinations with Repetition. Combinations can be thought of as throwing a set of n dominos into a hat and then retrieving k of them. Each domino can only be chosen once, and the order that they were fished out of the hat is irrelevant. In a similar fashion, (n = 100) Scrabble tiles can be thrown into a bag, and the first player will select (k = 7) tiles. However, there are 9 A's in the bag, and selecting {A A A A A A A} would be a valid, although unlikely, draw. Since there are only 6 Ns in the bag, it is not possible to draw 7 Ns. As such, the values of the tiles are ignored for the Combinations, this differs from Permutations. Combinations of {A B C D} choose 2: {A B}, {A C}, {A D}, {B C}, {B D}, {C D} The number of outputs in this particular example is the Binomial Coefficient. It is calculated as n! / ( k! * (n - k)! ) [4]. The Scrabble example above would give us 100! / (7! * 93!) = 16,007,560,800. Note that the answer to 100! is much larger than the answer as most of its magnitude was cancelled out by 93!. Combinations with Repetition are determined by looking at a set of items, and selecting a subset while allowing repetition. For example, choose a tile from the scrabble bag above, write down the letter, and return the letter to the bag. Perform this 7 times to generate a sample. In this case, you could "draw" 7 Ns, just with a lower probability than drawing 7 As. As such, Combinations with Repetition are a superset of Combinations, as seen in the following example: Combinations with Repetition of {A B C D} choose 2: {A A}, {A B}, {A C}, {A D}, {B B}, {B C}, {B D}, {C C}, {C D}, {D D} Combinations are used in a large number of game type problems. For example, a deck of (n = 52) cards of which a (k = 5) card hand is drawn. Using the set of all combinations would allow for a brute force mechanism of solving statistical questions about poker hands. Variations combine features of combinations and permutations, they are the set of all ordered combinations of items to make up a subset. Like combinations, the size of the set is known as the Upper Index (n) and the size of the subset is known as the Lower Index (k). And, the generation of variations can be based on the repeating of output items. These are called Variations and Variations with Repetition. Variations are permutations of combinations. That is, a variation of a set of n items choose k, is the ordered subsets of size k. For example: Variations of {A B C} choose 2: {A B}, {A C}, {B A}, {B C}, {C A}, {C B} The number of outputs in this particular example is similar to the number of combinations of n choose k divided by the permutations of k. It can be calculated as V(n, k) = C(n, k) * P(k) = (n! / ( k! * (n - k)! )) * k! = n! / (n - k)!. The sample project included uses variations to select digits to be substituted for letters in a simple cryptographic word problem. Variations with Repetition expands on the set of variations, and allows items to be reused. Since each item can be re-used, this allows for variations to include all items in the output to be a single item from the input. For example: Variations with Repetition of {A B C} choose 2: {A A}, {A B}, {A C}, {B A}, {B B}, {B C}, {C A}, {C B}, {C C} The size of the output set for variations is easier to compute since factorials are not involved. Each of the p positions can be filled from any of the n positions in the input set. The first item is one of n items, the second is also one of n, and the pth is also one of n. This gives us Vr(n, k) = nk total variations of n items choose k. There are three class entry points in the code library, Permutations, Combinations, and Variations. Each of these is a generic class based on the type T of the items in the set. Each of these also generates a collection of collections based on the input set, making each a meta-collection. For ease of use, the classes implement IEnumerable<T>, which returns an IList<T>. However, this generic code is designed to make the consumption of each class easy. For example, using Permutations: Permutations Combinations Variations T IEnumerable<T> IList<T> char[] inputSet = {'A', 'B', 'C'}; Permutations<char> permutations = new Permutations<char>(inputSet); foreach(IList<char> p in permutations) { Console.WriteLine(String.Format("{{{0} {1} {2}}}", p[0], p[1], p[2])); } will generate: {A B C} {A C B} {B A C} {B C A} {C A B} {C B A} Using Combinations and Variations is similar, but the Lower Index must also be specified. (The Upper Index is derived from the size of the input set.) For example: char[] inputSet = { 'A', 'B', 'C', 'D' }; Combinations<char> combinations = new Combinations<char>(inputSet, 3); string cformat = "Combinations of {{A B C D}} choose 3: size = {0}"; Console.WriteLine(String.Format(cformat, combinations.Count)); foreach(IList<char> c in combinations) { Console.WriteLine(String.Format("{{{0} {1} {2}}}", c[0], c[1], c[2])); } Variations<char> variations= new Variations<char>(inputSet, 2); string vformat = "Variations of {{A B C D}} choose 2: size = {0}"; Console.WriteLine(String.Format(vformat, variations.Count)); foreach(IList<char> v in variations) { Console.WriteLine(String.Format("{{{0} {1}}}", v[0], v[1])); } Combinations of {A B C D} choose 3: size = 4 {A B C} {A B D} {A C D} {B C D} Variations of {A B C D} choose 2: size = 12 {A B} {A C} {A D} {B A} {C A} {D A} {B C} {B D} {C B} {D B} {C D} {D C} By default, Permutations, Combinations, and Variations will generate the standard or no-repetition sets. Each class has an overloaded constructor that takes a GenerateOption, which can either be GenerateOption.WithoutRepetition (the default) or GenerateOption.WithRepetition. For example, to generate a permutation set with and without repetition: GenerateOption GenerateOption.WithoutRepetition GenerateOption.WithRepetition char[] inputSet = { 'A', 'A', 'C' }; Permutations<char> P1 = new Permutations<char>(inputSet, GenerateOption.WithoutRepetition); string format1 = "Permutations of {{A A C}} without repetition; size = {0}"; Console.WriteLine(String.Format(format1, P1.Count)); foreach(IList<char> p in P1) { Console.WriteLine(String.Format("{{{0} {1} {2}}}", p[0], p[1], p[2])); } Permutations<char> P2 = new Permutations<char>(inputSet, GenerateOption.WithRepetition); string format2 = "Permutations of {{A A C}} with Repetition; size = {0}"; Console.WriteLine(String.Format(format2, P2.Count)); foreach(IList<char> p in P2) { Console.WriteLine(String.Format("{{{0} {1} {2}}}", p[0], p[1], p[2])); } Permutations of {A A C} without Repetition; size = 3 {A A C} {A C A} {C A A} Permutations of {A A C} with Repetition; size = 6 {A A C} {A C A} {A A C} {A C A} {C A A} {C A A} Note that the input set for Permutations must have repetition in it in order to see a difference in the output. Combinations and Variations will generate additional sets regardless of the similarity of incoming values. While the intent of these classes is not to calculate Binomial Coefficients, each class does have a Count property. This property will calculate the actual count of collections returned, without iterating through them. This is done by applying the formulas in the general discussion above and returning the value as a long. Finally, the counting is done without internal overflow, which is important since 21! will overflow a long. Count long The constructor parameters are also available; the upper index and lower index are available using UpperIndex and LowerIndex, respectively. The generator option is available through the Type property. For example: UpperIndex LowerIndex Type char[] alphanumeric = "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789".ToCharArray(); Combinations<char> C = new Combinations<char>(alphanumeric, 10); Console.WriteLine(String.Format("{0} choose {1} = {2}", C.UpperIndex, C.LowerIndex, C.Count)); 36 choose 10 = 254186856 Finally, these common features are formalized through the IMetaCollection interface that each of these classes implement. IMetaCollection At this point, we've covered everything that is required to understand and use these classes. The remainder of this discussion describes a bit of the background and decision making processes used in its implementation. The numbers of permutations, combinations, and variations all grow exponentially. As such, a meta-collection enumerator on any but a trivial set of items will quickly exceed any available computation time. For example, a system that can enumerate a permutation of 10 items in 1 second will take over 1000 years to enumerate a permutation of 20 items. Since the performance for even the best algorithm will degrade, pretty much any algorithm will do. However, no developer could hold their head high without evaluating options and choosing the best algorithm for the job, even if it only knocks 50 years off a 1000 year run. The ability to calculate a permutation is core to all of the combinatorial classes. Several algorithms have been developed for calculating permutations, three of which were evaluated for this implementation, namely Recursive, Lexicographic, and Heap's algorithms [1]. The Lexicographic algorithm [3] is perfectly suited for the IEnumerable interface, since it uses the same GetNext() style and requires very little adaptation. Both Recursive and Heap's algorithms are more efficient at generating permutations of integers, but need to be adapted for an IEnumerable interface by either converting to an iterative algorithm or using C# continuation. IEnumerable GetNext() The first attempt was to take the more efficient heap based algorithms and un-roll them into iterative algorithms, and then to make them conform to the one result per call behavior of the IEnumerable interface. Both heap algorithms stored quite a bit of state information and did not un-roll easily. Once done, they each went from about twice as fast as lexicographic to about twice as slow. The second attempt was to use the continuation feature added to C# in .NET 2.0. This feature provides the yield return syntax for quickly creating enumerators. It is also capable, with a bit of extra work, to handle recursive enumerators. The good news is that this mechanism works, but the performance was even worse than un-rolling the recursive algorithm by a factor of 4. yield return The lexicographic algorithm was therefore chosen as the best algorithm for this implementation. The next issue revolved around performance of comparisons. All algorithms tested had to be changed to accommodate non-integer data. The lexicographic algorithm needs to compare objects to determine their sort order to be able to create a unique lexicographic order. The standard way of resolving this involves the IComparable or IComparer interfaces to determine the order. As this is a generic collection, it is relatively straightforward to adapt the integer comparison to an IComparer provided comparison. Unfortunately, this comparison is called a lot and any inefficiency is magnified. Since this is a generic type comparison, the CLR does not optimize this nearly as efficiently as a value type. IComparable IComparer The other problem with the use of a direct comparison on objects to permute is that the permutations will always be Repetitive instead of Non-Repetitive. That is, you cannot create a Non-Repetitive permutation of {A A B}. Together with the above performance issues, another solution was required. {A A B} The final solution was to have a parallel array of integers on which the comparisons are performed. Both arrays will have items swapped in parallel, creating the correct output. The performance improves to within a few percent of the integer only solution that the algorithm started with. And the repetitive and non-repetitive set solutions are done by having different integer assignments in the parallel array. For repetitive, the assignment for {A A B} is {1 2 3}, and for non-repetitive, the assignment is {1 1 2}. The IComparer is used to sort the list, and then once more to check for neighboring duplicates if and only if the repetitive mode is chosen, but is not used during the GetNext() call. {1 2 3} {1 1 2} Most of the research around algorithms appear to have been focused on permutations, with less work being done on combinations and variations. As such, the combination and variation implementations use internal permutations to calculate their sets. For example, the Combinations class uses a Permutations<bool> class to indicate the positions to be included. The permutations of this underlying class indicate the subset to be selected by the combinations. Variations also use a similar mechanism, except for variations with repetition. Permutations<bool> Therefore, the work on making permutations work efficiently is inherited by combinations and variations. Additional performance improvements in the combinations and variations implementations were not sought. First, these classes are not designed to efficiently calculate tables of Binomial Coefficients. However, they do need to provide the Count of the collection that they are currently enumerating. As discussed above, these values can get really big, and can easily overflow 64 bit integers. But, the divisors in the count formulas bring the values back down to a manageable level. So, to properly count these collections, some type of large-integer calculations need to be performed. The outputs of these counts are obviously always integers, so the prime factors of the denominators of the formulas will always be found in the prime factors of the numerators. Rather than performing large integer computations, a list of numerator prime factors and a list of denominator prime factors are calculated. Then, the list of denominators is removed from the list of numerators, and the product of the remaining numerators is returned. No attempt was made to compare the efficiency of this process, see paragraph above. If you troll the code, you will find a SmallPrimeUtility class that is used for the above algorithm. I make no warranty of the suitability of this class for anything else, it is a down and dirty class with no elegance to it at all. It is the Hyundai Excel of classes, it gets you where you're going, but not much else. SmallPrimeUtility The classes created here are designed to exhaustively enumerate massive numbers of collections. For all but the most trivial sets, the amount of time required to enumerate exceeds all available computing capacity. So, there are times when it is necessary to brute force your way through a problem, but the fact that you can't for large N is what makes Computer Science fun. For small problems, this set of classes is suitable; for larger problems, this set can assist in quickly validating other more interesting algorithms. For large problems, you need a keen mind and not a dumb computer. For more complex problems related to permutations or combinations, the total solution space, S, grows far too quickly for it to be feasible to evaluate every option. These classes are designed to enumerate every permutation in S; however, many problems present a smaller feasible search space, F. [5]. For example, in the following numeric substitution problem: S F F O U R + F I V E --------- N I N E There are 8 variables {F O U R I V E N} to be chosen from 10 digits {0 1 2 3 4 5 6 7 8 9}. This implies that there are variations of 10 choose 8 possibilities in the solution space S, which is 10! / 2! = 1,814,800 variations for the solution space S. However, several observations can be made; for example, the fourth column provides us with R + E = E, which implies R = 0. This has simplified the problem to 7 unknown variables to be chosen from 9 digits, or 9! / 2! = 181,480 variations. Further, the first column implies that F <= 4, as we can't allow an overflow on this column. This removes 5/9th of the remaining variations that need to be tested, leaving 80,640 variations to test. Additional simplifications exist, and will reduce the space even further. {F O U R I V E N} {0 1 2 3 4 5 6 7 8 9} The point being that the more we know about the specifics of a problem, the more we are likely to be able to reduce the size of the feasible search space, the more likely we can solve the problem in a reasonable amount of time. The FOUR + FIVE problem above is just one of a nearly inexhaustible supply of numeric substitution letter problems. This can be used to show that THREE + SEVEN = EIGHT, and even that WRONG + WRONG = RIGHT. Using the attached sample, you can enter the operands and the sum, and have the program search for any and all solutions. The program uses the Variations class to exhaustively search every possible variation, and checks if the variation satisfies the equation. To compute a problem, simply enter the operands, e.g., "FOUR" and "FIVE", into the operands fields, and the sum, e.g. "NINE", into the sum field. The Problem frame will automatically update to present the summation problem. Please ensure that there are no more than 10 distinct characters entered, as no validation is done on this input. Entering more than 10 will ensure that no solution will be found. There are many more ways to ensure that no solution exists such as A + BC = DEFGH. The interface will happily accept these inputs and then return that no solutions were found. Once the problem is entered, clicking Solve will enumerate all possible variations of the digits 1 through 10 for each character in the input. The status bar will display the total number of Variations that will be checked; this will vary based on the number of unique characters in the inputs. This will vary from a low of 90 variations to solve A + A = B, to a high of 3,628,800 when 10 characters are used such as in ABCDE + ABCDE = FGHIJ. The progress bar will indicate the overall progress, which should only take a few seconds for any of these problems. The Solution(s) frame will show the total number found and the most recently found solution, as the program progresses. After all solutions have been found, select the drop down list in the Solution(s) frame to move through all of the solutions that were found. The Facet.Combinatorics namespace is contained in the sample's Combinatorics sub-directory. It contains all of the code required to support Permutations, Combinations, and Variations. It has no dependencies aside from the standard .NET 2.0 System references. This directory can be lifted into other solutions to utilize these classes, no separate library is created. Facet.Combinatorics System The form code is plain vanilla, not production ready UI stuff which can be safely ignored. It has all of the UI elements, and defers the problem solving to the TextSumProblem class which encapsulates this particular problem. TextSumProblem The meat of the sample is in the TextSumProblem which creates every possible variation for the problem. The core of the solution is to generate all variations of integers that are possible candidates for the letters of the problem. In the sample, the Solve() method has logic similar to: Solve() int[] ints = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}; Variations<int> variations = new Variations<int>(ints, 8); foreach(IList<int> variation in variations) { if(Satisfies(variation) == true) { // Huzzah, found a solution... } } Finally, the TextSumProblem has a pair of events that signal when a candidate solution has been tried and also when a solution has been found. The form subscribes to these events to provide real time feedback of the problem's progress. Any number of funny examples can be created to "prove" all sorts of contradictions. Most of them will have multiple solutions (ID + EGO = SELF has 1200 solutions, although it's a bit of a cheat), drop me a note if you find a good one that has a single unique solution, I've never been able to find.
http://www.codeproject.com/Articles/26050/Permutations-Combinations-and-Variations-using-C-G?fid=1313343&df=90&mpp=10&sort=Position&spc=None&select=4049142&tid=3994991
CC-MAIN-2016-18
en
refinedweb
iHazeHullBox Struct ReferenceA predefined hull. More... [Mesh plugins] #include <imesh/haze.h> Inheritance diagram for iHazeHullBox: Detailed DescriptionA predefined hull. Definition at line 83 of file haze.h. Member Function Documentation get box settings, min and max The documentation for this struct was generated from the following file: Generated for Crystal Space 1.2.1 by doxygen 1.5.3
http://www.crystalspace3d.org/docs/online/api-1.2/structiHazeHullBox.html
CC-MAIN-2016-18
en
refinedweb
Skilled people who love games wanted! Let's start from the beginning. You have a skill, be it programming, drawing, coding, or making noise. You also have a passion for video games, in both the creating & playing sense, &... You want to combine these two factors of your life into one, yet you lack the versatility of knowing all the disciplines needed to create a video game- Hell, you may even know all the disciplines, yet you lack the creativity needed to dream-up & fashion into working-order something that people would be willing to pay for... let-alone like! That's what I'm here for- To form a team of people who have the combined skills to create a video game & fine-tune them to corrolate & create said video game through the use of each-others abilities. The available main positions are as follows: Programmers (1-4ppl): Good, clean code is the hallmark of any game that is true to the concept. Without working code, there would be no game. As a programmer you will be coding the game & the accompanying website + anything else that has code (simple as that). Required: Can demonstrate their abilities. Can write in at least one language- C++ & XNA is preferred however if you can prove that you can do your job just as good if not better in another language- present your case! Can write at least basic A.I. scripts, collision detection, player input, website coding, multiplayer functionality (lan, direct ip, dedicated server)... & anything else code-based that makes a game tick. Can do all this without infringing upon Copyrights. -Unique & original work only. (Though with open-source code the line is a bit blurry- to be discussed if it comes to that). Bonus Points: Can do other jobs from this skills list or misc. ones not mentioned.(Be specific) Can code for consoles (XNA comes in handy here). Artists / Animators (1-4): Artwork is what catches the eye of the player- be it an eye-sore or eye-candy. As such, just like the coding; it too can either make or break a game. As an artist / animator, you will be drawing all pictures & artwork for the game & accompanying website. This job includes sprite & map design. Required: Can demonstrate their abilities. Can understand & draw what other people describe to them & ad-lib where needed. Can draw & inject own style & creativity into artwork without persuasion yet at the same time can stick to given guidelines. Can draw on the computer- graphic tablets are recommended for fast development but not needed if you can prove otherwise. Can draw sprites, characters, effects, weapons, maps, & also knows what needs doing to get all said features at a state where all that is left is for the programmer to code user input etc. Can work fast & effectively whilst still producing high-quality work. Programmers cannot code characters that are non-existant or missing their walking sprites! Can do all this without infringing upon Copyrights. -Unique & original work only. Bonus Points: Can do other jobs from this skills list or misc. ones not mentioned. (Be specific) Can model & create objects, characters etc. in 3D. SFX Engineer (1): I believe that games, no matter how small, require a distinct sound-track & some decent sound effects. Although music for small-scale games isn't expected to be fantastic- the better it is, the better for the overall success of a game. As the SFX Engineer, you will be responsible for all game music, including weapon sounds, voices, etc. You will also be responsible for creating sounds for the game website or anything else that makes even just 1dB of sound. Required: Can demonstrate their abilities. Can create & edit sounds through the use of various programs. Can find & locate sounds / voice talent etc. as needed. Can create sounds that fit the style of the game, which includes weapon & special move sounds, button SFX, background music, opening & closing credit music. Can write voice scripts for characters- you gotta be funny! Can do all this without infringing upon Copyrights. -Unique & original work only. Bonus Points: Can do other jobs from this skills list or misc. ones not mentioned. (Be specific) Can use Sibelius or other similar scoring programs. Can play 1 or more instruments to a high degree that would allow for some unique, recorded music. Required of Everyone: The ability to read & write in English to a high-standard. (I don't want our work being slowed down due to one programmer not understanding what we want done). MSN Messenger or the willingness to get MSN Messenger, for the purposes of one-on-one discussions, team meetings etc. Secrecy- as I'm sure you'd all understand, leaking information, alpha's, beta's etc. to anyone, even to just a close gaming friend, can destroy what the game had going before it... gets going... let-alone any possible hype. Dedication- Although we will be approaching this as a "hobby group" (for lack of a better description), we will still be making the game to a general deadline. From experience, it pays to be act more organised & professional regardless of the situation you are actually in... - Remember; the better the content & the faster it comes out, the faster we can get our return (in whatever form that may be). - So if you're not up to dedicating about 2-3 nights to the game-making process per week + 1 formal group meeting every 2 weeks (which is insanely laid-back & casual as it is)... then think twice before you apply. It's not as much work as it seems! -2-3 nights per week is just an approximation. If you're good at you're job, you'll be using less time than that... Rewards from joining the team: I'm sure you're all wondering the cliche "What's in it for me?". Nice. Here's what's in it for you: Although this team is essentially starting off as a group of people doing a hobby together, there is always the possibility of going "legit" & creating a studio. As such, although this team will, for at least the first game or two, follow under the "hobby" sign, I still fully expect to make money from the games we create. (Sweet, sweet moolah!) These details will discussed more when you join the team. Also, we cannot forget that from joining the team you will get your name in the credits! In a nutshell, you get your name in the credits, a bit of pocket-money (according to success of the game), an awesome game to put on your resume & an overall feeling of accomplishment... Once again, you do want to make a game... don't you? I would also like to point out that although I am forming this team around hobbyists, producing a game of a decent (if not high) calibre is hard & does require a certain level of talent & developed skills. That is the reason behind me choosing who joins the team, as opposed to accepting anyone. For those who don't fit the bill for the above positions or are unsuccessful at applying, not to worry! I will post QA Testing positions for Beta etc. To apply, send an email to [email protected], with an accompanying cover letter & a demonstration of your abilities in an attachment- ensure it can be viewed by anyone (especially if you're using a special not run-of-the-mill program), as a demonstration of your talent that cannot be viewed leaves alot to be desired (baad!) My name is James, & I hope to be hearing from some of you soon! Note: I will not be replying in the threads where I post this- so if you're after a position do not post a reply / comment etc. Email it through as mentioned above. Peace... Skilled people who love games wanted! Page 1 of 1 Want to code for a game(s)? 3 Replies - 13395 Views - Last Post: 15 June 2011 - 03:56 PM #1 Skilled people who love games wanted! Posted 05 May 2009 - 11:46 PM Replies To: Skilled people who love games wanted! #2 Re: Skilled people who love games wanted! Posted 06 May 2009 - 12:32 PM Unless I'm mistaken, this seems more like an advertisement for a job than a Corner Cubicle topic... #3 Re: Skilled people who love games wanted! Posted 07 May 2009 - 03:40 PM #4 Re: Skilled people who love games wanted! Posted 15 June 2011 - 03:56 PM Email mei cant program anything but i do have an idia of how you program i know a little bit of how to program i am a computer technician i can fix servers networks and a computer if maby i can be of another use to you i also like games i know most of the computer bianary code and how it works get back to me on this one plz thanx This post has been edited by shanesutton17: 15 June 2011 - 03:57 PM Page 1 of 1
http://www.dreamincode.net/forums/topic/103605-skilled-people-who-love-games-wanted/page__pid__631743__st__0
CC-MAIN-2016-18
en
refinedweb
- I need help for this copy byte code ... - Trying to get Java to ask the user for a number - I need some help... Please ! - [SOLVED] I'm not sure what I need to do next. - flight reservation code problem... - Need help with user imput - Switch problem. How do I return to my menu? - Why wont my program calculate sales tax? - Need Help on Looping Program!! Beginner!! - Array - Problem with the loop ..... ;( - Help!! Need to add a few things and I be lost!! - My program wont print the sum of the rows in a two-dimensional array - Need help with my hangman program! - Random suit and card with arrays - A deck of cards - Syntax error on token ";", { expected after this token please HELP - [SOLVED] Help reading a file to decode or encode - Unisex batroom - [SOLVED] Beginner, stuck on implementing while loop, compiles fine but still won't run - Putting codes in order...... - simple translator GUI ?? - Wheel of Fortune - Simple Game - non-static method getDimensions() cannot be referenced ... - I'm confused - HELP WITH MY BEGINER JAVA CODE.. - StdDraw Cannot be Resolved?? - [SOLVED] Printing an ArrayList of user-defined Objects - About loop - Need help with making a menu that sits on left side of JFrame - Need help - JOptionPane display on seperate lines - Help with conversion - Temperature Converting Application - Memory Card Game JButtons Array - Screen is flippering / flickering. - code not returning sum - Java Maze Problem (Please Help Guys) - Help on developing code for autoform fill in website - [SOLVED] homework troubles, any help would be appreciated - Trouble with writing/reading array to/from file - Unwanted appendices - A GUI issue. - Complete Noob -- A little help would be appreciated... - Password writing file - Java Homework Help - Check OS - Begginer Java - Bug in my code? - HTML Parsing - Help using String and character analyzers such as isDigit() and isLetter() - Run Time Error - NEED SOME HELP - I am having major issues with layouts - [SOLVED] where I'm wrong? - Java GUI not working - Initializing a downloaded class in another class - Java Multithreading Example - Issues - Midi message from an input device. Help needed with the messages I'm receiving. - extract selected text out of a pdf file using java - Exception in thread "main" java.lang.NullPointerException - Need help with what to do next - Help writing the IF statement for Y/N - [SOLVED] Linked List (adding alphabetically) - [SOLVED] how to format double value? - Creating a javadoc help please. - LinkedList : Adding elements in constructor - Couple compile errors for dice game - [SOLVED] java calendar - Java If Statement - Code help - Need help on access encrypted folder - substring StringIndexOutOfBoundsException - Code help with reset button error - i need help - [SOLVED] Not finding main class - Trying to make 3 images become random in an Applet. Need help - Eclipse not recognizing my method for class. - What am I missing? - Elements in double array (java) - [SOLVED] Printing A Histogram... - Initializing and Receiving Variables, Creating Methods - Help with inserting into a b-tree - I'm stuck on my Camelot game - [SOLVED] Please help with my while loop that turned into infinite loop! - Check if 'int' is empty - Scheduling algorithm - Creating object everytime object is called - BlueJ trouble or program trouble (Combining Arraylists) - [SOLVED] Traversing two different linked list - Reading ints into a multidimensional array - Auto Update JLabel on JButton Press? - Please Help!!! - Strange boxes appearing - Arraylist input help, quick question. - Can't Figure out Why I am Getting Null Pointer Exception - Please help me to do the search edit and delete. PLEASE :( - Literation Code - Help With errors. Please. - String Arrays [] - Need help with a Java Assignment regarding enums and arrays - Help! | Slecht/niet runnen van .java - Help with assignment about a board game (enum and array question) - Popout Jframe, Everything is STACKING!!! Help! - help creating a virtual Object - help with circular linked list - Need help with Huffman coding, please? - Seperating String with comma - Why isn't this code working? - File Input Trouble - How to get rid of this? - GUI Help - Threads in BankAccount program - More GUI help - Google Chart from database Servlet not working - applet is not initialized. Help will be appreciated - repaint in a JButton while loop. - Network connectivity issues with my project - Need direction on how to get the output desired from Parking Ticket Simulator - Seeking help with a program. - Total Sales Error - [SOLVED] URLConnection inconsistency + SSCCE - How do I add Junit package - Happy Pi Day ; a Pi Game! - fcntl.ioctl equalient code in java ? - Exception at main... - Matrix multiplying using separate input classes and a GUI - [SOLVED] Array Population via .txt File - [SOLVED] Using variable across methods - [Newbie] Method will not execute. - My code return null, Why? - Super-beginner needs help! URGENT - n00b asking jspinner question - Overloading Method - Type mismatch: cannot convert from void to ClassX.MethodX - JAVA LOOP HELP!!! - replaceAll() help - Need Help! Out of bounds error on ARRAYS!! - [SOLVED] Help with my Java program: Making change from an entered double. - Not Reading From Text File - Help needed to figure out why I keep getting error messages in Eclipse - [SOLVED] Simple Library Program / Need feedback. - How to use GNU Crypto LIbrary in Matlab via Java Interface - Switch Case Problem help - Java Programming - Dice Game Help! - Decimal Bianry Converter - Certain Expectation for assignment - Why isn't this code working? - [SOLVED] Can't reassign boolean value! - Begginer : help would be much appreciated for this exercice! - need help on my project.. - Error In WavFile..... - [SOLVED] Array Index out of bounds Exception - How can you get the compiler to run more than one public class? - JSF - input field is not updated - Need helpt with sorting arrays? - Trouble Creating a DFS Maze - My polymorphism code won't work - problem with enum (int to months) - What did I do wrong here? - [SOLVED] Help with my Java program: Applet that draws 5 random circles. - Leap Year/Calendar Program. toString() method error. - [SOLVED] Error in calling JFrame's super. - Count number of 'coin'p coins in a "machine" - Inheritance not working! - Sorting a Linked List - Search Term Program Problem - Trying to print a method for a table - Dijkstra's code help - Help with understanding simple GUI Code - [SOLVED] Need help with Pick A Card Program Please - Passing variables with void methods - [SOLVED] System exit command is not working properly - need help with phone number program for homework - "Variable not initialized" - Can my board and rent fees be set up like this for a monopoly(Multi array for rents?) - i am having a problem with the distance formula,help needed! - Need help with calculating stock code. - Accessing Private Int in a Subclass - Need help with calculating stock with joptionpane - null pointer exception on extending a server class - Jquery issue? dont know if I can ask on thsi forum. - [SOLVED] Try Catch Problems - hi its mery , and iam new i have this code - problem with Random() - urgent help! - problem with printing to text area - Warning when I do a build - Problem with my poker program - Java Code Help on Draw method - Modulus with BigIntegers - Problem: Grabbing Ints from JTextField - [SOLVED] Combing println with for loop - How to use Get and Set methods - char problems - string identifier symbol not found - cannot find symbol error with defining a string - Am I doing this right? (demonstrate TWO sorting algorithms) - creating a simple console menu - [SOLVED] Exception in thread "main" java.lang.NoClassDefFoundError - Exception in thread "main" java.lang.NullPointerException - Error with taking float numbers from an input dialog box. - Starting a Path - Adding add/remove button to add/remove tab - how to add a JFrame listener for a JPanel - Formatting white space to 3 characters in parentheses - Help with using mergesort to sort a list of names alphabetically? - Problem of Making Jar - NullPointerException trouble - new to java, and need help with assigment - [SOLVED] Translate sentence to morse cod? - Regular expression handling - System.out.println("Message"); - this is easy for those who know java!! - Hello Hello - Game for Small Children Program - Image Doesn't Display with Qualified Image Path???? - No suitable driver found for jdbc:mysql://localhost/books - won't load png - png problem continued - how to plot the line graph using jfreechart reading from text file - java.lang.String cannot be cast to java.util.Hashtable - Help with Group Layout - Matrix Generator - Java Script or HTML - Read input, read file, find match, and output... URGENT HELP! - What is worng in my java code - Please help me solve this problem, this is really urgent! - HTML Unit Help - how to plot a line graph by reading the values from database? - [SOLVED] Char Array Increment + 1 - JOptionPane menu and multiple methods - [SOLVED] A little help please~~How should i fix it?? - I need a little help - Help with Bounded Buffer problem - [SOLVED] (LWJGL)openGL arraylist rendering problem - NEEDING HELP UNDERSTANDING A CODE FROM THREE CLASSES!!! - problem in plotting graph for same x-axis values - JDialog from JFrame button (GUI) - Switch position of nodes on list - AS Assignment project - Creating a game in Greenfoot - Needing some help with array getter and setter methods!! - Need help finishing my program - Problem w/ Hashmap. Null Pointer? - Chaos Problems. - Inheritance; Problem with Test class - Using a HTML5, CSS3 and jQuery website template, can't get the Java pop-ups to work.. - Problem with sorting by alphabetical order - [SOLVED] Write a program that merges two files containing alphabetized lists of student record - [SOLVED] Keyboard Listeners and ImagePanel - TFTP Server not allowing data transfer - How to load specific .midi (music) files from the cache?
http://www.javaprogrammingforums.com/sitemap/f-62-p-16.html?s=9f35327b73a0658ade09f5044aab0251
CC-MAIN-2016-18
en
refinedweb
Ticket #2562 (closed Bugs: fixed) warning: type qualifiers ignored on function return type Description Sample source: #include <boost/program_options.hpp> namespace po = boost::program_options; int main() { int x; po::options_description desc(""); desc.add_options()("x,x", po::value<int>(&x)->default_value(2), "x"); return 0; } Building with -Wignored-qualifiers on g++ 4.3.2 on Ubuntu 8.10 x86_64. /home/yang/work/boost/boost/any.hpp: In member function âvoid boost::program_options::typed_value<T, charT>::notify(const boost::any&) const [with T = int, charT = char]â: boost_program_options_warning.cc:8: instantiated from here /home/yang/work/boost/boost/any.hpp:200: warning: type qualifiers ignored on function return type Attachments Change History comment:2 Changed 7 years ago by vladimir_prus - Owner changed from vladimir_prus to nasonov - Component changed from program_options to any This is problem with boost::any, not program_options, so I'm changing component. Note that IIUC, boost::any is not actively maintained, so I'm not sure if this will be fixed soon. Note also that I don't have any opinion whether this warning is a actual problem with boost::any, or a bogus warning. comment:4 Changed 6 years ago by Sascha Ochsenknecht <s.ochsenknecht@…> - Cc s.ochsenknecht@… added - Owner changed from no-maintainer to vladimir_prus - Component changed from any to program_options Note: See TracTickets for help on using tickets. Whoops, I meant to mark this as a bug in svn trunk.
https://svn.boost.org/trac/boost/ticket/2562
CC-MAIN-2016-18
en
refinedweb
NAME sockatmark - determine whether socket is at out-of-band mark SYNOPSIS #include <sys/socket.h> int sockatmark(int sockfd); Feature Test Macro Requirements for glibc (see feature_test_macros(7)): sockatmark(): _POSIX_C_SOURCE >= 200112L || _XOPEN_SOURCE >= 600 DESCRIPTION sockatmark() returns a value indicating whether or not the socket referred to by the file descriptor sockfd is at the out-of-band mark. If the socket is at the mark, then 1 is returned; if the socket is not at the mark, 0 is returned. This function does not remove the out-of- band mark. RETURN VALUE A successful call to sockatmark() returns 1 if the socket is at the out-of-band mark, or 0 if it is not. On error, -1 is returned and errno is set to indicate the error. ERRORS EBADF sockfd is not a valid file descriptor. EINVAL sockfd is not a file descriptor to which sockatmark() can be applied. VERSIONS sockatmark() was added to glibc in version 2.2.4. CONFORMING TO POSIX.1-2001. NOTES If sockatmark() returns 1, then the out-of-band data can be read using the MSG_OOB flag of recv(2). Out-of-band data is only supported on some stream socket protocols. sockatmark() can safely be called from a handler for the SIGURG signal. sockatmark() is implemented using the SIOCATMARK ioctl(2) operation. BUGS Prior to glibc 2.4, sockatmark() did not work. EXAMPLE The following code can be used after receipt of a SIGURG signal to read (and discard) all data up to the mark, and then read the byte of data at the mark: char buf[BUF_LEN]; char oobdata; int atmark, s; for (;;) { atmark = sockatmark(sockfd); if (atmark == -1) { perror("sockatmark"); break; } if (atmark) break; s = read(sockfd, buf, BUF_LEN) <= 0); if (s == -1) perror("read"); if (s <= 0) break; } if (atmark == 1) { if (recv(sockfd, &oobdata, 1, MSG_OOB) == -1) { perror("recv"); ... } } SEE ALSO fcntl(2), recv(2), send(2), tcp(7) COLOPHON This page is part of release 3.35 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at.
http://manpages.ubuntu.com/manpages/precise/man3/sockatmark.3.html
CC-MAIN-2016-18
en
refinedweb
Selection sort in Java is used to sort the unsorted values in an array. In selection sorting algorithm, the minimum value in an array is swapped to the very first position in that array. In the next step the first value of array is left and minimum element from the rest of the array is swapped to second position. This procedure is repeated till the array is sorted completely. Selection sort is probably the most spontaneous sorting algorithm. Selection sort is simple and more efficient when it comes to more complex arrays. The complexity of selection sort in worst-case is Θ(n2), in average-case is Θ(n2), and in best-case is Θ(n2). Following example shows the selection sort in Java. In selection sort algorithm, first assign minimum index in key as index_of_min=a. Then the minimum value is searched. Index of minimum value in key is assigned as index_of_min=b. Minimum value and the value of minimum index are then swapped. In the next step the first element (value) is left, and remaining values are sorted by following same steps. This process is repeated till the whole list is sorted. Example of Selection Sort in Java: public class selectionSort{ public static void main(String a[]){ int i; int array[] = {15, 67, 40, 92, 23, 7, 77, 12}; System.out.println("\n\n RoseIndia\n\n"); System.out.println(" Selection Sort\n\n"); System.out.println("Values Before the sort:\n"); for(i = 0; i < array.length; i++) System.out.print( array[i]+" "); System.out.println(); selection_srt(array, array.length); System.out.print("Values after the sort:\n"); for(i = 0; i <array.length; i++) System.out.print(array[i]+" "); System.out.println(); System.out.println("PAUSE"); } public static void selection_srt(int array[], int n){ for(int x=0; x<n; x++){ int index_of_min = x; for(int y=x; y<n; y++){ if(array[index_of_min]<array[y]){ index_of_min = y; } } int temp = array[x]; array[x] = array[index_of_min]; array[index_of_min] = temp; } } } Output: C:\array\sorting>javac selectionSort.java C:\array\sorting>java selectionSort RoseIndia Selection Sort Values Before the sort: 15 67 40 92 23 7 77 12 Values after the sort: 7 12 15 23 40 67 77 92: Selection Sort in Java Post your Comment
http://roseindia.net/java/beginners/arrayexamples/selection-sort-in-java.shtml
CC-MAIN-2016-18
en
refinedweb
User Tag List Results 1 to 1 of 1 Thread: HABTM Question w/ checkboxes - Join Date - Jul 2004 - Location - NC - 194 - Mentioned - 0 Post(s) - Tagged - 0 Thread(s) HABTM Question w/ checkboxes HABTM Question w/ checkboxes Hello, I have an address table that has a HABTM relationship w/ a preferences table. I want to set the preferences table to handle a variety of preferences for each address. For instance, for each address the preferences table could carry values for make_private = t/f, report_updates=t/f ect. In my view where I set each value I have this code: Code: <% @addresses.each_with_index do |address, index| %> <tr class="<%=cycle('odd', 'even') %>"> <td width="10"> <%=check_box("address", address.id, address.has_update_preference?(address.id)=='1' ? {:checked=>'checked'} : {:checked => ''}) %> </tr> <% end if @addresses -%> Code: def save_preferences unless params[:address].nil? params[:address].each do |key,val| pref = Preferences.find(:first,:conditions=>["address_id=? && preference_name='notify_update'",key]) if pref.nil? pref = Preferences.new(:address_id =>key, :preference_name=>'notify_update', :preference_value => val) pref.save else pref.update_attributes(:address_id =>key, :preference_name=>'notify_update', :preference_value => val) end end end end Code: def has_update_preference?(id) update_preference = Preferences.find(:first, :select=>["preference_value"],:conditions=>["preference_name='notify_update' && address_id=?", id]) unless update_preference.nil? return update_preference[:preference_value] else return 0; end end That's a lot of database calls for each checkbox, it seems rails would have a more glorious way of doing this. I would also like to extend this address preferences concept to handle preferences on individual address fields IE address.email = preferences.email_public_private t/f Any suggestions? Thanks, Eric Bookmarks
http://www.sitepoint.com/forums/showthread.php?593566-HABTM-Question-w-checkboxes&p=4108027
CC-MAIN-2016-18
en
refinedweb
Logon, Browsing, and Resource Sharing: The Basics This chapter describes how to configure and use the Windows 95 logon process, network browsing, and peer resource sharing capabilities. This section summarizes key Windows 95 features that you can use to make network logon, resource browsing, and peer resource sharing easier and more secure for computers running Windows 95 on your network. On This Page Unified System Logon Basics Network Browsing Basics Peer Resource Sharing Basics Logon, Browsing, and Resource Sharing: The Issues Overview of Logging on to Windows 95 Configuring Network Logon Using Login Scripts Technical Notes for the Logon Process Browsing Overview Browsing on Microsoft Networks Browsing on NetWare Networks Overview of Peer Resource Sharing Using File and Printer Sharing for Microsoft Networks Using File and Printer Sharing for NetWare Networks Troubleshooting for Logon, Browsing, and Peer Resource Sharing Unified System Logon Basics Windows 95 offers a consistent user interface for logging on to and validating access to network resources. The first time the user logs on to Windows 95, logon dialog boxes appear for each network client on that computer and for Windows 95. If the user's password for Windows 95 or for another network is made the same as the password for the primary logon client, Windows 95 automatically logs the user on to Windows 95 and all networks using that password every time the user logs on. This means that, for users, network logon is simplified in that a single logon dialog box is presented each time the operating system starts. For network administrators, it means they can use existing user accounts to validate access to the network for users running Windows 95. Note: The Passwords option in Control Panel provides a way to synchronize logon passwords for different networks so they can be made the same if one is changed. For more information, see Chapter 14, "Security." When a user logs on to other networks with different passwords and chooses to save them, the passwords are stored in a password cache. The Windows 95 password unlocks this password cache. Thereafter, Windows 95 uses the passwords stored in the password cache to log a user on to other networks so no additional passwords need to be typed. For NetWare networks, Windows 95 provides graphical logon to Novell NetWare versions 3.x, or 4.x if the network is configured for bindery emulation, plus a NetWare-compatible login script processor. This means that if you are using Microsoft Client for NetWare Networks, Windows 95 can process NetWare login scripts. If drive mappings and search drives are specified in a login script, then under Windows 95 the same user configuration is used for network connections as was specified under the previous operating system, with no administrative changes necessary. For Microsoft networks, Windows 95 supports network logon using domain user accounts and login script processing (as supported by LAN Manager version 2.x and Windows NT). Network Browsing Basics Network Neighborhood is the central point for browsing in Windows 95. It offers the following benefits: Users can browse the network as easily as browsing the local hard disk. Users can create shortcuts to network resources on the desktop. Users can easily connect to network resources by clicking the Map Network Drive button that appears on most toolbars. Users can open files and complete other actions by using new common dialog boxes in applications. This new standard provides a consistent way to open or save files on both network and local drives. The network administrator can customize Network Neighborhood by using system policies, as described in Chapter 15, "User Profiles and System Policies." A custom Network Neighborhood can include shortcuts to commonly used resources, including Dial-Up Networking resources. In any situation in which you can type a path name. On NetWare networks, you can use the UNC name or standard NetWare syntax. For the previous example, you would type corp/docs:word\q1. (Notice that, in the NetWare environment, "/" and "\" are interchangeable.) However, Windows 95 does not support the NetWare 4.0 naming convention of \\\nwserver_sys\directory_path\filename.ext where \\\nwserver_sys is the name of the NetWare Directory Services (NDS) server volume object. Peer Resource Sharing Basics The two peer resource sharing services in Windows 95 — Microsoft File and Printer Sharing for NetWare Networks and File and Printer Sharing for Microsoft Networks — are 32-bit, protected-mode networking components that allow users to share directories, printers, and CD-ROM drives on computers running Windows 95. File and Printer Sharing services work with existing servers to add complementary peer resource sharing services. For example, a NetWare network and its users will realize the following benefits by using File and Printer Sharing for NetWare Networks: Users can share files, printers, and CD-ROM drives without running two network clients. This saves memory, improves performance, and reduces the number of protocols running on your network. (Under Windows for Workgroups, Novell users had to also run a Microsoft network client to take advantage of peer resource sharing.) Security is user-based, not share-based. You can administer user accounts, passwords, and group lists in one place (on the NetWare server) because File and Printer Sharing for NetWare Networks uses the NetWare server's authentication database. Users running VLM or NETX clients can access shared resources on computers running Windows 95. The computer running Windows 95 looks as if it is just another NetWare server if it uses SAP Advertising, new Windows 95 VFAT 32-bit file system, 32-bit NDIS drivers, 32-bit IPX/SPX-compatible protocol, and the burst-mode protocol. Similar benefits are available when you use File and Printer Sharing for Microsoft Networks. You can also use either share-level security or, on a Windows NT network, user-level security to protect access to peer resources. Logon, Browsing, and Resource Sharing: The Issues This section summarizes the issues you need to consider when planning to use logon, browsing, and resource sharing features in Windows 95. The network logon issues include the following: To use unified logon, a logon server (such as a Windows NT domain controller or a NetWare preferred server) must be available on the network and contain user account information for the user (unless, of course, the user is logging on as a guest). The Windows 95 logon processor can parse most statements in the NetWare login scripts. However, any statements loading TSRs must be removed from the scripts and loaded from AUTOEXEC.BAT. Because the Windows 95 logon processor operates in protected mode, it is not possible to load TSRs for global use from the login script. These TSRs should be loaded from AUTOEXEC.BAT before protected-mode operation begins, or using other methods described in "Using Login Scripts" later in this chapter. In some cases, login scripts load backup agents as TSRs. In such cases, protected-mode equivalents built into Windows 95 can be used, making it unnecessary to load these TSRs. The network browsing issues include the following: You can plan ahead to configure workgroups for effective browsing by using WRKGRP.INI to control the workgroups that people can choose. For information about configuring WRKGRP.INI, see Chapter 5, "Custom, Automated, and Push Installations." If your enterprise network based on Microsoft networking is connected by a slow-link WAN and includes satellite offices with only Windows 95, then workstations in the satellites cannot browse the central corporate network. Consequently, they can connect to computers outside of their workgroups only by typing the computer name in a Map Network Drive dialog box. To provide full browsing capabilities, the satellite office must have a Windows NT server. You can use system policies, such as Hide Drives In My Computer or Hide Network Neighborhood, to limit or prevent browsing by users. For information, see Chapter 15, "User Profiles and System Policies." The resource sharing issues include the following:, there must be a NetWare server, then a Windows NT server or domain must be available to validate user accounts. If you plan to use Net Watcher to remotely monitor connections on a computer running File and Printer Sharing services, that computer must have the Microsoft Remote Registry service installed. This is also true if you want to use Registry Editor or System Policy Editor to change settings on a remote computer. For information, see Chapter 16, "Remote Administration." If you are configuring a user's workstation to act as a peer server, you might also want to specify that this computer cannot run MS-DOS – based applications (which take exclusive control of the operating system, shutting down File and Printer Sharing services). To do this, you can set the system policy named Disable Single-Mode MS-DOS Applications. Overview of Logging on to Windows 95 There can be two levels of system logon on Microsoft or NetWare networks: Log on to Windows 95 by using a user name and a password that is cached locally Log on to a NetWare network or a Windows NT domain for validation, "Overview of Logging on to Windows 95" earlier in this chapter When other network vendors make 32-bit, protected-mode networking clients available, network logon will be automatically available for those networks because of the Windows 95 network provider interface, as described in Chapter 32, "Windows 95 Network Architecture." Windows 95 provides a single unified logon prompt. This prompt allows the user to log on to all networks and Windows 95 at the same time. The first time a user starts Windows 95, there are separate logon prompts for each network, plus one for Windows 95. If these passwords are made identical, the logon prompt for Windows 95 is not displayed again. Logging on to Windows 95 unlocks the password cache file (.PWL) that caches encrypted passwords. This is the only logon prompt that appears if no other network clients are configured on that computer. To log on to Windows 95 when no other network logon is configured When the Welcome to Windows dialog box appears after starting Windows 95 for the first time, specify the user name and password. Windows 95 uses this logon information to identify the user and to find any user profile information. User profiles define user preferences, such as the fonts and colors used on the desktop, and access information for the user. (For more information, see Chapter 15, "User Profiles and System Policies.") To log on to Windows 95 on a Microsoft network When the Enter Network Password dialog box appears after starting Windows 95 for the first time, specify the user name and password. For network logon on a Microsoft network, type the name of the Windows NT domain, LAN Manager domain, or Windows NT computer that contains the related user account. This dialog box appears for logging on to Windows NT networks After the user name and password pair are validated by the network server, the user is allowed to use resources on the network. If the user is not validated, the user cannot gain access to network resources. The first time Windows 95 starts, the Welcome to Windows dialog box appears, prompting you to type the user name and password defined for Windows 95. To log on to Windows 95 on a NetWare network To log on to a NetWare network, type the name of the NetWare server, which is the preferred server where the related user account is stored. This dialog box appears for logging on to NetWare networks After the user name and password pair are validated by the NetWare server, the user is allowed to use resources on the network. If the user is not validated, the user will be prompted to type a password when connecting to a NetWare server during this work session. The first time Windows 95 starts, the Welcome to Windows dialog box appears, prompting you to type the user name and password defined for Windows 95. Type this information and click OK. The next time this computer is started, Windows 95 displays the name of the last user who logged on and the name of the domain or preferred server used for validation. If the same user is logging on again, only the password for the network server or domain needs to be entered. If another user is logging on, that user's unique user name and password must be entered. If the passwords are the same for the network and Windows 95, the second dialog box for logging on to Windows 95 does not appear again. Configuring Network Logon If you install either Client for Microsoft Networks or Client for NetWare Networks, you can configure a computer running Windows 95 to participate on a Windows NT or NetWare network. Before you can use network logon on a computer running Windows 95, however, you must have a Windows NT domain controller or NetWare server on the network that contains user account information for the Windows 95 user. For more information about setting up permissions on a Windows NT or NetWare server, see the administrator's documentation for the server. For related information, see Chapter 8, "Windows 95 on Microsoft Networks" and Chapter 9, "Windows 95 on NetWare Networks." The validation of a user's network password at system startup might not be required for accessing network resources later during that work session. However, system startup is the only time the login script can run, and it is the only time at which user profiles and system policies can be downloaded on the local computer. Therefore, proper network logon can information about enforcing logon password requirements, see Chapter 14, "Security." Tip Logon validation will control only user access to network resources, not access to running Windows 95. To require validation by a network logon server before allowing access to Windows 95, you must use system policies. For information, see "Setting Network Logon Options with System Policies" later in this chapter. Notice, however, that Windows 95 security cannot prevent a user from starting the computer by using Safe Mode or a floppy disk. If you require complete user validation before starting the computer in any way, use Windows NT as the sole operating system. Configuring Logon for Client for Microsoft Networks When the computer is configured to use Client for Microsoft Networks as the Primary Network Logon client, you can specify Microsoft Windows NT logon options in the Network option in Control Panel. This section describes how to configure these options. Network logon automatically validates the user on the specified Windows NT domain during the process of logging on to Windows 95. If this option is not configured, the user cannot access most network resources. If this option is configured and the user does not provide a correct password, Windows 95 operation might seem normal, but the user will not have access to most network resources. When you configure network logon options, you can specify whether you want to automatically establish a connection for each persistent connection to a network resource or verify whether to reestablish connections at system startup. You can also specify basic network logon options in custom setup scripts used to install Windows 95. For complete procedures for configuring network logon and persistent connections for Client for Microsoft Networks, see Chapter 8, "Windows 95 on Microsoft Networks." For information about defining network logon options in custom setup scripts, see Chapter 5, "Custom, Automated, and Push Installations." For information about controlling network logon by using system policies, see Chapter 15, "User Profiles and System Policies." Configuring Logon for NetWare Networks Each Windows 95 user must have an account on the NetWare server before being able to use its files, applications, or print queues. The NetWare server account contains user credentials (user names and passwords). With Client for NetWare Networks, there is no real-mode logon before Windows 95 starts, just the single, unified logon prompt for Windows 95 that allows users to log on to the system and to all networks at the same time. The first time a user starts Windows 95, there are two separate logon prompts: one for Windows 95 and one for the NetWare preferred server. As long as the two passwords are the same, the second logon prompt for Windows 95 is not displayed again. If the computer uses a Novell-supplied real-mode network client, network logon occurs in real mode, and uses all the NetWare configuration settings that were in place before Windows 95 was installed. There are no required changes. To configure Client for NetWare Networks for network logon, you need to specify whether Client for NetWare Networks is the Primary Network Logon client, which means the following: System policies and user profiles are downloaded from NetWare servers, if you use these features. Users are prompted first to log on to a NetWare server for validation when Windows 95 starts (before being prompted to log on to any other networks). For this computer, the last logon script runs from a NetWare server. Tip When you start Windows 95 with Client for NetWare Networks configured as the Primary Network Logon client, Windows 95 automatically prompts you to provide logon information such as your password on the NetWare server. You should never run the Novell-supplied LOGIN.EXE utility from a batch file or at the command prompt when you are using Client for NetWare Networks. When you designate Client for NetWare Networks as the Primary Network Logon client, you must also specify a preferred NetWare server. Windows 95 uses the preferred server to validate user logon credentials and to find user profiles and system policy files. You can change the preferred NetWare server at any time. The following procedure describes how to configure Client for NetWare Networks to log on to a NetWare network. If you use a NETX or VLM client, you can configure the setting for the preferred server using NET.CFG or using the /ps option in STARTNET.BAT, AUTOEXEC.BAT, or wherever you start NETX or VLM. For more information, consult your Novell-supplied documentation. To use a NetWare server for network logon In the Network option in Control Panel, select Client for NetWare Networks in the Primary Network Logon box. Double-click Client for NetWare Networks in the list of installed components. In the Client for NetWare Networks properties, set values for the configuration options, as described in the following table. Client for NetWare Networks attempts to connect to the preferred server rather than the first server that responds to the Get Nearest Server broadcast. Client for NetWare Networks also attempts a number of server connections in case the client computer can't establish a connection with the preferred server. Tip for Passwords on Windows 95 and NetWare Servers After you log on to the network and you are validated by a NetWare server, Windows 95 automatically supplies the same user name and password for logging on to Windows 95. You are asked to supply your user name and password to log on to Windows 95 only if the user name or password is different from your NetWare user account. Therefore, you might want to keep your user name and password the same for both the Windows 95 and the NetWare networks. Maintaining the same user name and password for both networks also makes it easier for network administrators to coordinate user accounts. For more information about passwords, including brief information on changing passwords on a NetWare server, see Chapter 14, "Security." With NETX and VLM clients, network logon occurs in real mode during system startup. Therefore, the logon prompt for Windows 95 always appears when these clients are used because the unified logon process is not available. Setting Network Logon Options with System Policies The network administrator can define system policies to enforce requirements for network logon. For example, you may want to ensure that users cannot access the local computer without network validation, or you may want to disable password caching. For network logon in general, use these Microsoft Client for NetWare Networks, use this policy: Disable Automatic NetWare Login, to specify that when Windows 95 attempts to connect to a NetWare server, it does not automatically try to use the user's network logon name and password and the Windows logon password to make the connection. For Client for Microsoft Networks, use these feature when password caching has been disabled using system policies. The Quick Logon feature requires password caching to function properly. For information about these policies and others that enforce password requirements, see Chapter 15, "User Profiles and System Policies." which also describes how to implement ensure that network logon is configured properly on a specific computer. Using Login Scripts This section summarizes some information about using login scripts on Windows NT and NetWare networks. For details about using login scripts for push installation of Windows 95, see Chapter 5, "Custom, Automated, and Push Installations." Using Login Scripts with Microsoft Networking This section summarizes how to use login scripts for Windows 95 on Windows NT networks. Login scripts are batch files or executable files that run automatically when a user logs on to a computer running either Windows NT, Windows 95, or MS-DOS. Login scripts are often used to configure users' working environments by making network connections and starting applications. There are several reasons that you might want to use login scripts: You want to manage part of the user environment (such as network connections) without managing or dictating the entire environment. You want to create common network connections for multiple users. You already have LAN Manager 2.x running on your network, and you want to continue to use login scripts you have created for that system. To assign a user a login script, designate the path name of the login script file in the user's account on the server. Then, whenever that user logs on, the login script is downloaded and run. You can assign a different login script to each user or create login scripts for use by multiple users. To create a batch-file login script, create an MS-DOS batch file. (For more information on creating batch files, see the Windows NT Server System Guide or your MS-DOS documentation.) There are several special parameters you can use when creating login scripts, as shown in the following table. A login login scripts always work for users, you should be sure that login scripts for all user accounts in a domain exist on every primary and backup domain controller in the domain. You can do this by using the Windows NT Replicator service, as described in the Windows NT Server System Guide. Home directories on Windows NT networks are used to store user profiles and can also serve as private storage spaces for users. Typically, users also control access to their home directories and can restrict or grant access to other users. To ensure access to user profiles, you should assign each user a home directory on a server. You can also assign users home directories on their own workstations (although this means that users won't have access to their user profiles from other computers); you might want to do this if you don't want the user to be able to access files and directories on the rest of the workstation. Using Login Scripts on NetWare Networks On NetWare networks (version 3.x or using the bindery), the system login script named NET$LOG.DAT is stored in the PUBLIC directory on the server. Individual user scripts are stored in their MAIL subdirectories. The network administrator can use SYSCON (or NWADMIN for VLM) to edit login scripts for any NetWare-compatible client running under Windows 95. Login scripts are stored differently on NetWare 3.x servers (using bindery services) versus NetWare 4.x servers (using NDS). On a bindery server, the System login script is stored in the NET$LOG.DAT file in the PUBLIC directory, and User login scripts are stored in the LOGIN file in MAIL subdirectories that correspond to the users' internal IDs. On an NDS server, the Container, Profile, and User login scripts are stored in the NDS database as properties of those objects. The issues related to running login scripts depend on whether the computer is configured with Client for NetWare Networks or uses a Novell-supplied network client. Running Login Scripts with Client for NetWare Networks If the computer is running Client for NetWare Networks, the special Windows 95 login script processor runs the login script after the user completes entries in the network logon dialog box during system startup. Microsoft Client for NetWare Networks makes only bindery connections. When it connects to a NetWare 4.x server, the server must be running bindery emulation, so that the login scripts can be accessed in the same way as on a bindery server. If bindery-type login script files aren't available, you must use SYSCON from a NetWare 3.x server to connect to the NetWare 4.x server and create bindery-type System and User login scripts. The Windows 95 login script processor runs NetWare 3.x. The Login Script Processor window Any NetWare or MS-DOS command (in conjunction with NetWare login script commands) can be used in a login script except those that load TSRs. The Windows 95 VM, which is subsequently shut down when login script processing is completed. In these cases, the login script processor displays an error message. For loading components such as backup agents, protected-mode equivalents in Windows 95 can be used instead of running TSRs. If you need to run a TSR to support an application, use one of the options described in the following table. 1 The IPX/SPX-compatible protocol (NWLINK) is loaded after real mode is complete but before login scripts are processed, so this protocol is available for TSRs loaded from WINSTART.BAT. 2 The TSR must be loaded in each separate VM Logon Script Processor window appears and displays all subsequent statements as they run. If any #DOS_command statement is included in the script, a special VM is used to process the command. An MS-DOS Prompt window appears while the command is running and then closes automatically when the command is complete. The following list presents some tips for testing and running login scripts with Client for NetWare Networks: In your testing laboratory, run the login script on a NETX computer and check the drive mappings and printer capture statements. Then run the script under Client for NetWare Networks and make sure the results are the same. Insert PAUSE statements frequently in the scripts you are testing so that you can study each screenful of information as it appears in the Logon Script Processor window. While testing scripts, check carefully for script errors that appear in the Logon Script Processor window. Insert PAUSE statements following any text that you want the user to read during system logon. Note: The Windows 95 login script processor can handle any documented NetWare login script commands. Any undocumented variations on NetWare commands might not be processed as legal statements. You can make persistent connections (using the same drive letter each time) to NetWare volumes and directories by using the Windows 95 user interface. Using persistent connections eliminates the need for some NetWare MAP commands in login scripts. However, if persistent connections are made to a server, you should avoid using the ATTACH command in login scripts. For information about making persistent connections, see "Connecting to Drive and Printer Resources" later in this chapter. Client for NetWare Networks also differs from NETX and VLM in that it does not map the first network drive to the logon directory of the preferred server. All subsequent connections to NetWare servers must be made by using Windows 95 tools. Running Logon Scripts with Novell-Supplied Clients If a computer is running the Novell-supplied NETX or VLM networking client, login scripts are processed as they were before Windows 95 was installed. With NETX or VLM, login scripts are run during system startup after real mode at the command prompt before Windows 95 switches to protected mode. Therefore, all statements and TSRs will run as expected and be available globally for all applications created for Windows or MS-DOS. Important: Users running a Novell-supplied client should always log on to the NetWare server before running Windows 95. Otherwise, many operational problems will occur. For example, if a user instead logs on at command prompt while already running Windows 95, then all the drive mappings created by the login scripts will be local only to that VM. Technical Notes for the Logon Process The notes in this section provide a brief overview of the logon process in Windows 95. If user profiles are enabled (using the Passwords option in Control Panel or by setting the related system policy), then a logon dialog box will always appear at system startup (even if the user's password is blank) because the user must be identified so the operating system can load the correct profile. If user profiles are not enabled, then password the user wants to use. On a portable computer that has a network adapter that can be changed (for example, using the adapter on a docking station versus using a PCMCIA card), the logon dialog box appears when there is an active network. Only the Windows 95 system logon dialog box appears when the network is not active. Windows password and the passwords for any other network providers are all blank, then Windows 95 can attempt an automatic or "silent" logon (opening the user's password file with a blank password). You might choose this configuration, for example, for peer servers that are physically secure from user access when you want such servers to be able to automatically recover from power outages or other failures without user intervention. Note: The administrator can use system policies to restrict users' access to the Passwords option in Control Panel or to require a minimum password length to prevent automatic logon using blank passwords. Browsing Overview Browsing in Windows 95 is the same for all network providers, whether the network is based on Windows NT Server, Novell NetWare, another network, or Windows 95 95 on Microsoft and NetWare networks, see "Browsing on Microsoft Networks" and "Browsing on NetWare Networks" later in this chapter. Using Network Neighborhood When you use Network Neighborhood, you can access shared resources on a server without having to map a network drive. Browsing and connecting to the resource consists of a single step: clicking an icon. For information about what happens internally when Network Neighborhood is used to browse multiple networks, see the description of the Multiple Provider Router in Chapter 32, "Windows 95 Network Architecture." Using Workgroups in Windows 95 On Microsoft networks, computers are logically grouped in workgroups for convenient browsing of network resources. If share-level security is used, each computer in the workgroup maintains its own security system for validating local user logon and access to local resources. NetWare networks do not use the workgroup concept, so computers running Windows 95 with. To browse a server quickly without mapping a drive. To create a shortcut on the desktop to a network resource In Network Neighborhood, find the network resource for which you want to create a shortcut. Using the right mouse button, drag the icon for that resource onto the desktop. In the context menu, click Create Shortcut Here. Double-click the shortcut icon to view the contents of the network directory in a new window. This shortcut is available every time you start Windows 95. As the network administrator, you can use system policies to create a custom Network Neighborhood for individuals or multiple users. You can create shortcuts using UNC names for any network connections, including Dial-Up Networking connections, as part of the custom Network Neighborhood provided when using system policies. However, do not place directories in the custom Network Neighborhood. Windows 95 does not support this feature, and unpredictable results can occur. In System Policy Editor, enable the policy named Custom Network Neighborhood: Use Registry mode to enable this option on a local or a remote computer Use Policy mode to create or modify a policy file for one or more users You can also set the following system policies to control users' access to built-in Windows 95 browsing features: Hide Network Neighborhood, to prevent access to Network Neighborhood No Entire Network In Network Neighborhood, to prevent access to the Entire Network icon in Network Neighborhood No Workgroup Contents In Network Neighborhood, to prevent workgroup contents from being displayed in Network Neighborhood For more information about specific policies and about using System Policy Editor, see Chapter 15, "User Profiles and System Policies." Browsing in Common Dialog Boxes The new common dialog boxes (such as File Open and File Save) are standard in programs that use the Windows 95 user interface. They provide a consistent way to open or save files on network resources and local drives. Also, you can browse Network Neighborhood and you can perform most basic file management tasks by using a common dialog box. Note: Windows-based applications created for earlier versions of Windows do not use the new common dialog boxes. In Windows 95, you can create new directories (also called folders) when you are saving a document (unlike Windows 3.1 in which you had to start File Manager or exit to the MS-DOS command prompt). This means that you can also create a new directory on a shared network resource when saving documents, as shown in the following procedure. This procedure can be used in any application that uses the Window 95 common dialog boxes. To create a new directory on the network while saving a file In the File menu, click Save As. In the Save In list, select a network location. If you need to, you can click Network Neigborhood in this list to browse for the computer on which you want to save the file. Click the Create New Folder icon, and type text for the new directory label. In the File Name box, type a name for the file, and then click Save. Connecting to Drive and Printer Resources The toolbar is available in every window and includes the Map Network Drive button. If you click this button, the Map Network Drive dialog box appears. 95 is started. You can display this dialog box by right-clicking the Network Neighborhood icon. When installing a new printer, you can specify a shared printer resource by using the UNC name or the Point and Print method. For example, for the shared printer named HP_III on the server CORP, the name UNC is \\CORP\HP_III. For more information about Point and Print, see Chapter 23, "Printing and Fonts." Browsing with the Net View Command Browsing network resources at the command prompt is handled by the real-mode networking components. You can use the net view command to perform most of the same browsing actions as Network Neighborhood or Windows Explorer, except that it cannot provide a list of workgroups. For specific notes about using the net commands on NetWare networks, see "Browsing on NetWare Networks" later in this chapter. To display a list of computers with shared resources in a workgroup At the command prompt, type the following and then press ENTER. net view [\\computername] – Or – net view [/workgroup:workgroupname] Where computername is the name of the computer with shared resources you want to view; /workgroup specifies that you want to view the names of the computers that share resources in another workgroup; and workgroupname is the name of the workgroup that has computer names you want to view. Browsing on Microsoft Networks The Windows 95 browsing scheme for Microsoft networks is based on the scheme currently used for Windows NT and Windows for Workgroups. The Windows 95 browse service attempts to minimize the network traffic related to browsing activity, while also providing an implementation that scales well to support both small and large networks. This section describes how the browse service designates browse servers and maintains the browse list. Designating a Browse Master for Microsoft Networks The Windows 95 browse service uses the concept of a master browse server and a backup browse server to maintain the browse list. There is only one master browse server for a given Windows 95 workgroup for each protocol used in the workgroup; however, there can be one or more backup browse servers for each protocol for a given workgroup. The master browse server is responsible for maintaining the master list of workgroups, domains, and computers in a given workgroup. To minimize the network traffic that the master browse server can be subjected to when handling browsing services, backup browse servers can be designated in a workgroup to help off-load some query requests. Usually, there is one browse server for every 15 computers assigned to a given workgroup. When Windows 95 is started on a computer, the computer first checks to see if a master browse server is already present for the given workgroup. If a master browse server does not exist, an election creates a master browse server for the workgroup. If a master browse server already exists, Windows 95 checks the number of computers in the workgroup, and the number of browse servers present. If the number of computers in the workgroup exceeds the defined ratio of browse servers to computers in a workgroup, an additional computer in the workgroup might become a backup browse server. The Browse Master parameter in the Advanced properties for File and Printer Sharing for Microsoft Networks provides a mechanism for controlling which computers can become browse servers in a workgroup. If this parameter is set to Automatic, the master browse server can designate that computer as a backup browse server when needed, or that computer can be elected as master browse server. For information about configuring this parameter, see "Using File and Printer Sharing for Microsoft Networks" later in this chapter. Tip for Using the Net View Command to Check the Browse Server could reset this computer by quitting Windows 95. Another computer will then be promoted to master browse server for the workgroup. Building the Browse List for Microsoft Networks In Windows 95, 95 in the Map Network Drive and Connect Network Printer dialog boxes, or anywhere that Windows 95 presents lists of resources that can be browsed. The browse list can also be displayed by using the net view command. The list can contain the names of domains, workgroups, and computers running the File and Printer Sharing service, including the following: Computers running Windows 95, Windows for Workgroups, and Windows NT Workstation Windows NT Server domains and servers Workgroups defined in Windows 95, Windows for Workgroups, and Windows NT Workgroup Add-On for MS-DOS peer servers LAN Manager 2.x domains and servers Adding New Computers to the Browse List When a computer running Windows 95. Removing Computers from the Browse List. Technical Notes on Browsing on Microsoft Networks This section presents some brief notes related to browsing on Microsoft networks. The Windows 95 browser has been updated to support browsing across TCP/IP subnetworks. To take advantage of this, the network must use a WINS server or you must use #DOM entries in LMHOSTS files. For information about creating LMHOSTS files, see Appendix G, "HOSTS and LMHOSTS Files for Windows 95." Microsoft LAN Manager-compatible networks such as IBM® LAN Server and Microsoft LAN Manager for UNIX® support browsing of servers and shared directories using the Windows 95 user interface or net view. DEC™ PATHWORKS™ is an example of a Microsoft LAN Manager-compatible network that does not support browsing. AT&T® StarLAN is an example of a Microsoft Network-compatible network that is not based on Microsoft LAN Manager and that does not support remote browsing of servers and shared directories. These servers do not appear in Network Neighborhood; with Windows 95, however, users can still access the servers and shared directories through a network connection dialog box. When a known slow network connection is used (for example, the remote access driver), Windows 95. Browsing on NetWare Networks The Windows 95 user interface includes support for browsing and connecting to network resources on Novell NetWare and other networks. Except for workgroups, this support is the same whether you use Client for NetWare Networks or the Novell-supplied NETX or VLM client. After you connect to a NetWare volume or a computer running File and Printer Sharing for NetWare Networks, you can drag and drop directories and files to move and copy them between your computer and the NetWare server. For information about printer connections, see Chapter 23, "Printing and Fonts." Using Network Neighborhood on NetWare Networks Network Neighborhood is the primary way you can browse the network. When you open Network Neighborhood on a computer running a NetWare-compatible networking client, all the NetWare bindery-based servers your computer is connected to are displayed. All computers running File and Printer Sharing for NetWare Networks that use Workgroup Advertising also appear in Network Neighborhood. your computer has both Client for Microsoft Networks and Client for NetWare Networks installed, then you will also see a list of computers running Windows for Workgroups, Windows 95, and Windows NT. The list of NetWare servers is at the beginning of the list of workgroups or domains in the Entire Network window. In both the Network Neighborhood and Entire Network views, you can open a server to access its contents without having to map a network drive. You will be asked for security information, if necessary, and you can choose to save your password in the password cache so that you will not have to type it again. If the computer is running Client for NetWare Networks, drive mappings are limited to the available drive letters. However, Windows 95 supports unlimited UNC connections. (If the computer is running NETX or VLM, it is limited to only eight server connections.) To connect to a NetWare server in Network Neighborhood In Network Neighborhood, right-click a NetWare server. In the context menu, click Attach As. Then type a user name and password, and click OK. If you want to map a directory on this server, double-click the server icon. Right-click the directory you want to map, and click Map Network Drive in the context menu. Fill in the Map Network Drive dialog box, and click OK. Tip You can also create a shortcut to frequently used resources. For information, see "Using Network Neighborhood." When you double-click a shortcut, you have to supply only a password to connect to it. The toolbar on every window includes the Map Network Drive button, which you can use to specify the name of a NetWare server and volume (or directory) that you want to map to a drive letter. To connect to a directory as the root of the drive In Network Neighborhood, right-click a directory on a NetWare server. In the context menu, click Map Network Drive. In the Map Network Drive dialog box, make sure Connect As Root Of The Drive is checked, and then click OK. With this option enabled, if you switch to this mapped directory in a VM windows, you will see the prompt as drive:\> not drive:\directory>). You cannot go further up the directory tree from the command prompt. The context menu for a NetWare server shows everything you can do with the related server, volume, or directory. To view the context menu, in Network Neighborhood, right-click a NetWare server. The following table describes the commands available on the context menu. menu, click Properties, and then click the Tools tab. Use the buttons to run Net Watcher or System Monitor, or to administer the file system. For more information about preparing computers for remote administration under Windows 95, and about using Net Watcher and other tools, see Chapter 16, "Remote Administration." Managing Connections with Client for NetWare Networks Client for NetWare Networks is different from NETX and VLM in that it does not map the first network drive to the logon directory of the preferred server. All subsequent connections to NetWare servers must be made in the Windows 95 user interface. With Client for NetWare Networks, you can manage connections to the NetWare network by using Network Neighborhood and common network-connection dialog boxes such as the Open and Save dialog boxes. . Using Commands to Connect to NetWare Servers If you are running Client for NetWare Networks, all NetWare commands run in the same way as they do for a Novell-supplied networking client. The ATTACH and SLIST commands provided with Windows 95 use the same syntax and work in exactly the same way as the counterparts provided by Novell. The following should be noted about certain Novell-supplied commands: For the ATTACH command, configure the networking client to use SAP Browsing. It is recommended that you do not use the LOGIN utility to create an attachment to a computer running File and Printer Sharing for NetWare Networks. Use the ATTACH command instead. For the MAP command, drive mappings in Windows 95 are global to all sessions. You can also use the Microsoft networking net commands at the command prompt or in login scripts to manage connections on NetWare networks. For example, the net use command can be used to do the following: Perform the same functions as the NetWare ATTACH and MAP commands. Supply similar functionality to the CAPTURE utility for printing when programs require printing to a specific port. You can use the Windows 95 net view command to perform the same function as the NETX SLIST or VLM NLIST SERVER commands. The following brief procedures show built-in Windows 95 commands that can be used at the command prompt or in scripts to manage resource connections. To view NetWare servers At the command prompt or in a login script, type net view For example: D:\WIN\COMMAND>net view NetWare Servers ---------------------------- \\386 \\TRIKE \\WRK To view volumes on a server At the command prompt or in a login script, type net view \\servername For example: D:\WIN\COMMAND>net view \\trike Shared resources at \\trike Sharename Type Comment ---------------------------------- SYS Disk PUBLIC Disk. Use the /network parameter to specify the volumes on the particular network you want to view. For example: net view \\nwserver_name /network:nw To create a drive connection At the command prompt or in a login script, type net usedrive: \\servername\volume For example: D:\WIN\COMMAND>net use l: \\trike\sys The password is invalid for \\TRIKE\SYS. Enter user name for server TRIKE:joed Enter the password for user JoeD on server TRIKE: The net use command is equivalent to MAP drive:=servername\volume: and it maps only to the root of the volume. Tip To use the next available drive letter when connecting to the volume, replace the drive letter with an asterisk (*). By typing the net use command without parameters, you can list the current network connections. For example: To delete a drive connection At the command prompt or in a login script, type net usedrive: /d For example: D:\WIN\COMMAND>net use l: /d The /d switch and the NetWare command MAP DEL drive are equivalent. To create a print connection At the command prompt or in a login script, type net useport: \\servername\queuename For example: D:\WIN\COMMAND>net use lpt3: \\trike\pscript1 This is equivalent to CAPTURE l=port S=servername Q=queuename. To delete a print connection At the command prompt or in a login script, type net useport: /d For example: D:\WIN\COMMAND>net use lpt3: /d This is equivalent to ENDCAP L=port#. The net command in Windows 95 instead of net view, MAP instead of net use, or CAPTURE instead of net use to connect to a printer. Using Windows NT to Connect to NetWare Servers If your site includes both a Novell NetWare network and a Windows NT Server network, computers using Microsoft networking will need to communicate and share resources with the NetWare network. This section summarizes several options using Windows NT. Windows NT Gateway Service for NetWare. For Microsoft networking clients that cannot use multiple protocols, you can configure a computer running Windows NT Server 3.5 as a file or print gateway using Windows NT Gateway Service for NetWare to connect to and share NetWare resources. Notice that a Microsoft Windows NT Client Access License is required if the computer will be connecting to servers running Windows NT Server. For information, contact your Microsoft reseller. As shown in the following illustration, Windows NT Gateway Service for NetWare acts as a translator between the SMB protocol used by Microsoft networks and the NCP protocol used on NetWare networks. The file gateway uses a NetWare account on the Windows NT Server computer to create a validated connection to the NetWare server, which then appears on the Windows NT Server computer as a redirected drive. When the administrator shares the redirected drive, it looks similar to any other shared resource on the Windows NT Server computer. A print gateway functions in much the same way as the file gateway: the NetWare printer appears on the Windows NT network as if it were any other shared printer. Because access over the gateway is slower than direct access from the client for computers running Windows 95 that require frequent access to NetWare resources, Client for NetWare Networks is a better solution. For information about setting up a Windows NT Server computer with Gateway Service for NetWare, see Windows NT Server Services for NetWare Networks in the Windows NT Server 3.5 documentation set. Microsoft File and Print Services for NetWare. This utility for Windows NT Server provides users running a NetWare-compatible client with access to basic NetWare file and print services and to powerful server applications on the same Windows NT Server-based computer. You can use Microsoft File and Print Services for NetWare to add a multipurpose file, print, and application server to your NetWare network without changing users' network client software. Microsoft Directory Service Manager for NetWare. This utility for Windows NT Server allows you to maintain a single directory for managing mixed Windows NT Server and NetWare 2.x and 3.x server network. For more information about these features or how to obtain Microsoft File and Print Services for NetWare, or the Microsoft Directory Service Manager for NetWare, contact your Microsoft sales representative. Overview of Peer Resource Sharing When a computer is running File and Printer Sharing services, other users running a compatible network client can connect to shared printers, volumes, CD-ROM drives, and directories on that computer by using the standard techniques for connecting the network resources, as described in "Browsing on NetWare Networks" and "Browsing on Microsoft Networks" earlier in this chapter. Using computers running Windows 95 as peer servers allows you to add secure storage space and printing to the network at a low cost. The peer service is based on a 32-bit, protected-mode architecture, which means all the Windows 95 benefits for robust, high performance are available. In addition, administrators can take advantage of features provided with Windows 95, such as Net Watcher and system policies, 16, "Remote Administration." Installing Peer Resource Sharing If you use custom setup scripts or choose the Custom option as the Setup Type in Windows 95 Setup, you can specify that File and Printer Sharing services be installed with Windows 95. Otherwise, you can add the service later by using the Network option in Control Panel. Tip For a computer that will share resources with other users on the networks, choose which File and Printer Sharing service to install based on what other users require: • If most users who need to share these resources are running NETX, VLM, or Client for NetWare Networks, then install File and Printer Sharing for NetWare Networks. • If most users who need to share these resources are running Client for Microsoft Networks, Windows NT, Windows for Workgroups, or Workgroup Add-on for MS-DOS, then install File and Printer Sharing for Microsoft Networks. To install File and Printer Sharing after Setup In the Network option in Control Panel, click Add. In the Select Network Component Type dialog box, double-click Service. In the Select Network Service dialog box, click Microsoft in the Manufacturers list. Then, in the Network Service list, click the File and Printer Sharing service you want to install. For information about enabling File and Printer Sharing in custom setup scripts, see Chapter 5, "Custom, Automated, and Push Installations." For information about controlling peer resource sharing capabilities using system policies, see Chapter 15, "User Profiles and System Policies." Overview of Security for Peer Resource Sharing For File and Printer Sharing for Microsoft Networks (but not NetWare), Windows 95 supports share-level security similar to the security provided with Windows for Workgroups. This level of security associates a password with a shared disk directory or printer. Share-level security for peer resource sharing can be implemented in a Windows 95-only peer-to-peer network or on a network supported by Windows NT or other Microsoft Windows network-compatible servers. For File and Printer Sharing services on both Windows NT and NetWare networks, Windows 95 95 peer server can be accessed only by users with accounts in the central database. Users can also be assigned specified access rights in Windows 95 for particular resources. For information about using and managing security, see Chapter 14, 95 95 by using NetWare utilities such as FILER. For File and Printer Sharing on Microsoft Networks, you can set up user rights remotely by using User Manager for Windows NT. You can use Net Watcher to monitor, add, and remove shared resources, as described in Chapter 16, "Remote Administration." When a user requests access to a shared resource under user-level security, Windows 95 checks for the user's logon name against the list of user accounts maintained on the server. If this is a valid user logon name, Windows 95 then checks whether this user has access privileges for this resource. If the user has access privileges, then the requested operation is allowed. For an example of how pass-through validation works with peer resource sharing, see Chapter 14, "Security." Using File and Printer Sharing for Microsoft Networks File and Printer Sharing for Microsoft Networks is the 32-bit, protected-mode Windows 95 SMB server (VSERVER.VXD), that supports all Microsoft networking products that use the SMB file-sharing protocol, including Windows for Workgroups, Windows NT, LAN Manager, LAN Manager for UNIX, AT&T StarLAN, IBM LAN Server, 3Com® 3+Open® and 3+Share®, and DEC PATHWORKS. Windows 95 enhances the features of Windows for Workgroups peer services by providing administrative control over whether peer sharing services are enabled, by adding user-based security capabilities, and by supporting long filenames. in the following circumstances only: If you need to set Browse Master properties, as described in "Browsing on Microsoft Networks" earlier in this chapter. If Access Control properties, see Chapter 14, "Security." To specify Browse Master settings In the Network option in Control Panel, double-click File and Printer Sharing for Microsoft Networks in the list of installed components. In Advanced properties for File and Printer Sharing for Microsoft Networks, select Browse Master in the Property list. Select an option in the Value list, as described in the following table. At least one computer in the workgroup must have the value of Automatic or Yes. To specify LM Announce settings In Advanced properties for File and Printer Sharing for Microsoft Networks, select LM Announce in the Properties list. Select an option in the Value list, as described in the following 95 (or Windows NT in the domain) is a member of that LAN Manager 2.x domain. To make a computer running Windows 95 a member of a LAN Manager 2.x domain Set the workgroup name for the computer to be the same as the LAN Manager 2.x domain name. You can share a directory , specify the type of access and define a password for the shared resource. To share a directory (folder) with user-level security In Windows Explorer, right-click the icon for the directory you want to share. In the context menu that appears, click Sharing. Click the Sharing tab, online Help. Using File and Printer Sharing for NetWare Networks filenames and is Plug and Play-aware. This new implementation differs from peer resource sharing in Windows for Workgroups in two fundamental ways: File and Printer Sharing for NetWare Networks uses the NCP protocol instead of the SMB protocol. This means that any NetWare-compatible client (Client for NetWare Networks, on Service Advertising Protocol (SAP, the NetWare broadcasting protocol) Advertising will not appear in a workgroup, but it will appear in the Entire Network list. For a computer running NETX or VLM, any shared directories on a peer server that uses SAP advertising appear the same as volumes on any server. Any shared printers will. Sharing Resources on a NetWare Network: An Example During the beta test phase for Windows 95, one NetWare system administrator found the peer resource sharing service to be an administrative lifesaver. A vice president at the company had CD-ROM hardware problems just when he needed immediate access to a tax program that was available only on compact disc. The quick-thinking administrator installed File and Printer Sharing for NetWare Networks on a computer that had a CD-ROM drive. After making sure the vice president was assigned access rights, the administrator mapped a drive on the vice president's computer to access the shared CD-ROM. The Windows 95 peer resource sharing service allowed the administrator to provide an immediate software solution to a hardware problem that would have taken much longer to solve. Sharing Resources on a NetWare Network To allow NETX and VLM clients on the network to access resources on the peer server, you must enable SAP Browsing in the properties for File and Print Sharing for NetWare Networks. The computer then appears as a server in SLIST listings,, then the File and Printer Sharing service still runs on the computer, but the related sharing options are not available. Configuring Browsing for Resource Sharing on NetWare Networks After you install File and Printer Sharing for NetWare Networks, you need to choose the method that computers browsing on the network will use to find this computer. You can browse by using two options: Workgroup Advertising, which uses the same broadcast method as used by workgroups on Microsoft networks. SAP Advertising, which is used by Novell NetWare 2.15 and above, 3.x, and 4.x. To specify the browsing preference In the Network option in Control Panel, double-click File and Printer Sharing for NetWare Networks in the list of installed components. In Advanced properties, select Workgroup Advertising to define how you want computers running Client for NetWare Networks to see and connect to this peer server. – Or – Select SAP Advertising if you want NETX and VLM clients to be able to connect to this peer server. If you select Workgroup Advertising, you can set the following values. following values. By default, computers running File and Printer Sharing for NetWare Networks are placed in and browsed by workgroups. You can use the Identification properties in the Network option in Control Panel to specify the workgroup and computer name for the computer. Although computers that use SAP advertising appear in the list of NetWare servers, you cannot use them in all the same ways that you use NetWare servers. When using NETX, you cannot log on to a computer running Windows 95 at the command line, although you can attach to one and map drives to its directories. When using VLM, you cannot log on to a computer running Windows 95 at the command line, but you can run a login /ns command and use the Login button in the NWUSER utility. If you run SYSCON on a NetWare server, you can change the server to one of the computers running Windows 95. However, the computer running Windows 95 95 and display its volume information (if you are attached to it). This shows all the available shared disk resources for the computer running Windows 95. In Windows 95, you can do the same things to resources on computers running File and Printer Sharing for NetWare Networks as you can to any other network resource. If you have appropriate rights to connect to the shared resources, you can also create a link to the computer or map a drive to its shared directories, and so on. Note: Each computer configured with File and Printer Sharing for NetWare Networks logs on to the NetWare server that provides security, to get access to the bindery, using the Windows_Passthru account. This logon process takes place in the background, without user intervention. One connection to that NetWare server is used as needed for each computer running File and Printer Sharing for NetWare Networks, and it is disconnected if it is not needed for 30 seconds. If a connection already exists, Windows 95 uses that connection and makes a new connection only when required. Controlling Access to Peer Server Resources on NetWare Networks You can add to the list of users who can access the resources on the peer server. To do this, add the users to the NetWare pass-through server that provides security. These users can then be given Basics" earlier in this chapter. To ensure all users have the required server access Make sure that one NetWare server on the network has the accounts for all users or all servers, and then set that server as the security provider for every computer configured with File and Printer Sharing for NetWare Networks. If server access is not set properly, each time the computer running Windows 95 is started a message warns that the pass-through server has not been specified. To share a directory and specify users on a NetWare network In Windows Explorer, right-click the directory you want to share. In the context menu, click Sharing. In the Sharing dialog box, type a share name for the directory. Click the Add button. In the Add Users dialog box, select the user name in the list on the left, and then click the related button to specify the kind of access that user is allowed. For information about using the Add Users dialog box, see online Help. For more information about specifying directory access rights, see Chapter 14, "Security." Notice in the illustration that the list of users shown in the Add Users dialog box is from the TRIKE server's bindery. This means two things: All user management is done in the name space of the existing NetWare server. The NetWare server is administered by using all the same tools that are currently in place; Windows 95 has not added another namespace to administer. Only valid user accounts and groups on TRIKE can be specified for shared resources on the peer server. When the computer running Windows 95 receives a request from a user attempting to access a shared device, Windows 95 uses the NetWare server to validate the user name or group membership. If the name or group membership is validated, then Windows 95 checks to see if this validated name or group has been granted access rights to the shared resource, and then it grants or denies the connection request. Share Names vs. NetWare Volume Names either Microsoft networking \\server\sharename shares or NetWare server/volume shares. Windows 95 does not make Using Bindery Emulation for Pass-Through Security 95 can use the bindery of one NetWare server. Usually, companies have multiple NetWare servers for different departments, and individual users log on to a different server by department. Problems can occur when the list of accounts differs between NetWare servers. For example, assume that AnnieP and YusufM log on to the SALES server, and KrisI is on the R&D server. AnnieP can select only one server for pass-through validation, so she must select the SALES server, because that's where this account is located for logon. She can grant access to YusufM, but not to KrisI. Troubleshooting for Logon, Browsing, and Peer Resource Sharing This section provides some general methods for troubleshooting. Setup doesn't run the login script. If the network logon server or domain controller is not validating the user account, the login. You cannot browse to find SMB-based servers in the workgroup while using Client for Microsoft Networks. There might be no SMB-based servers in the workgroup (computers running Windows NT, LAN Manager, or File and Printer Sharing for Microsoft Networks). Windows 95 does not support browsing in a workgroup that does not contain an SMB-based server if the computer is running Client for Microsoft Networks. The following presents a solution. To ensure there is an SMB-based server in the workgroup On a computer running File and Printer Sharing for Microsoft Networks, make sure the service is configured as the master browser server. – Or – Make sure that a Windows NT server computer is a member of the workgroup (or domain).. User cannot connect to any network resource. Check the workgroup assignment. Check the domain or preferred server assignment for the protected-mode network client. Check the rights for the user as defined on the domain or preferred server. Check the basic network operations. Use net view \\computer name to view shared resources. Check for the termination of the local network cable. Others cannot connect to my shared resources. In the Network option in Control Panel, verify that the File and Print Sharing service appears in the list of installed components. Make sure other users are running a common protocol. Network Neighborhood doesn. Check the network cable termination. You can't 95 95. Access is denied for Windows for Workgroups users trying to connect to shared resources on a computer running File and Printer Sharing for Microsoft Networks. If the user with the Windows for Workgroups client computer is logging on to a different domain from the computer running File and Printer Sharing services (the peer server), then Windows 95 cannot confirm logon validation for access to shared resources. To solve this problem, do one of the following: Upgrade the Windows for Workgroups clients to Windows 95 (recommended). Set the LM Announce option to Yes in the Advanced properties for File and Printer Sharing for Microsoft Networks on the peer server. Switch to share-level security on the peer server. Change the logon domain for the Windows for Workgroups clients. This problem will not occur in these cases: if the client computers are running Windows 95.
https://technet.microsoft.com/en-us/library/cc751090(d=printer).aspx
CC-MAIN-2016-18
en
refinedweb
Introduction We would urge you to first do this tutorial and then study the Allegro Prolog documentation if necessary. This is a basic tutorial on how to use Prolog with AllegroGraph 3.3. It should be enough to get you going but if you have any questions please write to us and we will help you. In this example we will focus mainly on how to use the following constructs: When consulting the Reference Guide, one should understand the conventions for documenting Prolog functors. A Prolog functor clause looks like a regular Lisp function call, the symbol naming the functor being the first element of the list and the remaining elements being arguments. But arguments to a Prolog functor call can either be supplied as input to the functor, or unsupplied so that the clause might return that argument as a result by unifying some data to it, or may be a tree of nodes containing both ground data an Prolog variables. The common example is the functor append which has three arguments and succeeds for any solution there the third argument is the same as the first two arguments appended. The remarkable thing about Prolog semantics is that append is a declarative relation that succeeds regardless which arguments are supplied as inputs and which are supplied as outputs. <ret> indicates where the user would type a return to ask Prolog to find the next result. > (?- (append (1 2) (3) ?z)) ?z = (1 2 3) <ret> No. > (?- (append (1 2) ?y (1 2 3))) ?y = (3) <ret> No. > (?- (append ?x ?y (1 2 3))) ?x = () ?y = (1 2 3) <ret> ?x = (1) ?y = (2 3) <ret> ?x = (1 2) ?y = (3) <ret> ?x = (1 2 3) ?y = () <ret> No. > (?- (append ? (1 ?next . ?) (1 2 1 3 4 1 5 1))) ?next = 2 <ret> ?next = 3 <ret> ?next = 5 <ret> No. The last example successively unifies to each element in the list immediately preceded by a 1. It shows the power of unification against partially ground tree structure. Now we return to the the notational convention: A functor argument that is an input to the functor and which must be supplied is marked in the documentaiton with a +. A functor argument that is returned by the functor and which must not be supplied is marked with a -. An argument that can be either is marked with ±. (Prolog documentation generally used ? for this, but in Lisp-bnased Prologs that character is used as the first character of a symbol that is a Prolog variable, overloading using it to indicate and input-output argument would be very confusing.) Within this convention append would be notated as (append ±left ±right ±result ). But a functor like part= which simply checks whether two parts are the same UPI and which requires two actual which requires two actual future-part or UPI arguments, would be documented (part= +upi1 +upi2). The rest of this tutorial will be based on a tiny genealogy database of the Kennedy family. Please open the file kennedy.ntriples that came with this distribution in a text editor or with TopBraidComposer and study the contents of the file. Notice that people in this file have a type, sometimes multiple children, multiple spouses, multiple professions, and go to multiple colleges or universities. This tutorial uses Lisp as the base language but there is also a Java example with the same content. First let us get AllegroGraph ready to use: > (require :agraph) ;; .... output deleted. > (in-package :triple-store-user) #<The db.agraph.user package> > (enable-!-reader) #<Function read-!> t > (register-namespace "ex" "" :errorp nil) "" Now we can create a triple-store and load it with data. The function create-triple-store creates a new triple-store and opens it. If you use the triple-store name "temp/test", then AllegroGraph will create a new directory named temp in your current directory (use the top-level command :pwd if you want to see what this is). It will then make another directory named test as a sub-directory of temp. All of this triple-store's data will be placed in this new directory temp/test: > (defun fill-kennedy-db () (create-triple-store "temp/test" :if-exists :supersede) (time (load-ntriples #p"sys:agraph;tutorial-files;kennedy.ntriples")) (index-all-triples)) fill-kennedy-db > (fill-kennedy-db) ;; .... output deleted. So let us first look at person1 in this database: > (print-triples (get-triples-list :s !ex:person1)) <> <> <> . <> <> <> . <> <> <> . <> <> <> . <> <> <> . <> <> <> . <> <> <> . <> <> <> . <> <> <> . <> <> <> . <> <> <> . <> <> <> . <> <> <> . <> <> <> . <> <> <> . <> <> <> . <> <> <> . <> <> <> . <> <> <> . <> <> <> . <> <> <> . <> <> <> . Now we are ready to try the select statement in combination with the Prolog q functor. Let us try to find all the children of person1. Just type the following in the listener. Afterward, I'll explain. > (select (?x) (q- !ex:person1 !ex:has-child ?x)) (("") ("") ("") ("") ("") ("") ("") ("") ("")) select is a wrapper used around one or more Prolog statements. The first element after select is template for the format and variables that you want to bind and return. So in this example above we want to bind the variable ?x. The rest of the elements tell Prolog what we want to bind ?x to. This statement has only one clause, namely (q- !ex:person1 !ex:has-child ?x)` If you have studied how get-triples works you probably can guess what happens here. q- is a Prolog functor that is our link to the data in the triple-store. It calls get-triples and unifies the ?x with the objects of all triples with subject !ex:person1 and predicate !ex:has-child. So let us make it a little bit more complex. Let us find all the children of the children of person1. Here is how you do it: > (select (?y) (q- !ex:person1 !ex:has-child ?x) (q- ?x !ex:has-child ?y)) (("") ("") ("") ("") ("") ("") ("") ("") ("") ("") ...) Although Prolog is a declarative language, a procedural reading of this query works better for most people. So the previous query can be read as Find all triples that start with !ex:person1 !ex:has-child. For each match set ?xto the object of that triple; then for each triple that starts with ?x !ex:has-childfind the ?y The following example should now be easy to understand. Here we are trying to find all the spouses of the grand-children of ?z. Notice that we ignore the ?x and ?y in the query. The select will only return the ?z > (select (?z) (q- !ex:person1 !ex:has-child ?x) (q- ?x !ex:has-child ?y) (q- ?y !ex:spouse ?z)) (("") ("") ("") ("") ("") ("") ("") ("") ("") ("") ...) Now if you wanted to you could get the other variables back. Here is the same query but now you also want to see the grand-child. > (select (?y ?z) (q- !ex:person1 !ex:has-child ?x) (q- ?x !ex:has-child ?y) (q- ?y !ex:spouse ?z)) (("" "") ("" "") ("" "") ("" "") ("" "") ("" "") ("" "") ("" "") ("" "") ("" "") ...) So now we understand the select and the q statement. We are halfway there. Let us now define some Prolog functors. The following defines a functor that says: ?x is a male if in the triple store I can find an ?x that has the !ex:sex !ex:male. > (<-- (male ?x) (q- ?x !ex:sex !ex:male)) male Let us try it out by finding all the sons of person1. > (select (?x) (q- !ex:person1 !ex:has-child ?x) (male ?x)) ;;; Note how we use NO q here! (("") ("") ("") ("")) Now this is not too exciting, and it is equivalent to the following: (select (?x) (q- !ex:person1 !ex:has-child ?x) (q- ?x !ex:sex !ex:male)) So let us make it more complex: > (<-- (female ?x) (q- ?x !ex:sex !ex:female)) female > (<-- (father ?x ?y) (male ?x) (q- ?x !ex:has-child ?y)) father > (<-- (mother ?x ?y) (female ?x) (q- ?x !ex:has-child ?y)) mother The female, father, mother relations are all simple to understand. The following adds the idea of multiple rules (or functors). Notice how we define the parent relationship with two rules, where the first rule uses <-- and the second rule uses <-. The reason is that <-- means: wipe out all the previous rules that I had about parent and start anew whereas <- means, add to the existing rules for parent. The following should be read as: ?xis the parent of ?yif ?xis the father of ?yor ?xis the parent of ?yif ?xis the mother of ?y. (<-- (parent ?x ?y) (father ?x ?y)) parent (<- (parent ?x ?y) (mother ?x ?y)) parent So let us try it out by finding the grand children of person1 > (select (?y) (parent !ex:person1 ?x) (parent ?x ?y)) (("") ("") ("") ("") ("") ("") ("") ("") ("") ("") ...) We could have done the same thing by defining a grandparent functor. See the next definition. > (<-- (grandparent ?x ?y) (parent ?x ?z) (parent ?z ?y)) grandparent > (<-- (grandchild ?x ?y) (grandparent ?y ?x)) grandchild And here it gets really interesting because we now go for the first time to a recursive functor. > (<-- )) (("") ("") ("") ("") ("") ("") ("") ("") ("") ("") ...) And then here are some puzzles that you can work out for yourself.. Note the use of not and part= in these statements. 'not' can contain any expression. part= will compare its two arguments as UPIs; It will not unify. > (<-- the value computed by the progn body into the Prolog clause. There is a problem with the syntax for the Prolog cut and AllegroGraph's future-part syntax. Prolog uses the exclamation point ! to denote the cut operation. When executed, a cut clears all previous backtracking points within the current predicate. For example, > (<-- (parent ?x) (parent ?x ?) !) defines a predicate that tests whether the argument person a parent, but if so succeeds only once. ?) \!) Be aware that sometimes names with syntax parent/2 will appear in Prolog documentation and in the debugger. The portion of the name is the predicate name -- also called a functor and the same as the Lisp symbol naming the predicate. The non-negative integer after the slash is the arity, which is the number of arguments to the predicate. Two predicates with the same functor but different arity are completely unrelated to one another. In the example above the predicate parent/1 has no relation to the parent/2 predicate defined earlier in this document and which it calls.
http://franz.com/agraph/support/documentation/3.3/prolog-tutorial.html
CC-MAIN-2016-18
en
refinedweb
., vim.wikia.com/wiki/Open_file_under_cursor). you can use both: ctrl+g to specify a line number and jump to it, no command line that i know of... if you know your way in basic python i guess it could be a breeze to set such plugin use alt+f3, and f3 to cycle through the results. no sticky highlight yet though ( jon, please... ) Preferences\User File Preferences, add and edit the following:wordSeparators ./\()"'-:,.;<>~!@#$%^&*|+=]{}`~? btw, read Preferences\Default File Preferences for a complete list of all you can change. good luck,vim. These shouldn't be that hard to code a plugin for... I think nick did the second one already... anyways, when opening a file under cursor... how does that work? If I have put cursor under site.css where is it going to get the rest of the path from? the current open file? or project (if any?) ?It shouldn't be hard if u have the full path to it like "c:\Documents and Settings....\site\text.css"I'll have a go at it and see what I can do, but I bet sublimator is gonna woop my ass at writing the plugin haha he's a python beast! Here is some code that will run any command you want (in your case a ruby or python script) and display the results in a new buffer: This is a modified version of some of the code from the Mercurial plugin I submitted a while ago. # Runs a system command from the command line # Captures and returns both stdout and stderr as an array, in that respective order def doSystemCommand(commandText): p = subprocess.Popen(commandText, shell=True, bufsize=1024, stdout=subprocess.PIPE, stderr=subprocess.PIPE) p.wait() stdout = p.stdout stderr = p.stderr return [stdout.read(),stderr.read()] def displayResults(Results, view): if(Results[1] != None and Results[1] != ""): createWindowWithText(view, "An error or warning occurred:\n\n" + str(Results[1])) elif(Results[0] != None and Results[0] != ""): createWindowWithText(view, str(Results[0])) # Open a new buffer containing the given text def createWindowWithText(view, textToDisplay): NewView = sublime.Window.newFile(view.window()) NewView.insert(NewView.size(), textToDisplay) That will run any command you provide it and capture stdout and stderr. It displays stdout only if theres no output on stderr. You should be able to take that code and roll a plugin to fit your custom needs. Very nice work sam Thanks! Hi guys I've recently discovered Sublime and my first impressions are extremely favourable. I love the minimap feature plus the editor looks really cool too. I've had a look through the various menu options, but something that I use a lot, but can't find (and it doesn't seem to work when I hold down the alt key - although I've tried others as well), is a column select option. Is this feature available now (ie is it me? ) or is it on the roadmap for future releases? Cheers,Mick try alt+ctrl together with up/down arrow keys, is that what you need? Sort well, it is a different concept here, because it is not a selection, but rather a multiplication of the cursor. imo, this far stronger, because you can edit all those places instantly, and of course mark the required characters as you like. for example, if you have a rectangular area you want to mark and replace/edit, it will look the same, but if the area is of certain structure, which is not with the same length on each line - here sublime start to shine, e.g. try and edit the following on the editors you have mentioned: L_acc = L_deposit_l( gbk1[index1][1] ); L_accb = L_deposit_l( gbk2[index2][1] ); L_gbk12 = L_add( L_acc, L_accb ); tmp = extract_l( L_shr( L_gbk12,1 ) ); L_acc = L_mult(tmp, gcode0); now, try and add _xyz on all functions calls... in sublime you just do the multiple cursor magic at the start of the line, going down 5 lines, now jump with ctrl+left arrow 3 times, insert _xyz - DONE! L_acc = L_deposit_l_xyz( gbk1[index1][1] ); L_accb = L_deposit_l_xyz( gbk2[index2][1] ); L_gbk12 = L_add_xyz( L_acc, L_accb ); tmp = extract_l_xyz( L_shr( L_gbk12,1 ) ); L_acc = L_mult_xyz(tmp, gcode0); Thanks very much for the reply, Vim. The way this works in Sublime is way more powerful than the 'standard' column or region select. I can see that this will prove extremely useful and will save me loads of time. This works perfectly except for cut/copy/paste scenarios. For example, in the following trivial example, how would I column select the numbers at the end of test and stick them at the end of line, so : test1 line test2 line test3 line becomes test line1 test line2 test line3 I guess this question really is, is there a way to column-paste? true, for me, this is the mostly used plugin. i used it all the time with column related tasks. yup that + pastecolumn is there too, it's just awesome Aha perfect, I hadn't spotted those plugins. Thanks guys.
https://forum.sublimetext.com/t/sublime-as-a-replacement-for-my-current-editor/233/9
CC-MAIN-2016-18
en
refinedweb
How to find core dependencies ? Can anyone help me with the following scenario: path to Ext library: /var/ext4.1.1a/src/ path to Ext JS application: /var/webapp/app1/ I want to compile a build and extract into one JS file only classes, and their dependencies from Ext library that are used by my application, but without application specific code (without classes from /var/webapp/app1/). I found on forum something like this: sencha -sdk-path=/var/ext4.1.1a/src/ \ compile \ -classpath=/var/webapp/app1/ \ exclude \ -namespace=Ext \ and \ concatenate \ -out=out.js but in this case will be included also application code (one i want to skip). Thanks GM. As a starter, out of my head, here some code: Code: sencha compile -cl /var/ext4.1.1a/src,/var/webapp/app1/ \ union -r -f /var/webapp/app1 and \ exclude -n Ext and \ concat -compress -o ext-deps.js and \ metadata -f -o ext-deps.txt then exclude all classes based on a namespace selector (exclude -n Ext), then concat and compress the files into ext-deps.js, plus create a metadata file that lists all file entries (good for debugging). hth Hi, thanks for suggestion, i don't know how this tool work, but using your example, even if I remove " exclude -n Ext", my output file have 0 kb. In my case, the difference is that I don't develop Ext widgets (i mean I don't extend Ext components, but inside my JS classes I instantiate Ext components: .... var dialog = new Ext.Window({el: 'message_dialog_div', layout: 'fit', x: windowXCoord, y: windowYCoord, height: this.defaultMessageDialogHeight, width: this.defaultMessageDialogWidth, modal: true, shadow: false, autoScroll: true, closable: true, items: Ext.create('Ext.tab.Panel', { el: 'message_dialog_tabs', activeTab: 0, autoTabs: true, border: false, items: tabItems }), buttons: [{ text: this.getLocalizedString(TITLE_CLOSE), handler: function () { dialog.close(); if (callback) { callback();}} }]}); ...... ) My guess is that the dependency intersection works only if classes extends Ext components, or it's still have some bugs. Anyway, it will be very useful to have more examples (use cases), more than ones from here: Thank you I think the compiler automatically picks up dependencies from 'Ext.create' calls in your code, but I am not absolutely sure. In any case, it might be helpful to state any dependencies explicitly via Ext.require above the Ext.application call or in the 'uses' or 'requires' properties of your application config (that you pass to Ext.application). Doing this would also benefit when running your application with Ext.Loader during development (it will avoid synchronous loading of files). You should also make sure that your path definitions are actually handled correctly by Sencha Cmd (not sure how it handles absolute unix paths, I am on Windows) - just a guess here. So a simple Code: sencha compile -cl /var/ext4.1.1a/src,/var/webapp/app1/ \ concat -compress -o all.js and \ metadata -f -o all.txt At the beginning 'sencha compile' will output the classpath entries. If you run 'sencha -d ...' to get debug output you will even see the files it finds in the classpath. You mentioned a sample command in your first post - did this actually work for you? the sample from first post seems that include all classes (Ext library and application), and "exclude \ -namespace=Ext" simply remove all classes from Ext library, but only classes, not functions, closures. So it not work for me. My classes does not extend Ext classes, so I don't have properties like extend, use, require. But even if, i use in my code Ext objects. If you use compile -debug, you can see a lot of useful messages such as: ab-layout - is one of my custom classes. [DBG] Detected instantiation reference to Ext.ComponentMgr in file ... ab-layout.js [DBG] [ [1001] : Class was referenced but not explicitly required <> Ext.ComponentManager ] :: ( ...ab-layout.js => 126 : 0 ) [DBG] Adding dynamic requirement on Ext.ComponentManager to ab-layout.js as a 'requires' so those messages suggest me that the this tool is able to find objects instantiated with new operator or using Ext.create, and it try to add a dynamic reference to parent class, but something goes wrong after ... and the tool doesn't collect all classes (with references) used in application. maybe using Ext.require in each class, can help, but I'm not shore and i have dozens of classes ... it can take a lot of time and possible to not solve my problem. - maybe using Ext.require in each class, can help, but I'm not shore and i have dozens of classes ... it can take a lot of time and possible to not solve my problem. the sample from first post seems that include all classes (Ext library and application), and "exclude \ -namespace=Ext" simply remove all classes from Ext library, but only classes, not functions, closures. 1. Yes, i will try with a small peace of code. 2. I'm talking about new generated output file, in this file when "exclude -namespace Ext" is used, all Ext classes are not included, but functions and closures are. Thanks again.
https://www.sencha.com/forum/showthread.php?251458-How-to-find-core-dependencies&p=922034&viewfull=1
CC-MAIN-2016-18
en
refinedweb
_lwp_cond_reltimedwait(2) - no-fault memory-to-memory copy #include <strings.h> int uucopy(const void *s1, void *s2, size_t n); The uucopy() function copies n bytes from memory area s1 to s2. Copying between objects that overlap could corrupt one or both buffers. Unlike bcopy(3C), uucopy() does not cause a segmentation fault if either the source or destination buffer includes an illegal address. Instead, it returns -1 and sets errno to EFAULT. This error could occur after the operation has partially completed, so the contents of the buffer at s2 are defined if the operation fails. Upon successful completion, uucopy() returns 0. Otherwise, the function returns -1 and set errno to indicate the error. The uucopy() function will fail if: Either the s1 or s2 arguments points to an illegal address. See attributes(5) for descriptions of the following attributes:
http://docs.oracle.com/cd/E26505_01/html/816-5167/uucopy-2.html
CC-MAIN-2016-18
en
refinedweb
...one of the most highly regarded and expertly designed C++ library projects in the world. — Herb Sutter and Andrei Alexandrescu, C++ Coding Standards The scoped_ptr class template stores a pointer to a dynamically allocated object. (Dynamically allocated objects are allocated with the C++ new expression.) The object pointed to is guaranteed to be deleted, either on destruction of the scoped_ptr, or via an explicit reset. See the example. The scoped_ptr template is a simple solution for simple needs. It supplies a basic "resource acquisition is initialization" facility, without shared-ownership or transfer-of-ownership semantics. Both its name and enforcement of semantics (by being noncopyable) signal its intent to retain ownership solely within the current scope. Because it is noncopyable, it is safer than shared_ptr or std::auto_ptr for pointers which should not be copied. Because scoped_ptr is simple, in its usual implementation every operation is as fast as for a built-in pointer and it has no more space overhead that a built-in pointer. scoped_ptr cannot be used in C++ Standard Library containers. Use shared_ptr if you need a smart pointer that can. scoped_ptr cannot correctly hold a pointer to a dynamically allocated array. See scoped_array for that usage. The class template is parameterized on T, the type of the object pointed to. T must meet the smart pointer common requirements. namespace boost { template<class T> class scoped_ptr : noncopyable { public: typedef T element_type; explicit scoped_ptr(T * p = 0); // never throws ~scoped_ptr(); // never throws void reset(T * p = 0); // never throws T & operator*() const; // never throws T * operator->() const; // never throws T * get() const; // never throws void swap(scoped_ptr & b); // never throws }; template<class T> void swap(scoped_ptr<T> & a, scoped_ptr<T> & b); // never throws } typedef T element_type; Provides the type of the stored pointer. explicit scoped_ptr(T * p = 0); // never throws Constructs a scoped_ptr, storing a copy of p, which must have been allocated via a C++ new expression or be 0. T is not required be a complete type. See the smart pointer common requirements. ~scoped_ptr(); // never throws Destroys the object pointed to by the stored pointer, if any, as if by using delete this->get(). The guarantee that this does not throw exceptions depends on the requirement that the deleted object's destructor does not throw exceptions. See the smart pointer common requirements. void reset(T * p = 0); // never throws Deletes the object pointed to by the stored pointer and then stores a copy of p, which must have been allocated via a C++ new expression or be 0. The guarantee that this does not throw exceptions depends on the requirement that the deleted object's destructor does not throw exceptions. See the smart pointer common requirements. T & operator*() const; // never throws Returns a reference to the object pointed to by the stored pointer. Behavior is undefined if the stored pointer is 0. T * operator->() const; // never throws Returns the stored pointer. Behavior is undefined if the stored pointer is 0. T * get() const; // never throws Returns the stored pointer. T need not be a complete type. See the smart pointer common requirements. void swap(scoped_ptr & b); // never throws Exchanges the contents of the two smart pointers. T need not be a complete type. See the smart pointer common requirements. template<class T> void swap(scoped_ptr<T> & a, scoped_ptr<T> & b); // never throws Equivalent to a.swap(b). Matches the interface of std::swap. Provided as an aid to generic programming. Here's an example that uses scoped_ptr. #include <boost/scoped_ptr.hpp> #include <iostream> struct Shoe { ~Shoe() { std::cout << "Buckle my shoe\n"; } }; class MyClass { boost::scoped_ptr<int> ptr; public: MyClass() : ptr(new int) { *ptr = 0; } int add_one() { return ++*ptr; } }; int main() { boost::scoped_ptr<Shoe> x(new Shoe); MyClass my_instance; std::cout << my_instance.add_one() << '\n'; std::cout << my_instance.add_one() << '\n'; } The example program produces the beginning of a child's nursery rhyme: 1 2 Buckle my shoe The primary reason to use scoped_ptr rather than auto_ptr is to let readers of your code know that you intend "resource acquisition is initialization" to be applied only for the current scope, and have no intent to transfer ownership. A secondary reason to use scoped_ptr is to prevent a later maintenance programmer from adding a function that transfers ownership by returning the auto_ptr, because the maintenance programmer saw auto_ptr, and assumed ownership could safely be transferred. Think of bool vs int. We all know that under the covers bool is usually just an int. Indeed, some argued against including bool in the C++ standard because of that. But by coding bool rather than int, you tell your readers what your intent is. Same with scoped_ptr; by using it you are signaling intent. It has been suggested that scoped_ptr<T> is equivalent to std::auto_ptr<T> const. Ed Brey pointed out, however, that reset will not work on a std::auto_ptr<T> const. One common usage of scoped_ptr is to implement a handle/body (also called pimpl) idiom which avoids exposing the body (implementation) in the header file. The scoped_ptr_example_test.cpp sample program includes a header file, scoped_ptr_example.hpp, which uses a scoped_ptr<> to an incomplete type to hide the implementation. The instantiation of member functions which require a complete type occurs in the scoped_ptr_example.cpp implementation file. Q. Why doesn't scoped_ptr have a release() member? A. When reading source code, it is valuable to be able to draw conclusions about program behavior based on the types being used. If scoped_ptr had a release() member, it would become possible to transfer ownership of the held pointer, weakening its role as a way of limiting resource lifetime to a given context. Use std::auto_ptr where transfer of ownership is required. (supplied by Dave Abrahams) Revised 09 January 2003 Copyright 1999 Greg Colvin and Beman Dawes. Copyright 2002 Darin Adler. Copyright 2002 Peter Dimov. Permission to copy, use, modify, sell and distribute this document is granted provided this copyright notice appears in all copies. This document is provided "as is" without express or implied warranty, and with no claim as to its suitability for any purpose.
http://www.boost.org/doc/libs/1_32_0/libs/smart_ptr/scoped_ptr.htm
CC-MAIN-2016-18
en
refinedweb
Overview 'Introduction to Statistical Analysis Using IBM SPSS Statistics' classroom course. Introduction to Statistical Analysis Using IBM SPSS Statistics is a two day self-paced training course that View this course in other countries Training Paths that reference this course are: Audience This basic course is intended Prerequisites Skills taught Please refer to course overview for description information. Course outline Remarks eLabs will be available for a total of 336 hours in the 30 day window. Special Note for IBM Business Partners authorized to remarket public classes and on-site classes: This course is excluded from the IBM Training Services for Enterprise - on-site Education Program for IBM Business Pop-up blocking mechanisms must be set to allow pop-ups from elearn.ihost.com - Cookies must be enabled - Flash-enabled, with minimum 10.x installed - ActiveX enabled
http://www-304.ibm.com/jct03001c/services/learning/ites.wss/us/en?pageType=course_description&courseCode=0K512
CC-MAIN-2013-48
en
refinedweb
The intention of this article is to assist the readers realize why a same piece of code when executed on 32 bit environment, WOW (Windows on Windows) environment and 64 bit environment, consumes different amounts of memory. It’s known and evident that running an application over WOW consumes more memory than 32 bit and running it on 64 bit consumes more than WOW and 32 bit environment. Although there is no definite formula to find out the exact percentage increase in memory, the below discussion, by comparing 32 bit against 64 bit and 32 bit against WOW, helps understand what causes the memory usage to increase. Reader who has the basic knowledge of WOW, 32 bit applications, 64 bit applications and platform porting background would make the most of this article. To be on the safer side, if you decide to go through this article in any case, let us summarize WOW in one line. "WOW simulates an environment of a different platform than the one which is sitting beneath, so that, an application which would have otherwise been incompatible will now run." The primary change between 64 bit and 32 bit is the width of the address field, which has increased to 8 bytes from 4 bytes. So, evidently, more the number of address fields \ pointers in our application, more is the memory consumption in 64 bit. This document is as an exercise to traverse through a .Net process and explore all the underlying address fields/ pointers, present at different places with in a process, which ultimately remains responsible for the increase in memory consumption in a 64 bit process over a 32 bit process. One of the primary reasons for the increase in memory is Data alignment. Data alignment is putting the data at a memory offset which is a multiple of the "WORD" size. When a processor reads from or writes to the memory, it is in "WORD" sized chunks which is, 4 bytes in a 32 bit environment and 8 bytes in a 64 bit environment. When the size of a given object doesn't make up to the multiple of "WORD", the operating system will have to make it equal to the size of the very next multiple of "WORD". This is done by adding some meaningless information (Padding) at the end of the object. In a 32 bit .Net process, the size of an object is at least 12 bytes and in a 64 bit .Net process, it is at least 24 bytes. The header which consists of two pointer types takes away 8 bytes and 16 bytes respectively in 32 and 64 bit environment. Even if the object doesn't have any members with in it, it consumes those additional 4 bytes \ 8 bytes for padding purpose which makes it's size 12 and 24 respectively in 32 bit and 64 bit process. If an object in 32 bit environment having only one member which is short, the size of that object should be ideally size of header(8 bytes) + size of short (2 bytes). But, it ends up being size of the header (8bytes) + size of short (2 bytes) + padding bytes (2bytes) and hence data alignment leads to unavoidable wastage of memory. This wastage gets exaggerated in 64 bit environment simply because the WORD size becomes 8 bytes instead of 4 bytes. If an object in 64 bit has just one member which is a short it would take 24 bytes (16 bytes of header + 2 bytes of short + 6 bytes of padding \ adjustment). The wastage \padding is not a constant factor in each object instead it depends on the ‘Type’ and ‘Number’ of members of that object. Eg: The object which contains just one ‘int’ and pays 24 bytes could have had 2 ‘int’s at the same cost resulting in zero wastage. An object which contains just one ‘short’ at the cost of 24 bytes could have 4 ‘short’s at the same cost and zero wastage. Any .NET object would have a sync block (pointer to a sync block) and a type handler (Method table pointer). The header size increases by 8 bytes in 64 bit since the header essential has two pointers and pointer in 64 bit is 8 bytes against 4 byte in 32 bit environment. This means, if an application has 10000 objects (be it of any type) the memory straight away increases by 80,000 bytes between 32 and 64 bit environments, even if they are blank objects. The stack segment of the process does contribute to the increase in memory in 64 bit as well. Each item \line in a stack has two pointers one for the callee address and the other being the return address. Just to get a feel of how significant it’s contribution is, let us consider the below program. namespace Memory_Analysis { class Program { static void Main(string[] args) { A obj = new A(); Console.ReadLine(); } } class A { char Data1; char Data2; short Data3; int Data4; } The stack segment for this code when executed would be having around 6000 lines(Measured by SOS). 6000 lines would result in 1200 addresses (pointers) fields because, as said above, each line in the stack would have two addresses a callee and a return address. Each address field leads to an increase of 4 bytes in 64 bit. So there will be an increase of (1200 * 4) 4800 bytes in the stack segment itself for the code segment which is as small as above. Now coming to the method tables, each class which has at least one live instance would have a method table. Each method table would again have 2 address fields (entry point and description). If an application has 100 methods including all the classes within it, that would lead to ((100* 2) * 4) 800 bytes of increased memory in 64 bit just because of the method tables. Similarly, others who have address fields and contribute to the memory increase are GCHandles and FinalizationQueue. Other than the stack and the heap, the assemblies that get loaded in to its AppDomain also contribute to the increase in memory. Below is a snap shot of the header of an AppDomain. As we can see, there are at least 15 address fields in the header. Parent Domain: 0014f000 ClassLoader: 001ca060 System Domain: 000007fefa1c5ef0 LowFrequencyHeap: 000007fefa1c5f38 HighFrequencyHeap: 000007fefa1c5fc8 StubHeap: 000007fefa1c6058 Stage: OPEN Name: None Shared Domain: 000007fefa1c6860 LowFrequencyHeap: 000007fefa1c68a8 HighFrequencyHeap: 000007fefa1c6938 StubHeap: 000007fefa1c69c8 Stage: OPEN Name: None Assembly: 00000000011729a0 Domain 1: 00000000003a34a0 LowFrequencyHeap: 00000000003a34e8 HighFrequencyHeap: 00000000003a3578 StubHeap: 00000000003a3608 Stage: OPEN SecurityDescriptor: 00000000003a4d40 Name: ConsoleApplication2.vshost.exe After the header, the Appdomain would consist of a list of all the assemblies within the app domain. Under each assembly it again consists of a list of all the modules within that assembly. Below is a portion of snap shot of the list of assemblies and the modules within each assembly. The below snap shot contains only that portion of the AppDomain which has reference to our sample "Memory_Analysis" assembly. Assembly: 000000001ab3c330 [C:\Users\ing06996\Documents\Visual Studio 2008\Projects\ Memory_Analysis \ Memory_Analysis \bin\x64\Debug\ Memory_Analysis.exe] ClassLoader: 000000001ab3c3f0 SecurityDescriptor: 000000001ab3b5b0 Module Name 000007ff001d1b08 C:\Users\ing06996\Documents\Visual Studio 2008\Projects\ Memory_Analysis \ Memory_Analysis \bin\x64\Debug\ Memory_Analysis.exe The AppDomain which loads our sample application "Memory_Analysis" has to also load all the referenced dlls from our sample applications, including the .Net dlls like MSCOREE.dll and MSCORWKS.DLL. For each such referenced DLL, there would be entries similar to the one shown in the above snapshot. Further to this, within each module, there would be several address fields as mentioned below in the snapshot Assembly: 0090ae40 LoaderHeap: 00000000 TypeDefToMethodTableMap: 00170148 TypeRefToMethodTableMap: 00170158 MethodDefToDescMap: 001701a4 FieldDefToDescMap: 001701b4 MemberRefToDescMap: 001701c8 FileReferencesMap: 00170214 AssemblyReferencesMap: 00170218 MetaData start address: 0016207c Upon measuring using SOS & WinDBG, a simple assembly like our "Memory_Analysis" had around 80 address fields loaded in the AppDomain which means an increase in memory by (80 * 4) 320 bytes . More the number of referenced assemblies and more the number of modules, higher will be the memory consumption. After having compared 32 bit Vs 64 bit, let us now explore the differences between running a 32 bit process on a 32 bit environment and running a 32 bit process on a WOW environment. WOW (Windows on Windows) as we know, is a simulated environment where a 64 bit Operating system provides a 32 bit environment so that the 32 bit processes can run seamlessly. The trigger point to this discussion is the fact that, running a 32 bit process on WOW takes more memory than running a 32 bit process on 32 bit environment. The discussion below, tries to explore some of the reasons why running on a WOW ends up consuming more memory. Before finding out the reasons for hike in memory, it is important to realize the magnitude of hike. Yet again there is no formula to find out the exact percentage increase in memory when run on WOW mode. Nevertheless, an example and some explanation might help us realize the magnitude of increase. Let us consider the below piece of code. class MemOnWOW { int i; private MemOnWOW() { i = 10; } static void Main(string[] args) { MemOnWOW p = new MemOnWOW(); System.Console.ReadLine(); } } This when built for a 32 bit platform and executed in a 32 bit environment consumes a total size of 80,596 KB and when executed over a WOW environment, consumes a total size of 115,128 KB, which means an increment of 34,532 KB. ** Total size: Total size includes Managed heap, Unmanaged Heap, Stack, Image, Mapped Files, Shareable, Private data and Paged tables. Now, let us list down the contribution to the increase in memory by each of the segments within the total memory. The assemblies of the WOW, which are located at the location C:\windows\sysWOW64, add up to the increase in memory by around 12MB. These assemblies are required to bring up and execute the WOW environment within which the 32 bit process runs. Below is the list of some of the Dlls which are required to bring up the WOW environment and their roles. Manages process and thread creation. Exception dispatching File system redirection Registry Redirection Manages the 32-bit CPU context of each running thread inside Wow64. Provides processor architecture-specific support for switching CPU mode from 32-bit to 64-bit and vice versa. Intercepts the GUI system calls. It is important to understand the overhead added by WOW execution environment. You may want to go through the detailed explanation given by Mark Russinovich in his book Windows Internals. Whenever there is a system call which is made to the underlying kernel (Call to the API exposed by the kernel) or whenever there is a call back from the kernel, the input parameters and output parameters have to be converted from 32 bit to 64 and 64 to 32 bit. Wow which sits in between has the responsibility of converting the 32 bit user mode object to 64 bit user mode object before sending it to the kernel. Similarly, WOW will also have to convert 64 bit kernel objects to 32 bit kernel object before presenting it to the 32 bit user application. Similar conversions do take place when the kernel throws an exception object and it has to reach the user application. This is also called as "Exception hooking \ Exception Dispatching". Evidently, in the scenarios explained above where there are additional intermediate (user and kernel) objects created, memory is bound to increase. These conversions between kernel and user mode can also lead the performance degradations and not just the memory increase. During the execution of a .Net application there could be several files which get memory mapped. Some of them are the globalization files (*.nls), font files (*.ttf) etc. When run under WOW, there are additional WOW related files which gets memory mapped and hence leading to memory hike. The number of files which gets memory mapped depends on the kind of assemblies referenced, resources used within the application etc. For the example "MemOnWOW" that we have considered above, the additional mapped files in the WOW mode are the globalization related .nls files which consume around 2.5 MB of excess memory. Under the WOW execution, the unmanaged heap would be used by the WOW environment. In our current example, it consumes around 2 MB of the process memory. This unmanaged heap would never be used when a 32 bit managed application is run directly under 32 bit environment. Managed heap would be kind enough to stay neutral and consume exactly the same amount of memory, whether it is WOW or direct 32 bit environment. Private data worth around 18 MB is added to the total memory size of the process when run under WOW, because of it being in WOW mode. If there are additional private data because of the application itself, it would be in addition to this 18MB (in WoW). The stack’s contribution to the increase in memory, when run over WOW, varies from application to application as it depends on the program length (how long the stack is). As we know, Stacks are used to store function parameters, local function variables and function invocation records (who has invoked the function) for individual threads. If there are 3 threads, there would be 3 different stacks in the stack segment. When run under WOW, for each thread the stack segment would have to maintain 2 different and independent stacks one as the 32 bit stack and the other as a 64 bit stack. So, if there are 3 threads in an application, there would be 6 different stacks in the stack segment when run on WOW. This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL) General News Suggestion Question Bug Answer Joke Rant Admin Man throws away trove of Bitcoin worth $7.5 million
http://www.codeproject.com/Articles/526984/32-bit-vs-64-bit-memory?msg=4481660
CC-MAIN-2013-48
en
refinedweb
26 USC § 6— (B) any part of any installment under section 6166 (including any part of a deficiency prorated to any installment under such section). (2) Security Source(Aug. 16, 1954, ch. 736, 68A Stat. 762; Pub. L. 85–866, title II, § 206(c),Sept. 2, 1958, 72 Stat. 1684; Pub. L. 91–172, title I, § 101(j)(37),Dec. 30, 1969, 83 Stat. 530; Pub. L. 91–614, title I, § 101(h),Dec. 31, 1970, 84 Stat. 1838; Pub. L. 93–406, title II, § 1016(a)(7),Sept. 2, 1974, 88 Stat. 929; Pub. L. 94–455, title XIII, § 1307(d)(2)(C), title XVI, § 1605(b)(3), title XIX, § 1906(b)(13)(A), title XX, § 2004(c)(1), (2),Oct. 4, 1976, 90 Stat. 1727, 1754, 1834, 1867, 1868; Pub. L. 96–223, title I, § 101(f)(1)(H),Apr. 2, 1980, 94 Stat. 252; Pub. L. 96–589, § 6(i)(8),Dec. 24, 1980, 94 Stat. 3410; Pub. L. 97–34, title IV, § 422(e)(1),Aug. 13, 1981, 95 Stat. 316; Pub. L. 100–418, title I, § 1941(b)(2)(B)(viii),Aug. 23, 1988, 102 Stat. 1323; Pub. L. 107–134, title I, § 112(d)(3),Jan. 23, 2002, 115 Stat. 2435.) Amendments 2002—Subsec. (d)(3). Pub. L. 107–134added par. (3). 1988—Subsec. (b)(1). Pub. L. 100–418substituted “or 44” for “44, or 45” in two places. 1981—Subsec. (a)(2)(B). Pub. L. 97–34struck out reference to section 6166A. 1980—Subsec. (b)(1). Pub. L. 96–223inserted references to chapter 45. Subsec. (c). Pub. L. 96–589substituted “Claims in cases under title 11 of the United States Code or in receivership proceedings” for “Claims in bankruptcy or receivership proceedings” in heading, and substituted reference to cases under title 11 of the United States Code, for reference to bankruptcy proceedings in text. 1976—Subsec. (a)(1). Pub. L. 94–455, § 1906(b)(13)(A), struck out “or his delegate” after “Secretary”. Subsec. (a)(2). Pub. L. 94–455, § 2004(c)(1), struck out in subpar. (A) “that the payment, on the due date, of” before “any part of the amount”, in subpar. (B) provisions relating to payment, on the date fixed for payment of any installment, and subpar. (C) which related to payment upon notice and demand of a deficiency prorated under the provisions of section 6161, inserted in subpar. (B) “or 6166A” after “section 6166”, substituted in subpar. (B) “under such section” for “the date for payment for which had not arrived”, and inserted in text following subpar. (B) provisions relating to extension of time for payment in the case of an amount referred to in subpar. (B). Subsec. (b). Pub. L. 94–455, §§ 1307(d)(2)(C), 1605(b)(3), 2004(c)(2), among other changes, inserted reference to chapter 41, effective on or after Oct. 4, 1976, and reference to chapter 44, applicable to taxable years of real estate investment trusts beginning after Oct. 4, 1976, and struck out provisions relating to grant of extensions with respect to hardships to taxpayers, applicable to the estates of decedents dying after Dec. 31, 1976. Subsec. (d)(2). Pub. L. 94–455, § 1906(b)(13)(A), struck out “or his delegate” after “Secretary”. 1974—Subsec. (b). Pub. L. 93–406inserted references to chapter 43. 1970—Subsec. (a)(1). Pub. L. 91–614substituted “6 months (12 months in the case of estate tax)” for “6 months”. 1969—Subsec. (b). Pub. L. 91–172inserted references to chapter 42. 1958—Subsec. (a)(2). Pub. L. 85–866inserted provisions allowing Secretary or his delegate to extend time for payment for reasonable period, not exceeding 10 years from date prescribed by section 6151 (a), if he finds that payment on date fixed for payment of any installment under section 6166, or any part of such installment, or payment of any part of a deficiency prorated under section 6166 to installments the date for payment of which had arrived would result in undue hardship. 1980 Amendments Amendment by Pub. L. 96–589effective Oct. 1, 1979, but not applicable to proceedings under Title 11, Bankruptcy, commenced before Oct. 1, 1979, see section 7(e) ofPub. L. 96–589, set out as a note under section 108 of this title. Section 101(i) ofPub. L. 96–223, as amended by Pub. L. 99–514, § 2,Oct. 22, 1986, 100 Stat. 2095, provided that: “(1) In general.—The amendments made by this section [enacting sections 4986 to 4998, 6050C, 6076, and 7241 of this title and amending this section and sections 164, 6211, 6212, 6213, 6214, 6302, 6344, 6501, 6511, 6512, 6601, 6611, 6652, 6653, 6862, 7422, and 7512 of this title] shall apply to periods after February 29, 1980. “(2) Transitional rules.—For the period ending June 30, 1980, the Secretary of the Treasury or his delegate shall prescribe rules relating to the administration of chapter 45 of the Internal Revenue Code of 1986 [formerly I.R.C. 1954]. To the extent provided in such rules, such rules shall supplement or supplant for such period the administrative provisions contained in chapter 45 of such Code (or in so much of subtitle F of such Code [section 6001 et seq. of this title] as relates to such chapter 45).” Effective Date of 1976 Amendment Amendment by section 1307(d)(2)(C) ofPub. L. 94–455effective on and after Oct. 4, 1976, see section 1307(e)(6) ofPub. L. 94–455, set out as a note under section 501 of this title. For effective date of amendment by section 1605(b)(3) ofPub. L. 94–455, see section 1608(d) ofPub. L. 94–455, set out as a note under section 856 of this title. Amendment by section 2004(c)(1), (2) ofPub. L. 94–455applicable to estates of decedents dying after Dec. 31, 1976, see section 2004(g) ofPub. L. 94–455, set out as an Effective Date note under section 6166 1969 Amendment Amendment by Pub. L. 91–172effective Jan. 1, 1970, see section 101(k)(1) ofPub. L. 91–172, set out as an Effective Date note under section 4940 of this title. Effective Date of 1958 Amendment Section 206(f) ofPub. L. 85–866, as amended by Pub. L. 99–514, § 2,Oct. 22, 1986, 100 Stat. 2095, provided that: “The amendments made by this section [enacting section 6166 of this title and amending this section and sections 6503 and 6601 of this title] shall apply to estates of decedents with respect to which the date for the filing of the estate tax return (including extensions thereof) prescribed by section 6075(a) of the Internal Revenue Code of 1986 [formerly I.R.C. 1954] is after the date of the enactment of this Act [Sept. 2, 1958]; except that (1) section 6166(i) of such Code as added by this section shall apply to estates of decedents dying after August 16, 1954, but only if the date for the filing of the estate tax return (including extensions thereof) expired on or before the date of the enactment of this Act [Sept. 2, 1958], and (2) notwithstanding section 6166(a) of such Code, if an election under such section is required to be made before the sixtieth day after the date of the enactment of this Act [Sept. 2, 1958] such an election shall be considered timely if made on or before such sixtieth.
http://www.law.cornell.edu/uscode/text/26/6161
CC-MAIN-2013-48
en
refinedweb
Your Account Hear us Roar Earlier today I received a couple of emails that pointed out an error I had in my code. In the CronJob class I have two initialize methods and Ruby does not allow methods to be overloaded. If you want to fix the error, replace the two initialize methods with the code below: def initialize(minute="*", hour="*", day="*", month="*", weekday="*", command="*") @minute = minute @hour = hour @day = day @month = month @weekday = weekday @command = command end © 2013, O’Reilly Media, Inc. (707) 827-7019 (800) 889-8969 All trademarks and registered trademarks appearing on oreilly.com are the property of their respective owners.
http://www.oreillynet.com/cs/user/view/cs_msg/42162
CC-MAIN-2013-48
en
refinedweb
This chapter discusses support in the Oracle Java Database Connectivity (JDBC) Oracle Call Interface (OCI) and JDBC Thin drivers for login authentication, data encryption, and data integrity, particularly, with respect to features of the Oracle Advanced Security option. Oracle Advanced Security, previously known as the Advanced Networking Option (ANO) or Advanced Security Option (ASO), provides industry standards-based data encryption, data integrity, third-party authentication, single sign-on, and access authorization. In 11g release 2 (11.2), both the JDBC OCI and thin drivers support all the Oracle Advanced Security features. Earlier releases of the JDBC drivers did not support some of the ASO features. Note:This discussion is not relevant to the server-side internal driver because all communication through server-side internal driver is completely internal to the server. This chapter contains the following sections: Support for Oracle Advanced Security Support for Login Authentication Support for Strong Authentication Support for OS Authentication Support for Data Encryption and Integrity Secure External Password Store Oracle Advanced Security provides the following security features: Data Encryption Sensitive information communicated over enterprise networks and the Internet can be protected by using encryption algorithms, which transform information into a form that can be deciphered only with a decryption key. Some of the supported encryption algorithms are RC4, DES, 3DES, and AES. To ensure data integrity during transmission, Oracle Advanced Security generates a cryptographically secure message digest, using MD5 or SHA-1 hashing algorithms, and includes it with each message sent across a network. This protects the communicated data from attacks, such as data modification, deleted packets, and replay attacks. Strong Authentication To ensure network security in distributed environments, it is necessary to authenticate the user and check his credentials. Password authentication is the most common means of authentication. Oracle Advanced Security enables strong authentication with Oracle authentication adapters, which support various third-party authentication services, including SSL with digital certificates. Oracle Advanced Security supports the following industry-standard authentication methods: Kerberos Remote Authentication Dial-In User Service (RADIUS) Secure Sockets Layer (SSL) JDBC OCI Driver Support for Oracle Advanced Security If you are using the JDBC OCI driver, which presumes you are running from a computer with an Oracle client installation, then support for Oracle Advanced Security and incorporated third-party features is fairly similar to the support provided by in any Oracle client situation. Your use of Advanced Security features is determined by related settings in the sqlnet.ora file on the client computer. Starting from Oracle Database 11g Release 1 (11.1), the JDBC OCI driver attempts to use external authentication if you try connecting to a database without providing a password. The following are some examples using the JDBC OCI driver to connect to a database without providing a password: Example 9-1 Using SSL authentication to connect to the database. Example 9-1 Using SSL Authentication to Connect to the Database import java.sql.*; import java.util.Properties; public class test { public static void main( String [] args ) throws Exception { String url = "jdbc:oracle:oci:@" +"(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=stadh25)(PORT=1529))" +"(CONNECT_DATA=(SERVICE_NAME=mydatabaseinstance)))"; Driver driver = new oracle.jdbc.OracleDriver(); Properties props = new Properties(); Connection conn = driver.connect( url, props ); conn.close(); } } Example 9-2 uses a data source to connect to the database. Example 9-2 Using a Data Source to Connect to the Database import java.sql.*; import javax.sql.*; import java.util.Properties; import oracle.jdbc.pool.*; public class testpool { public static void main( String args ) throws Exception { String url = "jdbc:oracle:oci:@" +"(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=stadh25)(PORT=1529))" +"(CONNECT_DATA=(SERVICE_NAME=mydatabaseinstance)))"; OracleConnectionPoolDataSource ocpds = new OracleConnectionPoolDataSource(); ocpds.setURL(url); PooledConnection pc = ocpds.getPooledConnection(); Connection conn = pc.getConnection(); } } JDBC Thin Driver Support for Oracle Advanced Security The JDBC Thin driver cannot assume the existence of an Oracle client installation or the presence of the sqlnet.ora file. Therefore, it uses a Java approach to support Oracle Advanced Security. Java classes that implement Oracle Advanced Security are included in the ojdbc5.jar and ojdbc6.jar files. Security parameters for encryption and integrity, usually set in sqlnet.ora, are set using a Java Properties object or through system properties. Basic login authentication through JDBC consists of user names and passwords, as with any other means of logging in to an Oracle server. Specify the user name and password through a Java properties object or directly through the getConnection method call. This applies regardless of which client-side Oracle JDBC driver you are using, but is irrelevant if you are using the server-side internal driver, which uses a special direct connection and does not require a user name or password. Starting with 11g release 1 (11.1), the Oracle JDBC Thin driver implements a challenge-response protocol to authenticate the user. Oracle Advanced Security enables Oracle Database users to authenticate externally. External authentication can be with RADIUS, KERBEROS, Certificate-Based Authentication, Token Cards, and Smart Cards. This is called strong authentication. Oracle JDBC drivers provide support for the following strong authentication methods: Kerberos RADIUS SSL (certificate-based authentication) Operating System (OS) authentication allows Oracle to pass control of user authentication to the operating system. It allows the users to connect to the database by authenticating their OS user name in the database. No password is associated with the account since it is assumed that OS authentication is sufficient. In other words, the server delegates the authentication to the client OS. You need to perform the following steps to achieve this: Use the following command to check the value of the Oracle OS_AUTHENT_PREFIX initialization parameter: SQL> SHOW PARAMETER os_authent_prefix NAME TYPE VALUE ------------------------------------ ----------- ------------------------------ os_authent_prefix string ops$ SQL> Note:Remember the OS authentication prefix. You need to create a database user to allow an OS authenticated connection, where the user name must be the prefix value concatenated to the OS user name. Add the following line in the t_init1.ora file: REMOTE_OS_AUTHENT = TRUE When a connection is attempted from the local database server, the OS user name is passed to the Oracle server. If the user name is recognized, the Oracle the connection is accepted, otherwise the connection is rejected. The configuration steps necessary to set up OS authentication on Linux are the following: Use the following commands to create an OS user w_rose: # useradd w_rose # passwd w_rose Changing password for w_rose New password: password Retype new password: password Use the following command to create a database user to allow an OS authenticated connection: CREATE USER ops$w_rose IDENTIFIED EXTERNALLY; GRANT CONNECT TO ops$w_rose; Use the following commands to test the OS authentication connection: su - w_rose> The configuration steps necessary to set up OS authentication on Windows are the following: Note:Oracle JDBC Thin drivers do not support NTS. Create a local user, say, w_rose, using the Computer Management dialog box. For this you have to do the following: Click Start. From the Start menu, select Programs, then select Administrative Tools and then select Computer Management. Expand Local Users and Groups by clicking on the Plus ("+") sign. Click Users. Select New User from the Action menu. Enter details of the user in the New User dialog box and click Create. Note:The preceding steps are only for creating a local user. Domain users can be created in Active Directory. Use the following command to create a database user to allow an OS authenticated connection: CREATE USER "OPS$yourdomain.com\p_floyd" IDENTIFIED EXTERNALLY; GRANT CONNECT TO "OPS$yourdomain.com\p_floyd"; Note:When you create the database user in Windows environment, the user name should be in the following format: <OS_authentication_prefix_parameter>$<DOMAIN>\<OS_user_name> When using a Windows server, there is an additional consideration. The following option must be set in the %ORACLE_HOME%\network\admin\sqlnet.ora file: SQLNET.AUTHENTICATION_SERVICES= (NTS) Use the following commands to test the OS authentication connection: C:\> set ORACLE_SID=DB11G C:\> sqlplus / SQL*Plus: Release 11.2.0.1.0 - Production on Thu July 12 11:47:01 2007 Copyright (c) 1982, 2008, Oracle. All rights reserved. Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production With the Partitioning, OLAP, Data Mining and Real Application Testing options SQL> Now that you have set up OS authentication to connect to the database, you can use the following JDBC code for connecting to the database: String url = "jdbc:oracle:thin:@oracleserver.mydomain.com:5521:dbja" Driver driver = new oracle.jdbc.OracleDriver(); DriverManager.registerDriver(driver); Properties props = new Properties(); Connection conn = DriverManager.getConnection( url, props); The preceding code assumes that it is executed by p_floyd on the client machine. The JDBC drivers retrieve the OS user name from the user.name system property that is set by the JVM. As a result, the following thin driver-specific error no longer exists: ORA-17443=Null user or password not supported in THIN driver Note:By default, the JDBC driver retrieves the OS user name from the user.namesystem property, which is set by the JVM. If the JDBC driver is unable to retrieve this system property or if you want to override the value of this system property, then you can use the OracleConnection.CONNECTION_PROPERTY_THIN_VSESSION_OSUSERconnection property. For more information, see Oracle Javadoc at You can use Oracle Advanced Security data encryption and integrity features in your Java database applications, depending on related settings in the server. When using the JDBC OCI driver, set parameters as you would in any Oracle client situation. When using the Thin driver, set parameters through a Java properties object. 9-1 shows how these possible settings on the client-side and server-side combine to either enable or disable the feature. By default, remote OS authentication (through TCP) is disabled in the database for security reasons. Table 9-1. Note:The term checksum still appears in integrity parameter names, but is no longer used otherwise. For all intents and purposes, checksum and integrity are synonymous. This section covers the following topics: JDBC OCI Driver Support for Encryption and Integrity JDBC Thin Driver Support for Encryption and Integrity Setting Encryption and Integrity Parameters in Java If you are using the JDBC OCI driver, which presumes an Oracle-client setting with an Oracle client installation, then you can enable or disable data encryption or integrity and set related parameters as you would in any Oracle client situation, through settings in the SQLNET.ORA file on the client. To summarize, the client parameters are shown in Table 9-2: Note:For the Oracle Advanced Security domestic edition only, settings of RC4_128and RC4_256are also possible. The JDBC Thin driver support for data encryption and integrity parameter settings parallels the JDBC OCI driver support discussed in the preceding section. Corresponding parameters can be set through a Java properties object that you would then be used when opening a database connection. Table 9-3 lists the parameter information for the JDBC Thin driver. These parameters are defined in the oracle.jdbc.OracleConnection interface. Note: Because Oracle Advanced Security support for the Thin driver is incorporated directly into the JDBC classes JAR file, there is only one version, not separate domestic and export editions. Only parameter settings that would be suitable for an export edition are possible. The letter C in DES40C and DES56C refers to Cipher Block Chaining (CBC) mode. Use a Java properties object, that is, an instance of java.util.Properties, to set the data encryption and integrity parameters supported by the JDBC Thin driver. The following example instantiates a Java properties object, uses it to set each of the parameters in Table 9-3, and then uses the properties object in opening a connection to the database: ... Properties prop = new Properties(); prop.setProperty(OracleConnection.CONNECTION_PROPERTY_THIN_NET_ENCRYPTION_LEVEL, "REQUIRED"); prop.setProperty(OracleConnection.CONNECTION_PROPERTY_THIN_NET_ENCRYPTION_TYPES, "( DES40C )"); prop.setProperty(OracleConnection.CONNECTION_PROPERTY_THIN_NET_CHECKSUM_LEVEL, "REQUESTED"); prop.setProperty(OracleConnection.CONNECTION_PROPERTY_THIN_NET_CHECKSUM_TYPES, "( MD5 )"); OracleDataSource ods = new OracleDataSource(); ods.setProperties(prop); ods.setURL("jdbc:oracle:thin:@localhost:1521:main"); Connection conn = ods.getConnection(); ... The parentheses around the values encryption type and checksum type allow for lists of values. When multiple values are supplied, the server and the client negotiate to determine which value is to be actually used. Example 9-3 is a complete class that sets data encryption and integrity parameters before connecting to a database to perform a query. Note:In the example, the string "REQUIRED" is retrieved dynamically through functionality of the AnoServicesand Serviceclasses. You have the option of retrieving the strings in this manner or including them in the software code as shown in the previous examples Before running this example, you must turn on encryption in the sqlnet.ora file. For example, the following lines will turn on AES256, AES192, and AES128 for the encryption and MD5 and SHA1 for the checksum: SQLNET.ENCRYPTION_SERVER = ACCEPTED SQLNET.CRYPTO_CHECKSUM_SERVER = ACCEPTED SQLNET.CRYPTO_CHECKSUM_TYPES_SERVER= (MD5, SHA1) SQLNET.ENCRYPTION_TYPES_SERVER= (AES256, AES192, AES128) SQLNET.CRYPTO_SEED = 2z0hslkdharUJCFtkwbjOLbgwsj7vkqt3bGoUylihnvkhgkdsbdskkKGhdk Example 9-3 Setting Data Encryption and Integrity Parameters import java.sql.*; import java.util.Properties; import oracle.net.ano.AnoServices; import oracle.jdbc.*; public class DemoAESAndSHA1 { static final String USERNAME= "scott"; static final String PASSWORD= "tiger"; static final String URL = "jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=WXYZ)(PORT=5561))" +"(CONNECT_DATA=(SERVICE_NAME=mydatabaseinstance)))";(); } } Oracle Database 11g provides support for the Secure Sockets Layer (SSL) protocol. SSL is a widely used industry standard protocol that provides secure communication over a network. SSL provides authentication, data encryption, and data integrity. It provides a secure enhancement to the standard TCP/IP protocol, which is used for Internet communication. SSL uses digital certificates that comply with the X.509v3 standard for authentication and a public and private key pair for encryption. SSL also uses secret key cryptography and digital signatures to ensure privacy and integrity of data. When a network connection over SSL is initiated, the client and server perform an SSL handshake that includes the following steps: Client and server negotiate about the cipher suites to use. This includes deciding on the encryption algorithms to be used for data transfer. Server sends its certificate to the client, and the client verifies that the certificate was signed by a trusted certification authority (CA). This step verifies the identity of the server. If client authentication is required, the client sends its own certificate to the server, and the server verifies that the certificate was signed by a trusted CA. Client and server exchange key information using public key cryptography. Based on this information, each generates a session key. All subsequent communications between the client and the server is encrypted and decrypted by using this set of session keys and the negotiated cipher suite. Note:In Oracle Database 11g Release 1 (11.1), SSL authentication is supported in the thin driver. So, you do not need to provide a user name/password pair if you are using SSL authentication. The following terms are commonly used in the SSL context: certificate: A certificate is a digitally signed document that binds a public key with an entity. The certificate can be used to verify that the public key belongs to that individual. certification authority: A certification authority (CA), also known as certificate authority, is an entity which issues digitally signed certificates for use by other parties. cipher suite: A cipher suite is a set of cryptographic algorithms and key sizes used to encrypt data sent over an SSL-enabled network. private key: A private key is a secret key, which is never transmitted over a network. The private key is used to decrypt a message that has been encrypted using the corresponding public key. It is also used to sign certificates. The certificate is verified using the corresponding public key. public key: A public key is an encryption key that can be made public or sent by ordinary means such as an e-mail message. The public key is used for encrypting the message sent over SSL. It is also used to verify a certificate signed by the corresponding private key. wallet: A wallet is a password-protected container that is used to store authentication and signing credentials, including private keys, certificates, and trusted certificates required by SSL. The Java Secure Socket Extension (JSSE) provides a framework and an implementation for a Java version of the SSL and TLS protocols. JSSE provides support for data encryption, server and client authentication, and message integrity. It abstracts the complex security algorithms and handshaking mechanisms and simplifies application development by providing a building block for application developers, which they can directly integrate into their applications. JSSE is integrated into Java Development Kit (JDK) 1.4 and later, and supports SSL version 2.0 and 3.0. Oracle strongly recommends that you have a clear understanding of the JavaTM Secure Socket Extension (JSSE) framework by Sun Microsystems before using SSL in the Oracle JDBC drivers. The JSSE standard application programming interface (API) is available in the javax.net, javax.net.ssl, and javax.security.cert packages. These packages provide classes for creating and configuring sockets, server sockets, SSL sockets, and SSL server sockets. The packages also provide a class for secure HTTP connections, a public key certificate API compatible with JDK1.1-based platforms, and interfaces for key and trust managers. SSL works the same way, as in any networking environment, in Oracle Database 11g. This section covers the following: Managing Certificates and Wallets Keys and certificates containers To establish an SSL connection with a JDBC client, Thin or OCI, Oracle database server sends its certificate, which is stored in its wallet. The client may or may not need a certificate or wallet depending on the server configuration. The Oracle JDBC Thin driver uses the JSSE framework to create an SSL connection. It uses the default provider (SunJSSE) to create an SSL context. However you can provide your own provider. You do not need a certificate for the client, unless the SSL_CLIENT_AUTHENTICATION parameter is set on the server. Java clients can use multiple types of containers such as Oracle wallets, JKS, PKCS12, and so on, as long as a provider is available. For Oracle wallets, OraclePKI provider must be used because the PKCS12 support provided by SunJSSE provider does not support all the features of PKCS12. In order to use OraclePKI provider, the following JARs are required: oraclepki.jar osdt_cert.jar osdt_core.jar All these JAR files should be under $ORACLE_HOME/jlib directory. Since Oracle Database 11g Release 1 (11.1) support for Kerberos has been introduced. Kerberos is a network authentication protocol that provides the tools of authentication and strong cryptography over the network. Kerberos helps you secure your information systems across your entire enterprise by using secret-key cryptography. The Kerberos protocol uses strong cryptography so that a client or a server can prove its identity to its server or client across an insecure network connection. After a client and server have used Kerberos to prove their identity, they can also encrypt all of their communications to assure privacy and data integrity as they go about their business. The Kerberos architecture is centered around a trusted authentication service called the key distribution center, or KDC. Users and services in a Kerberos environment are referred to as principals; each principal shares a secret, such as a password, with the KDC. A principal can be a user such as scott or a database server instance. A good Kerberos client providing klist, kinit, and other tools, can be found at the following link: This client also provides a nice GUI. You need to make the following changes to configure Kerberos on your Windows machine: Right-click the My Computer icon on your desktop. Select Properties. The System Properties dialog box is displayed. Select the Advanced tab. Click Environment Variables. The Environment Variables dialog box is displayed. Click New to add a new user variable. The New User Variable dialog box is displayed. Enter KRB5CCNAME in the Variable name field. Enter FILE:C:\Documents and Settings\ <user_name> \krb5cc in the Variable value field. Click OK to close the New User Variable dialog box. Click OK to close the Environment Variables dialog box. Click OK to close the System Properties dialog box. Note: C:\WINDOWS\krb5.inifile has the same content as krb5.conffile. Perform the following steps to configure Oracle Database to use Kerberos: Use the following command to connect to the database: SQL> connect system Enter password: password Use the following commands to create a user CLIENT@US.ORACLE.COM that is identified externally: SQL> create user "CLIENT@US.ORACLE.COM" identified externally; SQL> grant create session to "CLIENT@US.ORACLE.COM"; Use the following commands to connect to the database as sysdba and dismount it: SQL> connect / as sysdba SQL> shutdown immediate; Add the following line to $T_WORK/t_init1.ora file: OS_AUTHENT_PREFIX="" Use the following command to restart the database: SQL> startup pfile=t_init1.ora Modify the sqlnet.ora file to include the following lines: names.directory_path = (tnsnames) #Kerberos sqlnet.authentication_services = (beq,kerberos5) sqlnet.authentication_kerberos5_service = dbji sqlnet.kerberos5_conf = /home/Jdbc/Security/kerberos/krb5.conf sqlnet.kerberos5_keytab = /home/Jdbc/Security/kerberos/dbji.oracleserver sqlnet.kerberos5_conf_mit = true sqlnet.kerberos_cc_name = /tmp/krb5cc_5088 # logging (optional): trace_level_server=16 trace_directory_server=/scratch/sqlnet/ Use the following commands to verify that you can connect through SQL*Plus: > kinit client > klist Ticket cache: FILE:/tmp/krb5cc_5088 Default principal: client@US.ORACLE.COM Valid starting Expires Service principal 06/22/06 07:13:29 06/22/06 17:13:29 krbtgt/US.ORACLE.COM@US.ORACLE.COM Kerberos 4 ticket cache: /tmp/tkt5088 klist: You have no tickets cached > sqlplus '/@(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=oracleserver.mydomain.com)(PORT=5529)) (CONNECT_DATA=(SERVICE_NAME=mydatabaseinstance)))' Note:For information about using Kerberos, refer to the following web sites This following example demonstrates the new Kerberos authentication feature that is part of Oracle Database 11g Release 2 (11.2) JDBC thin driver. This demo covers two scenarios: In the first scenario, the OS maintains the user name and credentials. The credentials are stored in the cache and the driver retrieves the credentials before trying to authenticate to the server. This scenario is in the module connectWithDefaultUser(). Note: > /usr/kerberos/bin/kinit client where, the password is welcome. Use the following command to list your tickets: > /usr/kerberos/bin/klist The second scenario covers the case where the application wants to control the user credentials. This is the case of the application server where multiple web users have their own credentials. This scenario is in the module connectWithSpecificUser(). Note:To run this demo, you need to have a working setup, that is, a Kerberos server up and running, and an Oracle database server that is configured to use Kerberos authentication. You then need to change the URLs used in the example to compile and run it. Example 9-4 Using Kerberos Authentication to Connect to the Database import com.sun.security.auth.module.Krb5LoginModule; import java.io.IOException; import java.security.PrivilegedExceptionAction; import java.sql.Connection; import java.sql.ResultSet; import java.sql.SQLException; import java.sql.Statement; import java.util.HashMap; import java.util.Properties; import javax.security.auth.Subject; import javax.security.auth.callback.Callback; import javax.security.auth.callback.CallbackHandler; import javax.security.auth.callback.PasswordCallback; import javax.security.auth.callback.UnsupportedCallbackException; import oracle.jdbc.OracleConnection; import oracle.jdbc.OracleDriver; import oracle.net.ano.AnoServices; public class KerberosJdbcDemo { String url ="jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)"+ "(HOST=oracleserver.mydomain.com)(PORT=5561))(CONNECT_DATA=" + "(SERVICE_NAME=mydatabaseinstance)))"; public static void main(String[] arv) { /* If you see the following error message [Mechanism level: Could not load * configuration file c:\winnt\krb5.ini (The system cannot find the path * specified] it's because the JVM cannot locate your kerberos config file. * You have to provide the location of the file. For example, on Windows, * the MIT Kerberos client uses the config file: C\WINDOWS\krb5.ini: */ // System.setProperty("java.security.krb5.conf","C:\\WINDOWS\\krb5.ini"); System.setProperty("java.security.krb5.conf","/home/Jdbc/Security/kerberos/krb5.conf"); KerberosJdbcDemo kerberosDemo = new KerberosJdbcDemo(); try { System.out.println("Attempt to connect with the default user:"); kerberosDemo.connectWithDefaultUser(); } catch (Exception e) { e.printStackTrace(); } try { System.out.println("Attempt to connect with a specific user:"); kerberosDemo.connectWithSpecificUser(); } catch (Exception e) { e.printStackTrace(); } } void connectWithDefaultUser() throws SQLException { OracleDriver driver = new OracleDriver(); Properties prop = new Properties(); prop.setProperty(OracleConnection.CONNECTION_PROPERTY_THIN_NET_AUTHENTICATION_SERVICES, "("+AnoServices.AUTHENTICATION_KERBEROS5+")"); prop.setProperty(OracleConnection.CONNECTION_PROPERTY_THIN_NET_AUTHENTICATION_KRB5_MUTUAL, "true"); /* If you get the following error [Unable to obtain Princpal Name for * authentication] although you know that you have the right TGT in your * credential cache, then it's probably because the JVM can't locate your * cache. * * Note that the default location on windows is "C:\Documents and Settings\krb5cc_username". */ // prop.setProperty(OracleConnection.CONNECTION_PROPERTY_THIN_NET_AUTHENTICATION_KRB5_CC_NAME, /* On linux: > which kinit /usr/kerberos/bin/kinit > ls -l /etc/krb5.conf lrwxrwxrwx 1 root root 47 Jun 22 06:56 /etc/krb5.conf -> /home/Jdbc/Security/kerberos/krb5.conf > kinit client Password for client@US.ORACLE.COM: > klist Ticket cache: FILE:/tmp/krb5cc_5088 Default principal: client@US.ORACLE.COM Valid starting Expires Service principal 11/02/06 09:25:11 11/02/06 19:25:11 krbtgt/US.ORACLE.COM@US.ORACLE.COM Kerberos 4 ticket cache: /tmp/tkt5088 klist: You have no tickets cached */ prop.setProperty(OracleConnection.CONNECTION_PROPERTY_THIN_NET_AUTHENTICATION_KRB5_CC_NAME, "/tmp/krb5cc_5088"); Connection conn = driver.connect(url,prop); String auth = ((OracleConnection)conn).getAuthenticationAdaptorName(); System.out.println("Authentication adaptor="+auth); printUserName(conn); conn.close(); } void connectWithSpecificUser() throws Exception { Subject specificSubject = new Subject(); // This first part isn't really meaningful to the sake of this demo. In // a real world scenario, you have a valid "specificSubject" Subject that // represents a web user that has valid Kerberos credentials. Krb5LoginModule krb5Module = new Krb5LoginModule(); HashMap sharedState = new HashMap(); HashMap options = new HashMap(); options.put("doNotPrompt","false"); options.put("useTicketCache","false"); options.put("principal","client@US.ORACLE.COM"); krb5Module.initialize(specificSubject,newKrbCallbackHandler(),sharedState,options); boolean retLogin = krb5Module.login(); krb5Module.commit(); if(!retLogin) throw new Exception("Kerberos5 adaptor couldn't retrieve credentials (TGT) from the cache"); // to use the TGT from the cache: // options.put("useTicketCache","true"); // options.put("doNotPrompt","true"); // options.put("ticketCache","C:\\Documents and Settings\\user\\krb5cc"); // krb5Module.initialize(specificSubject,null,sharedState,options); // Now we have a valid Subject with Kerberos credentials. The second scenario // really starts here: // execute driver.connect(...) on behalf of the Subject 'specificSubject': Connection conn = (Connection)Subject.doAs(specificSubject, new PrivilegedExceptionAction() { public Object run() { Connection con = null; Properties prop = new Properties(); prop.setProperty(AnoServices.AUTHENTICATION_PROPERTY_SERVICES, "(" + AnoServices.AUTHENTICATION_KERBEROS5 + ")"); try { OracleDriver driver = new OracleDriver(); con = driver.connect(url, prop); } catch (Exception except) { except.printStackTrace(); } return con; } });(); } } } class KrbCallbackHandler implements CallbackHandler { public void handle(Callback[] callbacks) throws IOException, UnsupportedCallbackException { for (int i = 0; i < callbacks.length; i++) { if (callbacks[i] instanceof PasswordCallback) { PasswordCallback pc = (PasswordCallback)callbacks[i]; System.out.println("set password to 'welcome'"); pc.setPassword((new String("welcome")).toCharArray()); } else { throw new UnsupportedCallbackException(callbacks[i], "Unrecognized Callback"); } } } } Since Oracle Database 11g Release 1 (11.1), support for Remote Authentication Dial-In User Service (RADIUS) has been introduced.. This section contains the following sections: Configuring Oracle Database to Use RADIUS Perform the following steps to configure Oracle Database to use RADIUS: Use the following command to connect to the database: SQL> connect system Enter password: password Use the following commands to create a new user aso from within a database: SQL> create user aso identified externally; SQL> grant create session to aso; Use the following commands to connect to the database as sysdba and dismount it: SQL> connect / as sysdba SQL> shutdown immediate; Add the following lines to the t_init1.ora file: os_authent_prefix = "" Note:Once the test is over, you need to revert the preceding changes made to the t_init1.ora file. Use the following command to restart the database: SQL> startup pfile=?/work/t_init1.ora Modify the sqlnet.ora file so that it contains only these lines: sqlnet.authentication_services = ( beq, radius) sqlnet.radius_authentication = <RADUIUS_SERVER_HOST_NAME> sqlnet.radius_authentication_port = 1812 sqlnet.radius_authentication_timeout = 120 sqlnet.radius_secret=/home/Jdbc/Security/radius/radius_key # logging (optional): trace_level_server=16 trace_directory_server=/scratch/sqlnet/ Use the following command to verify that you can connect through SQL*Plus: >sqlplus 'aso/1234@(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=oracleserver.mydomain.com)(PORT=5529)) (CONNECT_DATA=(SERVICE_NAME=mydatabaseinstance)))' This example demonstrates the new RADIUS authentication feature that is a part of Oracle Database 11g Release 2 (11.2) JDBC thin driver. You need to have a working setup, that is, a RADIUS server up and running, and an Oracle database server that is configured to use RADIUS authentication. You then need to change the URLs given in the example to compile and run it. Example 9-5 Using RADIUS Authentication to Connect to the Database import java.sql.Connection; import java.sql.ResultSet; import java.sql.SQLException; import java.sql.Statement; import java.util.Properties; import oracle.jdbc.OracleConnection; import oracle.jdbc.OracleDriver; import oracle.net.ano.AnoServices; public class RadiusJdbcDemo { String url ="jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)"+ "(HOST=oracleserver.mydomain.com)(PORT=5561))(CONNECT_DATA=" + "(SERVICE_NAME=mydatabaseinstance)))"; public static void main(String[] arv) { RadiusJdbcDemo radiusDemo = new RadiusJdbcDemo(); try { radiusDemo.connect(); } catch (Exception e) { e.printStackTrace(); } } /* * This method attempts to logon to the database using the RADIUS * authentication protocol. * * It should print the following output to stdout: * ----------------------------------------------------- * Authentication adaptor=RADIUS * User is:ASO * ----------------------------------------------------- */ void connect() throws SQLException { OracleDriver driver = new OracleDriver(); Properties prop = new Properties(); prop.setProperty(OracleConnection.CONNECTION_PROPERTY_THIN_NET_AUTHENTICATION_SERVICES, "("+AnoServices.AUTHENTICATION_RADIUS+")"); // The user "aso" needs to be properly setup on the radius server with // password "1234". prop.setProperty("user","aso"); prop.setProperty("password","1234"); Connection conn = driver.connect(url,prop); scripts and application code, and simplifies maintenance because you do not need to change your code each time user names and passwords change. In addition, if you do not have to change the application code, then it also becomes easier to enforce password management policies for these user accounts. You can set the oracle.net.wallet_location connection property to specify the wallet location. The JDBC driver can then retrieve the user name and password pair from this wallet. See Also:Oracle Database Advanced Security Administrator's Guide for information about configuring your client to use secure external password store and for information about managing credentials in it.
http://docs.oracle.com/cd/E11882_01/java.112/e10589/clntsec.htm
CC-MAIN-2013-48
en
refinedweb
Web service abstraction class. More... #include <brisawebservice.h> Web service abstraction class. BrisaWebService is used to receive and respond UPnP action and event requests. Currently this class is used mostly with BrisaService and BrisaEventController. Definition at line 90 of file brisawebservice.h. Constructor for BrisaWebService Definition at line 53 of file brisawebservice.cpp. Destructor for BrisaWebService Definition at line 105 of file brisawebservice.h. Reimplements genericRequestReceived() This signal is emmited when BrisaWebService receives a request, the main difference is that this signal has a pointer to the class that is emmiting the signal. Definition at line 114 of file moc_brisawebservice.cpp. This signal is emmited when BrisaWebService receives a request. Definition at line 107 of file moc_brisawebservice.cpp. This method receives all web service requests and emits a genericRequestReceived() signal. If the request method is of "POST" type, the web service will reply a default message. Note: Reimplemented from libQxt. Definition at line 58 of file brisawebservice.cpp. Reimplements respond(). This method responds only a HTTP header using the given session and request ID. Definition at line 94 of file brisawebservice.cpp. Reimplements respond() This method responds only a HTTP header to the session and request ID stored in BrisaWebService Definition at line 89 of file brisawebservice.cpp. Reimplements respond(). We recommend using this method given the fact that it supports asynchronous requests. Definition at line 84 of file brisawebservice.cpp. Responds response to the session and request ID currently stored in BrisaWebService, if using this method the response must be synchronous because the request and session ID can change quickly. Definition at line 79 of file brisawebservice.cpp.
http://brisa.garage.maemo.org/apidoc/qt/html/class_brisa_core_1_1_brisa_web_service.html
CC-MAIN-2013-48
en
refinedweb
NAME vm_page_free, vm_page_free_toq, vm_page_free_zero, vm_page_try_to_free -- free a page SYNOPSIS #include <sys/param.h> #include <vm/vm.h> #include <vm/vm_page.h> void vm_page_free(vm_page_t m); void vm_page_free_toq(vm_page_t m); void vm_page_free_zero(vm_page_t m); int vm_page_try_to_free(vm_page_t m); DESCRIPTION The vm_page_free_toq() function moves a page into the free queue, and disassociates it from its object. If the page is held, wired, already free, or its busy count is not zero, the system will panic. If the PG_ZERO flag is set on the page, it is placed at the end of the free queue; otherwise, it is placed at the front. If the page's object is of type OBJT_VNODE and it is the last page associated with the object, the underlying vnode may be freed. The vm_page_free() and vm_page_free_zero() functions both call vm_page_free_toq() to actually free the page, but vm_page_free_zero() sets the PG_ZERO flag and vm_page_free() clears the PG_ZERO flag prior to the call to vm_page_free_toq(). The vm_page_try_to_free() function verifies that the page is not held, wired, busy or dirty, and if so, marks the page as busy, drops any protection that may be set on the page, and frees it. RETURN VALUES vm_page_try_to_free() returns 1 if it is able to free the page; otherwise, 0 is returned. SEE ALSO vm_page_busy(9), vm_page_hold(9), vm_page_wire(9) AUTHORS This manual page was written by Chad David <davidc@acns.ab.ca>.
http://manpages.ubuntu.com/manpages/precise/man9/vm_page_try_to_free.9freebsd.html
CC-MAIN-2013-48
en
refinedweb
Hello all, Looking at the OOHaskell black (grey?) magic, and wondering if there would be an interesting way to construct class interfaces using the OOHaskell paradigm? I'm trying to do it as so (assume relevant types/proxies declared): type FigureInter = Record ( Draw :=: IO () :*: HNil ) figure self = do return emptyRecord where _ = narrow self :: FigureInter abstrFigure self = do super <- figure self visible <- newIORef True returnIO $ setVisible .=. (\b -> writeIORef visible b) .*. isVisible .=. readIORef visible .*. draw .=. return () .<. super but ghci complains (you know how it likes to complain), with Couldn't match expected type `Record t2' against inferred type `F (Proxy Draw) (m ())' In the second argument of `(.*.)', namely `draw .=. (return ())' In the second argument of `(.*.)', namely `(isVisible .=. (readIORef visible)) .*. (draw .=. (return ()))' In the first argument of `(.<.)', namely `(setVisible .=. (\ b -> writeIORef visible b)) .*. ((isVisible .=. (readIORef visible)) .*. (draw .=. (return ())))' Anyone have any tips they care to share? :) Scott
http://www.haskell.org/pipermail/haskell-cafe/2007-July/028019.html
CC-MAIN-2013-48
en
refinedweb
03 March 2010 17:49 [Source: ICIS news] BRUSSELS (ICIS news)--Europe’s ethylene capacity could contract by 3m tonnes/year by 2012 as expansions in the Middle East and Asia result in product backing up into Europe, a consultant at an industry conference said on Wednesday. The large global producers with facilities in most regions of the world can close plants to balance their systems and optimise operating rates, said Kevin Boyle, a principal with Kevin L Boyle Consulting. He was speaking at the 5th ICIS World Olefins Conference in ?xml:namespace> Access to feedstocks will become a critical decision point, he said, and crackers that are under threat are those that obtain feedstock from refineries at risk of reduced operations. Boyle said facilities that have access to imported ethylene may have an advantage, noting that ethylene exports from the Middle East to Asia have increased, and this may happen to In the scenario he outlined at the conference, Boyle estimated Asian ethylene demand to grow by 4% per year in the 2009-2012 period and capacity by 10% per year. This would result in net imports of ethylene equivalent falling from 14.6m tonnes in 2009 to 10.8m tonnes in 2012, he said. In the same period, he said, Operating rates would fall to low levels in all regions, with Assuming a 75% operating rate in Asia, he said around 90% of Asian import demand would be met by the This would leave around 3m tonnes/year of ethylene equivalent that could be sent to The 5th ICIS World Olefins Conference takes place on 3 March. ($1 = €0.74) For more on eth
http://www.icis.com/Articles/2010/03/03/9339690/europe-c2-capacity-to-fall-by-3m-tonnesyear-by-2012.html
CC-MAIN-2013-48
en
refinedweb
public class SaslException extends IOException addSuppressed, fillInStackTrace, getLocalizedMessage, getMessage, getStackTrace, getSuppressed, printStackTrace, printStackTrace, printStackTrace, setStackTrace clone, equals, finalize, getClass, hashCode, notify, notifyAll, wait, wait, wait public SaslException() public SaslException(String detail) detail- A possibly null string containing details of the exception. Throwable.getMessage() public SaslException(String detail, Throwable ex) detail- A possibly null string containing details of the exception. ex- A possibly null root exception that caused this exception. Throwable.getMessage(), getCause()(Throwable) or Throwable } initCausein class Throwable cause- the cause (which is saved for later retrieval by the Throwable.getCause()method). (A nullvalue is permitted, and indicates that the cause is nonexistent or unknown.) Throwableinstance. public String toString() toStringin class Throwable.
http://docs.oracle.com/javase/7/docs/api/javax/security/sasl/SaslException.html
CC-MAIN-2013-48
en
refinedweb
I am in favor of failing noisily. It is too much at this point for these tools to be there to fix user-introduced errors. On 9/19/12 1:42 PM, "Mike Reinstein" <reinstein.mike@gmail.com> wrote: >I think we should either make a plugin installation fail if a file is >already present, OR alternatively we could run the plugin uninstall logic >prior to doing the install. This would clear things out. I can see pros >and >cons to both approaches. The nice thing about uninstall is if you were to >accidentally delete a key file that the plugin uses, the pluginstall step >would fix that. Let's say for example you borked plugin.xml for example >but >still have references in your android manifest file, or the xcodeproj >file. >This would allow you to run pluginstall and it would fix all the plugin >references. > >-Mike > >On Wed, Sep 19, 2012 at 4:33 PM, Braden Shepherdson ><braden@chromium.org>wrote: > >> Java's classpath loading expects the path to match the package name, so >> you're right that it will break the build tools. We should require >>plugins >> to have unique namespaces, I agree. Maybe make installing a plugin fail >>if >> a file already exists? >> >> >> On Wed, Sep 19, 2012 at 2:59 PM, Mike Reinstein >><reinstein.mike@gmail.com >> >wrote: >> >> > Yeah, I think there is some weirdness around using the plugin name to >> > provide the namespace container for 2 reasons: >> > >> > 1. Im an Android N00b but I think >>src/Child_Browser/com/phonegap/... >> > break the build tools >> > 2. the name field should really be UTF8 aware, and the idea of >>having >> > directories with crazy ass characters in them is not appealing. :/ >> > >> > maybe instead we use the package name? e.g., >> > >> > www/com.phonegap.childbrowser/stuff >> > platforms/ios/Plugins/com.phonegap.childbrowser/CDVChildBrowser.h >> > >> > etc? Thoughts? >> > >> > >> > >> > On Wed, Sep 19, 2012 at 2:48 PM, Anis KADRI <anis.kadri@gmail.com> >> wrote: >> > >> > > Yes I agree with all of it too. Except maybe for the android >> > namespacing. I >> > > believe it should be read from the package statement in the java >>files >> or >> > > specified somewhere in the plugin.xml. >>src/Child_Browser/com/phonegap >> > does >> > > not sound right to me. >> > > I definitely agree with keeping the plugin.xml around for >> uninstallation >> > > purposes. >> > > -a >> > > >> > > On Wed, Sep 19, 2012 at 11:12 AM, Filip Maj <fil@adobe.com> wrote: >> > > >> > > > +1 to all of your points Mike >> > > > >> > > > On 9/19/12 9:55 AM, "Mike Reinstein" <reinstein.mike@gmail.com> >> wrote: >> > > > >> > > > >Hey folks, >> > > > > >> > > > >As you may know, pluginstall is the low level awesome utility >> written >> > by >> > > > >Andrew Lunny, and it's a very key element in the cordova command >> line >> > > > >tools >> > > > >that several people have been working on. Andrew and I spoke >> yesterday >> > > > >over >> > > > >email, and he's planning to be afk from this tool for several >>weeks >> as >> > > he >> > > > >gets caught up with work, and finishes the major phonegap build >> > release >> > > > >they are doing. I'm starting this thread to have a discussion >>about >> > some >> > > > >changes; I want to make sure we can come to some agreement before >> > > > >proceeding with actual merging and cleanup. >> > > > > >> > > > >For brevity's sake I'm going to assume you already have a working >> > > > >knowledge >> > > > >of how pluginstall works but of course feel free to ask >>questions as >> > > > >needed. These are the major points to discuss: >> > > > > >> > > > > >> > > > > - IMO pluginstall should force namespacing to prevent multiple >> > > plugins >> > > > > from having colliding assets. For example, Child Browser >>plugin >> > would >> > > > >put >> > > > > it's native files in ios projects under Plugins/Child_Browser >>and >> > for >> > > > > android projects under src/Child_Browser/com/phonegap/.. >> > > > > - I've looked at a few plugins, and i think we can change from >> > > explicit >> > > > > to copying declarations in the manifest file to implicit; >>rather >> > than >> > > > > having to declare source-files, header-files, and asset-files >>in >> > > > > plugin.xml, all resources should be copied in, and use the >>file >> > > > >extension >> > > > > to dictate how it's handled (for example .h files will always >>be >> > > added >> > > > >as >> > > > > headers to an ios project's xcodeproj file, etc) >> > > > > - rename <config-file> to <xml-graft> and <plugins-plist> to >> > > > > <plist-add>. Since these tags represent operations being >> performed >> > on >> > > > >data >> > > > > files, let's identify and rename them as such. >> > > > > - change the cli signature to be use more descriptive flags. >>For >> > > > >example >> > > > > maybe something like pluginstall --project <project path> >> > --platform >> > > > > <ios|android> -- --plugin <plugin archive path> This will be >> > > important >> > > > >as >> > > > > the options available for this tool grow >> > > > > - multiple calls to addplugin should not pile up duplicate >> > references >> > > > >to >> > > > > files. We can probably solve this quickly by internally >>calling >> > > > >uninstall >> > > > > plugin, then install >> > > > > - for uninstall purposes, let's keep a copy of plugin.xml in >>each >> > > > > platforms namespaced directory so we have a manifest for >>removing >> > it >> > > > >later. >> > > > > for example from bullet 1, the ios platform would have the >>file >> > > > > Plugins/Child_Browser/plugin.xml >> > > > > >> > > > > >> > > > > -Mike >> > > > >> > > > >> > > >> > >>
http://mail-archives.apache.org/mod_mbox/incubator-callback-dev/201209.mbox/%3CCC7F829E.E1BE%25fil@adobe.com%3E
CC-MAIN-2013-48
en
refinedweb
Wren/all Please remember SPJ's request on the Records wiki to stick to the namespace issue. We're trying to make something better that H98's name clash. We are not trying to build some ideal polymorphic record system. To take the field labelled "name": in H98 you have to declare each record in a different module and import every module into your application and always refer to "name" prefixed by the module. DORF doesn't stop you doing any of that. So if you think of each "name" being a different meaning, carry on using multiple modules and module prefixes. That's as easy (or difficult) as under H98. You can declare fieldLabel "name" in one module, import it unqualified into another and declare more records with a "name" label -- contrary to what somebody was claiming. Or you can import fieldLabel "name" qualified, and use it as a selector function on all record types declared using it. It's just a function like any other imported/qualified function, for crying out loud! So if there's 'your' "name" label and 'my' "name", then use the module/qualification system as you would for any other scoped name. Then trying to apply My.name to Your.record will get an instance failure, as usual. (And by the way, there's no "DORFistas", let's avoid personalising this. There are people who don't seem to understand DORF -- both those criticising and those supporting.) AntC ----- Original Message Follows ----- > On 2/25/12 10:18 AM, Gábor Lehel wrote: > >! > > > > _______________________________________________ > Glasgow-haskell-users mailing list > Glasgow-haskell-users at haskell.org >
http://www.haskell.org/pipermail/glasgow-haskell-users/2012-February/021974.html
CC-MAIN-2013-48
en
refinedweb
Acme.Missiles Description The launchMissiles action, as mentioned in: - Beautiful concurrency, by Simon Peyton Jones, to appear in "Beautiful code", ed Greg Wilson, O'Reilly 2007. Synopsis - launchMissiles :: IO () - withMissilesDo :: IO a -> IO a - launchMissilesSTM :: STM () Documentation launchMissiles :: IO ()Source Cause serious international side effects. Launching missiles in the STM monad withMissilesDo :: IO a -> IO aSource launchMissilesSTM :: STM ()Source Launch missiles within an STM computation. Even if the memory transaction is retried, only one salvo of missiles will be launched. Example: import Acme.Missiles import Control.Concurrent import Control.Concurrent.STM main :: IO () main = withMissilesDo $ do xv <- atomically $ newTVar (2 :: Int) yv <- atomically $ newTVar (1 :: Int) atomically $ do x <- readTVar xv y <- readTVar yv if x > y then launchMissilesSTM else return () threadDelay 100000
http://hackage.haskell.org/package/acme-missiles-0.2/docs/Acme-Missiles.html
CC-MAIN-2013-48
en
refinedweb
Programming Guide XNA Game Studio 3.1. - Provides functionality to accomplish common game development tasks. - Graphics - Describes how the XNA Framework Graphics libraries provide low-level resource loading and rendering capabilities. - Math - Provides classes and methods for manipulating vectors and matrices. - Input - Provides classes and methods for retrieving user input for keyboard, mouse, and Xbox 360 controller devices. - Audio - Provides classes and methods for playing audio files. - Media - Describes how the XNA Framework Microsoft.Xna.Framework.Media namespace provides classes and methods for retrieving system media, including pictures and songs. - Storage - Provides classes that allow reading and writing of files. - Gamer Services - Contains introductory articles describing how to use gamer services: working with player profiles and preferences, the Xbox Guide user interface, Guide-based messaging, and other features provided by Xbox LIVE. - Networking - Contains introductory articles describing how to create and join multiplayer game sessions, manage game state across clients, and interact with the friends list. - Hardware and Platforms - Provides information about programming for specific hardware types and platforms using the XNA Framework. - Extended Tutorials - Describes how to integrate XNA Framework features and follow best practices for creating games. Show:
http://msdn.microsoft.com/en-us/library/bb198548(v=xnagamestudio.31).aspx
CC-MAIN-2013-48
en
refinedweb
{-# LANGUAGE KindSignatures #-} {-| This module provides an API similar to "Control.Pipe" for those who prefer the classic 'Pipe' API. This module differs slightly from "Control.Pipe" in order to promote seamless interoperability with both pipes and proxies. See the \"Upgrade Pipes to Proxies\" section below for details. -} module Control.Proxy.Pipe ( -- * Create Pipes await, yield, pipe, -- * Compose Pipes (<+<), (>+>), idP, -- * Synonyms Pipeline, -- * Run Pipes -- $run -- * Upgrade Pipes to Proxies -- $upgrade ) where import Control.Monad (forever) import Control.Proxy.Class (Proxy(request, respond, (>->), (?>=))) import Control.Proxy.Synonym (Pipe, Consumer, Producer, C) import Control.Proxy.Trans.Identity (runIdentityP) {-| Wait for input from upstream 'await' blocks until input is available from upstream. -} await :: (Monad m, Proxy p) => Pipe p a b m a await = request () {-| Deliver output downstream 'yield' restores control back downstream and binds its value to 'await'. -} yield :: (Monad m, Proxy p) => b -> p a' a b' b m () yield b = runIdentityP $ do respond b return () -- | Convert a pure function into a pipe pipe :: (Monad m, Proxy p) => (a -> b) -> Pipe p a b m r pipe f = runIdentityP $ forever $ do a <- request () respond (f a) infixr 9 <+< infixl 9 >+> -- | Corresponds to ('<<<')/('.') from @Control.Category@ (<+<) :: (Monad m, Proxy p) => Pipe p b c m r -> Pipe p a b m r -> Pipe p a c m r p1 <+< p2 = p2 >+> p1 -- | Corresponds to ('>>>') from @Control.Category@ (>+>) :: (Monad m, Proxy p) => Pipe p a b m r -> Pipe p b c m r -> Pipe p a c m r p1 >+> p2 = ((\() -> p1) >-> (\() -> p2)) () -- | Corresponds to 'id' from @Control.Category@ idP :: (Monad m, Proxy p) => Pipe p a a m r idP = runIdentityP $ forever $ do a <- request () respond a {-| A self-contained 'Pipeline' that is ready to be run 'Pipeline's never 'request' nor 'respond'. -} type Pipeline (p :: * -> * -> * -> * -> (* -> *) -> * -> *) = p C () () C {- $run The "Control.Proxy.Core.Fast" and "Control.Proxy.Core.Correct" modules provide their corresponding 'runPipe' functions, specialized to their own 'Proxy' implementations. Each implementation must supply its own 'runPipe' function since it is the only non-polymorphic 'Pipe' function and the compiler uses it to select which underlying proxy implementation to use. -} {- $upgrade You can upgrade classic 'Pipe' code to work with the proxy ecosystem in steps. Each change enables greater interoperability with proxy utilities and transformers and if time permits you should implement the entire upgrade for your libraries if you want to take advantage of proxy standard libraries. First, import "Control.Proxy" and "Control.Proxy.Pipe" instead of "Control.Pipe". Then, add 'ProxyFast' after every 'Pipe', 'Producer', or 'Consumer' in any type signature. For example, you would convert this: > import Control.Pipe > > fromList :: (Monad m) => [b] -> Producer b m () > fromList xs = mapM_ yield xs ... to this: > import Control.Proxy > import Control.Proxy.Pipe -- transition import > > fromList :: (Monad m) => [b] -> Producer ProxyFast b m () > fromList xs = mapM_ yield xs The change ensures that all your code now works in the 'ProxyFast' monad, which is the faster of the two proxy implementations. Second, modify all your 'Pipe's to take an empty '()' as their final argument, and translate the following functions: * ('<+<') to ('<-<') * 'runPipe' to 'runProxy' For example, you would convert this: > import Control.Proxy > import Control.Proxy.Pipe > > fromList :: (Monad m) => [b] -> Producer ProxyFast b m () > fromList xs = mapM_ yield xs ... to this: > import Control.Proxy > import Control.Proxy.Pipe > > fromList :: (Monad m) => [b] -> () -> Producer ProxyFast b m () > fromList xs () = mapM_ yield xs Now when you call these within a @do@ block you must supplying an additional @()@ argument: > examplePipe () = do > a <- request () > fromList [1..a] () This change lets you switch from pipe composition, ('<+<'), to proxy composition, ('<-<'), so that you can mix proxy utilities with pipes. Third, wrap your pipe's implementation in 'runIdentityP' (which "Control.Proxy" exports): > import Control.Proxy > import Control.Proxy.Pipe > > fromList xs () = runIdentityP $ mapM_ yield xs Then replace the 'ProxyFast' in the type signature with a type variable @p@ constrained by the 'Proxy' type class: > fromList :: (Monad m, Proxy p) => [b] -> () -> Producer p b m () This change upgrades your 'Pipe' to work natively within proxies and proxy transformers, without any manual conversion or lifting. You can now compose or sequence your 'Pipe' within any feature set transparently. Finally, replace each 'await' with @request ()@ and each 'yield' with 'respond'. Also, replace every 'Pipeline' with 'Session'. This lets you drop the "Control.Proxy.Pipe" import: > import Control.Proxy > > fromList :: (Monad m, Proxy p) => [b] -> () -> Producer p b m () > fromList xs () = runIdentityP $ mapM_ respond xs Also, I encourage you to continue using the 'Pipe', 'Consumer' and 'Producer' type synonyms to simplify type signatures. The following examples show how they cleanly mix with proxies and their extensions: > import Control.Proxy > import Control.Proxy.Trans.Either as E > import Control.Proxy.Trans.State > > -- A Producer enriched with pipe-local state > example1 :: (Monad m, Proxy p) => () -> Producer (StateP Int p) Int m r > example1 () = forever $ do > n <- get > respond n > put (n + 1) > > -- A Consumer enriched with error-handling > example2 :: (Proxy p) => () -> Consumer (EitherP String p) Int IO () > example2 () = do > n <- request () > if (n == 0) > then E.throw "Error: received 0" > else lift $ print n -}
http://hackage.haskell.org/package/pipes-3.0.0/docs/src/Control-Proxy-Pipe.html
CC-MAIN-2013-48
en
refinedweb
...one of the most highly regarded and expertly designed C++ library projects in the world. — Herb Sutter and Andrei Alexandrescu, C++ Coding Standards Movable but non-copyable types can be safely inserted into containers and movable and copyable types are more efficiently handled if those containers internally use move semantics instead of copy semantics. If the container needs to "change the location" of an element internally (e.g. vector reallocation) it will move the element instead of copying it. Boost.Container containers are move-aware so you can write the following: #include <boost/container/vector.hpp> #include <cassert> //Remember: 'file_descriptor' is NOT copyable, but it //can be returned from functions thanks to move semantics file_descriptor create_file_descriptor(const char *filename) { return file_descriptor(filename); } int main() { //Open a file obtaining its descriptor, the temporary //returned from 'create_file_descriptor' is moved to 'fd'. file_descriptor fd = create_file_descriptor("filename"); assert(!fd.empty()); //Now move fd into a vector boost::container::vector<file_descriptor> v; v.push_back(boost::move(fd)); //Check ownership has been transferred assert(fd.empty()); assert(!v[0].empty()); //Compilation error if uncommented since file_descriptor is not copyable //and vector copy construction requires value_type's copy constructor: //boost::container::vector<file_descriptor> v2(v); return 0; }
http://www.boost.org/doc/libs/1_53_0/doc/html/move/move_and_containers.html
CC-MAIN-2013-48
en
refinedweb
On 24/03/16 16:42, Alex Bennée wrote: >> diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h >> > index 05a151da4a54..cc3d2ca25917 100644 >> > --- a/include/exec/exec-all.h >> > +++ b/include/exec/exec-all.h >> > @@ -257,20 +257,32 @@ struct TranslationBlock { >> > struct TranslationBlock *page_next[2]; >> > tb_page_addr_t page_addr[2]; >> > >> > - /* the following data are used to directly call another TB from >> > - the code of this one. */ >> > - uint16_t tb_next_offset[2]; /* offset of original jump target */ >> > + /* The following data are used to directly call another TB from >> > + * the code of this one. This can be done either by emitting direct or >> > + * indirect native jump instructions. These jumps are reset so that >> > the TB >> > + * just continue its execution. The TB can be linked to another one by >> > + * setting one of the jump targets (or patching the jump >> > instruction). Only >> > + * two of such jumps are supported. >> > + */ >> > + uint16_t jmp_reset_offset[2]; /* offset of original jump target */ >> > +#define TB_JMP_RESET_OFFSET_INVALID 0xffff /* indicates no jump generated >> > */ >> > #ifdef USE_DIRECT_JUMP >> > - uint16_t tb_jmp_offset[2]; /* offset of jump instruction */ >> > + uint16_t jmp_insn_offset[2]; /* offset of native jump instruction */ >> > #else >> > - uintptr_t tb_next[2]; /* address of jump generated code */ >> > + uintptr_t jmp_target_addr[2]; /* target address for indirect jump */ >> > #endif >> > - /* list of TBs jumping to this one. This is a circular list using >> > - the two least significant bits of the pointers to tell what is >> > - the next pointer: 0 = jmp_next[0], 1 = jmp_next[1], 2 = >> > - jmp_first */ >> > - struct TranslationBlock *jmp_next[2]; >> > - struct TranslationBlock *jmp_first; >> > + /* Each TB has an assosiated circular list of TBs jumping to this one. >> > + * jmp_list_first points to the first TB jumping to this one. >> > + * jmp_list_next is used to point to the next TB in a list. >> > + * Since each TB can have two jumps, it can participate in two lists. >> > + * The two least significant bits of a pointer are used to choose >> > which >> > + * data field holds a pointer to the next TB: >> > + * 0 => jmp_list_next[0], 1 => jmp_list_next[1], 2 => jmp_list_first. >> > + * In other words, 0/1 tells which jump is used in the pointed TB, >> > + * and 2 means that this is a pointer back to the target TB of this >> > list. >> > + */ >> > + struct TranslationBlock *jmp_list_next[2]; >> > + struct TranslationBlock *jmp_list_first; > OK I found that tricky to follow. Where does the value of the pointer > come from that sets these bottom bits? The TB jumping to this TB sets it? Yeah, that's not easy to describe. Initially, we set: tb->jmp_list_first = tb | 2 That makes an empty list: jmp_list_first just points to the this TB and the low bits are 2. After that we can add a TB to the list in tb_add_jump(): tb->jmp_list_next[n] = tb_next->jmp_list_first; tb_next->jmp_list_first = tb | n; where 'tb' is going to jump to 'tb_next', 'n' (can be 0 or 1) is an index of jump target of 'tb'. (I simplified the code here) Any ideas how to make it more clear in the comment? Kind regards, Sergey
https://lists.gnu.org/archive/html/qemu-devel/2016-03/msg05833.html
CC-MAIN-2020-16
en
refinedweb
#82 – How XAML Handles Whitespace October 2, 2010 Leave a comment In general, embedded spaces and line feeds in a XAML file are ignored. You can normally include spaces or line feeds between consecutive items. Here are some guidelines: - You must have at least one space preceding each XAML attribute - You must not have any whitespace following the open angle bracket ‘<‘ in an element tag - You must not have any whitespace between the ‘/’ and ‘>’ characters in a self-closing element - Whereever a namespace prefix is used, with the ‘:’ character, you must not have whitespace on either side of the ‘:’ - When a property value is expressed as text within quotation marks and it represents textual content, embedded whitespaces and line feeds are preserved. (E.g. Embedded line feed in a button label) - When text is used as a value for a content property, i.e. not in quotation marks - All leading and trailing whitespace is removed (i.e. the string starts with the first non-whitespace character) - Internal whitespace is converted to a single space
https://wpf.2000things.com/2010/10/02/82-how-xaml-handles-whitespace/
CC-MAIN-2020-16
en
refinedweb
Provided by: allegro5-doc_5.2.6.0-1_all NAME al_load_ttf_font_f - Allegro 5 API SYNOPSIS #include <allegro5/allegro_ttf.h> ALLEGRO_FONT *al_load_ttf_font_f(ALLEGRO_FILE *file, char const *filename, int size, int flags) DESCRIPTION Like al_load_ttf_font(3alleg5), but the font is read from the file handle. The filename is only used to find possible additional files next to a font file. Note: The file handle is owned by the returned ALLEGRO_FONT object and must not be freed by the caller, as FreeType expects to be able to read from it at a later time.
http://manpages.ubuntu.com/manpages/focal/man3/al_load_ttf_font_f.3alleg5.html
CC-MAIN-2020-16
en
refinedweb
An upgrade control panel and upgrade helpers for plone upgrades. Project description Introduction This product aims to simplify running and writing third-party Generic Setup upgrade steps in Plone. It provides a control panel for running multiple upgrades at once, based on the upgrade mechanism of Generic Setup (portal_setup). Further a base class for writing upgrade steps with a variety of helpers for common tasks is provided. Table of Contents - Introduction - Features - Installation - Manage upgrades - The bin/upgrade script - Upgrade step helpers - Upgrade directories - JSON API - Authentication and authorization - Versioning - API Discovery - Listing Plone sites: - Listing profiles and upgrades - Executing upgrades - Recook resources - Combine bundles - Import-Profile Upgrade Steps - IPostUpgrade adapter - Savepoints - Memory optimization while running upgrades - Prevent ftw.upgrade from marking upgrades as installed - Links - Changelog - 3.0.0 (2020-03-23) - 2.16.0 (2020-02-14) - 2.15.2 (2020-01-27) - 2.15.1 (2019-12-16) - 2.15.0 (2019-12-12) - 2.14.1 (2019-11-08) - 2.14.0 (2019-10-31) - 2.13.0 (2019-08-22) - 2.12.2 (2019-06-19) - 2.12.1 (2019-06-18) - 2.12.0 (2018-07-26) - 2.11.1 (2018-04-05) - 2.11.0 (2018-01-31) - 2.10.0 (2018-01-08) - 2.9.0 (2017-12-14) - 2.8.1 (2017-10-13) -) Features - Managing upgrades: Provides an advanced view for upgrading third-party Plone packages using Generic Setup. It enables upgrading multiple packages at once with an easy to use user interface. By resolving the dependency graph it is able to optimize the upgrade step order so that the upgrade is hassle free. - Writing upgrades: The package provides a base upgrade class with various helpers for common upgrade tasks. - Upgrade directories with less ZCML: By registering a directory as upgrade-directory, no additional ZCML is needed for each upgrade step. By using a timestamp as version number we have less (merge-) conflicts and less error potential. - Import profile upgrade steps: Sometimes an upgrade step consists solely of importing a purpose-made generic setup profile. A new upgrade-step:importProfile and 5.1 for instructions on how to install bin/upgrade. The bin/upgrade console script enables management of upgrades on the filesystem (creating new upgrades, changing upgrade order) as well as interacting with an installed Plone site, listing profiles and upgrades and installing Upgrade step helpers The UpgradeStep base class provides various tools and helpers useful when writing upgrade steps. It can be used by registering the classmethod directly. Be aware that the class is very special: it acts like a function and calls itself automatically. Example upgrade step definition (defined in a upgrades.py): from ftw.upgrade import UpgradeStep class UpdateFooIndex(UpgradeStep): """The index ``foo`` is a ``FieldIndex`` instead of a ``KeywordIndex``. This upgrade step changes the index type and reindexes the objects. """ def __call__(self): index_name = 'foo' if self.catalog_has_index(index_name): self.catalog_remove_index(index_name) self.catalog_add_index(index_name, 'KeywordIndex') self.catalog_rebuild_index(index_name) Registration in configure.zcml (assuming it’s in the same directory): <configure xmlns="" xmlns: <genericsetup:upgradeStep </configure> Updating objects with progress logging Since an upgrade step often updates a set of objects indexed in the catalog, there is a useful helper method self.objects() which combines querying the catalog with the Progress Logger. The catalog is queried unrestricted so that we handle all the objects. Here is an example for updating all objects of a particular type: from ftw.upgrade import ProgressLogger from ftw.upgrade import UpgradeStep class ExcludeFilesFromNavigation(UpgradeStep): def __call__(self): for obj in self.objects({'portal_type': 'File'}, 'Enable exclude from navigation for files'): obj.setExcludeFromNav(True) When running the upgrade step you’ll be shown a progress log: INFO ftw.upgrade STARTING Enable exclude from navigation for files INFO ftw.upgrade 1 of 10 (10%): Enable exclude from navigation for files INFO ftw.upgrade 5 of 50 (50%): Enable exclude from navigation for files INFO ftw.upgrade 10 of 10 (100%): Enable exclude from navigation for files INFO ftw.upgrade DONE: Enable exclude from navigation for files Methods The UpgradeStep class has various helper functions: - self.getToolByName(tool_name) - Returns the tool with the name tool_name of the upgraded site. - self.objects(catalog_query, message, logger=None,. The default value None indicates that we are not configuring this feature and it should use the default configuration, which is usually 1000. See the Savepoints section for more details. In order to disable savepoints completely, you can use savepoints=False. This method will remove matching brains from the catalog when they are broken because the object of the brain no longer exists.(). - self.catalog_has_index(name) - Returns whether there is a catalog index name. - self.catalog_add_index(name, type_, extra=None) - Adds a new index to the portal_catalog tool. - self.catalog_remove_index(name) - Removes an index from the portal_catalog tool. - self.actions_remove_action(category, action_id) - Removes an action identified by action_id within the given category from the portal_actions tool. - self.catalog_unrestricted_get_object(brain) - Returns the unrestricted object of a brain. Dead brains, for which there is no longer an object, are removed from the catalog and None is returned. - self.catalog_unrestricted_search(query, full_objects=False) Searches the catalog without checking security. When full_objects is True, unrestricted objects are returned instead of brains. Upgrade steps should generally use unrestricted catalog access since all objects should be upgraded - even if the manager running the upgrades has no access on the objects.. - self.actions_remove_type_action(portal_type, action_id) - Removes a portal_types action from the type identified by portal_type with the action id action_id. - self.set_property(context, key, value, data_type='string') - Safely set a property with the key key and the value value on the given context. The property is created with the type data_type if it does not exist. - self.add_lines_to_property(context, key, lines) - Updates a property with key key on the object context adding lines. The property is expected to be of type “lines”. If the property does not exist it is created. - self.setup_install_profile(profileid, steps=None) - Installs the generic setup profile identified by profileid. If a list step names is passed with steps (e.g. [‘actions’]), only those steps are installed. All steps are installed by default. - self.ensure_profile_installed(profileid) - Install a generic setup profile only when it is not yet installed. - self.install_upgrade_profile(steps=None) - Installs the generic setup profile associated with this upgrade step. The. - self.uninstall_product(product_name) - Uninstalls a product using the quick installer. - self.migrate_class(obj, new_class) - Changes the class of an object. It has a special handling for BTreeFolder2Base based containers. - self.remove_broken_browserlayer(name, dottedname) - Removes a browser layer registration whose interface can’t be imported any more from the persistent registry. Messages like these on instance boot time can be an indication of this problem: WARNING OFS.Uninstalled Could not import class 'IMyProductSpecific' from module 'my.product.interfaces' - self.update_security(obj, reindex_security=True) - Update the security of a single object (checkboxes in manage_access). This is usefuly in combination with the ProgressLogger. It is possible to skip reindexing the object security in the catalog (allowedRolesAndUsers). This speeds up the update but should only be disabled when there are no changes for the View permission. - self.update_workflow_security(workflow_names, reindex_security=True, savepoints=None) Update all objects which have one of a list of workflows. This is useful when updating a bunch of workflows and you want to make sure that the object security is updated properly. The update done is kept as small as possible by only searching for types which might have this workflow. It does support placeful workflow policies. To further speed this up you can pass reindex_security=False, but you need to make sure you did not change any security relevant permissions (only View needs reindex_security=True for default Plone). By default, transaction savepoints are created every 1000th object. This prevents exaggerated memory consumption when creating large transactions. If your server has enough memory, you may turn savepoints off by passing savepoints=None. -. Progress logger When an upgrade step is taking a long time to complete (e.g. while performing a data migration), the administrator needs to have information about the progress of the update. It is also important to have continuous output for avoiding proxy timeouts when accessing Zope through a webserver / proxy. The ProgressLogger makes logging progress very easy: from ftw.upgrade import ProgressLogger from ftw.upgrade import UpgradeStep class MyUpgrade(UpgradeStep): def __call__(self): objects = self.catalog_unrestricted_search( {'portal_type': 'MyType'}, full_objects=True) for obj in ProgressLogger('Migrate my type', objects): self.upgrade_obj(obj) def upgrade_obj(self, obj): do_something_with(obj) The logger will log the current progress every 5 seconds (default). Example log output: INFO ftw.upgrade STARTING Migrate MyType INFO ftw.upgrade 1 of 10 (10%): Migrate MyType INFO ftw.upgrade 5 of 50 (50%): Migrate MyType INFO ftw.upgrade 10 of 10 (100%): Migrate MyType INFO ftw.upgrade DONE: Migrate MyType Workflow Chain Updater When the workflow is changed for a content type, the workflow state is reset to the init state of new workflow for every existing object of this type. This can be really annoying. The WorkflowChainUpdater takes care of setting every object to the correct state after changing the chain (the workflow for the type): from ftw.upgrade.workflow import WorkflowChainUpdater from ftw.upgrade import UpgradeStep class UpdateWorkflowChains(UpgradeStep): def __call__(self): query = {'portal_type': ['Document', 'Folder']} objects = self.catalog_unrestricted_search( query, full_objects=True) review_state_mapping={ ('intranet_workflow', 'plone_workflow'): { 'external': 'published', 'pending': 'pending'}} with WorkflowChainUpdater(objects, review_state_mapping): # assume that the profile 1002 does install a new workflow # chain for Document and Folder. self.setup_install_profile('profile-my.package.upgrades:1002') The workflow chain updater migrates the workflow history by default. The workflow history migration can be disabled by setting migrate_workflow_history to False: with WorkflowChainUpdater(objects, review_state_mapping, migrate_workflow_history=False): # code If a transition mapping is provided, the actions in the workflow history entries are migrated according to the mapping so that the translations work for the new workflow: transition_mapping = { ('intranet_workflow', 'new_workflow'): { 'submit': 'submit-for-approval'}} with WorkflowChainUpdater(objects, review_state_mapping, transition_mapping=transition_mapping): # code Placeful Workflow Policy Activator When manually activating a placeful workflow policy all objects with a new workflow might be reset to the initial state of the new workflow. ftw.upgrade has a tool for enabling placeful workflow policies without breaking the review state by mapping it from the old to the new workflows: from ftw.upgrade.placefulworkflow import PlacefulWorkflowPolicyActivator from ftw.upgrade import UpgradeStep class ActivatePlacefulWorkflowPolicy(UpgradeStep): def __call__(self): portal_url = self.getToolByName('portal_url') portal = portal_url.getPortalObject() context = portal.unrestrictedTraverse('path/to/object') activator = PlacefulWorkflowPolicyActivator(context) activator.activate_policy( 'local_policy', review_state_mapping={ ('intranet_workflow', 'plone_workflow'): { 'external': 'published', 'pending': 'pending'}}) The above example activates a placeful workflow policy recursively on the object under “path/to/object”, enabling the placeful workflow policy “local_policy”. The mapping then maps the “intranet_workflow” to the “plone_workflow” by defining which old states (key, intranet_workflow) should be changed to the new states (value, plone_workflow). Options - activate_in: Activates the placeful workflow policy for the passed in object (True by default). - activate_below: Activates the placeful workflow policy for the children of the passed in object, recursively (True by default). - update_security: Update object security and reindex allowedRolesAndUsers (True by default). Inplace Migrator The inplace migrator provides a fast and easy way for migrating content in upgrade steps. It can be used for example to migrate from Archetypes to Dexterity. The difference between Plone’s standard migration and the inplace migration is that the standard migration creates a new sibling and moves the children and the inplace migration simply replaces the objects within the tree. Example: into above Deferrable upgrades Deferrable upgrades are a special type of upgrade that can be omitted on demand. They still will be proposed and installed by default but can be excluded from installation by setting a flag. Deferrable upgrades can be used to decouple upgrades that need not be run right now, but only eventually, from the critical upgrade path. This can be particularly useful for long running data migrations or for fix-scripts. Upgrade-steps can be marked as deferrable by setting a class attribute deferrable on a subclass of UpgradeStep: # my/package/upgrades/20180709135657_long_running_upgrade/upgrade.py from ftw.upgrade import UpgradeStep class LongRunningUpgrade(UpgradeStep): """Potentially long running upgrade which is deferrable. """ deferrable = True def __call__(self): pass When you install upgrades from the command line, you can skip the installation of deferred upgrade steps with: $ bin/upgrade install -s plone --proposed --skip-deferrable When you install upgrades with the @@manage-upgrades view, deferrable upgrade steps show an additional icon and can be deselected manually., "deferred":, "deferred":, "deferred": false, executed executing works" Combine bundles CSS and JavaScript bundles can be combined: $ curl -uadmin:admin -X POST "OK" This is for Plone 5 or higher. This runs the same code that runs when you import a profile that makes changes in the resource registries. Import-Profile Upgrade Steps Sometimes an upgrade step consists solely of importing a purpose-made generic setup profile. Creating such upgrade steps are often much simpler than doing the change in python, because we can simply copy the necessary parts of the new default generic setup profile into the upgrade step profile. Normally to do this, we would have to register an upgrade step and a Generic Setup profile and write an upgrade step handler importing the profile. ftw.upgrade makes this much simpler by providing an importProfile ZCML directive specifically for this use case. Example configure.zcml meant to be placed in your upgrades sub-package: <configure xmlns="" xmlns: <include package="ftw.upgrade" file="meta.zcml" /> <upgrade-step:importProfile </configure> This example upgrade(). IPostUpgrade adapter By registering an IPostUpgrade adapter it is possible to run custom code after running upgrades. All adapters are executed after each time upgrades were run, regardless of which upgrades are run. The name of the adapters should be the profile of the package, so that ftw.upgrade is able to execute the adapters in order of the GS dependencies. Example adapter: from ftw.upgrade.interfaces import IPostUpgrade from zope.interface import implements class MyPostUpgradeAdapter(object): implements(IPostUpgrade) def __init__(self, portal, request): self.portal = portal self.request = request def __call__(self): # custom code, e.g. import a generic setup profile for customizations Registration in ZCML: <configure xmlns=""> <adapter factory=".adapters.MyPostUpgradeAdapter" provides="ftw.upgrade.interfaces.IPostUpgrade" for="Products.CMFPlone.interfaces.siteroot.IPloneSiteRoot zope.interface.Interface" name="my.package:default" /> </configure> Savepoints Certain iterators of ftw.upgrade are wrapped with a SavepointIterator, creating savepoints after each batch of items. This allows us to keep the memory footprint low. The threshold for the savepoint iterator can be passed to certain methods, such as self.objects in an upgrade, or it can be configured globally with an environment variable: UPGRADE_SAVEPOINT_THRESHOLD = 1000 The default savepoint threshold is 1000. Memory optimization while running upgrades Zope is optimized for executing many smaller requests. The ZODB pickle cache keeps objects in the memory, so that they can be used for the next request. Running a large upgrade is a long-running request though, increasing the chance of a memory problem. ftw.upgrade tries to optimize the memory usage by creating savepoints and triggering the pickle cache garbage collector. In order for this to work properly you should configure your ZODB cache sizes correctly (zodb-cache-size-bytes or zodb-cache-size). Prevent ftw.upgrade from marking upgrades as installed ftw.upgrade automatically marks all upgrade steps of a profile as installed when the full profile is imported. This is important for the initial installation. In certain situations you may want to import the profile but not mark the upgrade steps as installed. For example this could be done in a big migration project where the default migration path cannot be followed. You can do that like this for all generic setup profiles: from ftw.upgrade.directory.subscribers import no_upgrade_step_marking with no_upgrade_step_marking(): # install profile with portal_setup or for certain generic setup profiles: from ftw.upgrade.directory.subscribers import no_upgrade_step_marking with no_upgrade_step_marking('my.package:default'): # install profile with portal_setup Links - Github: - Issues: - Pypi: - Continuous integration: This package is copyright by 4teamwork. ftw.upgrade is licensed under GNU General Public License, version 2. Changelog 3.0.0 (2020-03-23) - Add support for Plone 5.2 and Python 3. [buchi] - Also look for the instance port number in wsgi.ini. [maurits] 2.16.0 (2020-02-14) - Allow additional indexes to be reindexed in the WorkflowChainUpdater. [tinagerber] 2.15.2 (2020-01-27) - Fix missing values in IntIds catalog as we go within migrate_intid(). [djowett-ftw] 2.15.1 (2019-12-16) - Cleanup broken catalog brains on NotFound. [jone] 2.15.0 (2019-12-12) - Add context manager for disabling upgrade step marking. [jone] - Do not mark upgrade steps as installed when not doing a full import. [jone] 2.14.1 (2019-11-08) - Migrate creators even when dublin core behaviors are not enabled. [jone] - Migrate empty values in RichTextFields correctly. Fixes. [djowett-ftw] 2.14.0 (2019-10-31) - Added --allow-outdated option to install command. This allows installing upgrades or profiles on a not up-to-date site. Fixes issue 182 <>. [maurits] 2.13.0 (2019-08-22) - Added combine_bundles command for Plone 5. This combines JS/CSS bundles together. [maurits] 2.12.2 (2019-06-19) - Make sure to always use a portal_migration tool wrapped in a RequestContainer. (Fixes “AttributeError: REQUEST” on a Plone 5.1.x upgrade) [lgraf] 2.12.1 (2019-06-18) - Choose actual port used by ZServer layer to run CommandAndInstance tests against. [lgraf] - Disable Diazo on upgrades-plain for Plone 5.1.5 support. [jone] 2.12.0 (2018-07-26) - Allow marking upgrades as deferred so they won’t be proposed by default. [deiferni] 2.11.1 (2018-04-05) - Fix connection problem when zope.conf contains ip-address. [jone] - Make sure remove_broken_browserlayer() helper doesn’t fail if the browser layer registration to be removed doesn’t exist (any more). [lgraf] 2.11.0 (2018-01-31) - Provide upgrade step handler interfaces and handler class in wrapper. [jone] - Do not propose executed upgrades newer than current db version. [jone] 2.10.0 (2018-01-08) - Support installing proposed upgrades of specific Generic Setup profiles. Use bin/upgrade install --proposed the.package:default. [jone] 2.9.0 (2017-12-14) - Optimize memory footprint after every upgrade step. [jone] - Reduce memory footprint in SavepointIterator by garbage-collecting connection cache. [jone] - Set the default savepoint threshold to 1000; make it configurable. [jone] - Enable savepoint iterator by default. Affects self.objects. [jone] - Use a SavepointIterator in the WorkflowSecurityUpdater in order not to exceed memory. [mbaechtold] 2.8.1 (2017-10-13) - Also catch AttributeErrors when accessing objects of broken brains. [buchi]] 1.7.4 (2014-05-12) - Extend workflow updater to migrate workflow history. [jone] - Fix workflow updater to always update objects. The objects are updated even when it seems that the object was not update or has no longer a workflow. This fixes issues when updating a workflow, in which case the old workflow and the new workflow has the same ID. [jone] - Make sure the transaction note does not get too long. Zope limits the transaction note length. By actively managing the transaction note we can provide fallbacks for when it gets too long because a lot of upgrade steps are installed at the same time. [jone] 1.7.3 (2014-04-30) - Add uninstall_product method to upgrade step class. [jone] 1.7.2 (2014-02-28) - Update provided interfaces when migrating objects to new class. [jone] 1.7.1 (2014-01-09) - Fix LocationError on manage-ugprades view on cyclic dependencies. [jone] 1.7.0 (2013-09-24) - Add a update_workflow_security helper function to the upgrade step. [jone] 1.6.0 (2013-08-30) - Fix inplace modification bug when updating the catalog while iterating over a catalog result. [jone] - Implement new importProfile directive for creating upgrade steps that just import a specific upgrade step generic setup profile. [jone] 1.5 (2013-08-16) - Add a WorkflowChainUpdater for changing workflow chains without resetting existing objects to the initial review state of the new workflow. [jone] 1.4.0 (2013-07-18) - Added helper for adding a type_action. [phgross] - Add objects method to UpgradeStep for easy querying the catalog and doing stuff with progress logging combined. [jone] - Make ProgressLogger an iterator too, because it is easier to use. [jone] - Improve logging while installing upgrade steps. Show duration for installing. [jone] - Fix upgrade step icons for Plone 4.3. [jone] - Add update_security helper. [jone] - Fix incomplete status info entry prodcued by placeful workflow policy activator. [jone] 1.3 (2013-06-13) - Implement a placeful workflow policy activator. [jone] - Added remove_broken_browserlayer method to step class. [lgraf] 1.2.1 (2013-04-23) - Keep modification date on reindexObject wihtout idxs. [mathias.leimgruber] 1.2 (2013-01-24) - onegov.ch approved: add badge to readme. [jone] - Remove ‘step’ and ‘for’ values from internal data structure. This is needed for allowing us to serialize the data (json). [jone] - Add IPostUpgrade adapter hook. [jone] - Refactor dependency sorting into seperate function. [jone] - Add security declarations. [jone] - Fix wrong tool usage when installing a profile in step class. [jone] 1.1 (2012-10-08) - Add catalog_unrestricted_get_object and catalog_unrestricted_search methods to step class. [jone] - Handle profiles of packages which were removed but have leftover generic setup entries. [jone] 1.0 (2012-08-13) - Add installed upgrades to transaction note. Closes #7 [jone] - Add migrate_class helper with _p_changed implementation supporting BTreeFolder2Base containers. [jone] - Remove purge_resource_registries() helper because it does not behave as expected. [jone] - Set min-height of upgrade output frame to 500px. [jone] - Print exceptions to browser log stream. [jone] 1.0b2 (2012-07-04) - Fix the upgrade registration problem (using a classmethod does not work since registration fails). [jone] - Let @@manage-upgrade be usable without actually installing the GS profile. [maethu] 1.0b1 (2012-06-27) - First implementation. [jone] Project details Release history Release notifications Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/ftw.upgrade/
CC-MAIN-2020-16
en
refinedweb
Unity 2018.4 Store (IL2CPP) Target Support Windows (IL2CPP) Target Support Component Installers macOS Additional Resources 发行说明 2018.4.3f1 Release Notes - 2D Fixed artifacts when rendering with TilemapRenderer while 2D Animation Package is in the Project. (1154202) - 2D Fixed ETC texture compression Split Alpha Channel not working for SpiteAtlas Variants. (1126070) - Android Fixed TouchScreenKeyboard.hideInput not hiding the input field. (1158215) - Android Added support to modify Unity Player command line arguments from custom UnityPlayerActivity. (1158838) - Android Fixed missing UI and render texture glitches after restarting the game. (1145018) - Animation Fixed Bezier curve segments conversion to Hermite when evaluating Animation Curves with weighted tangents. (1143424) - Animation Fixed position keyframing when in root motion with scale values. (1158974) - Asset Pipeline Fixed an edge case where loading identically named assets by type could fail. (1156154) - Editor Enabled new licensing system for selected customers. (1162261) - Editor Fixed a case where a Version Controlled Project would not update ProjectSettings/ProjectVersion.txt on editor start. (1131599) - Editor Fixed crash while setting version control mode to visible meta files. (1128867) - Editor Fixed import of Complete Project category Asset Store Packages from 2018.1-2018.3, which could break Package Manager configuration in the importing project. (1139264) - Graphics Fixed Video Player Leaks GfxDriver Memory. (1136233) - IL2CPP Corrected loading of static field initializer data when the field is of type ReadOnlySpan. (1146277) - IL2CPP Fixed an issue in the IL2CPP runtime that can cause intermittent crashes when dealing with certain kinds of generic method metadata. (1145468) - IL2CPP Fixed code generation for multidimensional arrays used as fields of structs. (1155344) - IL2CPP Fixed Stopwatch not working correctly across system time changes on iOS and macOS. (1152381) - IL2CPP Fixed the behavior of Bind a Unix socket on Posix platforms. (1150549) - IL2CPP Fixed UnityTls being incorrectly stripped with Medium and High Managed Stripping Levels. (1134343) - IL2CPP Prevented a hang during async write operations on Windows platforms. (1156384) - IL2CPP Prevented a memory leak when interfaces are set up for doubly nested arrays. (1151219) - OSX Fixed the editor crash while calling GetVSyncsPerSecond() during playmode. (1148335) - Particles Ensured Particle System Sprites render using the correct color. (1110578) - Particles Fixed a crash when using an invalid Texture in the Particle System Shape module. (1144240) - Particles Fixed an issue where the Particle System Inspector can become slow after editing its material but not saving the changes. (1154688) - Particles Improved parameter clamping in the External Forces module, to prevent/allow negative values where appropriate. (1144031) - Physics Fixed any hit being returned by MeshCollider.Raycast instead of the closest one. (1136868) - Physics Fixed crash that happened when passing a zero direction vector to batched physics queries. (1134317) - Physics Fixed issue with bounds in SkinnedMeshRenderer. (1153167) - Prefabs Fixed drag-select in Prefab Mode selects GameObjects with Gizmos in any loaded scene. (1140279) - Prefabs Fixed error message when deleting prefab asset whilst it is open in the Prefab Editor and version control is enabled. (1086613) - Prefabs Fixed nested Canvases not getting treated as nested Canvases in Prefab Mode if the Canvas had no visual elements (CanvasRenderers) under it. This could cause properties to get reset due to being driven as a Screen Space Canvas. (1103699) - Prefabs Fixed Prefab Mode reparenting to root GameObject by dragging is broken after having changed transform type. (1142496) - PS4 Fixed for occasionaly incorrect particle texturing. (1150123) - Scripting Fixed an issue where references to UnityEditor.iOS.Xcode might not be added to Visual Studio project files. (1151078) - Scripting Fixed Asmdef Inspector breaking when the dll reference is missing. (1139847) - Scripting Fixed problem in namespace parser regarding reading nested classes inside partial monobehaviour. (1148723) - Services Ensured Crash Reporting doesn't capture log messages that occur after the exception being reported. (1140382) - Shaders Fixed runtime shader load performance regression by removing randomish up-front warmup of all subshaders. (1105268) - Timeline Fixed Empty Timeline window leaks object. (1154475) - Timeline Fixed Timeline Editing > TrimEnd does not update until exiting and entering Preview Mode. (1156717) - Universal Windows Platform Fixed "Unable to find method Internal_ScriptableRuntimeReflectionSystemWrapper_Tick" error on startup on .NET scripting backend. (1146945) - Universal Windows Platform Fixed files in StreamingAssets directory not being treated as generic data files and therefore not getting consumed by various VS tools like the XAML compiler. (1110262) - Universal Windows Platform Fixed System.IO APIs not working on files outside of application and AppData directories on IL2CPP scripting backend. (1063768) - Version Control Fixed Script Execution Order inspector "Apply" button in some cases throwing errors under Perforce. (1153207) - Video Fixed VideoPlayer audio sync issues on Windows. (1145040) - Video Fixed VideoPlayer hanging when seeking backwards or forwards on Android. (1160422) - Web Fixed encoding support in url escaping. (1152780) - Windows Fixed locked cursor getting placed slightly off center in the editor and the standalone player. (824304) - XR Fixed crash when resizing player window after switching from non-VR ro VR. (1148813) - XR Fixed incorrect window used as anchor when switching away from on screen keyboard on HoloLens. (1156228) 变更集: 8a9509a5aff9 Unity 2018.4.3
https://unity3d.com/cn/unity/whats-new/2018.4.3
CC-MAIN-2020-16
en
refinedweb
There’s a Gender Extension for PHP Unlike %s\n", $name, $data['country']); break; case Gender::IS_MOSTLY_FEMALE: printf("The name %s is mostly female in %s\n", $name, $data['country']); break; case Gender::IS_MALE: printf("The name %s is male in %s\n", $name, $data['country']); break; case Gender::IS_MOSTLY_MALE: printf("The name %s is mostly male in %s\n", $name, $data['country']); break; case Gender::IS_UNISEX_NAME: printf("The name %s is unisex in %s\n", $name, $data['country']); break; case Gender::IS_A_COUPLE: printf("The name %s is both male and female in %s\n", $name, $data['country']); break; case Gender::NAME_NOT_FOUND: printf("The name %s was not found for %s\n", $name, $data['country']); break; case Gender::ERROR_IN_NAME: echo "There is an error in the given name!\n"; break; default: echo "An error occurred!\n"; break; } While we have this code here, let’s take a look at it. Some really confusing constant names in there – how does a name contain an error? What’s the difference between unisex and couple names? Digging deeper, we see some more curious constants. For example, the class has short names of countries as constants (e.g. BRITAIN) which reference an array containing both an international code for the country ( UK) and the full country name ( GREAT BRITAIN). $gender = new Gender\Gender; var_dump($gender->country(Gender\Gender::BRITAIN)); array(2) { 'country_short' => string(2) "UK" 'country' => string(13) "Great Britain" } Only, UK isn’t the international code one would expect here – it’s GB. Why they chose this route rather than rely on an existing package of geonames or even just an accurate list of constants is anyone’s guess. Once in use, the class uses the get method to return the gender of a name, provided we’ve given it the name and the country (optional – searches across all countries if omitted). But the country has to be the constant of the class (so you need to know it by heart or use their values when adding it to the UI because it won’t match any standard country code list) and it also returns an integer – another constant defined in the class, like so: const integer IS_FEMALE = 70 ; const integer IS_MOSTLY_FEMALE = 102 ; const integer IS_MALE = 77 ; const integer IS_MOSTLY_MALE = 109 ; const integer IS_UNISEX_NAME = 63 ; const integer IS_A_COUPLE = 67 ; const integer NAME_NOT_FOUND = 32 ; const integer ERROR_IN_NAME = 69 ; There’s just no rhyme or reason to any of these values. Another method, isNick, checks if a name is a nickname or alias for another name. This makes sense in cases like Bob vs Robert or Dick vs Richard, but can it really scale past these predictable English values? The method is doubly confusing because it says it returns an array in the signature, whereas the description says it’s a boolean. Finally, the similarNames method will return an array of names similar to the one provided, given the name and a country (if country is omitted, then it compares names across all countries). Does this include aliases? What’s the basis for similarity? Are Mario and Maria similar despite being opposite genders? Or is Mario just similar to Marek? Is Mario similar to Marek at all? There’s no information. I just had to find out for myself, so I installed it and tested the thing. Installation I tested this on an isolated environment via Homestead Improved with PECL pre-installed. sudo pecl install gender echo "extension=gender.so" | sudo tee /etc/php/7.1/mods-available/gender.ini sudo phpenmod gender pear run-scripts pecl/gender The last command will ask where to put a dictionary. I assume this is there for the purposes of extending it. I selected ., as in “current folder”. Let’s try it out by making a simple index.php file with the example content from above and testing that first. Sure enough, it works. Okay, let’s change the country to $country = Gender::CROATIA;. Okay, sure, it’s not a common name, and not in that format, but it’s most similar to Milena, which is a female name in Croatia. Let’s see what’s similar to Milena via similar.php: <?php namespace Gender; $gender = new Gender; $similar = $gender->similarNames("Milena", Gender::CROATIA); var_dump($similar); Not what I expected. Let’s see the original, Milene. So Milena is listed as a name similar to Milene, but Milene isn’t similar to Milena? Additionally, there seem to be some encoding issues on two of them? And the Croatian alphabet doesn’t even have the letter “y”, we definitely have neither of those similar names, regardless of what’s hiding under the question mark. Okay, let’s try something else. Let’s see if Bob is an alias of Robert in alias.php: <?php namespace Gender; $gender = new Gender; var_dump($gender->isNick('Bob', 'Robert', Gender::USA)); Indeed, that does seem to be true. Low hanging fruit, though. Let’s see a local one. var_dump($gender->isNick('Tea', 'Dorotea', Gender::CROATIA)); Oh come on. What about the Mario / Maria / Marek issue from the beginning? Let’s see similarities for them in order. Not good. A couple more tries. To make testing easier, let’s change the $name and $country lines in index.php to: $name = $argv[1]; $country = constant(Gender::class.'::'.strtoupper($argv[2])); Now we can test from the CLI without editing the file. Final few tries. I have a female friend from Tunisia called Manel. I would assume her name would go for male in most of the world because it ends with a consonant. Let’s test hers and some other names. No Tunisia? Maybe it isn’t documented in the manual, let’s output all the defined constants and check. // constants.php <?php $oClass = new ReflectionClass(Gender\Gender::class); var_dump($oClass->getConstants()); No, looks like those docs are spot on. At this point, I stop my playing around with this tool. The whole situation is made even more interesting by the fact that this is a simple class, and definitely doesn’t need to be an extension. No one will call this often enough to care about the performance boost of an extension vs. a package, and a package can be installed by non-sudo users, and people can contribute to it more easily. How this extension, which is both inaccurate and incomplete, and could be a simple class, ended up in the PHP manual is unclear, but it goes to show that there’s a lot of cleaning up to be done yet in the PHP core (I include the manual as the “core”) before we get PHP’s reputation up. In the 9 years (nine!) since development on this port started, not even all countries have been added to the internal list and yet someone decided this extension should be in the manual. Do you have more information about this extension? Do you see a point to it? Which other oddball extensions or built-in features did you find in the manual or in PHP in general?
https://www.sitepoint.com/theres-a-gender-extension-for-php/
CC-MAIN-2020-16
en
refinedweb
Euler problems/91 to 100 Contents Problem 91 Find the number of right angle triangles in the quadrant. Solution: problem_91 = undefined Problem 92 Investigating a square digits number chain with a surprising property. Solution: problem_92 = undefined Problem 93 Using four distinct digits and the rules of arithmetic, find the longest sequence of target numbers. Solution: problem_93 = undefined Problem 94 Investigating almost equilateral triangles with integral sides and area. Solution: problem_94 = undefined Problem 95 Find the smallest member of the longest amicable chain with no element exceeding one million. Solution which avoid visiting a number more than one time : import Data.Array.Unboxed import qualified Data.IntSet as S import Data.List). Problem 96 Devise an algorithm for solving Su Doku puzzles. Solution: problem_96 = undefined Problem 97 Find the last ten digits of the non-Mersenne prime: 28433 × 27830457 + 1. Solution: problem_97 = (28433 * 2^7830457 + 1) `mod` (10^10) Problem 98 Investigating words, and their anagrams, which can represent square numbers. Solution: problem_98 = undefined Problem 99 Which base/exponent pair in the file has the greatest numerical value? Solution: problem_99 = undefined Problem 100 Finding the number of blue discs for which there is 50% chance of taking two blue. Solution: problem_100 = undefined
http://wiki.haskell.org/index.php?title=Euler_problems/91_to_100&oldid=15354
CC-MAIN-2020-16
en
refinedweb
The class TestSetUp (see Figure C-29) is a subclass of TestDecorator that implements setUp( ) and tearDown( ) methods for the decorated Test . This allows the Test object's test fixture behavior to be modified without subclassing it. TestSetUp belongs to the namespace CppUnit . It is declared in the file extensions/TestSetUp.h and implemented in the file TestSetUp.cpp . class TestSetUp : public TestDecorator A constructor taking the Test to decorate. Calls setUp( ) , runs the decorated Test , and calls tearDown( ) . A Protected method called prior to running the decorated Test , allowing custom test fixture behavior to be implemented. A Protected method called after running the decorated Test , allowing the test fixture to be cleaned up. A copy constructor declared private to prevent its use. A copy operator declared private to prevent its use. None.
https://flylib.com/books/en/1.104.1.111/1/
CC-MAIN-2020-16
en
refinedweb
This. from django.db.models.fields import CharField import cipher class EnField(CharField): def from_db_value(self, value, expression, connection): """ Decrypt the data for display in Django as normal. """ return cipher.decrypt(value) def get_prep_value(self, value): """ Encrypt the data when saving it into the database. """ return cipher.encrypt(value) As you can see here, it is really straightforward to extend an existing field, such as CharField. I choose CharField in this example, as it tends to render easily everywhere in the Django framework with ease, so it is the most straightforward to test this concept with. You may also wish to use the binhex module's base64 encoding, but most databases should allow binary data to be stored into a VARCHAR. You may also opt to use a binary field as well. It is also the simplest when it comes to playing with a model in the Python shell. Next, let's see how all the magic works in the cipher module. from Crypto.Cipher import AES from django.conf import settings import hashlib, random def __random5(): """ Generate a random sequence of 5 bytes for use in a SHA512 hash. """ return bytes(''.join(map(chr,random.sample(range(255),5))), 'utf-8') def __fill(): """ This is used to generate filler data to pad our plain text before encryption. """ return hashlib.sha512(__random5()).digest() def __cipher(): """ A simple constructor we can call from both our encrypt and decrypt functions. """ key=hashlib.sha256(bytes(settings.SECRET_KEY, 'utf-8')).digest() # Key is generated by our SECRET_KEY in Django. return AES.new(key) # Here you should perhaps use MODE_CBC, and add an initialization vector for additional security. ECB is the default, and isn't very secure. def encrypt(data): """ The entrypoint for encrypting our field. """ FILL=__fill()+__fill()+__fill() # This is used to generate filler so we can satisfy the block size of AES. It is best to pad with random data, than to pad with say nulls. return __cipher().encrypt(bytes(data, 'utf-8')+b'|'+FILL[len(data)+1:]) def decrypt(data): """ Entrypoint for decryption """ return __cipher().decrypt(data).split(b'|')[0].decode('utf-8') Pretty neat, huh? Feel free to change how the filler is generated, the cipher being used, and of course the options passed to the cipher to further customize this solution. I have tested this using Python 3.5, and Django 2.2.9. Although, it should work on future versions of both Python and Django. For obvious reasons, you should be using caching on your Django site if you plan on displaying these fields on your front-end. The best part about doing this as a field, rather than placing this code in your model, through a signal, or in a form, is that it 100% works in the Django admin, and any other place you may reference this field within your Django codebase. This is an interesting example, of when to use a custom model field in Django, rather than adding the logic to the model or the form.
http://pythondiary.com/blog/Jan.13,2020/creating-transparently-encrypted-field-django.html
CC-MAIN-2020-16
en
refinedweb
10.5.1. Reading Data¶ In this chapter, we will use a data set developed by NASA to test the wing noise from different aircraft to compare these optimization algorithms. We will use the first 1500 examples of the data set, 5 features, and a normalization method to preprocess the data. %matplotlib inline import d2l from mxnet import autograd, gluon, init, nd from mxnet.gluon import nn import numpy as np # Save to the d2l package. def get_data_ch10(batch_size=10, n=1500): data = np.genfromtxt('../data/airfoil_self_noise.dat', delimiter='\t') data = nd.array((data - data.mean(axis=0)) / data.std(axis=0)) data_iter = d2l.load_array((data[:n, :-1], data[:n, -1]), batch_size, is_train=True) return data_iter, data.shape[1]-1 10.5.2. Implementation from Scratch¶ We have already implemented the mini-batch SGD algorithm in the Section 3.2.. # Save to the d2l package. def train_ch10(trainer_fn, states, hyperparams, data_iter, feature_dim, num_epochs=2): # Initialization w = nd.random.normal(scale=0.01, shape=(feature_dim, 1)) b = nd.zeros(1) w.attach_grad() b.attach_grad() net, loss = lambda X: d2l.linreg(X, w, b), d2l.squared_loss # Train).mean() l.backward() trainer_fn([w, b], states, hyperparams) n += X.shape[0] if n % 200 == 0: timer.stop() animator.add(n/X.shape[0]/len(data_iter), d2l.evaluate_loss(net, data_iter, loss)) timer.start() print('loss: %.3f, %.3f sec/epoch'%(animator.Y[0][-1], timer.avg())) return timer.cumsum(), animator.Y[0]. def train_sgd(lr, batch_size, num_epochs=2): data_iter, feature_dim = get_data_ch10(batch_size) return train_ch10( sgd, None, {'lr': lr}, data_iter, feature_dim, num_epochs) gd_res = train_sgd(1, 1500, 6) loss: 0.246, 0.051 sec. sgd_res = train_sgd(0.005, 1) loss: 0.242, 0.260 sec/epoch When the batch size equals 100, we use mini-batch SGD for optimization. The time required for one epoch is between the time needed for gradient descent and SGD to complete the same epoch. mini1_res = train_sgd(.4, 100) loss: 0.243, 0.007 sec/epoch Reduce the batch size to 10, the time for each epoch increases because the workload for each batch is less efficient to execute. mini2_res = train_sgd(.05, 10) loss: 0.245, 0.026 sec. d2l.set_figsize([6, 3]) d2l.plot(*list(map(list, zip(gd_res, sgd_res, mini1_res, mini2_res))), 'time (sec)', 'loss', xlim=[1e-2, 10], legend=['gd', 'sgd', 'batch size=100', 'batch size=10']) d2l.plt.gca().set_xscale('log') 10.5.3. Concise Implementation¶ In Gluon, we can use the Trainer class to call optimization algorithms. Next, we are going to implement a generic training function that uses the optimization name trainer_name and hyperparameter trainer_hyperparameter to create the instance Trainer. # Save to the d2l package. def train_gluon_ch10(trainer_name, trainer_hyperparams, data_iter, num_epochs=2): # Initialization net = nn.Sequential() net.add(nn.Dense(1)) net.initialize(init.Normal(sigma=0.01)) trainer = gluon.Trainer( net.collect_params(), trainer_name, trainer_hyperparams) loss = gluon.loss.L2Loss()) l.backward() trainer.step(X.shape[0]) n += X.shape[0] if n % 200 == 0: timer.stop() animator.add(n/X.shape[0]/len(data_iter), d2l.evaluate_loss(net, data_iter, loss)) timer.start() print('loss: %.3f, %.3f sec/epoch'%(animator.Y[0][-1], timer.avg())) Use Gluon to repeat the last experiment. data_iter, _ = get_data_ch10(10) train_gluon_ch10('sgd', {'learning_rate': 0.05}, data_iter) loss: 0.244, 0.030 sec/epoch 10.
http://classic.d2l.ai/chapter_optimization/minibatch-sgd.html
CC-MAIN-2020-16
en
refinedweb
Find and replace a variable item Hi guys, first time posting! I’m working on a .xml file and need to replace every item that says: lootmax=“6” The issue I’m having, is the 6 is variable, there’s over 500 lines that have ranging numbers but I want to set them all to one number. Hope this is clear enough, many thanks. If even possible, I’d like to up all the numbers incrementally from their original values, for example the file may have the following: lootmax=“2” lootmax=“4” lootmax=“6” lootmax=“1” lootmax=“5” lootmax=“9” Would I be able to find all values called lootmax="#" and up that incrementally by 1? So they would all now be lootmax=“3” lootmax=“5” lootmax=“7” lootmax=“2” lootmax=“6” lootmax=“10” - Ekopalypse last edited by Ekopalypse Replacing with a fixed number can be solved with a regular expression like this find what: (?!lootmax=")\d This would look for the text lootmax="followed by ONE digit In replace with you put in what you like to have. If it might be possible that there are two digit numbers or even greater numbers you have to provide an additional multiplier +-> \d+ Doing math with regular expression is, in my opinion, choosing the wrong tool for the job. You can workaround it by doing mapping things like if you find 2 replace by 3, if 3 then replace with 4 … but that means that you have to create the mapping list yourself. I think for such cases a programming language might be the better choice. There are a plenty of plugins available, like pythonscript, luascript … which allow you to manipulate the text from a written script. Let us know what you want to do. So, what you are asking for is not doable in a simple search-and-replace, even with regex. There are tricks you could play (and that @guy038 might chime in with) to simulate adding one, since you always want to do that, but search-and-replace/regex are not really designed for arbitrary math. The scripting plugins – like PythonScript or LuaScript – allow you to bring to bear the full power of a programming language onto the contents of the files opened inside Notepad++. The docs that bundle with PythonScript even give an example of “add 1 to matched number”: def add_1(m): return 'Y' + str(number(m.group(1)) + 1) # replace X followed by numbers by an incremented number # e.g. X56 X39 X999 # becomes # Y57 Y40 Y1000 editor.rereplace('X([0-9]+)', add_1); It’s a couple of tweaks to make it work for you: Assuming literal ASCII quotes, rather than the smartquotes that show up in your post, then the regex could be something like: def add_1(m): return (m.group(1)) + str(int(m.group(2)) + 1) + (m.group(3)) editor.rereplace('(lootmax=")([0-9]+)(")', add_1); The regex puts lootmax="into group(1), the number in group(2), and the end-quote in group(3). The function replaces that with group(1), then the value-of-group(2)-plus-1, then group(3). The editor.rereplace()call runs that regex-and-function on the whole active document. (I see that @Ekopalypse chimed in suggesting PythonScript and LuaScript while I was writing this up.) Question for Python experts: I noticed that @bruderstein’s example add_1()definition in the PythonScript manual included the number()function, which doesn’t seem to exist in either Python 2 or Python 3. Traceback (most recent call last): File "C:\usr\local\apps\notepad++\plugins\Config\PythonScript\scripts\18860-add1.py", line 4, in <module> editor.rereplace('lootmax="([0-9]+)"', add_1); File "C:\usr\local\apps\notepad++\plugins\Config\PythonScript\scripts\18860-add1.py", line 2, in add_1 return 'Y' + str(number(m.group(1)) + 1) NameError: global name 'number' is not defined Was that an old/deprecated function? Or something in another library? Or just a mistake? I made mine work with int(), but I was just curious whether that’s something that should be raised as an issue. - Ekopalypse last edited by @PeterJones, you are right, int is the right function. I’m unaware whether there was a number function in some pre 2.7 versions but since 2.7 there is nothing listed. @PeterJones thank you for your reply, and to others also contributing. Peter, is it possible to send you a private message of the .XML file I have? My knowledge in doing such a task is, minimal to say the least. @PeterJones said in Find and replace a variable item: issue#141 created. PR#142 created. I forgot to go back and look. This has been fixed in v1.5.3. @Tazhar0001 said in Find and replace a variable item: Peter, is it possible to send you a private message of the .XML file I have? My knowledge in doing such a task is, minimal to say the least. There are no PM on this, and I don’t advertise my email address here (avoiding spambot harvesting). The [example data]( you showed near the start of this thread does work with the script I provided (well, I guess my previous post was missing the first line needed): from Npp import * def add_1(m): return (m.group(1)) + str(int(m.group(2)) + 1) + (m.group(3)) editor.rereplace('(lootmax=")([0-9]+)(")', add_1); Even with data like: <tag other="y" lootmax="4" /> <tag other="z" lootmax="6" /> <other>blah</interferes> <tag other="p" lootmax="1" /> <tag other="d" lootmax="5" /> <tag other="z" lootmax="9" /> all the lootmax="#"get one added properly. If you need help with PythonScript - Install PythonScript (if you don’t already have it): Plugins > Plugins Admin, check the box on ☐ PythonScript, and click Install; restart Notepad++ as needed - Plugins > PythonScript > New script, named lootmax.py - Paste the contents of the script from this post. Save. - Set Notepad++'s active file to your file with the lootmaxvalues. Run **Plugins > Python Script > Scripts > lootmax`; the replacement should occur.
https://community.notepad-plus-plus.org/topic/18860/find-and-replace-a-variable-item/?page=1
CC-MAIN-2020-16
en
refinedweb
- ChatterFeed - 1Best Answers - 0Likes Received - 0Likes Given - 0Questions - 30Replies Building Report Hi, Need to create a report which comprise of 3 objects. 2 objects are in Master detail relationship, while the other is having a lookup with one of those 2 . for eg. Objects P and Q are in a master-detail relationship, while R is having a lookup with P. Test Code Failing at 90% Basically, I have a VF page which is for a survey object that I am creating. The goal is to automatically send out a URL after a client has completed certain milestones. The URL will be part of an email template that will send a link to the site.com hosted version of the VF page with the clients account ID in the URL.. The client will complete the survey and upon clicking save, the survey will be inserted into the clients object by the extension which will draw the account ID from the URL....It all works fine in sandbox, testing just fails as I do not know how to test for AccountID when it gets it from the URL. These are my three codes: VF page: <apex:page <apex:includeScript <apex:includeScript <apex:stylesheet <apex:form <apex:pageBlock <apex:pageBlockSection <apex:pageBlockSection <apex:inputField <div id="slider" style="length:50"/> <apex:inputField <apex:inputField </apex:pageBlockSection> </apex:pageBlock> <apex:commandButton </apex:form> <script> function getCleanName(theName) { return '#' + theName.replace(/:/gi,"\\:"); } var valueField = getCleanName( '{!$Component.theForm.theBlock.theSection.value}' ); $j = jQuery.noConflict(); $j( "#slider" ).slider( {enable: true, min: 0, max: 10, orientation: "horizontal", value: 0} ); $j( "#slider" ).on( "slidechange", function( event, ui ) { var value = $j( "#slider" ).slider( "option", "value" ); $j( valueField ).val( value ); } ); </script> </apex:page> This is my Extension: public class extDestinySurvey { public Destiny_Survey__c Dess {get;set;} private Id AccountId { get { if ( AccountId == null ) { String acctParam = ApexPages.currentPage().getParameters().get( 'acct' ); try { if ( !String.isBlank( acctParam ) ) AccountId = Id.valueOf( acctParam ); } catch ( Exception e ) {} } return AccountId; } private set; } public extDestinySurvey(ApexPages.StandardController controller) { Dess = (Destiny_Survey__c)controller.getRecord(); } public PageReference saveDestinySurvey() { Dess.Account__c = AccountId; upsert Dess; // Send the user to the detail page for the new account. return new PageReference('/apex/DestinyAccount?id=' + AccountId + '&Sfdc.override=1'); } } This is my Test that fails at 90%: @isTest private class TestExtDestinySurvey { static testMethod void TestExtDestinySurvey(){ //Testing Survey extension // create a test account that will be use to reference the Destiny Survey Record Account acc = new Account(Name = 'Test Account'); insert acc; //Now lets create Fact Finder record that will be reference for the Standard Account Destiny_Survey__c DessTest = new Destiny_Survey__c(Account__c = acc.id, Explain_why_you_gave_your_rating__c = 'Because', How_likely_are_you_to_refer_Destiny__c = 8); //call the apepages stad controller Apexpages.Standardcontroller stdDess = new Apexpages.Standardcontroller(DessTest); //now call the class and reference the standardcontroller in the class. extDestinySurvey ext = new extDestinySurvey(stdDess); //call the pageReference in the class. ext.saveDestinySurvey(); } } Regarding related lists hi i have three custom objects bills customers and products and what I am trying to do is that when we create a bill it should be populated on the customers related list.. And the bill should contain the related products on the .. there can be multiple products and it won't be possible to create a new product on every bill we should be able to select multiple products for a bill . how can we achieve this... Why do we not have the Evaluation criteria of "When the record is only updated " in Workflow Hi, Why cany be have a Evaluation criteria of only on 'Update of a record '? How is this gonna impact ? Please let me know. Thanks, SQL query for salesforce objects Hi, I have a Contact and Opportunity(Donations) objects. Opportunity is child obj for Contact. In our case, for one contact there may be several opportunity( donation) records. Close date and Amount are two fields in opportunity. How to query for a condition in sql, most recent close date and most recent amount ? for a given date range and given amount range. select name,max(date),amount from donations where closedate> '2011-3-4' and closedate < '2012-2-4' and amount>10 and amount<100 group by name but it throws me an error on amount field as it is not in aggregate function. how to modify this query such that for each contact record, it pulls donation's recent close date and recent amount. ( Consider for 1 contact we have 10 donations. so we need to pull 10th donation details.) NOT EQUAL operator IN SOQL for picklist values How to use not qual to operator in SOQL In sql, fieldname !='%xyz%' how about soql? here am trying to check for the picklist values. Thanks in advance Workflow Condition Not Working Hi, I made a email alter when work flow condition is set as mentioned below. In Lead there is a Column called Lead Status When lead status is changed to MQL or SQL it must trigger email to User. I tried with Operator Contains(MQL, SQL) its not working, But when i mention equal to MQL or SQL it is working. Please suggest me how to add this condition. Thanks Sudhir Need to send email to multiple users (case owner, case creator)whenever case is closed HI, i need to send email to case owner and case creator whenever case is closed. i wrote after update trigger for this. but its not working. can anyone help me out from this. Thanks & Regards Ram Trigger - i want reserve items Hi im newbiee I have problem i write : trigger createReservation on Rezerwacja__c (before insert, before update) { Set<id> rezSet = new Set<id>(); for (Rezerwacja__c rez : trigger.new){ rezSet.add(rez.Item__c); } List<Rezerwacja__c> rezList = [SELECT Rental_Date__c, Date_of_return__c, Status__c FROM Rezerwacja__c where id in :trigger.new]; List <Item__c> itList = [SELECT id, name FROM Item__c WHERE Item__c.Rezerwacja__c IN : rezList]; Map<Id, List<Rezerwacja__c>> rezMap = new Map<Id, List<Rezerwacja__c>>() i wants the item was not available in a given period of time Can somebody help me ?? Trigger Update event - how to "Save and Send updates" via the trigger Simply updating the event does not send out the invites. Is there a way via apex to send updates to attendees when an event is updated? Looping through multi-select picklist values Could I get simple example of how to loop through a multi-select picklist field? Thanks!
https://developer.salesforce.com/forums/ForumsProfile?userId=005F0000003Fl4bIAC&communityId=09aF00000004HMGIA2
CC-MAIN-2020-16
en
refinedweb
span8 span4 span8 span4 I have created a shared resource directory (named PROJ) based on this article. I'm using it in a (type: text) private parameter (named SHAREDRESOURCE_PROJ) as I would do with an FME parameter : $(FME_SHAREDRESOURCE_PROJ)\Templates And I access that parameter from a number of scripted private parameters like this: import fme templateA = fme.macroValues[‘SHAREDRESOURCE_PROJ’] + '/templateA.ini' return templateA When executing my workspace from FME Server it throws the error ' failed to evaluate Python script'. If I replace $(FME_SHAREDRESOURCE_PROJ) with the actual path to the folder it works so the problem is the resource directory parameter, but I don't want to use the hardcoded path. What am I doing wrong? Since you've tagged your question with FME Server 2018, I would definitely recommend that you create the resource connection in the GUI. The document you linked to is for FME Server 2016 and should not be used for FME Server 2018. That said, you need to use the complete name in the scripted parameters etc, for example: A = fme.macroValues['FME_SHAREDRESOURCE_PROJ'] There is more information on the subject here: I don't know why that document is not in the Administrator's Guide anymore. It does work and that way you don't have to provide a network directory as it uses the default location of the Resources folder. Apart from that, it works if I use the resource connection as you say but that's not what I want to do. I want to use a private parameter pointing to a folder in my resource connection, and then use that private parameter inside another (actually a number of) private parameter. That's what's giving me an error. The reason to do it like this is because the folder name inside the resource connection may change and otherwise I'd have to change it in all parameters accessing that folder. This way it's just a change in one parameter. Why do you mention FME_SHAREDRESOURCE_PROJ and SHAREDRESOURCE_PROJ, which one did you define? You need to reference the value from the Name column when you defined the resource connection: I defined the resource directly in the FME Server configuration file (FMEServerDir>Server\fmeServerConfig.txt) with the name FME_SHAREDRESOURCE_PROJ: SHAREDRESOURCE_TYPE_8=FILE SHAREDRESOURCE_NAME_8=FME_SHAREDRESOURCE_PROJ SHAREDRESOURCE_DISPLAYNAME_8=PROJ SHAREDRESOURCE_DESCRIPTION_8=This shared resource is the project data directory SHAREDRESOURCE_ISMIGRATABLE_8=true SHAREDRESOURCE_DIR_8=C:/ProgramData/Safe Software/FME Server///resources/proj/ SHAREDRESOURCE_SECURITY_DEFAULT_ROLES_8=fmeadmin SHAREDRESOURCE_SECURITY_DEFAULT_OWNER_8=admin SHAREDRESOURCE_PROJ is the name of the private parameter. It could be simply PROJ or whatever. If that's confusing let me change my example: PROJ private parameter: $(FME_SHAREDRESOURCE_PROJ)\Templates Scripted private parameter: import fme templateA = fme.macroValues[‘PROJ’] + '/templateA.ini' return templateA Answers Answers and Comments 11 People are following this question. Create a Private Parameter Value From Attribute Value 1 Answer FME Hub Transformer Issues 3 Answers Python Custom Transformer Bug using fme.macroValue 2 Answers Problem with custom coordinate system: "inverse calculation failed to converge" and some changes to MyCoordSysDef.fme are ignored by FME 2 Answers Is there / or how do I create a workspace that can change a private parameter [string] in other (different) workspaces? 1 Answer
https://knowledge.safe.com/questions/84087/accesing-fme-server-custom-shared-resources-from-a.html
CC-MAIN-2020-16
en
refinedweb
At Mergify, we generate a pretty large amount of logs. Every time an event is received from GitHub for a particular pull request, our engine computes a new state for it. Doing so, it logs some informational statements about what it's doing — and any error that might happen. This information is precious to us. Without proper logging, it'd be utterly impossible for us to debug any issue. As we needed to store and index our logs somewhere, we picked Datadog as our log storage provider. Datadog offers real-time indexing of our logs. The ability to search our records that fast is compelling as we're able to retrieve log about a GitHub repository or a pull request with a single click. To achieve this result, we had to inject our Python application logs into Datadog. To set up the Python logging mechanism, we rely on daiquiri, a fantastic library I maintained for several years now. Daiquiri leverages the regular Python logging module, making its a no-brainer to set up and offering a few extra features. We recently added native support for the Datadog agent in daiquiri, making it even more straightforward to log from your Python application. Enabling log on the Datadog agent Datadog has extensive documentation on how to configure its agent. This can be summarized to adding logs_enabled: true in your agent configuration. Simple as that. You then need to create a new source for the agent. The easiest way to connect your application and the Datadog agent is using the TCP socket. Your application will write logs directly to the Datadog agent, which will forward the entries to Datadog backend. Create a configuration file in conf.d/python.d/conf.yaml with the following content: init_config: instances: logs: - type: tcp port: 10518 source: python service: <YOUR SERVICE NAME> sourcecategory: sourcecode conf.d/python.d/conf.yaml Setting up daiquiri Once this is done, you need to configure your Python application to log to the TCP socket configured in the agent above. The Datadog agent expects logs in JSON format being sent, which is what daiquiri does for you. Using JSON allows to embed any extra fields to leverage fast search and indexing. As daiquiri provides native handling for extra fields, you'll be able to send those extra fields without trouble. First, list daiquiri in your application dependency. Then, set up logging in your application this way: import daiquiri daiquiri.setup( outputs=[ daiquiri.output.Datadog(), ], level=logging.INFO, ) This configuration logs to the default TCP destination localhost:10518 — though you can pass the host and port argument to change that. You can customize the outputs as you wish by checking out daiquiri documentation. For example, you could also include logging to stdout by adding daiquiri.output.Stream(sys.stdout) in the output list. Using extra When using daiquiri, you're free to use logging.getLogger to get your regular logging object. However, by using the alternative daiquiri.getLogger function, you're enabling the native use of extra arguments — which is quite handy. That means you can pass any arbitrary key/value to your log call, and see it up being embedded in your log data — up to Datadog. Here's an example: import daiquiri […] log = daiquiri.getLogger(__name__) log.info("User did something important", user=user, request_id=request_id) The extra keyword argument passed to log.info will be directly shown as attributes in Datadog logs: All those attributes can then be used to search or to display custom views. This is really powerful to monitor and debug any kind of service. A log object per object When passing extra arguments, it is easy to make mistakes and forget some. This especially can happen when your application wants to log information for a particular object. The best pattern to avoid this is to create a custom log object per object: import daiquiri class MyObject: def __init__(self, x, y): self.x = x self.y = y self.log = daiquiri.getLogger("MyObject", x=self.x, y=self.y) def do_something(self): try: self.call_this() except Exception: self.log.error("Something bad happened") By using the self.log object as defined above, there's no way for your application to miss some extra fields for an object. All your logs will look in the same style and will end up being indexed correctly in Datadog. Log Design The extra arguments from the Python loggers are often dismissed, and many developers stick to logging strings with various information included inside. Having a proper explanation string, plus a few extra key/value pairs that are parsable by machines and humans, is a better way to do logging. Leveraging engines such as Datadog allow to store and query those logs in a snap. This is way more efficient than trying to parse and grep strings yourselves!
https://pythondigest.ru/view/48963/
CC-MAIN-2020-16
en
refinedweb
TechDraw TemplateHowTo". 3. Use the XML Editor to add a "freecad" namespace clause to the <svg> item. xmlns:freecad="".: Create editable fields 9. Use the XML Editor to add a freecad:editable tag to each editable <text> item. - Assign a meaningful field name to each editable text. Adjust size of the SVG 10. Use the XML editor to adjust the viewBox attribute to match your page size in millimeters. - It is four values, in the format "0 0 width height" 11. Your template will now appear much bigger than desired... Notes Don't use Layers in Inkscape until you've mastered template creation without them. Layers and Groups can automatically insert unwanted transforms into your SVG file. See a Stackoverflow discussion on removing transform clauses in SVG files. -
https://wiki.freecadweb.org/index.php?title=TechDraw_TemplateHowTo&oldid=356243
CC-MAIN-2020-16
en
refinedweb
6 More Must-Do Grav Tweaks: Ready for Hacker News Traffic!. Related Pages Once a blog has enough posts, user retention becomes more difficult – as users find it hard to locate related or interesting posts to read, they leave your site. The related pages plugin helps with that. Out of the box, it includes some sensible defaults for calculating relation scores between posts, and can include title scanning, content parsing, matching by taxonomies, and much more. Once the scanning has been configured, rendering the pages is a matter of including the generated list of related items in an existing template, like so: {% if config.plugins.relatedpages.enabled and related_pages|length > 0 %} <h4>Related Posts</h4> {% include 'partials/relatedpages.html.twig' %} {% endif %} Note that the styling, while decent in its default form, is ultimately up to you. That’s where the next plugin comes in handy. Custom JS and CSS without extending the theme Sometimes, all you need to do is include a small JS or CSS modification in your pages. A full theme extension would, in that case, be overkill. That’s where the assets plugin comes in. Once installed, you have the ability to add JS / CSS frontmatter to your pages: {assets:js order:10} custom-script.js /blog/some-blog/post/script.js //cdnjs.cloudflare.com/ajax/libs/1140/2.0/1140.min.js {/assets} {assets:inline_css} h1 {color: red !important;} {/assets} Notice the inline_ prefix when dealing with inline CSS / JS This then lets you easily tweak the look and feel of certain pages when needed – like adding some fancy demo logic into the mix, custom styling in case the page is of a different type than what you usually post, and more. Search Having many posts is pointless if there’s no way for people to search them and find the one they’re looking for. Given that Grav is a flat-file system with no database, searching takes more than just firing off a MySQL LIKE %% query. The simplesearch plugin (named that way purely because it uses a very simple method of searching – string matching) adds this feature. It comes with its own search field partial, and a search results page, but these can of course be fully customized. For example, after copying the user/plugins/simplesearch/templates/simplesearch_results.html.twig file into user/themes/cacti-swader/templates/ and modifying it, here’s how the implementation looks on my own blog: Caveats Filters To make sure it works across all pages, the filters value in the configuration file must be EMPTY, not non-existent. If you remove it completely, it will fall back to the default of the plugin’s original configuration file, which limits the results to those that have a category of blog. Here’s my configuration in user/config/plugins/simplesearch.yaml: enabled: true built_in_css: true display_button: false min_query_length: 3 route: /search filters: "" template: simplesearch_results order: by: date dir: desc Performance While effective, this plugin will lose performance as your collection of posts grows, because it iterates through every page you have and compares its content to the given search query. The length of the posts and their number directly correlates to the search performance. It is recommended to implement a more powerful search engine once the number of posts reaches critical mass (e.g. over 1000). Multi-language and SimpleSearch If you’re using the language switcher by utilizing a multi-language theme, you might want to modify the langswitcher.html.twig partial to ignore the search query string when switching languages because of this. Use the instructions in the linked issue to fix the problem. And while we’re on the topic of language, to modify the string “Search results” and other values on the search results page, you can either modify the user/plugins/simplesearch/language.yaml file, or change the strings used in the search results template to use those defined in the theme’s language file. To add a comments section, we can use the JsComments plugin. It’s an abstraction of several popular comment systems, allowing for their site-wide installation in an instant, by just modifying the config file and adding the appropriate license key or identification code. Unfortunately, Disqus is still the most viable option today, so let’s go with that. The Admin Panel UI intuitively offers all the options we need to set, most notably the “Enabled” and “Active” option in the main tab, and the Disqus shortname in the Disqus tab: Once done, the JS snippet needs to be injected into the template or page in which we want the comments rendered. This snippet is identical for all providers, and automatically switches out the comment system code when and if you decide on a different provider: {% if config.plugins.jscomments.enabled %} {{ jscomments() }} {% endif %} Here’s how the implementation looks on my blog: Difference from Official Comments There is also an official comments plugin which provides native commenting functionality and saves the comments in local files, but it’s missing some crucial features (see list in README file) and it’s flat-file based, not JS-based, which means they don’t play nice with full page caching. When pages with JS comments are cached, they’re loaded asynchronously after the page is shown, regardless of the level of caching, which makes sure they’re always up to date even if you cache your content very aggressively. Image Optimization and CDNs Any good blog post is rich with text-breaking images to increase readability. But images tend to get pretty heavy sometimes, and bandwidth is precious – especially during traffic spikes. There are two options: CDN, and image optimization. CDN Grav’s CDN plugin works great with pullzone CDNs like MaxCDN. A pullzone CDN is a special type of setup many CDNs offer where, when a file is requested from your site, it is first copied to the CDN, then to all its remote locations for serving around the world, and then served back to the user. Pullzones usually aren’t free, and need additional setting up, plus some CNAME records at your domain’s registrar, but all the effort is worth it when the speed gains and bandwidth savings are considered. Once fully configured on the CDN’s side, implementing a pullzone in Grav is extremely simple. The plugin provides the typical yaml configuration file (fully editable in the Admin UI): enabled: true inline_css_replace: true pullzone: yourdomain.cdn.com tags: 'a|link|img|script' extensions: 'jpe?g|png|gif|ttf|otf|svg|woff|xml|js|css' The above means: Replace all links to resources with the yourdomain.cdn.com domain, including url() calls in CSS, and do the same for all the listed extensions in all a, link, img, and script tags. This effectively makes the CDN auto-serve all static assets. For assets you don’t want CDNed, there’s a nocdn mode, too. Image Optimization Whereas a CDN will serve assets with great efficiency and over great distances, you might also be interested in merely optimizing your images and reducing their filesize. The rather specific optimus plugin ties into the Optimus image optimization service. Optimus is an arguably much more affordable service, clocking in at $19 per year for personal projects, but it lacks the geo-diversity of a CDN and only handles images. That said, this too can lead to some astonishing bandwidth savings. Once installed, Optimus is automatically activated and sends all images for processing to the service. There are two things to keep in mind: - the initial load of images will be slow, because it’ll take time for Optimus to process them and send them back - the images will get saved and cached locally – they get returned back to your server, so it’s still you serving them. The ideal approach would be using Optimus to optimize images, and then forwarding those to a CDN pullzone, thus reducing bandwidth for static assets to near 0%, but at a cost of about an extra $100 per year when looked at in total. There is another way to super-cache your site, though, and that one’s free. Caching with Varnish As a final, super-optimization step, we’ll take advantage of Varnish, the reverse proxy server which caches full pages for arbitrary amounts of time and serves them very, very quickly. It protects your server from traffic spikes, and saves server resources for other processes and sites. After installing it with something like sudo apt-get install varnish, we can modify some settings files. The default value for DAEMON_OPTS in /etc/default/varnish is actually fine as it is by default: DAEMON_OPTS="-a :6081 -T localhost:6082 -f /etc/varnish/default.vcl -S /etc/varnish/secret -s malloc,256m" Note: find out about these values in our original Varnish post The other file to pay attention to is /etc/varnish/default.vcl, and its backend value. backend default { .host = "127.0.0.1"; .port = "8080"; } The port should be changed to the port of the regular server, and the host should be renamed to the hostname of the server as defined in the server configuration. For my example during local development, it ends up looking like this: backend default { .host = "test.app"; .port = "80"; } For this to work, the /etc/hosts file of the server you’re running your development sites on needs to have the 127.0.0.1 test.app entry, and you need to have the server configuration of this particular site named test.app, like so: server { listen 80; listen 443 ssl; server_name test.app; ... Now, the combination of these files makes sure that when visiting the page, Varnish actually fetches the content from, saves it, and forwards it to the browser. The same applies to live servers, but obviously those will differ in domain names. Deploying for production entails the following: - change port for Nginx / Apache to something esoteric (9123) - change backend port in /etc/varnish/default.vclto that port - change Varnish front end port to 80 Once done, you successfully got a Varnish-powered supercached site going, and you’re ready for those Hacker News / Reddit traffic spikes! Conclusion As you can see, there’s a lot that can be done with Grav (and around it), and we haven’t even looked at fully custom plugins or themes yet – something we’ll do in the near future. With the setup presented in this post, and the plugins from the previous list, your blog is 100% production ready. Have you given Grav a try yet? How do you feel about our suggestions in this post? Would you approach anything differently? Let us know!
https://www.sitepoint.com/6-more-must-do-grav-tweaks-ready-for-hacker-news-traffic/
CC-MAIN-2020-16
en
refinedweb
The Fundamental Theorem of Arithmetic states that every positive integer can be factored into primes in a unique way. But first, what is a prime number? A prime number is a number greater than 1 that has no positive divisiors except 1 and itself e.g. 2 is prime. The primality of a given number can be find out by trial and error. It consists of testing whether is a multiple of any integer between and . def is_prime(n): """returns True if given number is prime""" if n > 1: for i in range(2, int(n**0.5)+1, 1): if n % i == 0: return False else: return True else: return False Some better algorithms exist to test the primality of a number. Coming back to The Fundamental Theorem of Arithmetic, can be written as . There is no other way to factor it. There are infinite prime numbers. One of the earliest proofs was given by Euclid. Suppose there are finitely many primes. Let’s call them . Now, consider the number . is either prime or not. If it’s prime, then it was not in our list. If it’s not prime, then it’s divisible by some prime, . Notice, can’t be because of remainder 1. Thus, was not in our list. Either way, our assumption that there are finite number of primes is wrong. Hence, there are infinite primes. One of the widely used applications of primes is in the public-key cryptography e.g. in the RSA cryptosystem. The RSA algorithm is based on the concept of trapdoor, a one-way function. Its strength lies in the fact that it’s computationally hard (impossible) to factorize large numbers. For now, there doesn’t exists any efficient algorithm for prime factorization. For example, It’s easy for a computer to multiply two large prime numbers. But let’s say you multiply two large prime numbers together to get a resulting number. Now, if you give this new number to a computer and try to factorize, the computer will struggle e.g. to find which two prime nubmers are multiplied together to get 18848997161 is difficult. The thirst to find the order in primes is one of the reasons for its quest. References:
https://kharshit.github.io/blog/2018/01/05/some-prime-thoughts
CC-MAIN-2020-16
en
refinedweb
NAME Tcl_GetOpenFile - Get a standard IO File * handle from a channel. SYNOPSIS #include <tcl.h> int Tcl_GetOpenFile(interp, string, write, checkUsage, filePtr) ARGUMENTS Tcl_Interp *interp (in) Tcl interpreter from which file handle is to be obtained. interp >result will contain an error message. In the current implementation checkUsage is ignored and consistency checks are always performed. KEYWORDS channel, file handle, permissions, pipeline, read, write Table of Contents
http://www.ue.eti.pg.gda.pl/tcl/www/wtcltk/TclTkMan/tcl7.6b1/GetOpnFl.3.html
CC-MAIN-2017-43
en
refinedweb
Click "Launch and Activation Permissions" Edit Default Click OK Close the DCOMCNFG window Step 2: Install the SSL certificate without using IIS 7 The following solution describes how to resolve the Consider granting access rights to the resource to the ASP.NET request identity. My local computer user? –Kate Oct 11 '16 at 11:19 add a comment| Your Answer draft saved draft discarded Sign up or log in Sign up using Google Sign up C# to VB.NET: Wednesday, May 16, 2012 11:14 AM Reply | Quote 0 Sign in to vote I have tried those suggestion but no positive result is there. this contact form What am I supposed to say? Join them; it only takes a minute: Sign up Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED))? Yükleniyor... Çalışıyor... Dealing with "friend" who won't pay after delivery despite signed contracts I lost my equals key. Your Email This email is in use. Ultra Hack Pro HD 64.754 görüntüleme 3:13 Como resolver erro 0x80070005 - ATUALIZADO - Süre: 4:21. Remember accessing from explorer is different from havingprivilegesto access from your source code.Sai Kumar K Wednesday, May 16, 2012 10:53 AM Reply | Quote 0 Sign in to vote This solves the problem on the windows server 2003 and now my web application generating the excel files with large amount of data.For windows server 2008 someone has mentioned to made Reklam Otomatik oynat Otomatik oynatma etkinleştirildiğinde, önerilen bir video otomatik olarak oynatılır. For more information, see Connecting Between Different Operating Systems share|improve this answer answered Mar 18 '14 at 13:15 Zakaria 541511 What kind of user is it? Not the answer you're looking for? Uwp Access Is Denied. (exception From Hresult: 0x80070005 (e_accessdenied)) In it, you'll get: The week's top questions and answers Important community announcements Questions that need answers see an example newsletter By subscribing, you agree to the privacy policy and terms Lütfen daha sonra yeniden deneyin. 1 Oca 2016 tarihinde yayınlandıIf you ever get this error when attempting to deploy a BizTalk solution into BTS admin from Visual Studio, it means that Sharepoint Access Is Denied. (exception From Hresult: 0x80070005 (e_accessdenied)) How to resolve this error. If the SSL certificate is not in available in the bindings list then proceed with the below instructions to set the appropriate permissions. original site Use a separate limited account for the site if you want or enable anonymous access for the site in IIS. Exception Details: System.UnauthorizedAccessException: Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED)) ASP.NET is not authorized to access the requested resource. Hresult 0x80070005 Onenote Exception Details: System.UnauthorizedAccessException: Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED)) ASP.NET is not authorized to access the requested resource. Regards, Edited by Shweta Jain (Lodha) Thursday, May 17, 2012 3:27 AM spell mistake Proposed as answer by Shweta Jain (Lodha) Thursday, May 17, 2012 3:27 AM Thursday, May 17, ASP.NET has a base process identity (typically {MACHINE}\ASPNET on IIS 5 or Network Service on IIS 6) that is used if the application is not impersonating. I have already fixed the problem regarding login into server 2008. The namespace you are connecting to is encrypted, and the user is attempting to connect with an unencrypted connection Give the user access with the WMI Control (make sure they have C# Access Is Denied. (exception From Hresult: 0x80070005 (e_accessdenied)) Thursday, May 28, 2015 2:48 AM Reply | Quote Microsoft is conducting an online survey to understand your opinion of the Msdn Web site. Powershell Access Is Denied. (exception From Hresult: 0x80070005 (e_accessdenied)) So, I have to grant user policy for each web app (full read), now get-spweb works fine. How should I respond to absurd observations from customers during software product demos? You can use WMI Administrator tool to debug the issue, can download from I hope this helps you... Possible Issues The user does not have remote access to the computer through DCOM. Should we eliminate local variables if we can? Wmi Access Is Denied. (exception From Hresult: 0x80070005 (e_accessdenied)) The following table lists the three categories of errors along with issues that might cause the errors and possible solutions. more hot questions question feed lang-cs about us tour help blog chat data legal privacy policy work here advertising info mobile contact us feedback Technology Life / Arts Culture / Recreation How did Adebisi make his hat hanging on his head? navigate here Konuşma metni Etkileşimli konuşma metni yüklenemedi. Are the guns on a fighter jet fixed or can they be aimed? Complete Certificate Request Access Is Denied Exception From Hresult 0x80070005 E_accessdenied Terms of Service Layout: fixed | fluid CodeProject, 503-250 Ferrand Drive Toronto Ontario, M3C 3G8 Canada +1 416-849-8900 x 100 Contact Us1-888-484-2983 UK: +44 203 450 5486 Deutschland: +49 69 3807 Typically, DCOM errors occur when connecting to a remote computer with a different operating system version. Highlight the ASP.NET account, and check the boxes for the desired access. Manoj Kumar 32.785 görüntüleme 53:09 How to Fix Microsoft Visual C++ 2015 Redistributable Setup Failed error 0x80240017 - Süre: 6:15. What is the "crystal ball" in the meteorological station? Clickonce Access Is Denied. (exception From Hresult: 0x80070005 (e_accessdenied)) Bu videoyu Daha Sonra İzle oynatma listesine eklemek için oturum açın Ekle Oynatma listeleri yükleniyor... English (U.S.) Login Remember me Lost password Live Chat by LivePerson News SEARCH Knowledgebase: Comodo Certification Authority Access Denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED)) Cause: This error occurs Linux questions C# questions ASP.NET questions fabric questions C++ questions discussionsforums All Message Boards... Did MS change something? the account i use is in Farm Admin group, but not the account used to install sharepoint. Please refer this link for more info. Yükleniyor... Right-click My Computer-> Properties Under COM Security, click "Edit Limits" for both sections. Circular Array Rotation Did Joseph Smith “translate the Book of Mormon”?
http://juicecoms.com/access-is/c-access-is-denied-exception-from-hresult-0x80070005-e-accessdenied.html
CC-MAIN-2017-43
en
refinedweb
The pre tag displays preformatted text in a fixed-width font. Other than this it's identical to p The pre element displays all white space and line breaks exactly as they appear inside the <PRE> and </PRE> tags. Using this tag, you can insert and reproduce formatted text, preserving its original layout. This tag is frequently used to show code listings, tabulated information, and blocks of text that were created for some text-only form, such as email messages. XML tags inside the pre tag are still interpreted - since the pre is just another type of paragraph, span tags and their varients (b, i, u, a etc.) may be used to format the text. See the p element examples of general layout applicable to all paragraphs, including PRE The text inside the PRE tag will keep the same spacing you can see here. <p>Here is a preformatted block of text.</p> <pre> public class HelloWorld { public static void main(String[] args) { System.out.println("Hello World"); } } </pre>
http://bfo.com/products/report/docs/tags/tags/pre.html
CC-MAIN-2017-43
en
refinedweb
The Cable Guy - December 2000 Quick Look at DNS Namespace Planning In Microsoft Windows NT 4.0 and earlier, server and service location operations were done through the resolution of NetBIOS names using NetBIOS name servers, such as those running Windows Internet Name Service (WINS). In Windows 2000, Active Directory-based server and service location are done by resolving Domain Name System (DNS) names using DNS servers. With Active Directory, a DNS infrastructure is required. For example, to locate a domain controller, a computer running Active Directory queries a DNS server for SRV (service location) resource records that correspond to the Active Directory domain name. The DNS name of a domain controller is returned. The computer then queries a DNS server for A (address) resource records that correspond to the DNS domain name of the domain controller. Without a DNS infrastructure, computers running Active Directory are unable to locate domain controllers and other types of application servers. DNS domains and Active Directory domains use similar structure for different namespaces. Each namespace stores different data and manages different objects. DNS uses zones and resource records, while Active Directory uses domains and domain objects. Therefore, it is critical that the DNS namespace is designed with Active Directory in mind and that the external namespace that exists on the Internet is not in conflict with the organization's internal namespace. Designing a DNS Namespace for Windows 2000 Active Directory The recommended approach to DNS design in an Active Directory environment is to design the Active Directory environment first and then support it with an appropriate DNS structure. In some cases, however, the DNS namespace might already be in place. In this case, the Active Directory environment should be designed independently and then implemented either as a totally separate DNS namespace or as a subdomain of the existing DNS namespace. The Windows 2000 Active Directory Architecture white paper describes the Active Directory namespace, including the forest and tree domain structure, organizational units, the global catalog, trust relationships, and replication. The Windows 2000 Guide to Active Directory Design white paper describes the planning, design, and architectural criteria that network architects and administrators should consider when designing an Active Directory namespace for an organization. The recommendations in this white paper assist the network architect in designing an Active Directory namespace that can withstand company reorganizations without expensive restructuring. When designing the DNS namespace, consider the following: Identify the DNS namespace that you will be using for your domain Identify the name that your organization has registered for use on the Internet (for example, <company>.com). If your company does not have a registered name, but you will be connected to the Internet, it is highly recommended to register a name on the Internet. If you choose not to register a name, make sure that you choose one that is unique. You can both review existing names and register for a name at. Use different internal and external namespaces Internally, you could use <comp>.com (an abbreviation of your externally registered name that represents your internal DNS namespace) or a subdomain of the external name such as corp.<company>.com. The subdomain structure might be useful if you already have an existing external DNS namespace. Different geographic locations (wcoast.corp.<company>.com or ecoast.corp.<company>.com) or departments (sales.corp.<company>.com or research.corp.<company>.com) can then be named as subdomains to reflect your Active Directory domain structure. Separate internal and external names on separate servers External DNS servers should include only those names that you want to be visible to the Internet. Internal DNS servers should include names that are for internal use. You can set your internal DNS servers to forward requests that they cannot resolve to external servers for resolution. Different types of clients have different name resolution requirements. Web proxy clients, for example, do not require external name resolution because the proxy server does this for them. Internal and external namespaces that overlap are not recommended. In most cases, the result of this configuration is that computers are unable to locate resources because they receive incorrect IP addresses from DNS. This is especially problematic when network address translation (NAT) is used and the internal IP address is in a range that is unreachable for external clients. Choose computer names that are supported by DNS It is strongly recommended that you use only characters in your computer names that are part of the Internet standard character set permitted for use in DNS host naming. Allowed characters are defined in RFC 952 as follows: all uppercase letters (A through Z), lowercase letters (a through z), numbers (0 through 9), and the hyphen (-). For organizations using Windows NT 4.0 and NetBIOS names, computer names might need to be revised. NetBIOS names can contain the following special characters which are not allowed in DNS names (as specified in RFC 952): ! @ # $ % ^ & ' ) ( - _ { } ~ . [space]. To assist with the transition from Windows NT 4.0 NetBIOS names to Windows 2000 DNS domain names, the Windows 2000 DNS service includes support for extended ASCII and Unicode characters. However, this additional character support can only be used in a pure Windows 2000 environment. Choose your DNS servers At a minimum, DNS servers in your organization must support SRV resource records (as specified in RFC 2052). While SRV resource records can be manually configured on the appropriate DNS server, it is recommended that your DNS servers also support dynamic registration (as specified in RFC 2136). With dynamic registration support, Active Directory servers and clients can register their DNS records during startup, reducing manual configuration overhead. With Windows 2000, the number of records in the DNS needed to support Active Directory clients and servers increases. Therefore, it is recommended that your DNS servers also support incremental zone transfer (as specified in RFC 1995). An incremental zone transfer reduces network overhead by sending only the DNS records that have changed during zone replicationinstead of the entire zone file. If you decide to use the Windows 2000 DNS Server service (which supports RFCs 2052, 2136, and 1995), you have the option of deciding whether a particular zone is integrated into Active Directory. Normally, DNS zones must be administered on the server on which the master copy of the zone resides physically. However, an Active Directory-integrated zone can be modified from any domain controller using a multi-master replication model. For More Information For more information on Active Directory and DNS namespace design, including detailed examples, consult the following resources: - Active Directory Architecture - Deployment Planning Guide (Chapter 9 - Designing the Active Directory Structure) - Guide to Active Directory Design - Best Practices for Designing the Active Directory Structure - Windows 2000 Domain Name System Overview - Windows 2000 DNS White Paper - Windows 2000 DNS chapter of the Windows 2000 Server Resource Kit - Windows 2000 Server Documentation (Active Directory and Networking\DNS) For a list of all The Cable Guy articles, click here.
https://technet.microsoft.com/en-us/library/bb878135.aspx
CC-MAIN-2017-43
en
refinedweb
. Defines the data provider. The generic interface for providing data to the data loader. Example final String jsonString = "{\"countries\":[{\"Name\":\"Afghanistan\",\"Code\": \"AF\"},{\"Name\":\"Åland Islands\",\"Code\": \"AX\"}]}"; // Data Provider DataProxy<ListLoadConfig, String> dataProxy = new DataProxy<ListLoadConfig, String>() { @Override public void load(ListLoadConfig loadConfig, Callback<String, Throwable> callback) { callback.onSuccess(jsonString); } }; Fetches data using GWT Request Factory. Fetches data from in memory cache. Extending example import java.util.ArrayList; import java.util.List; import com.google.gwt.core.client.Callback; import com.google.gwt.user.client.Timer; import com.sencha.gxt.data.shared.loader.DataProxy; import com.sencha.gxt.data.shared.loader.PagingLoadConfig; import com.sencha.gxt.data.shared.loader.PagingLoadResult; import com.sencha.gxt.data.shared.loader.PagingLoadResultBean; public class MemoryPagingProxy<M> implements DataProxy<PagingLoadConfig, PagingLoadResult<M>> { private List<M> data; private int delay = 200; public MemoryPagingProxy(List<M> data) { this.data = data; } public int getDelay() { return delay; } @Override public void load(final PagingLoadConfig config, final Callback<PagingLoadResult<M>, Throwable> callback) { final ArrayList<M> temp = new ArrayList<M>(); for (M model : data) { temp.add(model); } final ArrayList<M> sublist = new ArrayList<M>(); int start = config.getOffset(); int limit = temp.size(); if (config.getLimit() > 0) { limit = Math.min(start + config.getLimit(), limit); } for (int i = config.getOffset(); i < limit; i++) { sublist.add(temp.get(i)); } Timer t = new Timer() { @Override public void run() { callback.onSuccess(new PagingLoadResultBean<M>(sublist, temp.size(), config.getOffset())); } }; t.schedule(delay); } public void setDelay(int delay) { this.delay = delay; } } Using the extension example MemoryPagingProxy<PostTestDto> memoryProxy = new MemoryPagingProxy<PostTestDto>(TestSampleData.getPosts()); final PagingLoader<PagingLoadConfig, PagingLoadResult<PostTestDto>> gridLoader = new PagingLoader<PagingLoadConfig, PagingLoadResult<PostTestDto>>(memoryProxy); Fetches data using GWT RPC. Example // Data Provider RpcProxy<FilterPagingLoadConfig, PagingLoadResult<Data>> rpcProxy = new RpcProxy<FilterPagingLoadConfig, PagingLoadResult<Data>>() { @Override public void load(FilterPagingLoadConfig loadConfig, AsyncCallback<PagingLoadResult<Data>> callback) { } }; // Paging Loader final PagingLoader<FilterPagingLoadConfig, PagingLoadResult<Data>> remoteLoader = new PagingLoader<FilterPagingLoadConfig, PagingLoadResult<Data>>(rpcProxy) { @Override protected FilterPagingLoadConfig newLoadConfig() { return new FilterPagingLoadConfigBean(); } }; Fetches data from a URL, such as a rest endpoint. DataRecordJsonReader jsonReader = new DataRecordJsonReader(factory, RecordResult.class); String path = "data/data.json"; RequestBuilder builder = new RequestBuilder(RequestBuilder.GET, path); HttpProxy<ListLoadConfig> proxy = new HttpProxy<ListLoadConfig>(builder); final ListLoader<ListLoadConfig, ListLoadResult<Email>> loader = new ListLoader<ListLoadConfig, ListLoadResult<Email>>(proxy, jsonReader); Local storage persistence writer. final Storage storage = Storage.getLocalStorageIfSupported(); final StorageWriteProxy<ForumLoadConfig, String> localWriteProxy = new StorageWriteProxy<ForumLoadConfig, String>(storage); Reads data from a URL inside or outside the domain. String url = ""; ScriptTagProxy<ForumLoadConfig> proxy = new ScriptTagProxy<ForumLoadConfig>(url); Local storage persistence reader. final Storage storage = Storage.getLocalStorageIfSupported(); final StorageReadProxy<ForumLoadConfig> localReadProxy = new StorageReadProxy<ForumLoadConfig>(storage);
http://docs.sencha.com/gxt/4.x/guides/data/DataProxy.html
CC-MAIN-2017-43
en
refinedweb
I have a set of X,Y data points (about 10k) that are easy to plot as a scatter plot but that I would like to represent as a heatmap. I looked through the examples in MatPlotLib and they all seem to already start with heatmap cell values to generate the image. Is there a method that converts a bunch of x,y, all different, to a heatmap (where zones with higher frequency of x,y would be "warmer")? If you don't want hexagons, you can use numpy's histogram2d function: import numpy as np import numpy.random import matplotlib.pyplot as plt # Generate some test data x = np.random.randn(8873) y = np.random.randn(8873) heatmap, xedges, yedges = np.histogram2d(x, y, bins=50) extent = [xedges[0], xedges[-1], yedges[0], yedges[-1]] plt.clf() plt.imshow(heatmap.T, extent=extent, origin='lower') plt.show() This makes a 50x50 heatmap. If you want, say, 512x384, you can put bins=(512, 384) in the call to histogram2d. Example:
https://codedump.io/share/cIUEv59DtGVd/1/generate-a-heatmap-in-matplotlib-using-a-scatter-data-set
CC-MAIN-2017-43
en
refinedweb
Is XMPP the 'Next Big Thing' 162 Open Standard Lover writes "XMPP (eXtensible Messaging and Presence Protocol) has been getting a lot of attention during the last month and it seems that the protocol is finally taking off as a general purpose glue to build distributed web applications. It has been covered that AOL was experimenting with an XMPP gateway for its instant messaging platform. XMPP has been designed since the beginning as an open technology for generalized XML routing. However, the idea of an XMPP application server is taking shape and getting supporters. A recent example shows that ejabberd XMPP server can be used to develop a distributed Twitter-like system." buzzwords are my favorite (Score:5, Funny) Minus two points for not managing to cram the phrases "AJAX" or "Web 2.0" into this writeup. Re:buzzwords are my favorite (Score:4, Insightful) Re:buzzwords are my favorite (Score:5, Funny) Re:buzzwords are my favorite (Score:5, Interesting) Re: (Score:2) Re: (Score:2) AJAX is an ugly hack...[instead], the server can push XML fragments to the client whenever it wants and some client-side JavaScript could then process them into the DOM Umm... Isn't that just a different ugly hack? Browsers force polling. (Score:2) Google's actually come up with a neat hack to deal with this -- leave the connection open for 30 seconds, and if nothing new comes down it, close that connection and open a new one. Technically "polling", but practically just as fast as XMPP. Re: (Score:2) Re: (Score:3, Insightful) Re: (Score:2, Interesting) Remember, YMMV. Re: (Score:3, Insightful) Re: (Score:3, Funny) A twitter-like system could be built on top of xmpp. In much the same way that a gmail-like system can be built on top of SMTP/POP. That doesn't mean that SMTP/POP are web-based. I don't know if a twitter-like [slashdot.org] system would be a wise course of action for an instant messenger system let alone an instant messenger system using XMPP. Do you know what problems could arise from this? Text such as M$, Windoze, Micro$oft, or anything dealing with anti-Microsoft topics would pop-up in your message. This change could occur when you type in MS, Microsoft, Windows, or anything dealing with Microsoft; or just occur spontaneously. ;) Re: (Score:3, Funny) Re:buzzwords are my favorite (Score:4, Informative) Huh? Field test of XMPP based system (Score:4, Informative) Just what we (didn't) need !! (Score:2, Insightful) Is there NOTHING sacred that some lemon won't wrap in XML ??? Oh, no, wait Brilliant !! Re: (Score:2) Re:Just what we (didn't) need !! (Score:4, Informative) Yeah, I know, this is Slashdot, where people like to spew completely uninformed pseudo-opinions, but this one is just too obvious. Well, happy IMing on unencrypted, stone-age, propertiary networks that force-feed you with ads and censor your messages, if that's what you want. Re: (Score:3, Insightful) Well, happy IMing on unencrypted, stone-age, propertiary networks that force-feed you with ads and censor your messages, if that's what you want. XML doesn't solve any of these problems (and they're not all problems.) There's no technical reason that any given messaging service couldn't use SSL, and XMPP is extensible, and an implementation of it can be made proprietary enough to require a client that will force-feed you ads. Any network can censor messages, assuming they can read them. Your post is overrated. Yeah, I know, this is Slashdot, where people like to spew completely uninformed pseudo-opinions, but this one is just too obvious. Oh, sorry, I guess you covered all of that. Re: (Score:2) XMPP is OK but would be better if JSON (Score:2) It is pretty silly that in a a 4-5 word IM message, 75% of the data transferred in an XMPP client is just protocol overhead, and the message is just 25%. If they had used a more lightweight container like JSON for the protocol it would have much less overhead. Frankly, IMO, almost all data transfer protocols would be better suited to the JSON container than XM Re: (Score:2) Re: (Score:2) Re: (Score:2) Re: (Score:2) Re: (Score:2) XMLJPG (Score:2) Re: (Score:2) My first impression based on the tools actually using it (like ejabberd) is more like "IRC wrapped in an XML overcoat where everyone is a lousy sexchat bot". Re: (Score:2) I am a great sexchat bot. Re: (Score:2) Yes, because RFC 822 header fields are the pinnacle of parser research. Have you ever tried to write your own mail client? Have you ever tried to write your own mail server? By comparison, I'm pretty sure I could knock out a minimal compliant XMPP server in an afternoon, and it would support Unicode for free. But anyway, the biggest thing the "X" buys you is a lot of extensions [xmpp.org]. I'd say it's actually delivering on what it promises. Am I too late... (Score:5, Interesting) 'Zemp' would be a nice easy way of saying this. Re:Am I too late... (Score:5, Funny) Re: (Score:2) Re: (Score:2) Re:Am I too late... (Score:5, Informative) A lot of people pronounce it "Jabber". The name "XMPP" arose when they were moving it through the IETF standardisation process. Re: (Score:2) Re:Am I too late... (Score:5, Informative) Re:Am I too late... (Score:5, Insightful) Re: (Score:2) Why is this modded as funny? As a Pidgin user, I'd LOVE to see someone fix the crapstorm that is their poor excuse for a Jabber client. If you've got the ability, do it...it would make a lot of people happy. After all, isn't this what the Open Source ethic is all about? Re:Am I too late... (Score:4, Interesting) If the guy can write an XMPP client, and knows exactly what is wrong with Pidgin's implementation in order to "fix" his client to support it, then he should be more than capable of providing a fix to Pidgin's code, so that he doesn't have to keep fixing his code, and the all of us Pidgin users can benefit as well. Re: (Score:3) What exactly is the problem with Pidgin's XMPP/Jabber support? I use Pidgin for my MSN, AIM, and GTalk accounts.. and all three of them seem to work identically and without issue O_o In case it didn't come across clearly, I actually am interested in knowing what it does so poorly with Jabber, since as an end-user I really can't say that I see what the issue is =( Aikon- Re: (Score:2) I haven't seen the Pidgin code or dealt with the community, so this may very well be off base: perhaps it's darn nigh unfixable, or the authors don't readily accept patches? Re: (Score:2) Re: (Score:2) Re: (Score:2) Hell, I'm just happy that I don't have to track license counts any more (like I did with e/pop Professional). Re: (Score:2) For a GUI client, I like Psi. Right now I am using the xmpp extension for irssi. Re: (Score:2) Re: (Score:2) Care to elaborate? (Score:2) Re: (Score:2) XMPP as a silver bullet? (Score:5, Interesting) Re:XMPP as a silver bullet? (Score:5, Informative) Re: (Score:2) If I encrypt everything in a proprietary method (or with a proprietary key) and layer that into XMPP, you can still be locked in. It's kind of like saying because it's stored in XML it's open... <document> h5847uhlib43o8fvacgos8 5rw4978hefw9348fqw34fg f438gqwoluiaf4687wgoasd </document> There's my open document, so you can read it. (No, I didn't include a DTD, but just Re: (Score:2, Funny) Aha! So the gov't *is* hiding UFO's in secret hangers. And do something about that stuttering problem of yours. Re: (Score:2) However it's true on the intersite scale. All the services use Bantu, which is XMPP. They actually advertise themselves as the IM used by US govt. [bantu.com] Performance (Score:3, Interesting) Re: (Score:3, Informative) Re: (Score:2, Insightful) A brief explanation (Score:4, Informative) If someone connected to a gmail jabber server sends a message to andrewducker@livejournal.com then google chat automatically connects to the livejournal jabber server and passes the message over. You can see how this could be extended to allow federations of application servers to communicate. Heck, you could reimplement email over this without massive difficulty. Re:A brief explanation (Score:4, Interesting) Heck, you could reimplement email over this without massive difficulty. In reality I think it was one of the first things they implemented in Jabber. A lot of clients, especially the hardcore jabber clients, have different messaging modes: one mode composes a single message, another mode opens up a little chatbox. If you examine the former, you'll find that it's exactly like e-mail, although really it's just a jabber message. Everybody ends up using the chatbox because that's what jabber is for, and many popular clients (eg Pidgin) have only that. In terms of server and protocol, in my opinion Jabber is fully able to do e-mail. In fact, I'm sure Jabber servers already have e-mail gateways. You just need a client that operates in a manner that implements e-mail as we are used to; for example, most clients just pop up offline messages as soon as you connect, or mark them on your roster instead of presenting you with a stored list of messages that you can manipulate mailbox style. Re: (Score:3, Interesting) only thing about using xmpp as a mail "replacement", can it do attachments? no, im not talking about file transfers, im talking about attaching the file to the xmpp message and have it be stored on the server if the recipient is offline at the moment. thats the one strong suit of mail vs sms or im right now, that you can fire and forget a file rather then having to watch for a person being available to accept it. still, xmpp will Re: (Score:2, Informative) Actually... Yes you can. There's this newfangled thing called the <data> element. But, it got pushed into a XEP like two months ago, so client support is rather limited, and it's only good for 8k for now... :( If people starts using this much however, I can't see why the XMPP server wouldn't be able to store the file temporarily and then push it once the user is online. But, I think this is better for an xmpp/http hybrid. Re: (Score:2) now to see thunderbird and the rest adopt it as a "mail" protocol of sorts Re: (Score:2) Re: (Score:2) Re: (Score:2) In fact, the only reason I can see for a company to move to XMPP (or to include an XMPP g Re: (Score:2) Client-side IM services achieved "commodity" status some time ago; monetizing them shouldn't be on the agenda, for the most part. However, server technologies & protocol extensions haven't reached that stage. IMHO any company focusing on client lock-in is shooting their own foot. A more reasonable alternative is to monetize server development. Find a niche or market segment that would b WTF? (Score:2, Interesting) What the hell does that mean? I don't know whether to apply the "alphabetsoup" tag or the "stopturningnounsintoverbs" tag. New Here (Score:4, Funny) Could an ejabbered XMMP server really be said to be Twitter-like? I don't think that Twitter-like systems are the way to go here. That's really cool, we could really use a Twitter-like enjabered XMMP server here. It will revolutionise computing! Re: (Score:2) Re: (Score:2) "that ejabberd XMPP server can be used to develop a distributed Twitter-like system." What the hell does that mean? I don't know whether to apply the "alphabetsoup" tag or the "stopturningnounsintoverbs" tag. Where do you see a verb that used to be a noun? ejabberd is an XMPP-server (that apache HTTP server -> that ejabberd XMPP server) "can be used to develop" should be obvious "a distributed": in this case this means that the network does not have a central server; the administration is distributed among the different XMPP-servers "Twitter-like system": a system like twitter (twitter.com afaik). Reading comprehension isn't your strong point, n'est-ce pas? Re: (Score:2) I'll be curious to see if XMPP makes it into the world of intra-application messa Re: (Score:2) "that ejabberd XMPP server can be used to develop a distributed Twitter-like system." You mix twat,tit and throw in some dimwit and you get a twit. Do a while (42) {} on it and here is your twitter. On a more serious note the more obscure parts of the XMPP spec can be read in so many ways that there is always a way to create non-interoperable clients. For example - the thread support which is even in the original RFC is still not implemented in any of the clients (I did a patch for pidgin a while back, it is still sitting in the queue). Same for whiteboarding and many other things. Classi Re: (Score:2) "XMPP has been designed since the beginning as an open technology for generalized XML routing." The best reasoning I have is the use of "has been designed since" Re: (Score:2) So XMPP was designed from the beginning as an open technology for generalized XML routing." Better? Distributed computing and "the cloud" (Score:2) I'm interested in how this protocol can help glue together applications for better/easier/simpler distributed computing With all the cheap servers with multi-cores we have, it seems like we all have the ability today to do distributed computing on our own grid. This site (and corresponding book) Enterprise Integration Patterns [enterprise...tterns.com] was very enlightening to me as I thought more about messaging and less about imperative programming. New technologies like Terracotta [terracotta.org] (for Java) make distribution simple, too. Eve Thanks Google (Score:5, Insightful) Re: (Score:2, Interesting) PFTLOGCWCUWMUA (Score:3, Funny) Ugh! XMPP is a PITA (Score:4, Interesting) XMPP also requires you to keep a fair amount of state information. Stuff I seemingly would think should be kept by the server. I suppose by making the server really dumb (basically a router) you really put the eXtensible in XMPP but at the cost of a more complex client. On its surface XMPP looks great: an open-source IM protocol!! Once you, the newb, get into it it gets really ugly. Then again, maybe I made a poor choice in a python package or I just happened to not find that key page with google that basically explains my problem away (and that's all it is is acclimation, it's not terribly difficult once you "get it"). Not even the wikipedia page [wikipedia.org] explains inner-working details of XMPP. And FWIW, I was *trying* to do what this story was saying XMPP is going to be so great for: server glue for a distributed web-based application. Where I sit now with what [little] I know: I completely disagree until someone wraps it all up into a super-easy library (which shouldn't be too hard). Re:XMPP is a PITA (Score:5, Informative) Re: (Score:3, Informative) The best way, I'm afraid, is to read the RFCs [xmpp.org] (mostly 3920 and 3921. There are updated, clearer drafts, 3920bis and 3921bis, a link away from that page) and XEPs [xmpp.org] (XMPP Extension Proposal). There's also a book [oreilly.com], but I heard that it's a bit outd Re: (Score:2) What the hell? [jajcus.net] How do you get javascript from that?!?!?! As for this part: I don't need the library to hold my hand and explain XML, the concept of "online" vs. "away", etc. I want a page that says you have to: connect, authenticate, pull down a roster, send presence notifications, and then send messa Re: (Score:2) PyXMPP -- Python Jabber/XMPP implementation How do you get javascript from that?!?!?! Woops, you're right. I misread /.'s domain shortcut, jajcus, for JSJaC [in-berlin.de]. Regarding doc, I'm didn't mean that you needed to read all the basics down to XML Namespaces or whatever, but I was surprised by your frustration with interacting with XMPP, and interpreted it as a lack of doc for semi-advanced topics. I couldn't believe that PyXMPP wouldn't provide even a basic tutorial on getting online and sending a message. Indeed, I haven't found anything after some quick googling. As I said, there's a host of libra Re: (Score:3, Informative) If you are enough of a programmer to deal with Jabber, which means being comfortable with XML, this is by far the easiest bit of working with Jabber. All the tricky bits like connecting and stuff are the harder bits worth writing a library for. Look for a library that handles: Re: (Score:2) The little bits that I found were either sparse, incomplete, not authorative, outdated or wrong. Often it was *all* of that at the same time. Even most of the sample implementations and libraries were outright broken. XMPP sounds like a great idea in theory and the existence of fairly mature implementations (in erlang FWIW) suggest that *someone* must know how to put t Re: (Score:2) Learning to use the libraries, then, requires reading the protocol specificat Re: (Score:2) As far as implementations, there are 3 major Open Source servers: ejabberd (Erlang), OpenFire (aka WildFire, written in Java) and jabberd2 (C), not to mention djabberd, LiveJournal's Perl-based jabber server framework. Re: (Score:3, Informative) We use jabber.py [sourceforge.net]. It hasn't changed in years, but our needs are pretty simple and it meets all of them easily. For example Honestly, I don't know how you could make that a whole lot simpler. Re: (Score:3, Interesting) Yeah, that's brilliantly simple. Except when you have no clue what you're doing. Look at the documentation provided by jabberpy. It's computer-generated pydoc with pseudo code on making a simple client. Did they have time to write an entire jabber library but couldn't taken the 45 seconds you did to write actual code instead of pseudo code? Now when you take your bare example and try to receive messages then your simplicity isn't there any Re: (Score:2) What I was addressing was that the fault is in the library (or libraries) you've tried, not that writing a client is inherently difficult. XMPP isn't a big deal, except on a phone (Score:2) Re: (Score:2) Re: (Score:2) I actually have a smartphone with a Jabber client, and since I primarily use IRC, I wrote some scripts to glue messaging between xmpp-irssi and irc (mostly a gateway between the two so that I could bridge my IRC session to my phone.) The dr The 'X' means Extensible (Score:2) One XMPP decentralized online social networking ?! (Score:2) [1] [xmpp.org] Anything can be the next big thing (Score:2) 1) can overcome real difficulties (Something which overcomes the design principle that everything should be "downwards compatible" to be transported over web servers/proxies/http or funny extensions/hacks of these, overcomes a real problem). 2) Is supported by google has chances to be a big thing. Let's hope not (Score:2, Interesting) For one thing, it is an example of how NOT to use namespaces in XML. Many elements are needlessly separated out, causing a lot of confusion and problems for simple xml parsers. Namespaces do not solve the problem of name conflicts, as the xmpp site still has a registry of namespace names. Separating out extensions - maybe, but the whole point of namespaces is to avoid conflicts when two _disjoint_ e Re: (Score:2) Re: (Score:2)
https://developers.slashdot.org/story/08/02/04/1320210/is-xmpp-the-next-big-thing
CC-MAIN-2017-43
en
refinedweb
.'" Unix (Score:5, Insightful) Can you imagine if Bell Labs had sued for control of the Unix APIs? We'd never have GNU, Linux, or many other projects that rely on those. It would be a different world. Re: (Score:2) But just to play Devil's Advocate here I'd say there is a downside in that no corp that doesn't expressly make their living using the GPL "Blessed Three" which is 1.-Selling Services/Support, 2.- Selling Hardware or 3.- The tin cup will touch a GPLed company with a 50 foot pole because this ruling as far as the corps will be concerned makes any and all code of that company public domain and thus worthless if you don't use the blessed three. Now some may consider that a good thing, after all Red hat has been Re: (Score:2) WTF does public domain have to do with the GPL? Re:Unix (Score:5, Interesting) We just had a test of this on a major GPL company. Trolltech was sold to Nokia for $153m. Their product was GPL / commercial and profitable, though $150m was grossly overpaying. Nokia LGPLed it which killed the 2 distribution model and thus Trolltech's way to make money on software. When Nokia sold Trolltech to Digia I think it was about $5m total. Re:Unix (Score:4, Interesting) Even though we rarely agree i have to give you credit is that is a GREAT example of what happens when a company that doesn't make their living using the GPL model tries to buy a GPL company, it ends up in a mess. And I have taken shit over the years for pointing out that the GPL works best (and I would argue ONLY) with what I call the blessed three...what is wrong with that? Red Hat has made a billion dollar business out of the blessed three so it obviously works, it simply doesn't work with all kinds of software. for example I'd have a hard time seeing how you could make a profitable business out of desktops or video games using the blessed three model, which is probably why we have seen no serious competition from GPL software on those fronts. It simply doesn't fit into the methods of making money with GPLed software. Personally i think in the long run this may turn out to be a good thing, companies that get bought for insane amounts of money usually end up getting turned into a mess by the buyer if they can't see a quick enough ROI and as companies like Red Hat have shown you have to be in it for the long haul with the GPL, you can't just flip companies for quick cash it just doesn't work that way. So maybe this will bring some sanity into the market and the only ones that will buy GPL companies will be those already making money using the GPL that will know how to treat the purchases right, not bring a mess of uncertainty like Oracle did with Sun. Re: (Score:3) If it is any consolation Oracle has had problems with companies like Peoplesoft as well. They don't do well with merges it ain't just the GPL. If I had to guess I'd assume that primarily Oracle doesn't care about making the money directly. I think they were buying Sun's customer base a huge percentage of whom were Sun/Oracle so that they could switch them gradually to x86/Oracle rather than potentially losing them to x86/Postgres or AIX/DB2. I think the complexity of Sun caught them off guard. But no qu Re: (Score:2, Interesting) They effectively did, by suing BSDI at the time and by their descendants suing Linux. They even claimed certain things in K&R's book on C programming were "AT&T Trade Secrets" and "encumbered". Indeed, that was a major thing with AT&T at the time. If you knew how to program in C, you were "encumbered" by AT&T trade secrets and they (sometimes) would try to claim anything you did or developed as a result really belonged to them. That was a common attitude at the time, which is why compilers of Re: (Score:2) Re:Unix (Score:5, Funny) While I agree with the ruling, Oracle didn't sue for control of Android. No, their motive was simply the same as every lawyer in the copyright business: "All your moneys are belong to us". Re:Unix (Score:5, Interesting) One note on this idea that lawyers are about taking everyone's money - it is akin to saying that software developers are out to take money from anyone who is a client of their development skills. Lawyers are proxies - they themselves don't do anything at all. They are like paid soldiers on the legal battlefield. Lawyers don't have standing to sue anyone themselves, nor can they bring suits without a client. The client is the one who is suing, and the client is the one who has a claim. Lawyers can be paid hourly, or flat fee, or contingency percentage. In other words, if the client is asking the lawyer to bring suit, and will pay 10% of the damages in fees, then the client has agreed to that. The real question is: why are you so indignant that lawyers get paid to represent clients? Do you hate the adversarial court system? Then legislate to change it. Do you hate lawsuits? Them legislate change to how lawsuits are brought. The idea that lawyers get PAID (heaven forbid) to represent someone's interests should not be a shock to you. We pay for all kinds of services from waiters to janitors to tax accountants to represent our interests, but if it is a legal representative, OH NOES! I have family members in the legal profession, and they are good people - I get very tired of hearing about how evil lawyers are. If anyone is evil it is the CLIENTS. Re:Unix (Score:5, Insightful) There are too many lawyers that are too willing to say or do ANYthing for money. At least whores have limits. That's not to say all lawyers are that way (I know a few who I believe to be good people doing good things), but too many are. As officers of the court, it is their duty to keep crap out of the courtroom. As for the rest, nobody ever lost their house because someone hired a waiter but they couldn't afford one so they got their own sandwich. Lawyers are hardly the only problem, but they (collectively) have the power to stop it and don't. They are also the public face of the very troubles system. Re:Unix (Score:4, Insightful) Its not the getting paid. Its the creating ways of getting paid by raping and pillaging society. Patent Trolls, Divorce Ninjas, Ambulance Chasers, Corporate Mercenaries, Political Lobbyists, and a whole zoo of lower life form crawling a full kilometer below the most disgusting social muck, all in the name of wealth and power. I'd take the dignity and straight up honesty of a hard working whore over such human toxic waste any day of the week and twice on Sunday. I don't have a problem with the majority of lawyers who are honest, hard working people many who believe in what they do. I have a problem with people whose morality is tied to personal expedience, and whose personal interests transcend any and all moral value. We hung people at Nuremberg, for committing atrocities, but at least they were in fact following orders or following some ideology albeit abhorrent. These people commit atrocities, sell future generations down the stream, gut civil rights, sell their soul and intellect to the highest bidder and remake our system of jurisprudence into a perpetual fart joke (an endless stinking noise we can all share in.) I would gladly double all their wages if they would just grow a soul. Re: (Score:2) Have you been killed dead by pencil sharpeners? If so, please dial our number immediately, you may be entitled to reasonable compensation for your death. We here at Dewey, Cheatem, and Howe are trained in keeping your interest and payout for the pain and agony your death has caused. Pencil sharpeners have been shown to be virtual harbingers of death, cruelly subjecting innocent people to the business end of sharpened pencils. The FDA has issued a ruling requiring pencil sharpener manufacturers to fund a fun Lawyers (Score:4, Insightful) Lawyers are proxies - they themselves don't do anything at all. But they do. They perpetuate a system where you *have* to use an *expensive* lawyer in order to get justice or to protect yourself from legal attacks (which are usually more harmful than physical ones). They are like paid soldiers on the legal battlefield. No. Lawyers are akin to mafiosi operating a protection racket. Lawyers don't have standing to sue anyone themselves, nor can they bring suits without a client. The client is the one who is suing, and the client is the one who has a claim. Many countries have socialized medicine. Most countries have a socialized education system. As long as you must pay, often a sum that will bankrupt the average person, to defend yourself in court, justice is only for the rich. (Preemptive note: the "public defender" option is a fig leaf, it does not work, on purpose). Lawyers can be paid hourly, or flat fee, or contingency percentage. Guido can break your kneecaps, burn your house or rape your sister. I guess it's OK, as long as you have a choice. The real question is: why are you so indignant that lawyers get paid to represent clients? As a Canadian, I know that my basic health-care does not depend on the thickness of my wallet, and yet, doctors here still get paid. Plus, I do have an option to use a private clinic if I want to. I trust you're intelligent enough to compare and contrast. Do you hate the adversarial court system? Then legislate to change it. Legislators are lawyers. Good luck making them legislate against their own interests. Campaign contributors and lobbyists are big businesses, who want to have an unfair legal advantage against those less wealthy. Good luck making legislators legislate against those who pay them. Do you hate lawsuits? Them legislate change to how lawsuits are brought. See above. Similarly: Do you hate the mob? Then change how it operates. The idea that lawyers get PAID (heaven forbid) to represent someone's interests should not be a shock to you. We pay for all kinds of services from waiters(1) to janitors(2) to tax accountants(3) to represent our interests 1) I face no adverse consequences for choosing not to go to a restaurant. Everybody can cook a passable meal. 2) I am not forced to use the services of a janitor. The janitor union does not try to make maintenance as difficult and incomprehensible as possible to lay people. 3) I pay $20/year for a piece of software to do my taxes for me. I could use a free one which is no less good, but I find the paid version more convenient. I have family members in the legal profession So you are biased. and they are good people In your opinion. I am sure that many family members of the RIAA/MPAA/BSA feel the same (organizations chosen to avoid Godwining this thread). I get very tired of hearing about how evil lawyers are. The truth hurts. Wrong in quite a few ways. (Score:5, Insightful) 1. You have to obey the rules to get the Java license, but your compiler, if it isn't being called java, doesn't have to obey them. dalvik. 2. The license doesn't require you implement at least one java standard. You have to implement a minimal functionality and can place your own in a different namespace. But you don't have to implement at least one java standard. But see #1 as to why this doesn't apply 3. dalvik was written because Google didn't want to implement java to their license, not because they couldn't. Your assertion of google's jerkness is predicated on incorrect assertions. Re: (Score:2) Consider the many security problems Java has had wherever a browser plugin exposes it to the real world. Now consider how much more exposed an app on a phone can be. Then you'll see why Google needed more freedom to maneuver than Oracle was willing to give them. Re: (Score:2) That's just dumb. The security problem is that the sandbox has too large of an attack surface. If you're using the OS to contain untrusted code, rather than the sandbox, Java is just as secure or insecure as C or Perl. Re: (Score:2) Who said anything about the OS? Java is supposed to support a sandbox well, but in practice it didn't work out that way. To have a prayer of making it work, Google needed to be free to modify the spec in a few critical places. Re: (Score:3) Or don't use the sandbox. It's an added feature that most languages do not have. Re: (Score:2) if your system has any local user to root exploit then exploiting it from a dalvik application is trivial. there's no separate permission for running local native code(as the sandboxed user created for the application). I think you would be better off comparing dalvik to the numerous j2me implementations, which are often more secure and more transparent about which data the application is accessing - actually to the point that on most phones several API's are totally unusable because the user is bombarded wi Re: (Score:2) Consider how much worse it would be if it was vanilla Java. But most of that malware is of the Trojan type. It claims to be one thing, but does another with the unwitting permission of the user. That is quite different from circumventing a security mechanism to do something that was forbidden. Re: (Score:3, Informative) I wish people would stop thinking this is about Java. Its about Java Mobile Edition. Now Oracle (or Sun, whatever) released Java with a permissive licence that said you can use it pretty much freely, but they kept the Java ME version to themselves, if you wanted the phone edition, then you had to buy a licence form Oracle. Google didn't feel they wanted to give Oracle a percentage of each Android sale so made their own very-compatible version. Much as I dislike Oracle, I don't feel Google played fair on this, Re:Wrong in quite a few ways. (Score:5, Insightful) How exactly did any technology got stolen? 1) Dalvik is completely different from the usual Java VMs (Dalvik is register-based, while Java VMs are stack-based). That's how they got away with a VM that interprets (modified) Java bytecode without infringing any patents related to Java (as was confirmed by the "Oracle vs Google" case). 2) Android uses the exact same interfaces than Java, but that has been standard practice for decades (Unix, for example). No code between Android and Sun/Oracle's Java is the same, except for the stuff that must be the same to implement the same interfaces. The "Oracle vs Google" confirmed that's OK (as the summary says). In the end, what happened is that Google didn't want to pay Sun/Oracle for a license to use Java mobile, so they implemented their own Java-compatible system (or rather, bought it from other people). That's how technology evolves. As someone else said here a few posts back: imagine if anyone wanting to do a Unix-like OS had to pay a license to someone. Where would we be now? Re: (Score:3) AFAIK, Dalvik was created before Google purchased Android. And the developers of Android developed Dalvik because Sun had screwed them over before when they were using the JVM on their previous projects. Re: (Score:2) I've built java apps without either. Not sure exactly why you're having trouble with it.... Re: (Score:3) Let me quote RMS on that one Re:Wrong in quite a few ways. (Score:5, Informative) Stallman has defended right to fork long after XEmacs. At the time what he said was: Re:Wrong in quite a few ways. (Score:4, Insightful) Implementing your own system that meets your needs is not being a jerk. Asserting rights that you do not have is jerky behavior. Oracle is being the jerk in this instance. Re: (Score:2) Re: (Score:2) Re: (Score:2) Yes, it is about being a jerk. Sun was the jerk because they kept misrepresenting what Java was and what they were going to do with it in order to get industry support. They pulled out of standardization efforts and bullied other companies with outrageous legal threats, all the while running Java into the ground technically and having an army of contributors fix their problems for them under exploitative licensing terms. They had everybody by their balls. Android Inc decided to implement their own version of Re: (Score:2) I'm not being dense I don't think Google was being a jerk. Google was doing what they thought was best. Google doesn't work for Sun. Just because Sun wants something doesn't mean Google is being for not doing it. I'm sure Microsoft wasn't fond of Open Office, that doesn't mean that Sun was being a jerk for not canceling the project because Sun objected. I'm hard pressed to see what the difference is between what Google did when they created Davlik and what Sun did when they made Java from Oak. Re: (Score:3) Uh, sun praised google for the creation of android. You know that, right? [theblitzbit.com] . Anything following Oracle's acquisition at that point is entirely moot. Note the date, as well. You are incredibly biased and either accidentally or intentionally missing both a: facts and b: cognitive arguments being made by everyone else. Were you a sun developer or something? Re:Wrong in quite a few ways. (Score:4, Informative) Re: (Score:2) Re:Wrong in quite a few ways. (Score:4, Interesting) If you'd done any research at all, you'd find that Google refers to it as Java all over the place. The word "java" is all over the place in the name of the methods (e.g. java.lang.Number). Thus the "Java all over the place" doesn't play any role. Dalvik is the VM, not the language. I haven't followed the suit very closely, but in previous legal cases, Sun/Oracle not once tried to blur the line between "Java as implementation" (aka JVM and Java runtime library) and "Java as API or language." Do not buy into it: there are two different things commonly referred to as Java: Java as language and APIs and Java as implementation of the language and APIs. Implementation is copyrightable - language and APIs are not. Re:Wrong in quite a few ways. (Score:5, Informative) You're getting your facts wrong. Sun *approved* of Google's efforts, publicly and officially, in the forum of their CEO's blog. Search (e.g., Groklaw [groklaw.net]) [groklaw.net] for more factual background and generally reasoned commentary on the Oracle suit. Larry Re: (Score:2) I'll check out the info, thanks. Re: (Score:2) The jerkiness of Google is they went against the direct wishes of the creator of the project. Sun wanted to make sure that Java was compatible with Java anywhere, which is why they sued Microsoft for adding incompatibilities. Sun decided they wanted to make Java open source, but took measures to make sure any new implementation would be compatible. They did whatever they could to make sure it would be. Then Google decided to make an incompatible version, apparently to avoid the J2ME license issues, but for whatever reason, they made it incompatible. Not cool. The creator of the Mp3 player probably never intended other companies like apple to add a bunch of DRM and make their Media Player incompatible with everything else. Re: (Score:2) you're quite wrong (Score:5, Informative) The creator of the project (Sun) had promised to make Java an open ISO and ECMA standard. That's why people initially adopted it. After several years, all of that turned out to have been a lie. Just because Sun decides they own or control something doesn't mean they do. Google didn't decide to make an incompatible version, Android Inc. did. Android Inc did this before Sun released Java under an open source license. Android Inc decided to do this because Sun had already screwed them once before on J2ME Making Android compatible with J2ME would have made no sense; J2ME was a lousy design. The only thing that's been "not cool" is Sun's long string of lies, their technical ineptness and mismanagement, and Oracle's attempt to establish API copyrights. Everybody else is just trying to dig out from under the consequences of their mess, deceptions and trolling. Re:Unix (Score:5, Insightful) Google couldn't get JSE to work on a phone, but could have implemented JME with little effort. Not true. Google didn't implement JSE because many of the libraries in that standard don't make sense on a touch screen phone. They cut out a bunch of libraries and they changed the I/O library. JME has different rules than JSE. Oracle was protecting its mobile business and wanted a cut of app revenue. Re: (Score:2) License for what? What exactly do you think Sun/Oracle owns that Android should have licensed? That requires them to pay money to Oracle. Furthermore, the original promise of Java was that it was going to be an open standard; Sun kept reneging on that promise. Re:Unix (Score:5, Interesting) They wanted control of what Android has become. They would have had it: Google tried to negotiate terms that would have given them that. But they wouldn't grant the licence necessary to let Android become what it has become. So Google had to do something else. It worked out for us. Android would not have been accepted as fast, nor progressed as fast, nor been as lightweight, if Sun had taken that deal. We wouldn't have as many of these amazing new things. Personally after having followed the case and read through what Oracle has claimed here I am unwilling to use anything from their company ever again - no matter how indirectly derived or loosely controlled. It truly is despicable. Re: (Score:2) Re: (Score:3, Insightful) I am unwilling to use anything from their company ever again - no matter how indirectly derived or loosely controlled. It truly is despicable. - I came to that realisation probably in 2007-8, when I saw the proliferation of their drones disguised as 'contractor architects', whose only real mission was to push Oracle solutions into every single aspect of every business they managed to stick their tentacles into. It's too bad Oracle bought Sun, it should have been Google or IBM. The fact that Oracle is the owner of Java TM and the reference implementation of JVM is extremely unfortunate (and it's part of the reason there is so much misinformation o Re:Unix (Score:4, Informative) Apropos misinformation. The reference implementation of Java 7 is now OpenJDK: [oracle.com] OpenJDK is under the GPLv2. So I don't know how Orcale "own[s]" the reference implementation of the JVM. Re:Unix (Score:5, Informative) I don't care if Oracle released it under the BSD license. They claimed in this suit to have copyrights on max() and min() as if they invented that shit. It took a judge who was a programmer also to call them out on it or that stupid shit would have gone to a jury ignorant of the technology history. "I could implement that in 15 minutes" or some such he said. They claim to have various patents to prevent all of Java, Unix, Linux, Windows and every other thing involving technology everywhere in this universe and any alternates that subordinate all of the open source licenses they grant. A company like that, their output is toxic. The safest course is not to deal with them or anything claiming to be derived from them at all. That means nothing from Solaris (not even ZFS or its derivatives), no MySQL (forks should be safe for now), No BTRFS, no Java - or anything even in the most remote sense related to Java. If they would sue over the API, what wouldn't they sue over? It's a shame as Oracle has bought up some of the greatest stuff in IT - but perhaps that's the point. Uncle Larry isn't and has never been in the giving stuff away business and has no respect for the folks who are. He's found great success in the selling stuff business so when he finds givers in dire straits he takes their scalps and then scalps their customers too. I have no interest in buying him another island. I've got useful stuff to do. Re: (Score:2) Re: (Score:2) Re: (Score:2, Informative) You mean, can you imagine of Bell Lab's hadn't sued for controll of UNIX. Bell did (well, it's parent, AT&T did). No, Bell Labs didn't sue for control of UNIX. BSD was sued by USL (UNIX Systems Laboratory, which at that time was a spinoff from AT&T) because there was UNIX code in BSD UNIX. It turned out there was more BSD code in USL UNIX so BSD countersued and the matter was settled out of court. Although that was a setback to BSD UNIX, it's questionable if that was Linux's most significant advantage. Re: (Score:2, Insightful) According to Linus Torvalds, it is the whole reason Linux even exists. Re:Unix (Score:4, Interesting). Re: (Score:3). That like your opinion, man. There's a million different ways to argue this, from "Stallman poisioned the well by using propaganda techniques to make people into GPL zealots" to "The Linux community had a structure that was better able to scale development and evangelization". The truth is that there were many different causes, and we'll almost certainly never know how important each was, and how much of this was basically random chance. I can counterfact some of your hypotheses above, in case you're interes Re: (Score:3) Well written counter argument. "Stallman poisioned the well by using propaganda techniques to make people into GPL zealots That's very different than the lawsuit. And I think no question that Stallman, had a huge impact on making people believe in the GPL and changing people's mind. But if you look at the LAMP stack: GPL: Linux, MySQL BSD/MIT style: Apache, Perl So at least by the early 1990s were it the case that the BSD/MIT licenses were far superior there were ways people could have seen it. Looking back Reading between the lines (Score:5, Insightful) Re: (Score:2, Insightful) Because it's very difficult to legally distinguish between a troll and a tiny operation with genuine innovation that has gotten screwed over by a large corporation with lots of lawyers, especially for the non-expert patent officer whose job it is to grant the patent. Does it have a product? (Score:3) it's very difficult to legally distinguish between a troll and a tiny operation with genuine innovation Does it develop and license significant know-how related to its patent portfolio? If so, it's genuine innovation (cf. ARM). Re: (Score:2) Does it develop and license significant know-how related to its patent portfolio? If so, it's genuine innovation (cf. ARM). That's a fairly good negative proof, but the absence is hardly a good positive proof. After all, most companies are in the business of using their own innovations not licensing them to third parties. Nor do small companies have a patent portfolio, they may have patented one or a few key innovations that their business model revolves around that would be their unique selling point. And demanding it be in actual use it is a lot harder for small innovators that are looking for funding/production/partners/custo Re: (Score:2) How about "No, it doesn't, because before they finished development MegaResourcedCorp had been selling their unlicensed product for 6 months, and had been given other patents that prevent anyone (including the original inventor) from producing a competing product." Re: (Score:3) Their are penalties for frivolous lawsuits. The problem is there is a huge range between bad lawsuits and frivolous. The bar is high for frivolous. As an American I'd like to see more suits have to pass a quick summary judgement on viability. And much stronger restrictions on changing the initial fillings. Google should have bought Sun (Score:5, Interesting) Google worst decision was to let Oracle buy and cannibalize Sun. It would have saved us and them from all these nonsense. Also Google's philosophy is so much closer to Sun's: great engineering and giving back to open source. The only thing Google is different than Sun is that they know how to profit from their products. Heck, if they didn't want to spend all the money on their own they could lead a group of companies to buy out the IP of Sun. It's really a pity that Oracle got a chance to buy Sun. I couldn't have imagined a worst end for such a great company. Re:Google should have bought Sun (Score:5, Insightful). I expect Ellison to join Ballmer in the stupid executive's retirement home. Both have fucked up hugely. Re: (Score:2) Sure wish I could fuck up as rich^H^H^H^Hbadly as them. Re: (Score:3) Sure wish I could fuck up as rich^H^H^H^Hbadly as them. Royalties from legacy products. Innovation? Oracle & Microsoft are in the same boat, scrambling for a life preserver. People aren't going to keep paying for Oracle and Windows if they're replaced by something that everyone else adopts (like they did Oracle and Windows...) Re: (Score:2) What is competitive with Oracle for large relational databases other than DB2? Oracle financials, Peoplesoft, Siebel, JD Edwards... I think they are fine for now. Re: (Score:3) PostgresSQL. If you're using the non-standard PL/SQL stuff in Oracle you're never getting out though. Re: (Score:2) Postgres is a comparable to about Oracle 8 in terms of features, there are still very good reasons to use Oracle. Besides my point was that even if the database business does die they have many many more products now. Re: (Score:2) From what I've seen, the rest of their products are quite poor. I always joked that they should stick to databases, but even that has significant competition now. Re: (Score:2) No. There is nothing like Peoplesoft or Siebel. Oracle financials is excellent. I would never describe those as poor. Re: (Score:2) Sure wish I could fuck up as rich^H^H^H^Hbadly as them. Royalties from legacy products. Innovation? Oracle & Microsoft are in the same boat, scrambling for a life preserver. Yeah, maybe Oracle and MS are scrambling for a life preserver, but Ellison and Ballmer certainly aren't. Sure, their companies woes might mean that there will be an upper limit on how many Gulfstreams they can buy, but I'm sure that 99.9% of Americans wouldn't mind the difficulties of going into the "stupid executive's retirement home." Re: (Score:2). Not to mention with this case they pretty much settled it if Google starts going after desktop java with an enhanced Dalvik for laptop/desktop replacements. What would be nice though is if the Linux devs could get their hands on a GPL-compatible version of Solaris so they could integrate ZFS and some various other goodies, but I suspect you'd have to pay dearly for that. Re: (Score:2) ZFS is open source. Turns out it sucks on inexpensive (x86 quality) hardware, in ways that are unfixable even by smart people. Apple lost years proving that. But [zfsonlinux.org] But Oracle's BTRFS plays the same role and is even better. Re: (Score:2) The suposed problem you are referring to (ZFS reliability on cheap USB hardware which ignores cache flushes) was in fact well known, and easily fixed. It just took a long time, as no one sane would run ZFS on USB hardware to start with. All Apple proved was that their engineers had a very shallow understanding of ZFS. Re: (Score:2) All Apple proved was that their engineers had a very shallow understanding of ZFS. I wouldn't say that - they have an independent company now continuing the project and many people like it. Steve Jobs killed the project because Jonathan Schwartz blabbed to the press that Apple was going to embrace it, and nobody punks Steve Jobs, or ELSE! Re: (Score:2) Apple doesn't have a filesystem strategy today. They have had this problem since OSX 10.5. Steve Jobs might have an ego about getting punked, but he wasn't crazy. Re: (Score:2) Apple supposedly had problems with their internal drives, where it mattered. As for shallow understanding of ZFS they hired several of the world's leading experts on it for their port. Re: (Score:2) But Oracle's BTRFS plays the same role and is even better. Wow, you haven't deployed either of them at scale, have you? There's still frequent data corruption going in btrfs land and you'll be in for major pain if you try to host a VM storage file on it. They did recently fix the abysmal fsync performance, which is good, and a cursory fsck is available now. The kernel folks are still figuring out how to do cache devices in the dm stack and send/receive are still experimental. The benefits to btrfs being what Re: (Score:2) Wow, you haven't deployed either of them at scale, have you? Nope, parroting other's opinions. Since Oracle is still selling ZFS, it would be hard to see why they would put significant resources into creating a free alternative at this point. AFAIK BTRFS is better for Oracle and more feature rich. Ultimately it is not to Oracle's advantage that Z-OS, I-OS (the IBM one) and AIX are the OSes for very large storage. That's how they could lose to DB2 / Netezza. Re: (Score:2) Re: (Score:2) Maybe if Sun and Motorola had merged, and made a good Solaris/Java-based phone/tablet platform? Re: (Score:2) Of course they could spend their money on any number of products that they want to keep out of competitors hands. Now, I'm the farthest thing you can be from a businessperson, but even I find it hard to imaging that it's a good practice to do this. Wasting tons of money on something just so the other guys don't get it just seems... wasteful. Of course, even Google has succumbed to wasting large sums of cash on patents and the like, so perhaps they're not as smart as we hoped.... Touch down for common sense! (Score:2) And, while the Android method and class names could have been different from the names of their counterparts in Java and still have worked, copyright protection never extends to names or short phrases as a matter of law. That's it. Who disagrees? Re:Touch down for common sense! (Score:5, Informative) "Droid" is a valid Trademark vs. "GetCurrentTime()" being an invalid Copyright. Re: (Score:2, Informative) Trademark's not the same as copyright derp. Try again. Ann Droid vs technical documentation (Score:4, Insightful) Oracle kicks off its legal arguments with the tale of a mythical writer, Ann Droid who copies the titles and some sentences from a Harry Potter book and publishes her book. Oracle then argues that we would not accept that. BUT, API's should rather be compared with writing an anatomy book. We all would have chapters like 'Introduction, digestive tract, neural system etc.'. So if the argument of Oracle hold, no_one_can write another anatomy book (or most technical books). Dictionary (Score:3) Re: (Score:2) A glimpse into the world as seen by Larry Ellison If I were Daniel Webster, I'd be kicking myself for not trying to copyright English.... Re: (Score:2) The BBC [wikia.com] should sue Oracle. Looks like a win for progress (Score:2) Re: (Score:2) Does this mean... (Score:2) Does this mean that mono is protected from Microsoft's .net in the same way? Not trolling, just seriously asking. Re: (Score:2) doubt it, Microsoft never asserted copyright claims to the .NET API words, they do however claim to have a shed-load of patents that they won't use against mono should be become successful, honest. Patent law being a lot more screwed than copyright law is, I wouldn't count on it. Trademark (Score:2) Why didn't Oracle sue over the Trademark instead? It worked for Sun against Microsoft. Re: (Score:2) Why didn't Oracle sue over the Trademark instead? It worked for Sun against Microsoft. Google was careful to not call it Java. oracle can't see the future (Score:3) Re: (Score:3) Don't despair. All the lawyers got paid a few metric assloads of loot. That's trickle-down economics in action. Working for the people. Re: (Score:2) Perhaps to the standards of ordinary people. However the real metric assloads of loot come from settlements in class action suits. Re:Oracle is based on copyright violations then... (Score:4, Informative) Just in case anyone believes this: Oracle founded 1977 DB2 first version 1983 Re: (Score:2) R was a research application it wasn't commercial. Oracle was very very different. There wasn't any code in common.
https://developers.slashdot.org/story/13/03/31/162230/oracle-clings-to-java-api-copyrights
CC-MAIN-2017-43
en
refinedweb
CodePlexProject Hosting for Open Source Software Hi Team, As we do with MVC insert partial view on some part of page by, <% Html.RenderAction("GetDataList", "Home", new { area = "TestModule" }); %> Where in controller action I return object to view (partial view) I done this in orchard but came with an error :- “Multiple types were found that match the controller named 'Home'. This can happen if the route that services this request ('TestModule/{controller}/{action}/{id}') does not specify namespaces to search for a controller that matches the request. If this is the case, register this route by calling an overload of the 'MapRoute' method that takes a 'namespaces' parameter. “ What I should do in Orchard Hope you got my question Thanks, Rajan J Dmello You might need to create a Routes.cs file for your module, I think I had this problem so I added a Routes.cs file (I will double check tonight/tomorrow). Orchard.Blogs\Routes.cs is an example Thanks man I want this only but hard to get even i am trying with same it's good for me if i will get proper steps I created to track this. Are you sure you want to delete this post? You will not be able to recover it later. Are you sure you want to delete this thread? You will not be able to recover it later.
http://orchard.codeplex.com/discussions/229245
CC-MAIN-2017-43
en
refinedweb
- Chapter Doing Business with SkatesTown When Al Rosen of Silver Bullet Consulting first began his engagement with SkatesTown, he focused on understanding the e-commerce practices of the company and its customers. After a series of conversations with SkatesTown's CTO Dean Caroll, he concluded the following: SkatesTown's manufacturing, inventory management, and supply chain automation systems are in good order. These systems are easily accessible by SkatesTown's Web-centric applications. SkatesTown has solid consumer-oriented online presence. Product and inventory information is fed into the online catalog that is accessible to both direct consumers and SkatesTown's reseller partners via two different sites. Although SkatesTown's order processing system is sophisticated, it is poorly connected to online applications. This is a pain point for the company because SkatesTown's partners are demanding better integration with their supply chain automation systems. SkatesTown's purchase order system is solid. It accepts purchase orders in XML format and uses XML Schema-based validation to guarantee their correctness. Purchase order item stock keeping units (SKUs) and quantities are checked against the inventory management system. If all items are available, an invoice is created. SkatesTown charges a uniform 5% tax on purchases and the highest of 5% of the total purchase or $20 for shipping and handling. Digging deeper into the order processing part of the business, Al discovered that it uses a low-tech approach that has a high labor cost and is not suitable for automation. He noticed one area that badly needed automation: the process of purchase order submission. Purchase orders are sent to SkatesTown by e-mail. All e-mails arrive in a single manager's account in operations. The manager manually distributes the orders to several subordinates. They have to open the e-mail, copy only the XML over to the purchase order system, and enter the order there. The system writes an invoice file in XML format. This file must be opened, and the XML must be copied and pasted into a reply e-mail message. Simple misspellings of e-mail addresses and cut-and-paste errors are common. They cost SkatesTown and its partners both money and time. Another area that needs automation is the inventory checking process. SkatesTown's partners used to submit purchase orders without having a clear idea whether all the items were in stock. This often caused delayed order processing. Further, purchasing personnel from the partner companies would engage in long e-mail dialogs with operations people at SkatesTown. This situation was not very efficient. To improve it, SkatesTown built a simple online application that communicates with the company's inventory management system. Partners could log in, browse SkatesTown's products, and check whether certain items were in stock. The application interface is shown in Figure 3.1. (You can access this application as Example 1 under Chapter 3 in the example application on this book's Web site.) This application was a good start, but now SkatesTown's partners are demanding the ability to have their purchasing applications directly inquire about order availability. Figure 3.1 SkatesTown's online inventory check application. Looking at the two areas that most needed to be improved, Al Rosen chose to focus on the inventory checking process because the business logic was already present. He just had to enable better automation. To do this, he had to better understand how the application worked. Interacting with the Inventory System The logic for interacting with the inventory system is very simple. Looking through the Java Server Pages (JSPs) that made up the online application, Al easily extracted the key business logic operations from /ch3/ex1/inventoryCheck.jsp. Here is the process for checking SkatesTown's inventory: import bws.BookUtil; import com.skatestown.data.Product; import com.skatestown.backend.ProductDB; String sku = ...; int quantity = ...; ProductDB db = BookUtil.getProductDB(...); Product p = db.getBySKU(sku); boolean isInStock = (p != null && p.getNumInStock() >= quantity); Given a SKU and a desired product quantity, an application needs to get an instance of the SkatesTown product database and locate a product with a matching SKU. If such a product is available and if the number of items in stock is greater than or equal to the desired quantity, the inventory check succeeds. Because most of the examples in this chapter talk to the inventory system, it is good to take a deeper look at its implementation. NOTE A note of caution: this book's sample applications demonstrate realistic uses of Java technology and Web services to solve real business problems while, at the same time, remaining simple enough to fit in the book's scope and size limitations. Further, all the examples are directly accessible in many environments and on all platforms that have a JSP and servlet engine without any sophisticated installation. To meet these somewhat conflicting criteria, something has to give. For example: To keep the code simple, we do as little data validation and error checking as possible without allowing applications to break. You won't find us defining custom exception types or producing long, readable error messages. To get away from the complexities of external system access, we use simple XML files to store data. To make deployment easier, we use the BookUtil class as a place to go for all operations that depend on file locations or URLs. You can tune the deployment options for the example applications by modifying some of the constants defined in BookUtil. All file paths are relative to the installation directory of the example application. SkatesTown's inventory is represented by a simple XML file stored in /resources/products.xml (see Listing 3.1). By modifying this file, you can change the behavior of many examples. The Java representation of products in SkatesTown's systems is the com.skatestown.data.Product class. It is a simple bean that has one property for every element under product. Listing 3.1 SkatesTown Inventory Database <?xml version="1.0" encoding="UTF-8"?> <products> <product> <sku>947-TI</sku> <name>Titanium Glider</name> <type>skateboard</type> <desc>Street-style titanium skateboard.</desc> <price>129.00</price> <inStock>36</inStock> </product> ... </products> SkatesTown's inventory system is accessible via the ProductDB (for product database) class in package com.skatestown.backend. Listing 3.2 shows the key operations it supports. To construct an instance of the class, you pass an XML DOM Document object representation of products.xml. (BookUtil.getProductDB() does this automatically.) After that, you can get a listing of all products or you can search for a product by its SKU. Listing 3.2 SkatesTown's Product Database Class public class ProductDB { private Product[] products; public ProductDB(Document doc) throws Exception { // Load product information } public Product getBySKU(String sku) { Product[] list = getProducts(); for ( int i = 0 ; i < list.length ; i++ ) if ( sku.equals( list[i].getSKU() ) ) return( list[i] ); return( null ); } public Product[] getProducts() { return products; } } This was all Al Rosen needed to know to move forward with the task of automating the inventory checking process.
http://www.informit.com/articles/article.aspx?p=26666&seqNum=4
CC-MAIN-2017-43
en
refinedweb
Assignment Operators Overloading in C++ Advertisements You can overload the assignment operator (=) just as you can other operators and it can be used to create an object just like the copy constructor. Following example explains how an assignment operator can be overloaded. #include <iostream> using namespace std; class Distance { private: int feet; // 0 to infinite int inches; // 0 to 12 public: // required constructors Distance() { feet = 0; inches = 0; } Distance(int f, int i) { feet = f; inches = i; } void operator = (const Distance &D ) { feet = D.feet; inches = D.inches; } // method to display distance void displayDistance() { cout << "F: " << feet << " I:" << inches << endl; } }; int main() { Distance D1(11, 10), D2(5, 11); cout << "First Distance : "; D1.displayDistance(); cout << "Second Distance :"; D2.displayDistance(); // use assignment operator D1 = D2; cout << "First Distance :"; D1.displayDistance(); return 0; } When the above code is compiled and executed, it produces the following result − First Distance : F: 11 I:10 Second Distance :F: 5 I:11 First Distance :F: 5 I:11 cpp_overloading.htm Advertisements
http://www.tutorialspoint.com/cplusplus/assignment_operators_overloading.htm
CC-MAIN-2017-43
en
refinedweb
In the late ’90s I thought COM (Microsoft’s Component Object Model) was the way of the future. The whole architecture starting with the IUnknown interface was very elegant. And to hear Don Box explain it, COM was almost inevitable. I was reminded of COM when I saw the slides for Kevlin Henney’s presentation Worse is better, for better or for worse. Henney quotes William Cook’s paper that has this to say about COM: To me, the prohibition of inspecting the representation of other objects is one of the defining characteristics of object oriented programming. … One of the most pure object-oriented programming models yet defined is the Component Object Model (COM). It enforces all of these principles rigorously. Programming in COM is very flexible and powerful as a result. And yet programming COM was painful. I wondered at the time why something so elegant in theory was so difficult in practice. I have some ideas on this, but I haven’t thought through them enough to write them down. Just as I was starting to grok COM, someone from Microsoft told me “COM is dead” and hinted at a whole new approach to software development that would be coming out of Microsoft, what we now call .NET. COM had some good ideas, and some of these ideas have been reborn in WinRT. I’ve toyed with the idea of a blog post like “Lessons from COM,” but I doubt I’ll ever write that. This post is probably as close as I’ll get. By the way, this post speaks of COM in the past tense as is conventional: we speak of technologies in the past tense, not when they disappear, but when they become unfashionable. Countless COM objects are running on countless machines, including on Mac and Unix systems, but COM has definitely fallen out of fashion. Related posts: 25 thoughts on “Remembering COM” I share your sentiment. The first time I thought I’ve understood OOP was when reading Don Box’s COM book. However, I never managed to get anything done productively with COM (partly because it was hard to get good information about deep technical topics, especially as a teenager without funds to buy expensive english books in germany) WinRT is the new enhanced COM, and .NET now seems to be on life support. From a user point of view, COM gave us a huge loss of abstraction. I can think of all sorts of reasons why things worked out this way, but COM wound up being the sort of undocumented complexity that makes computer systems hard to use. @Josh Certainly the ASP.NET stack is very much alive and kicking. I agree that for the desktop the picture is confused at best (not least because MS seems unable to decide for more than a couple of years at a time which technology to back) A somewhat different, and personal perspective can be found here where I describe a meeting with the COM folks in the early days. This post reminds me of a classic article by good old Joel Spolsky I didn’t do very much COM programming, but I always got the impression was that the problem wasn’t COM, it was the half-baked interface hierarchies which people built on top of it, like ActiveX. The quote about COM that enforces all the principles is very bad. After all the COM bashing done by Microsoft when .NET was introduced everybody knows COM is not OOP thanks to the lack of inheritance. With Object Oriented Methods if you have to look past the operational signature of an object’s methods the design has failed on a number of counts. Nothing wrong with you OO programmers out there, COM is just a mediocre design (where design also includes publishing that which needs to be known to take the design through to implementation I.e. usable documention). COM is what we used to call a jump table, promoted to a “technology” by obfuscation like only Microsoft knows how! I have used COM for nearly 20years. I’ve used it from Javascript, from VB, from COBOL, and now from C#. I don’t see it as “complex”; I see it as a very elegant way to build component based systems. Personally, I really don’t care what’s fashionable, I care about what is useful and what works. COM components (in a number of different languages) all play nicely together on .NET and I’m in no rush to replace this technology. I have C# COM components that get called from COBOL and I have COBOL COM components that get called from C# and ASP.NET. Everything works because there is a consistent interface and the properties and methods are also implemented with consistency. I LOVE the ability to reuse components across platforms. Much of the so-called “complexity” can be abstracted by using templates and there is nothing difficult about writing COM components in most high level languages. I also heard the rumour that COM would be killed by .NET. I ignored it and I’m glad I did. Pete. Not only biological entities are subject to evolutional forces, so are technologies. No doubt COM will be extinct someday, since its model adds little. Its has week meta information, it is to closed and to complex. I like simple things like “POCO” or “plain old C functions” without all this new garbage. Not really true: if you want to write .Net code that interfaces with Office for example you are using COM to access it. Also lots of win API calls go through COM so if you want to do something that never made its way into .Net (at least yet) you’ll likely end up using COM. Com is great, I’ve been writing all my hardware interfaces (services and dlls) for years in ATL and MFC. Maybe i’m old school but Com is really the only reliable tool I’ve found for automation in MS environments. I think COM lost out to .NET, because of features that weren’t done well, and that .NET succeeded at in a better easier way. For example, take garbage collection, with .NET the garbage collection was all done for you, and things were all so easy. There was an expression, ah all have heard that garbage collection is so important, (I forget exactly) but take it from the programmer, give it to some sophisticated algorithm and better world arose. With .NET the namespace libraries rose up something easy to surf, COM typelibrary were cold and static in comparison. What’s up WinRT? It works all right and nicely at that but it’s distant for sure. Javascript works as well as as .NET, and is jitted away at compiler speeds, it’s changed like COM and .NET the newer algorithmic code arrived. WinRT doesn’t replace COM, it is COM (with a face lift). Javascript/CSS/HTML5 is the replacement for COM and everything else that was windows only. I have tons of code that uses COM and it will be maintained for a while. I’m writing new stuff with J/C/H The fact that you need 4.5 versions of .Net installed on your system to support C# in an effort to kill off C++ and COM speaks for itself. I think of Windows as a house built of COM Objects, all exposed to the development environment via Intellisense. I still code initially in VS6 and recompile the project in VS2012, especially services and COM objects, largely for the ATL support and the much better help systems. I still remember being on a customer’s site at 3am with a COM-based application failing with the legendary “Method ~ of object ~ Failed” or “ActiveX Component Can’t Create Object”. It did at least force me to become very familiar with SysInternals tools like RegMon! The notion that COM is dead is exactly that, a notion. COM, or more specifically IKnown is the root of ALL .Net objects, each and every one of them implements the IKnown vtable, which is why they all can be exposed to other languages as COM objects. As long as there is Microsoft Windows, there will be COM. Those that really understand COM, will be able to achieve what appears to be the impossible to those that don’t know COM. All in all, COM isn’t all that bad, just takes some time to learn how it really works. We are very blessed to have ATL/WTL, that is for sure! Did COM+ become .NET ? COM can’t be dead – how else do you interract with Microsoft Office? Hi everybody, I wrote the essay that was the original motivation for Kevlin’s talk. I always enjoy reading discussion about this stuff. Some points: * Object-oriented programming does NOT require inheritance. This is one of the big mistakes of early descriptions of OO. Inheritance is useful, but not essential. Just look at JavaScript and Go. * COM is very difficult to learn, but it is an amazing technology once you understand it. Large parts of Windows are (still) based on it. I haven’t studied RT but I have heard that it is based on the COM model. * Note that COM supports cross-language integration. This is one of the benefits of its high degree of abstraction. The fact that C, .NET, C++, VB and JavaScript can all interoperate is very useful. * Note that Mozilla uses a variant as well, XPCOM William William Cook wrote: Absolutely, I would add that there is a lot of confusion out there about the differences between inheritance of implementation vs. type definition by inheritance. Smalltalk started with no inheritance. and then added implementation inheritance. It also uses class inheritance strictly as an implementation sharing mechanism, with methods like should_not_implement used in a hierarchy which clearly contradicts the notion of class hierarchy as type hierarchy. Of course your PhD advisor probably did more than most in propagating that “big mistake.” Here is how COM (Component Object Model) can be explained to a lame guy .. Microsoft with such beautiful technologies failed to reap the actual benefits out of it../
https://www.johndcook.com/blog/2012/11/12/remembering-com/
CC-MAIN-2017-43
en
refinedweb
, Feb 22, 2010 at 07:34:47PM +0100, Thorsten Alteholz wrote: > > > ... Are you saying it is NOT doable then? Or pedagogically stepping aside for the pupil to do it? :-))) Looking at the bsearch function and surrounding context I am lost in that syntax by all means, i don't want to burden the rest of the readers with a C/C++/Qt 101 ... trying to subscribe to the devel list now (long auto-response delay (?)) (although, i'm fine continuing this thread in this list ... perhaps we could finish it here, and then use devel for any subsequent code discussion? -- i'll comply whichever way) thanks andrew ... Thorsten On Sun, Feb 21, 2010 at 02:44:33PM -0500, Andrew wrote: > > > > > That said, it might be useful to add an "UNDO" and "REDO" and perhaps a > > > few other essentials to the config. > > > > As you seem to have some experience with lirc and RG, what are the other > > essentials you are missing? > > not sure. I have made several attempts over the years to grab RG by the horns (a-hem, by the keyboards, really -- sorry, couldn't resist), but due to glitches on my end and one missing component or another, have not really graduated past "tinkerer" (hoping to finally change that this time soon). > > > It seems to be easy to connect a button on the remote to any already > > available RG slot. So if I find that UNDO and REDO slot it should be no > > problem to add these ... > > well, there are references to both "undo()" and "redo()" in > src/gui/application/TranzportClient.* files > > so, I added undo and redo to all the listings of slots and signals within > > src/gui/application/LircCommander.* > > as well as my .lircrc, of course (with new config names "UNDO" and "REDO"). > The source then compiled without errors, and LIRC continued to work but only with the original set of singals/slots, my two don't seem to work.... something undeclared somewhere? (but then the compiler would complain, right?) OK -- I guess I'm not even capable of mindless aping/mimicking I altered my own code, previously quoted here: from... > connect(this, SIGNAL(undo()), > m_rgGUIApp, SLOT(undo()) ); > connect(this, SIGNAL(redo()), > m_rgGUIApp, SLOT(redo()) ); to.... connect(this, SIGNAL(undo()), CommandHistory::getInstance(), SLOT(undo()) ); connect(this, SIGNAL(redo()), CommandHistory::getInstance(), SLOT(redo()) ); (and also included the proper header as it is included in the TranzportClient.cpp #include "document/CommandHistory.h" ) ---- However, (in debug mode), rosegarden now says: --- debug quote: ------------- [generic] LircCommander::slotExecute: invoking command: UNDO [generic] LircCommander::slotExecute: invoking command: UNDO failed (command not defined in LircCommander::commands[]) ------ end quote --------------- But! I debuggingly added "emit undo()" next to another working command (trackMute()), like so: case cmd_trackMute: emit trackMute(); emit undo(); then compiled, and result: Clicking the LIRC button mapped to trackMute (TRACK-MUTE), now SUCCESSFULLY TRIGGERS a Rosegarden "Undo" ! So, the failure seems to be with "LircCommander::commands[]", but I cannot see (not surprisingly, with my near-zero knowledge of C/C++) why my additions to the "LircCommander::commands[]" declaration list is inadequate... Thorsten, can you slam-dunk this one in? thanks! andrew PS (i'm not subscribed to the devel list; didn't think this calls for that yet, but if this continues (unlikely), perhaps I should) > ---------------------- > > Regarding controls over the network: I do not see a dire, sine-qua-non need for such, but have a vague notion such a feature might yield interesting things, creative solutions... if the coding isn't tedious, i vot yes on it. > > ----------------------------------- > code changes, FWIW : > > in file LircCommander.h: > > signals: > <snip most of list> > void trackRecord(); > void undo(); > void redo(); > > -------------- > enum commandCode { > <snip most of list> > cmd_trackRecord, > cmd_undo, > cmd_redo, > }; > > > in file LircCommander.cpp: > > LircCommander::LircCommander(LircClient *lirc, RosegardenMainWindow *rgGUIApp) > : QObject() > { > m_lirc = lirc; > m_rgGUIApp = rgGUIApp; > > <skip most of items...> > > connect(this, SIGNAL(trackRecord()), > m_rgGUIApp, SLOT(slotToggleRecordCurrentTrack()) ); > connect(this, SIGNAL(undo()), > m_rgGUIApp, SLOT(undo()) ); > connect(this, SIGNAL(redo()), > m_rgGUIApp, SLOT(redo()) ); > > } > > ------------------------ > > LircCommander::command LircCommander::commands[] = > { > > <skip most of items> > > { "TRACK-RECORD", cmd_trackRecord }, > { "UNDO", cmd_undo }, > { "REDO", cmd_redo }, > > }; > > > ---------------------- > > if (res != NULL) > { > switch (res->code) > { > > <skip most of items> > > case cmd_trackRecord: > emit trackRecord(); > break; > case cmd_undo: > emit undo(); > break; > case cmd_redo: > emit redo(); > break; > > --------- > > > ------------------------------------------------------------------------------ > Download Intel® Parallel Studio Eval > Try the new software tools for yourself. Speed compiling, find bugs > proactively, and fine-tune applications for parallel performance. > See why Intel Parallel Studio got high marks during beta. > > _______________________________________________ > Rosegarden-user mailing list > Rosegarden-user@... - use the link below to unsubscribe > --
https://sourceforge.net/p/rosegarden/mailman/rosegarden-user/?viewmonth=201002&viewday=22
CC-MAIN-2017-43
en
refinedweb
Odoo Help Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps: CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc. how to create field unique Hi how to change field default_code in product.product to unique my code from __future__ import division from openerp.osv import fields, osv class product_product(osv.osv): _inherit = 'product.product' _columns = { 'default_code' : fields.char('Internal Reference', size=64, select=True), } _sql_constraints = [('ref_unique','unique(default_code)', 'reference must be unique!')] product_product() thanks Hi , Your code is correct. You must only verify if there is already records with the same default code before restarting server and edit them because the constraint can not be applied in that case. About This Community Odoo Training Center Access to our E-learning platform and experience all Odoo Apps through learning videos, exercises and Quizz.Test it now is it working fine for you? or check this link:
https://www.odoo.com/forum/help-1/question/how-to-create-field-unique-66649
CC-MAIN-2017-43
en
refinedweb
I have two files: File get-calendar.php namespace genesis\\calendar; class Calendar { var $calendar; public function Get_calendar($date,$ministries,$limit) { global $mysqli; $query = 'SELECT calendar.event_name, calendar.calendar_id, calendar.ministry_id, calendar.start_date, venue.venue_name, ministry.ministry_name FROM calendar LEFT JOIN venue ON calendar.venue_id = venue.venue_id LEFT JOIN ministry ON calendar.ministry_id = ministry.ministry_id WHERE DATE(start_date) ' . $date . ' '; if ($ministries !== 'all') { $query .= 'AND calendar.ministry_id = "' . $ministries . '" '; } $query .= 'ORDER BY calendar.start_date ASC'; if ($limit !== 'all') { $query .= ' LIMIT ' . $limit; } $result = $mysqli->query($query); if ($result) { while ($row = $result->fetch_array(MYSQLI_ASSOC)){ $this->calendar[] = $row; } $result->close(); } else { $this->calendar = NULL; } return $this->calendar; } } File set-calendar.php use genesis\\calendar as C; require_once($_SERVER['DOCUMENT_ROOT'] . "/genesis/data/class-get-calendar.php"); class Calendar { var $date; var $ministries; var $limit; var $calendar; var $num_results; function __construct($date,$ministries = 'all',$limit) { // Validate dates if (isset($_GET['from']) || isset($_GET['to'])) { if (isset($_GET['from'])) { if ($_GET['from'] == 'From') { $from = NULL; } else { $from = strtotime($_GET['from']); if (checkdate(date('m', $from),date('d', $from),date('Y', $from))) { $from = mysql_real_escape_string($_GET['from']); } else { $from = NULL; } } } if (isset($_GET['to'])) { if ($_GET['to'] == 'To') { $to = NULL; } else { $to = strtotime($_GET['to']); if (checkdate(date('m', $to),date('d', $to),date('Y', $to))) { $to = mysql_real_escape_string($_GET['to']); } else { $to = NULL; } } } if (!is_null($from) && !is_null($to)) { $this->date = 'BETWEEN "' . $from . '" AND "' . $to . '"'; } else if (!is_null($from) && is_null($to)) { $this->date = '>= "' . $from . '"'; } else if (is_null($from) && is_null($to)) { $this->date = '>= "' . date('Y-m-d') . '"'; } } else { if (!is_numeric($date)) { $this->date = date('Y-m-d'); } else { if (checkdate(date('m', $date),date('d', $date),date('Y', $date))) { $this->date = $date; } else { $this->date = date('Y-m-d'); } } if (!is_null($this->date)) { $this->date = '>= "' . $this->date . '"'; } else { $this->date = '>= "' . date('Y-m-d') . '"'; } } // Validate ministry ID if (isset($_GET['ministry']) && is_numeric($_GET['ministry'])) { $this->ministries = mysql_real_escape_string($_GET['ministry']); } else if (is_int($ministries) || $ministries == 'all') { $this->ministries = $ministries; } else { $this->ministries = 'all'; } // Validate limit if (is_int($limit)) { $this->limit = $limit; } else { $this->limit = 'all'; } } public function Set_calendar($location) { $calendar = new C\\Calendar(); $calendar = $calendar->Get_calendar($this->date,$this->ministries,$this->limit); if ($calendar == NULL) { $this->calendar = ''; } else { if ($location == 'main') { if (!empty($calendar)) { $this->calendar = '<table> <thead> <tr> <th>Event</th> <th>Date</th> <th>Time</th> <th>Venue</th> <th>Ministry</th> </tr> </thead> <tbody class="vcalendar">'; foreach ($calendar as $key => $value) { $this->calendar .= '<tr class="vevent">'; $this->calendar .= '<td><a class="url uid summary" href="/calendar/event.php?id=' . $calendar[$key]['calendar_id'] . '&ministry=' . $calendar[$key]['ministry_id'] . '">' . $calendar[$key]['event_name'] . '</a></td>'; $date = date('M. j, Y',strtotime(substr($calendar[$key]['start_date'],0,10))); $time = date('g:ia',strtotime(substr($calendar[$key]['start_date'],11,8))); $this->calendar .= '<td><span class="dtstart"><abbr class="value" title="' . date('Y-m-d',strtotime(substr($calendar[$key]['start_date'],0,10))) . 'T' . date('H:i:s',strtotime(substr($calendar[$key]['start_date'],11,8))) . '">' . $date . '</abbr></span></td>'; $this->calendar .= '<td>' . $time . '</td>'; $this->calendar .= '<td class="location">' . $calendar[$key]['venue_name'] . '</td>'; $this->calendar .= '<td class="organiser">' . $calendar[$key]['ministry_name'] . '</td>'; $this->calendar .= '</tr>'; } $this->calendar .= '</tbody> </table>'; } else { $this->calendar = '<h2>There are no events at this time</h2><p>Please check back often.</p>'; } } else if ($location == 'sidebar') { $this->calendar = '<div id="calendar">'; if (empty($calendar)) { $this->calendar .= '<p>There are no events at this time.</p>'; } else { foreach ($calendar as $key => $value) { $this->calendar .= '<div class="calendar-event">'; $number_date = substr($calendar[$key]['start_date'],8,2); if ($number_date < 10) { $number_date = substr($number_date,1,1); } $date = date('F j, Y',strtotime(substr($calendar[$key]['start_date'],0,10))); $time = date('g:i A',strtotime(substr($calendar[$key]['start_date'],11,8))); $this->calendar .= '<div class="calendar-event-numberdate">' . $number_date . '</div>'; $this->calendar .= '<div class="calendar-event-title"><a href="/calendar/event.php?id=' . $calendar[$key]['calendar_id'] . '&ministry=' . $calendar[$key]['ministry_id'] . '">' . $calendar[$key]['event_name'] . '</a></div>'; $this->calendar .= '<div class="calendar-event-description">'; $this->calendar .= '<div class="calendar-event-description-date">' . $date . '</div>'; $this->calendar .= '<div class="calendar-event-description-time">' . $time . '</div>'; $this->calendar .= '<div class="calendar-event-description-location">' . $calendar[$key]['venue_name'] . '</div>'; $this->calendar .= '</div> </div>'; } $this->calendar .= '<p class="small-links"><a href="/calendar/'; if ($this->ministries !== 'all') { $this->calendar .= 'index.php?ministry=' . $calendar[$key]['ministry_id']; } $this->calendar .= '">View More '; if ($this->ministries !== 'all') { $this->calendar .= '<br />' . $calendar[$key]['ministry_name'] . ' '; } $this->calendar .= 'Events >></a></p>'; } $this->calendar .= '</div>'; } } return $this->calendar; } } I want to make the Get_calendar function in get-calendar.php private, but when I do, it throws me an error. I know it has to do with inheritance, but I am not sure ho to fix the problem. I tried putting the require_once from set-calendar.php in the construct, but that didn't work. I am guessing I have to extend Set_calendar in set-calendar.php, but I'm not quite sure the best way to do this. Any advice would be much appreciated! Once declared, the scope of a function cannot change. This is true of all languages which implement classical inheritance. Second, constructors have one and only one purpose - put the object in a ready state for execution. NEVER write a constructor that does anything else - it makes your code un-testable. As written, Get_calendar cannot be private, otherwise Set_calendar in your second class won't be able to access it. To get you the right solution, we'll have to address the bigger issue.... why two calendar classes? What's their relationship to each other supposed to be? Maybe I am doing this wrong, please inform me if you think I am. I am creating a CMS for a church. I would like to use the Calendar in the front end and the back end. I created the Calendar class in the first file, and placed it in one folder, and the Calendar class in the second file, and placed it in the other. I don't want to have to write the first file's Calendar class over and over, seeing that it's use could be used in more than one way (front end to display the calendar, back end to edit a calendar, etc). All of my database query files are in one folder, and all of my view files are in another. Then the front end references that view file to pull the different parts needed (calendar, mass schedule, events, etc). Am I going about this the wrong way? Here's how I might do it. The code below is just a skeleton, but hopefully the structure is clear. First, based on your get-calendar, it looks like a calendar is conceptually a list of events, so let's make that first -- just a generic calendar class. class Calendar { private $events; public function __construct($events) { $this->events = array(); if (isset($events)) { $this->addEvents($events); } } public function addEvent($event) { $this->events[] = $event; } public function addEvents($events) { foreach ($events as $event) { $this->addEvent($event); } } public function getEvents() { return $this->events; } } Then, for translating between Calendar objects and database storage, we could make a calendar database class. class CalendarDatabase { public function findByDateRangeAndMinistry($dateFrom, $dateTo, $ministries, $limit) { $events = // sql select... // database escaping (i.e., mysql_real_escape_string) happens in here only return new Calendar($events); } public function save($calendar) { // sql insert, update.... } } Next, your set-calendar looks like it handles the request, so we could make a calendar request class. class CalendarRequest { public function show() { $database = new CalendarDatabase(); $calendar = $database->findByDateRangeAndMinistry($_GET['from'], $_GET['to'], $_GET['ministry']); return $this->render('calendar/show.php', array('calendar' => $calendar)); } private function render($template, $params) { extract($params); ob_start(); include $template; $content = ob_get_clean(); return $content; } } And finally, the template. // calendar/show.php <?php if ($calendar): ?> <table> <?php foreach ($calendar->getEvents() as $event): ?> <tr> <td><?php echo htmlspecialchars($event['event_name']) ?></td> <td><?php echo htmlspecialchars($event['start_date']) ?></td> ... </tr> <?php endforeach ?> </table> <?php else: ?> There are no events at this time. <?php endif ?> We've broken the logic up into several files, and this allows each component to have a simple and focused responsibility. The Calendar class is responsible for a calendar's behavior, and it doesn't need to know or care about how it's stored or presented. The CalendarDatabase class is responsible for mapping calendar objects to database tables and fields. The template is responsible for presenting a calendar, usually as HTML, but also possibly as JSON or an email message. And finally the CalendarRequest class is the glue. None of the other classes know about each other. CalendarRequest is the mediator that handles the interaction between the other components. The term "MVC" is used a lot these days. In MVC nomenclature, the Calendar and CalendarDatabase classes would be the model, the CalendarRequest class would be the controller, and the template would be the view. Thanks for putting so much time into showing me how you would do this. I really like what you've shown me, but I am still a little confused about how the extract function works with the template. So I am guessing that the render function is what I call when I want to print the calendar events onto a page using: $calendar = new CalendarRequest; echo $calendar->render(); I understand that extract takes the key names from an array and places them in their own variable, but then when calling them, I am assuming in the example below you have the key names being "event_name" and "start_date"? Is extract something that I should use? I feel like this doesn't really allow you to visually see the array that you are calling, making the code cleaner, but a little harder to read if someone else was working on the project with me. To make it visually easier to read, if I decided not to use extract, I would just loop through the array? Thanks again for all of your help with this. I really like how you laid it out much better than I had it. You also cleared up MVC for me, which I have been trying to understand and hadn't quite grasped. You would actually not call render yourself. In fact it's private, so you can't. You would call the show method. $calendarRequest = new CalendarRequest(); $calendarHtml = $calendarRequest->show(); The show method uses the _GET params to ask the database for the appropriate calendar of events, then the show method itself calls render, passing in the calendar it just fetched. This topic is now closed. New replies are no longer allowed.
http://community.sitepoint.com/t/use-private-function-from-one-file-in-a-public-function-in-another/11955
CC-MAIN-2015-06
en
refinedweb
Hibernate 3.3 and Groovy 1.7.6 Arrive PLUS, Grails 1.3.6, Groovy 1.8-beta-3 and GreenHopper 5.4 released. Groovy 1.7.6 and Sneak Peek at 1.8 Groovy 1.7.6 and 1.8-beta-3 have been announced. The former is primarily a bug fix release, but the beta comes with a new preview of performance for primitive operations and functionality for using extended command expressions on the right-hand side of assignments. Closure memorisation and trampoline are both supported, and node metadata can be stored in AST nodes. Another Release from Spring Data The Spring Data project is on a roll! Hot on the heels of the Spring Data Redis release, Riak Support 1.0.0 M1 has arrived. The Riak modules in this release provide integration with Riak store, and this milestone introduces generified RiakTemplate for exception translation, serialisation, and data access. It also features a built-in HTTP REST client based on Spring version 3.0 RestTemplate. Hibernate 3.3 Migrates to Lucene 3.0 Hibernate Search version 3.3 has been announced. This release works with JBoss AS 6, Seam 2.2.1 and Hibernate Core 3.6 . It comes with a new query DSL that provides an API for programmatic queries, and a reworked queuing algorithm which has been tailored for those working with complex object graphs. This release has also migrated to Lucene version 3.0. Grails 1.3.6 Released Grails version 1.3.6 has been released, updating Commons DB CP to 1.3, Spring Framework to 3.0.5 and Commons Codec to version 1.4. An attrs attribute is also supported by tags in the link namespace. GreenHopper 5.4 Released The GreenHopper agile planning tool for JIRA has reached version 5.4. This release introduces a new Time-Tracking Analysis tool that can assist with understanding the Hours Burndown Chart, providing data such as time spent and remaining hours. When releasing versions directly from the Task Board, GreenHopper now creates and releases a new JIRA version which includes all “Done” cards part of the current context. GreenHopper 5.4 can be installed using the Universal Plugin Manager for JIRA. Redis 2.2.0 RC1 Released With Rewrites The first RC of Redis 2.2.0 has been announced. The networking internals have been rewritten with efficiency in mind, VM has been partially rewritten for code cleanness and memory usage, and the Redis-benchmark has been rewritten. Unix domain socket support has also been added. Redis 2.2 should work as a drop in replacement for 2.0. MySQL 5.5 Released – With InnoDB Oracle have announced that MySQL 5.5 is now generally available – and the Community Edition includes InnoDB as the default storage engine. In November, the MySQL Community was plagued by rumours that Oracle were making the InnoDB feature unavailable in MySQL Community builds after they revised their support packages for MySQL Editions. In actual fact, the company were dropping Basic and Silver support for MySQL citing the “very very limited support” these packages offer, but keeping InnoDB.
http://jaxenter.com/hibernate-3-3-and-groovy-1-7-6-arrive-102758.html
CC-MAIN-2015-06
en
refinedweb
The content, links, and pdfs are no longer maintained and might be outdated. D.R. Hopkins**. Introduction Dracunculiasis (guinea-worm disease) is an infection in humans caused by the parasite Dracunculus medinensis, which is contracted by drinking contaminated water from ponds, step wells or other open stagnant sources. After about 1 year, the 0.6-0.9-m long adult female worm emerges slowly through the victim's skin in an attempt to deposit immature larvae in water. Some of the larvae are eaten by a tiny crustacean or copepod (Cyclops), in which the larvae undergo two moults within about 2-3 weeks. People are infected when they drink water containing the copepods with infective larvae. Each infection lasts 1 year, and there is no protective immunity. Humans are the only definitive hosts of D. medinensis, and they are infected only by drinking contaminated water. Once a person is infected, there is no treatment to kill the parasite before it emerges a year later. The disease can be prevented, however, by teaching people to filter their drinking-water through a finely woven cloth or to boil their water if they can afford it; by educating communities to keep people with emerging worms from entering sources of drinking-water; by applying the cyclopsicide temephos to contaminated sources every 4 weeks; or by providing safe sources of drinking-water from borehole wells (1). Dracunculiasis is rarely fatal, but the pain and secondary infections associated with the emerging worm incapacitate infected persons for periods averaging 8 weeks. The worms emerge on the lower leg and are the sole evidence of the infection; however, they may emerge from any part of the body, and a dozen or more may emerge simultaneously from some infected persons. Over half of a village's population may be infected at the same time, and the outbreaks usually coincide with the planting or harvest season and the school year. Thus the impact of this quintessentially rural disease manifests itself in mass temporary crippling, which in turn substantially reduces agricultural production and greatly increases school absenteeism (2). Other indirect adverse effects have been documented on infant nutrition, child care and childhood immunizations (3,4). The Eradication Campaign The global campaign to eradicate dracunculiasis began with an initiative at the Centers for Disease Control and Prevention (CDC) in 1980, which took advantage of the impending International Drinking Water Supply and Sanitation Decade (1981-1990) (5). It was not known how many people were infected by dracunculiasis at that time, but a WHO estimate put the number at about 10 million (6). In addition to India and Pakistan, 16 countries in sub-Saharan Africa were known to be infected (Benin, Burkina Faso, Cameroon, Chad, Cote d'Ivoire, Ethiopia, Ghana, Kenya, Mali, Mauritania, Niger, Nigeria, Senegal, Sudan, Togo, and Uganda). Yemen was discovered to be endemic in 1994. In 1986, Watts published a country-by-country estimate of the numbers of persons infected, which totalled 3.2 million (7). Over 120 million persons were judged to be at risk of the infection in Africa alone. Despite the adoption of dracunculiasis eradication in 1981 as a sub-goal of the Water and Sanitation Decade, one of the main goals of which was to provide safe drinking-water to all who did not yet have it, support for the eradication programme was exceedingly slow in coming. In 1982 the US National Research Council, CDC, and the US Agency for International Development convened an international Workshop on Opportunities for Control of Dracunculiasis in Washington in collaboration with WHO. In 1986, the World Health Assembly adopted its first resolution calling for the "elimination" of dracunculiasis; the first African Regional Conference on Dracunculiasis Eradication met in Niamey, Niger; and The Carter Center (Global 2000) and CDC began assisting the eradication programme in Pakistan. African ministers of health resolved at Brazzaville in 1988 to eradicate dracunculiasis by the end of 1995, a target date which was endorsed by the World Health Assembly in 1991. An international donors' conference co-sponsored by The Carter Center, UNDP and UNICEF at Lagos in 1989 mobilized US$ 10 million for the global programme. As illustrated elsewhere (8), however, by the end of the Water Decade, only four of the 18 endemic countries (India, Pakistan, Ghana, and Nigeria) had begun implementing national eradication programmes, and 10 of the countries only began implementing their programmes in 1993 or 1994. Much more was accomplished in the 1990s. As shown in Figure 1, the numbers of reported cases of dracunculiasis have been reduced by almost 97%, to less than 100,000 in 1997, as compared to the estimated 3.2 million cases in 1986, and the nearly one million cases which were actually reported in 1989. By the end of 1997, Pakistan had been certified by WHO as free of dracunculiasis, India had halted transmission of the disease, and Yemen, the only other known affected country in Asia, had found only seven cases in the entire year. In Africa, Kenya had reported no indigenous cases since May 1994, Cameroon had only one indigenous case since September 1996, and Senegal and Chad reported only 4 and 25 cases in 1997, respectively (Figure 2). Globally, the number of known endemic villages has been reduced from about 23,000 at the beginning of 1993, to less than 10,000 at the beginning of 1998, more than half of which are in Sudan. Remaining Challenges Over 90% of the remaining cases of dracunculiasis are restricted to parts of only five countries (Burkina Faso, Ghana, Niger, Nigeria and Sudan). Each of these five countries presents unique difficulties, but the most serious by far is the continuing civil war in southern Sudan, where access to some of the most highly endemic foci seen anywhere in the world is severely constrained, and where the national eradication programme has not yet had any access at all to several probably endemic areas. Surveillance and control measures were less complete in Sudan in 1997 than in 1996 because of increased strife in 1997. Although the target date for global eradication of dracunculiasis was not met, our goal now is to achieve eradication as soon as possible. Apart from the fighting in Sudan, the Dracunculiasis Eradication Programme (DEP) has suffered for many years, and continues to be plagued by opposing views held by some representatives of major partners in the campaign regarding the most appropriate strategy for implementing the programme. Some of these disagreements resulted from unrecognized differences in what was meant by "integration". When integration means that dracunculiasis eradication activities should be among the responsibilities of all health workers in a country's established public health network, wherever possible, that is entirely appropriate. That is also exactly the approach which has been used in the DEP from the beginning -- to mobilize and support otherwise underutilized members of existing health services at national, regional, and subregional levels. Those pre-existing health workers in turn supervise and support part-time village volunteers, most of whom were recruited by the DEP, because primary health care services had not reached these remote villages. Few of those health workers, and almost none of the village volunteers, are exclusively devoted to work on dracunculiasis. In Africa, 9 of the 15 national programme coordinators of Dracunculiasis Eradication Programmes have other responsibilities in their ministries of health besides dracunculiasis eradication. In south-east Nigeria, 8 of the 10 chairmen of the state task forces for guinea-worm eradication are the state directors of public health services, in charge of all primary health care services; at the local government area (LGA) level, all of the 25 LGA coordinators for the dracunculiasis programme are local government health officials who are responsible for other health programmes. In Izzi and Ebonyi LGAs of Ebonyi State, the most endemic state in Nigeria, 100% of the 348 village level workers in the programme are unpaid volunteers, mostly farmers, not full-time "vertical guinea-worm staff", including the 4% who are community health workers with other medical responsibilities. When the DEP began in south-east Nigeria, it included all of the existing primary health care workers in endemic communities who met the programme's prerequisite criteria of residency in that village and, where possible, literacy. The situation is similar in the samples of other national DEP for which we have data: Niger, Uganda, and Mali. When integration means using the resources which were procured for dracunculiasis eradication for other purposes, that is rarely justifiable, if at all. In my opinion when integration means turning over the active surveillance and stringent case containment which are required at the end of any eradication programme to an integrated health care system which is designed to control, not eradicate, diseases, precisely when the most intensive focus on interrupting transmission is needed, that is unwise. I believe the urgency which is unique to eradication programmes, and the demand for excellence in implementation which that urgency requires, cannot be integrated into broader primary health care or routine health services, even when those services are working well, much less when they are not. The rationale for the strategy of integrating control measures against dracunculiasis into other programmes appears sometimes to be motivated by a belief that the disease is not important enough to merit the intensive effort that is required to eradicate it, and by the wrong impression that control measures to eradicate dracunculiasis need to be "sustained". Aspects of these differences have been addressed recently in some publications (8-11), and I shall not repeat them here, but I would like to end this presentation by reviewing some of the indirect benefits of this eradication campaign. Benefits of the Eradication Programme Reducing the prevalence of dracunculiasis by almost 97% over the past decade is the most conspicuous achievement of the programme so far, even before eradication is fully achieved. The impact of that accomplishment on improved agricultural production alone is a major economic benefit and the World Bank, which considers an annual estimated rate of return (ERR) of greater than or equal to 10% as acceptable, has calculated an ERR of 29%, based on conservative assumptions of the duration of disability from dracunculiasis (12). Indirect contributions of the programme's success so far to improved school attendance, and to the nutrition of infants and the care of toddlers in endemic households, are no less real, despite being harder to quantify. Moreover, while realizing these accomplishments, DEP has accelerated and increased the provision of clean drinking-water by national and international agencies to thousands of endemic or formerly endemic communities, even after the Water and Sanitation Decade. It has also mobilized hundreds of communities to improve their own water supplies. In south-east Nigeria alone, for example, members of endemic villages have created more than 400 hand-dug wells in the past few years, in order to rid themselves of dracunculiasis. This is just one way that eradicating dracunculiasis has helped increase the self-reliance of some affected communities and generated ancillary benefits in the control of other waterborne diseases. The programme has also established community-based health education, village task forces, and surveillance by village volunteers in more than 15,000 remote villages (13). The very existence of some of those villages was previously unknown to other health workers. The nearly 6-month long "guinea-worm cease-fire" in Sudan in 1995 also provided opportunities to treat for the first time over 100,000 persons at risk of onchocerciasis, to vaccinate over 41,000 children against measles, 35,000 against poliomyelitis, and 22,000 against tuberculosis, and to distribute more than 35,000 doses of vitamin A and treat 9000 children with oral rehydration packets, in addition to jump-starting the DEP itself in that country (14). And despite our sometimes divergent views, dracunculiasis eradication has succeeded as much as it has because of a broad coalition of United Nations and bilateral assistance agencies, enormous private sector contributions by the DuPont Corporation, Precision Fabrics Group, American Home Products, nongovernmental organizations, national ministries, and political leaders, all of whom have contributed to help people in endemic communities to rid themselves of this parasite. People in these neglected communities need help. I have yet to visit an African village endemic for dracunculiasis or onchocerciasis which is suffering from too many visits by health care workers from different programmes, as some allege, requiring better integration or coordination of their health activities. The real problem is getting any health services to such communities. In the broad benefits it has provided and in its support of the public health staff and volunteers who are producing those benefits, one can assert with much justification that in addition to eradicating dracunculiasis, the Dracunculiasis Eradication Programme has done more to improve primary health care in endemic communities than many primary health care programmes. Primary health care was not developed in most of these communities before the DEP began, and not nearly enough is being done by health systems to build on that foundation and provide other needed services and support to the same communities once dracunculiasis is gone. I do not presume to represent the inhabitants of these neglected communities, but I do know that if I were in their place, I would prefer an excellent vertical programme to a mediocre integrated programme any day. Acknowledgements The assistance of Dr Eka Braide, Ms Nwando Diallo, Ms Renn Doyle, Ms Wanjira Mathai, Dr Ernesto Ruiz-Tiben and Dr James Zingeser in gathering or preparing some of the data for this paper is gratefully acknowledged. * The views expressed in this article are those of the author and may differ from those held by WHO. ** Associate Executive Director, The Carter Center, Atlanta, GA,
http://www.cdc.gov/mmwr/preview/mmwrhtml/su48a10.htm
CC-MAIN-2015-06
en
refinedweb
{-# OPTIONS -XFlexibleInstances #-} module Test.Hspec.Internal where import System.IO import System.Exit import Data.List (mapAccumL, groupBy, intersperse) import System.CPUTime (getCPUTime) import Text.Printf [Spec] describe n ss = do ss' <- sequence ss return $ map (\ (req, res) -> Spec n req res) ss' -- | Combine a list of descriptions. descriptions :: [IO [Spec]] -> IO [Spec] descriptions = liftM concat . sequence -- | Evaluate a Result. Any exceptions (undefined, etc.) are treated as failures. safely :: Result -> IO Result safely f = Control.Exception.catch ok failed where ok = = return (description, example) -- | -- | Create a document of the given specs. documentSpecs :: [Spec] -> [String] documentSpecs specs = lines $ unlines $ concat report ++ [""] ++ intersperse "" errors where organize = groupBy (\ a b -> name a == name b) (errors, report) = mapAccumL documentGroup [] $ organize specs -- | Create a document of the given group of specs. documentGroup :: [String] -> [Spec] -> ([String], [String]) documentGroup errors specGroup = (errors', "" : name (head specGroup) : report) where (errors', report) = mapAccumL documentSpec errors specGroup -- | Create a document of the given spec. documentSpec :: [String] -> Spec -> ([String], String) documentSpec errors spec = case result spec of Success -> (errors, " - " ++ requirement spec) Fail s -> (errors ++ [errorDetails s], " x " ++ requirement spec ++ errorTag) Pending s -> (errors, " - " ++ requirement spec ++ "\n # " ++ s) where errorTag = " [" ++ (show $ length errors + 1) ++ "]" errorDetails s = concat [ show (length errors + 1), ") ", name spec, " ", requirement spec, " FAILED", if null s then "" else "\n" ++ s ] -- | Create a summary of how long it took to run the examples. timingSummary :: Double -> String timingSummary t = printf "Finished in %1.4f seconds" (t / (10.0^(12::Integer)) :: Double) failedCount :: [Spec] -> Int failedCount ss = length $ filter (isFailure.result) ss where isFailure (Fail _) = True isFailure _ = False -- | Create a summary of how many specs exist and how many examples failed. successSummary :: [Spec] -> String successSummary ss = quantify (length ss) "example" ++ ", " ++ quantify (failedCount ss) "failure" -- | Create a document of the given specs. -- This does not track how much time it took to check the examples. If you want -- a description of each spec and don't need to know how long it tacks to check, -- use this. pureHspec :: [Spec] -- ^ The specs you are interested in. -> [String] pureHspec = fst . pureHspecB pureHspecB :: [Spec] -- ^ The specs you are interested in. -> ([String], Bool) pureHspecB ss = (report, failedCount ss == 0) where report = documentSpecs ss ++ [ "", timingSummary 0, "", successSummary ss] -- | Create a document of the given specs and write it to stdout. -- This does track how much time it took to check the examples. Use this if -- you want a description of each spec and do need to know how long it tacks -- to check the examples or want to write to stdout. hspec :: IO [Spec] -> IO () hspec ss = hspecB ss >> return () -- | Same as 'hspec' except it returns a bool indicating if all examples ran without failures hspecB :: IO [Spec] -> IO Bool hspecB = hHspec stdout -- | Same as 'hspec' except the program exits successfull if all examples ran without failures or -- with an errorcode of 1 if any examples failed. hspecX :: IO [Spec] -> IO a hspecX ss = hspecB ss >>= exitWith . toExitCode toExitCode :: Bool -> ExitCode toExitCode True = ExitSuccess toExitCode False = ExitFailure 1 -- | Create a document of the given specs and write it to the given handle. -- This does track how much time it took to check the examples. Use this if -- you want a description of each spec and do need to know how long it tacks -- to check the examples or want to write to a file or other handle. -- -- > writeReport filename specs = withFile filename WriteMode (\ h -> hHspec h specs) -- hHspec :: Handle -- ^ A handle for the stream you want to write to. -> IO [Spec] -- ^ The specs you are interested in. -> IO Bool hHspec h ss = do t0 <- getCPUTime ss' <- ss mapM_ (hPutStrLn h) $ documentSpecs ss' t1 <- getCPUTime mapM_ (hPutStrLn h) [ "", timingSummary (fromIntegral $ t1 - t0), "", successSummary ss'] return $ failedCount ss' == 0 -- | Create a more readable display of a quantity of something. quantify :: Num a => a -> String -> String quantify 1 s = "1 " ++ s quantify n s = show n ++ " " ++ s ++ "s"
http://hackage.haskell.org/package/hspec-0.3.0/docs/src/Test-Hspec-Internal.html
CC-MAIN-2015-06
en
refinedweb
K’th Largest Item August 1, 2014 We give two solutions, taking our input from a random number generator instead of reading from a file: (define rand (let ((a 69069) (c 1234567) (m (expt 2 32)) (seed 20140801)) (lambda () (set! seed (modulo (+ (* a seed) c) m)) (display seed) (newline) ; make visible for debugging seed))) Our first solution indexes through the n items, keeping a sorted list of the k largest items seen so far. We initialize the list by insertion-sorting the first k items in the input: (define (insert-in-order x xs) (let loop ((xs xs) (zs (list))) (if (null? xs) (reverse (cons x zs)) (if (< x (car xs)) (append (reverse zs) (list x) xs) (loop (cdr xs) (cons (car xs) zs)))))) After the first k items are in the list, in order, the remaining n − k items are inserted into the cdr of the list if they are greater than the current car of the list: (define (kth-largest-list n k) (let loop ((xs (list)) (i k)) (if (positive? i) (loop (insert-in-order (rand) xs) (- i 1)) (let loop ((xs xs) (n (- n k))) (if (zero? n) (car xs) (let ((x (rand))) (if (< x (car xs)) (loop xs (- n 1)) (loop (insert-in-order x (cdr xs)) (- n 1))))))))) That works in time O(n k). A better solution uses a heap (priority queue) instead of a list to hold the k largest integers seen so far, so it works in time O(n log k): (define (kth-largest-heap n k) (let loop ((pq pq-empty) (i k)) (if (positive? i) (loop (pq-insert < (rand) pq) (- i 1)) (let loop ((pq pq) (n (- n k))) (if (zero? n) (pq-first pq) (let ((x (rand))) (if (< x (pq-first pq)) (loop pq (- n 1)) (loop (pq-insert < x (pq-rest < pq)) (- n 1))))))))) We used pairing heaps, but any of the heap data structures that we have used in prior exercises will work. You can run the program at. Too lazy to do the file processing. The program below finds the kth largest in a list. In Haskell. code{white-space: pre;}; } Sorry about the weird looking comment. I experimented with getting syntax highlighting for Haskell. It seems there is no way for me to edit the comment to fix the problem. James Curtis-Smith’s answer is the best approach I can think of, which means to always keep a list of the top k elements and adjust that list when necessary. It requires only one pass on the original list and worst case scenario happens when the original list is sorted so that everytime you read a new element you need to “splice”. In that case it’s O(k*n). There’s another approach that’s also O(k*n), which is removing from the original list the biggest element and doing this k times, but since the original list is so big you probably can’t “remove” things from it. At best you can store them in a hash to discard them later, but it seems more complicated and (worst of all) it’s o(k*n) always, not just in the worst case scenario. Anyway, just wanted to share my analysis. No need to share an actual solution, since it’s all there already. In Python. The python library ‘heapq’ provides a function nlargest(). It basically uses Paul’s algorithm. I think using a heap reduces the complexity from O(n*k) to O(n*log k). Haskell, so compile with ghc, then call with k and filename as command-line arguments. Integers should be one per line in the file. import System.Environment import Data.List main = do (k:file:_) <- getArgs ints [Integer] -> [Integer] klargest k = foldl (\a -> take k . sortBy (flip compare) . (:a)) [] Some of that got eaten. Try again. import System.Environment import Data.List main = do (k:file:_) <- getArgs ints [Integer] -> [Integer] klrg k = foldl (\a -> take k . sortBy (flip compare) . (:a)) []
http://programmingpraxis.com/2014/08/01/kth-largest-item/2/
CC-MAIN-2015-06
en
refinedweb
You can subscribe to this list here. Showing 25 50 100 250 results of 113 The magic goal is=20 maven xjavadoc It needs to be run at the top level and it will create a single combined javadoc set. I have run this for M4 and I will upload it to sourceforge shortly. James On 4/30/05, Jody Garnett <jgarnett@...> wrote: > Paul Selormey wrote: > > Is it possible to make the Javadoc also available for download? > It darn well should be :-) >=20 >). >=20 > I tell you what, you can do some research into that maven build script > and figure out how to build the javadocs, I am pretty sure I can figure > out how to get them built and published. >=20 > It could also be that James wants his CruiseControl system to build > javadocs. >=20 > So lets do some research and figure this one out. > J > _______________________________________________ > Geotools-devel mailing list > Geotools-devel@... > > Paul Selormey wrote: > Is it possible to make the Javadoc also available for download? It darn well should be :-)). I tell you what, you can do some research into that maven build script and figure out how to build the javadocs, I am pretty sure I can figure out how to get them built and published. It could also be that James wants his CruiseControl system to build javadocs. So lets do some research and figure this one out. Jody I am trying to run the Spearfish example for MySQL and I am getting an exception when connecting to the database. I am using MySQL 4.1.9 on Windows XP (SP2) and driver version 3.0.9 (tried also 3.1.8, but get the same exception). Anybody knows what is the cause / how to go arround this?=20 The stack trace: org.geotools.data.DataSourceException: Connection failed:java.sql.SQLException: Unable to connect to any hosts due to exception: java.lang.ArrayIndexOutOfBoundsException: 48 =09at org.geotools.data.jdbc.JDBCDataStore.getConnection(JDBCDataStore.java= :920) =09at org.geotools.data.jdbc.JDBCDataStore.buildFIDMapper(JDBCDataStore.jav= a:948) =09at org.geotools.data.jdbc.FeatureTypeHandler.getFIDMapper(FeatureTypeHan= dler.java:233) =09at org.geotools.data.jdbc.JDBCDataStore.getFeatureSource(JDBCDataStore.j= ava:420) =09at essais.databases.SpearfishMySQL.main(SpearfishMySQL.java:58) Caused by: java.sql.SQLException: Unable to connect to any hosts due to exception: java.lang.ArrayIndexOutOfBoundsException: 48 =09at com.mysql.jdbc.Connection.createNewIO(Connection.java:1797) =09at com.mysql.jdbc.Connection.<init>(Connection.java:562) =09at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java= :361) =09at com.mysql.jdbc.jdbc2.optional.MysqlDataSource.getConnection(MysqlData= Source.java:394) =09at com.mysql.jdbc.jdbc2.optional.MysqlDataSource.getConnection(MysqlData= Source.java:129) =09at com.mysql.jdbc.jdbc2.optional.MysqlDataSource.getConnection(MysqlData= Source.java:100) =09at com.mysql.jdbc.jdbc2.optional.MysqlConnectionPoolDataSource.getPooled= Connection(MysqlConnectionPoolDataSource.java:54) =09at org.geotools.data.jdbc.ConnectionPool.getConnection(ConnectionPool.ja= va:120) =09at org.geotools.data.jdbc.JDBCDataStore.getConnection(JDBCDataStore.java= :918) =09... 4 more Thanks, Yuri Hello, it would be very nice, if a download version of the latest Geotools library (e.g. v2.1.M4) was placed on the download page (). The only one I found, was version 2.0.0 and it is a "little bit" different to version 2.1.x. Thanks Martin ______________________________________________________________ Verschicken Sie romantische, coole und witzige Bilder per SMS! Jetzt bei WEB.DE FreeMail: Geotools 2.1.M4 is available for download. This release brings us much closer to 2.1.0. The successful RnD work has been on: 1. FeatureType revision to provided "facets" 2. FactoryFinder revision to allow application control I would also like to thank the new geotools volunteers, the number of bug fixes, and the progress of GridCoverage is due to your hard work. The release is made in conjunction with GeoServer 1.3.0-beta3. For more information: The server responsible for many of our mailing lists, subversion, cvs, maven repositories, and/or websites will be updated at 7pm PST tomorrow. Jody Garnett Hmm, It sounds like a bug. I did not extensively test that code as I was not required for the project I was working on. Unfortunately, I am also really busy with another XML parsing framework, so won't have a chance too look into it for a bit yet. David On 4/19/05, Adrien Anselme <adanselm@...> wrote: > Hello, >=20 > I made some GML reading/writing tests using the DocumentWriter class > from org.geotools.xml . >=20 > In that purpose I wrote a simple XML Schema file, city.xsd, as well as > a simple "school case" GML file using this schema : city.xml. >=20 > My piece of code simply reads from city.xml and writes into > city_out.xml. Here is a snippet : >=20 > //-------------------------- > //file to parse > //method for displaying a fileselection dialog: > URL shapeURL =3D fileSelect("gml","GML File","D:\\taf\\city.xml"); > URI u =3D shapeURL.toURI(); >=20 > GMLFeatureCollection doc =3D > (GMLFeatureCollection)DocumentFactory.getInstance(u,null,Level.WARNING); > if(doc=3D=3Dnull) > System.out.println("Document missing"); >=20 > Schema s =3D SchemaFactory.getInstance(new URI(""= )); >=20 > // **writing > //file to write > URL outURL =3D fileSelect("gml","GML File","D:\\taf\\city_out.xml"); > File outf =3D new File(outURL.getPath()); > if(outf.exists()) > outf.delete(); > outf.createNewFile(); >=20 > DocumentWriter.writeDocument(doc,s,outf,null); > //------------------------- >=20 > You'll find all the xml files attached to this mail. >=20 > Here are my questions: >=20 > First, where did the "app" namespace go? This is just a detail but > anyway it would help if it were still here. >=20 > More annoying, the gml:boundedBy element disappeared whereas it is > needed for the document to validate correctly according to the > specified Schema. >=20 > And finally, the gml:polygonProperty element lost its "gml:" namespace > indicator whereas all the other gml elements still have it. I don't > understand this kind of discrimination against this poor > polygonProperty element ;) >=20 > I use the 2.1.M3 development release. >=20 > I found that a simple modification in the WriterContentHandler class > would partially answer my "app" namespace disappearing question : >=20 > private static class WriterContentHandler implements PrintHandler { >=20 > [...] >=20 > WriterContentHandler(Schema schema, Writer writer, Map hints) { > [...] > prefixMappings =3D new HashMap(); > prefixMappings.put(schema.getTargetNamespace(), ""); > //changing this last line in : > //prefixMappings.put(schema.getTargetNamespace(), schema.getPrefix()); > //partially solves the problem. >=20 > [..] >=20 > Could you help me with this please? >=20 > -- > Adrien ANSELME > -------------- > Etudiant en G=E9nie Informatique =E0 l'UTC > >=20 > page professionnelle (cv): > id jabber : aanselme@... >=20 >=20 > Rueben Schulz wrote: >>Because I am unexperienced with all the GIS stuff, following >>questions: >> >>- can Geotools be a substitute for a MapServer (e.g. UMN MapServer) >> >> > >No, geotools is a library. It can be used to write a web mapping server >(WMS). The geoserver project () used >geotools to create a web feature server (serves features as GML) and now >has experimental support for WMS. > > I was going to say that the WMS support is no longer experimental (they pass all the CITE tests now). Indeed if you want to do some serious SLD based GIS work GeoServer far surpasses what map server can do. What is experimenal in that project is the web coverage service and web user interface for map access. Cheers Jody On Thu, 2005-04-28 at 12:12 +0200, Zolt=E1n Dud=E1s wrote: > Hi All ! >=20 >. >=20 As has been noted, you can try boosting your memory. java -Xmx512M your.class But you can also be a lot less memory intensive with your code: >); > renderer.paint((Graphics2D)mapImage.getGraphics(),bounds, > new AffineTransform(),true); >=20 >=20 > I want to to draw a part of the map into a BufferedImage and send it to a= n > applet. Maybe this isn't the best way to do this? If your goal is a BufferedImage, StyledMapPane is probably not the renderer you want. StyledMapPane is intended for interactive use. It renders everything once, and then zooming, panning, rotating, etc. is super fast. But the initial rendering is a bit slow and fairly memory intensive. You would probably be better suited with the LiteRenderer. See and Since your shapefile is over 200MB, the LiteRenderer's one-at-a-time approach will do you well. --=20 Trevor Stone | Software Engineer | tel 800-554-4434 | fax 303-271-1930 | eM= ail tstone@... Tyler Technologies | Eagle Division | 14142 Denver West Pkwy Suite 155 | La= kewood, CO 80401 | Hi,=20 are you using the -Xmx option to the java command in order to increase th= e maximum amount of memory the java VM can use. The default is only 64M. = E.g. adding -Xmx700m to the command line would make maximum size 700M Hope this might help, Colin =20 -----Original Message----- From: geotools-gt2-users-admin@... [mailto:geotools-gt2= -users-admin@...] On Behalf Of Zolt=E1n Dud=E1s Sent: 28 April 2005 11:13 To: geotools-gt2-users@... Subject: [Geotools-gt2-users] Out of memory problem); =20 renderer.paint((Graphics2D)mapImage.getGraphics(),bounds, =20 new AffineTransform(),true); I want to to draw a part of the map into a BufferedImage and send it to a= n applet. Maybe this isn't the best way to do this? What did I wrong? Which class is the suitable to do this? Could you advise me a simple example or a smaller open source project=20 which implements this? Zoli ------------------------------------------------------- SF.Net email is sponsored by: Tell us your software development plans! Take this survey and enter to win a one-year sub to SourceForge.net Plus IDC's 2005 look-ahead and a copy of this survey _______________________________________________ Geotools-gt2-users mailing list Geotools-gt2-users@... This message is intended for the addressee(s) only and should not be read= , copied or disclosed to anyone else outwith the University without the p= ermission of the sender. It is your responsibility to ensure that this message and any attachments= =20are scanned for viruses or other defects. Napier University does not a= ccept liability for any loss or damage which may result from this email or any attachment, or for erro= rs or omissions arising after it was sent. Email is not a secure medium. = University's system is subject to routine monitoring and filtering by the= =20University.=20 = new URL("file://"+SHAPES_HOME+"roads/lkA.shp";); ShapefileDataStore dsRoads = new ShapefileDataStore(roadsURL); FeatureSource fsRoads = dsRoads.getFeatureSource(dsRoads.getTypeNames()[0]); map.addLayer(fsRoads, roadsStyle); StyledMapPane mapPane = new StyledMapPane(); mapPane.setMapContext(map); mapPane.getRenderer().addLayer(new RenderedMapScale()); Renderer renderer= mapPane.getRenderer(); Rectangle bounds= new Rectangle(500,500); BufferedImage mapImage= new BufferedImage(bounds.width,bounds.height,BufferedImage.TYPE_INT_RGB); renderer.paint((Graphics2D)mapImage.getGraphics(),bounds, new AffineTransform(),true); I want to to draw a part of the map into a BufferedImage and send it to an applet. Maybe this isn't the best way to do this? What did I wrong? Which class is the suitable to do this? Could you advise me a simple example or a smaller open source project which implements this? Zoli I am having difficulties with the max/min Scale denominators. I was wondering if anybody would be able to help me. =20 So I made this function public void setMaxScaleDenominator(GisLayer layer, double scale) { FeatureTypeStyle typeStyles[]=3D layer.getStyle().getFeatureTypeStyles(); for (int i =3D 0; i < typeStyles.length; i++) { Rule rules[] =3D typeStyles[i].getRules(); for (int j =3D 0; j < rules.length; j++) { rules[j].setMaxScaleDenominator(scale); } } } =20 That goes through my layer and sets all of the rules of all of the typestyles to have the same scale denominator. So set the scale and nothing happens there is no filtering. I am using a StyledMapPane for my rendering and I want small features to not be drawn when a user zooms out. Thanks =20 Cedar =20 Andrea Aime a =E9crit : > As far as I can tell, in the Java library (rt.jar and friends), Default > is used for publicly visible classes, that can be used as such, without > the need to use a factory. Swing is full of examples (DefaultTableModel= , > DefaultTableCellRenderer, ...). >=20 > A fast search also reveals that the Impl suffix is widely used in=20 > rt.jar, too, but in the com.sun package -> it is used to mark implement= ations that=20 > should not be used alone, but thing you get from a factory. Right. Most of the "Impl" suffix I have found in public classes was=20 outside "java" and "javax" namespace. They were mostly in CORBA and SAX=20 packages ("org" namespace in J2SE). I noticed a slight increase of=20 "Impl" suffix in "java" and "javax" public classes in J2SE 1.5 however. > So, it seems that at least Sun has chosen its semantic for both cases. > Geotools could follow this semantic, too. In my understanding, application of this rule to Geotools would means: - "Impl" suffix for "org.geotools.referencing" and "org.geotools.metadata" packages since they are create from factories (not yet true for metadata, but should be). - "Default" prefix for "org.geotools.parameter" package, since they are created directly without factories. Martin. Trevor Stone a =E9crit : > If you take a look at the fields in a DefaultFeature, you'll note that > basic serialization would bring along a CoordinateSystem, a > GeometryFactory, etc. That would be quite a bit of unnecessary > overhead. CoordinateSystem is serializable and should not be a problem in most=20 case (except for CRS backed by some localization grid). Factories should usually not be serialized. I usually declared all=20 Factory fields as transient and build them again after deserialization=20 when first needed. Martin. On Mon, 2005-04-25 at 15:37 +0200, Markus Denkinger wrote: > Hey, I'm trying to integrate some features of gt2 in a J2EE application. > Thereby I realized that the DefaultFeature and its attributes are not > serializable. Is there a reason for? Is there any other smart way to > make a DefaultFeature persistent? > If you take a look at the fields in a DefaultFeature, you'll note that basic serialization would bring along a CoordinateSystem, a GeometryFactory, etc. That would be quite a bit of unnecessary overhead. What you can do instead is create a wrapper object like so: I've been using that code in a J2EE-like application and it works quite nicely. Much faster than my old version, which used GML. -- Trevor Stone | Software Engineer | tel 800-554-4434 | fax 303-271-1930 | eMail tstone@... Tyler Technologies | Eagle Division | 14142 Denver West Pkwy Suite 155 | Lakewood, CO 80401 | Martin Desruisseaux wrote: >: As far as I can tell, in the Java library (rt.jar and friends), Default is used for publicly visible classes, that can be used as such, without the need to use a factory. Swing is full of examples (DefaultTableModel, DefaultTableCellRenderer, ...). A fast search also reveals that the Impl suffix is widely used in rt.jar, too, but in the com.sun package -> it is used to mark implementations that should not be used alone, but thing you get from a factory. So, it seems that at least Sun has chosen its semantic for both cases. Geotools could follow this semantic, too. Just my 2 cents... Best regards Andrea Aime: "Default" prefix: 4 votes ------------------------- Chris Holmes, Trevor Stone, Justin Deoliveira, Martin Desruisseaux "Impl" suffix: 8 votes ---------------------- David Zwiers, Artie Konin, Andy Turner, Jeff Yutzler, Jody Garnett, Richard Gould, Jesse Eichar, David Blasby Usage in existing API (approximative) ------------------------------------- Geotools: 65 "Default" prefix, 57 "Impl" suffix J2SE 1.5: 48 "Default" prefix, 20 "Impl" suffix Coding convention on the web in favor of "Default" prefix --------------------------------------------------------- (see item 28) Coding convention on the web in favor of "Impl" suffix ------------------------------------------------------ Any comment / opinion? Martin. Greetings, Jody, Monday, April 25, 2005, you wrote: >>. There is no need to sorry :) And I am definitely not a "Mr", Jody :) I'm asking questions here hoping to get an answers, and any answer is highly appreciated. I'm really grateful to you for your participation. I was concerned with your message, hovewer, because I start to suspect that there are some sources of information about geoserver and geotools I'm not aware of. I'm trying to always be in course of the latest changes and future plans and reading gt2-users and geoserver-devel mailing list quite thoroughfully (though can't say that I understand all the things written there :), but may be there are some other sources I should check in order not to miss something important? > We should take the next step of contacting the shapefile module > maintainer, or asking during todays IRC meeting, and figure out what > has happenend. Well, I see that Chris explained this subject fair enough. I'll check if I still got http ports open and submit new task in geotools and geoserver jira, as Dave suggested. -- WBR, Artie Artie Konin wrote: >. We should take the next step of contacting the shapefile module maintainer, or asking during todays IRC meeting, and figure out what has happenend. Jody Hey, I'm trying to integrate some features of gt2 in a J2EE application. Thereby I realized that the DefaultFeature and its attributes are not serializable. Is there a reason for? Is there any other smart way to make a DefaultFeature persistent? THX Argh. My bad. I'm sorry. This one I even knew I needed to do. I can try to look into it next week. It's definitely in 2.0, and should be ported forward. It's not a global charset, it's just for shapefile, where it is in fact relevant. It was just a couple of changes, should not be hard to port. Chris On Mon, 25 Apr 2005, Artie Konin wrote: > Greetings, all. > > I'm currently using latest GeoServer, which makes use of GeoTools > 2.1.x library. I had to switch back to shapefiles due to some weird > things exhibited by PostgisDataStore and now, pitifully, see that the > problem with shape files having non-Latin-1 strings in accompanying > DBFs (solved long ago by Chris Holmes for 2.0.x GeoTools) is > reincarnated. > > Class org.geotools.data.shapefile.dbf.DbaseFileReader assumes that > all strings stored in dbf file are in ISO-8859-1 encoding, which is > illustrated by following lines of code taken from one of the recent > GeoTools source code snapshots: > > charBuffer = CharBuffer.allocate(header.getRecordLength() - 1); > Charset chars = Charset.forName("ISO-8859-1"); > decoder = chars.newDecoder(); > > It is in the 'init()' method of 'DbaseFileReader.java'. I'm pretty > positive that Chris Holmes resolved the problem in GeoTools 2.0 by > adding another factory parameter for Shapefile datastore, namely > "charset". Seems that this fix didn't make its way to 2.1 codebase. > > Maybe someone will look into this and port the solution to current > GeoTools? > > -- > If there was another one, say "Implementation-Version", that'll be > good :) That stays for "Implementation-Revision", of course :) Sorry :) -- WBR, Artie I should add a note of caution regarding Geotools implementation of GeoAPI interfaces. Currently, Geotools implementations has the same name than GeoAPI interfaces except for the package name. For example org.geotools.metadata.Identifier class implements org.opengis.metadata.Identifier interface. We got some negative feedback from users about that. So we are going to use a distinct name. The only thing we have to do is to decide between "Default" prefix or "Impl" suffix. Once the decision is done (hopefully in tomorrow IRC), I will proceed to renaming. It will have absolutly no impact for peoples using GeoAPI interfaces only. Martin. Bill Blanc a =E9crit : > I'm in the beginning stages of evaluating GeoTools and I was wondering > if someone would be willing to submit a geotools2 snippet that shows > how one can load a JPEG ( or png) and it's associated World file as a > layer? It may be a little bit early for a snippet for Raster loading, since the=20 relevant parts in Geotools are moving. Basically, it is possible to do=20 what you are looking for, but the code for doing that is not final.=20 There is the situation: - Some OpenGIS specifications changed (not our fault!!). Since Geotools is deeply commited in following OpenGIS standards, this have a depth impact on our code. We just finished the move for the Coordinate Reference System framework. The move is not yet done for GridCoverage, but it is on the todo list, as you can see from the javadoc here for example (see the orange box): GridCoverage.html - As Rueben said, The code for loading a raster in under heavy development. You have the choice between using existing GridCoverageExchange implementations, or put as much dependencies as possible on J2SE Image I/O framework size rather than GridCoverage (at the cost of building the CRS yourself). Bryce Nordgren has wrote a nice proposal along the line of Image I/O: I believe that Bryce's proposal is the best way to go. It is complementary to GridCoverageExchange (not in opposition), and would allows us to read rasters in a wider range of contexts. But we are not yet there... In the main time, I don't know if someone has wrote a world file reader (maybe Simone would know better than me). But at least, you can read yours JPEG file using the standard J2SE ImageIO reader. - I would like to point out that JPEG is usually not an ideal format for rasters. Indexed PNG are often much better. JPEG images are a little bit blur. More important, you can hardly do any computation on JPEG images. JPEG are YCMB values (Yellow, Cyan, Magenta, Black). They are good for visual appearance, but not for computation. If yours image was a digital elevation model (DEM) for example, it is pretty hard to retrieve back the elevation value (in meters) from a YCMB values. Same applies for temperature maps, winds, and almost any kind of remote sensing data. With indexed PNG, you can maps the pixel values to the "real world" value through some formula. It is also much easier to control the color scale. - So basically you have to get the 3 followings objects by whatever way can fit at this time: - java.awt.image.RenderedImage - org.opengis.referencing.crs.CoordinateReferenceSystem - org.opengis.spatialschema.Envelope Once you have those 3 objects, you can construct a GridCoverage2D from them (for a JPEG images, the SampleDimension[] argument will probably be null): overage2D.html#GridCoverage2D(java.lang.CharSequence,%20java.awt.image.Re= nderedImage,%20org.opengis.referencing.crs.CoordinateReferenceSystem,%20o= rg.opengis.spatialschema.geometry.Envelope,%20org.geotools.coverage.GridS= ampleDimension[],%20org.opengis.coverage.grid.GridCoverage[],%20java.util= .Map) - Now, there is the next problem: we have not yet finished to port the Renderer part from the old CRS API to the new one. You have the choice between 2 renderers. lite-renderer may be more advanced in using the new CRS framework. The refactoring work on J2D-renderer (the other one) has not yet started. I can't speak for lite-renderer. But for the J2D-renderer one, the steps are: - Create a MapPane. - Create a RenderedGridCoverage. This object expect a *legacy* GridCoverage object (the one before the refactoring for the new CRS framework). You can get one using the org.geotools.gc.GridCoverage.fromOpenGIS(...) method. - Invoke MapPane.getRenderer().addLayer(yoursRenderedGridCoverage). Martin. :) > I am not sure how the dbf files is specified, it could be that only > ISO-8859-1 is supported by the file format? We really should not have > to add additional parameters (ie know magic) about the shapefile in > order to make use of it correctly. AFAIK, dbase IV files has no built-in facility of specifying charset of literals they contain. At least OpenOffice Spreadsheet always asks for the encoding when opening DBFs. Older versions of MS Excel did so too (though, newer ones just assume it, and almost always incorrect, should I notice :) So, charset information is inherently something external to dbase IV files (probably because that is pretty old format from the ancient pre-internationalization era), and as such should be specified externally in one way or another. I really see no harm in one more parameter to shapefile factory, as person configuring datastores in GeoServer config usually knows the charset of her dbase files. In worst case, platform-default charset should be used for reading dbfs. Just hardcoding "ISO-8859-1" is definitely erraneus approach as I personally saw tons of dbase files holding data in other charsets. Ok, if that is such a problem, I always can return to a good old practice of patching gt2-main.jar in each new GeoServer release, as I did before Chris made this just a matter of adding a single line in each shapefile datastore. But in this case I should know which revision of GeoTools is used in each particular GeoServer revision to avoid possible incompatibilities when replacing DbaseFileReader.java with my own version with hardcoded "windows-1251" in place of current "ISO-8859-1" :) Is such information available somewhere? I can't see it from the contents of "gt2-main.jar" bundled with GeoServer. JAR's manifest contains these lines: Implementation-Title: org.geotools Implementation-Vendor: GeoTools Implementation-Version: 2.1.x If there was another one, say "Implementation-Version", that'll be good :) -- WBR, Artie
http://sourceforge.net/p/geotools/mailman/geotools-gt2-users/?viewmonth=200504
CC-MAIN-2015-06
en
refinedweb
I'm trying to use SQLObject (0.9.3) in conjunction with the threading module. I'm at a bit of a loss as to where to start debugging this one, so I'll present my mini-example and see if others can help. This is using MySQL, with MySQLdb 1.2.2 (latest version) & SQLObject 0.9.3. class A(SQLObject): class sqlmeta: createSQL = {'mysql': 'ALTER TABLE a ENGINE InnoDB'} cacheValues = False value = IntCol(alternateID=True) def thr1(): for i in range(1000): try: A(value=1) except dberrors.DuplicateEntryError: a = A.byValue(1) for i in range(nthreads): t = threading.Thread(target=thr1,args=()) t.start() If I start 1 thread (nthreads=1), this completes fine. If I start many threads (nthreads=10), I get a bunch of exceptions, with more exceptions, and quicker, if nthreads is large, like 100.. If I change "a = A.byValue(1)" to "pass" the exceptions go away...it seems like the exception puts the connection in a bad state. If I get rid of the InnoDB alter table (and thereby lose row-based locking) then I get lots of the MySQLdb exceptions, with the InnoDB alter table I get only a few. If I change cacheValues to True, w/ InnoDB, I get no significant increase in the number of these exceptions, but they do still occur. I don't see anything especially suspicious if I enable debug and debugOutput. This seems a little similar to the errors at: but I checked the 0.9.3 code and this patch is applied. Can anyone offer any insight? Thanks, nathan View entire thread
http://sourceforge.net/p/sqlobject/mailman/message/18568429/
CC-MAIN-2015-06
en
refinedweb
Hello all, while on the videos of STL happened among some of us an interest in use SEE with C++ Templates, while Stephan not cover Allocators I want to share some code with your guys. Last topic talk on STL was the shared_ptr, its a very powerfull tool that one can use for begin using aligned memory, essential for a good use of multimedia instructions. Another usefull guy is unique_ptr. The later differ from the former in implementation of strict ownership, ie ony one unique_ptr object can own the pointer, it let you use move semantics but hide the copy. Another nice of unique_ptr is it have a specialization for array ( Type[] ) what make the life even easer. Lets go to a simple code: wrap a aligned memory of 16 floats and fill it with non-aligned array of 4 floats. #include <intrin.h> #include <iostream> #include <ostream> #include <memory> #include <algorithm> using namespace std; int main(int argc, char* argv[]) { __m128 a;// forward declaration const char x = 'a'; //force an odd offset float data[] = {1.f, 2.f, 3.f, 4.f}; //alloc and wrap data for 16 floats pointer aligned in 16byte auto alignedBuffer = unique_ptr<float[], decltype(&::_aligned_free) >((float *)::_aligned_malloc(sizeof(float)*16, 16), ::_aligned_free); //load unaligned (the little 'u' after 'load' a = _mm_loadu_ps(data); //store aligned data, i //loop unroll //a=a //Big note here: we dealing with float[] //caution if you use pointer aritmetic ($obj.get()) _mm_store_ps(&alignedBuffer[0], a); a = _mm_add_ps(a, a); _mm_store_ps(&alignedBuffer[4], a); a = _mm_add_ps(a, a); _mm_store_ps(&alignedBuffer[8], a); a = _mm_add_ps(a, a); _mm_store_ps(&alignedBuffer[12], a); for_each(&alignedBuffer[0], &alignedBuffer[16], [](float f) { cout << f << " "; } ); return 0; } Unique and shared _ptr are secure to use inside a vector and other STL classes, You can make, for example, a vector of chinks of aligned data to be processed by your algorithm. I'll back to this thread latter to refine it or add more examples. Please if you like and wish to contribute fell free to add your 2¢ Thread Closed This thread is kinda stale and has been closed but if you'd like to continue the conversation, please create a new thread in our Forums, or Contact Us and let us know.
http://channel9.msdn.com/Forums/TechOff/From-c0x-STL-videos-Sharing-experiments
CC-MAIN-2015-06
en
refinedweb
Module::Release - Automate software releases use Module::Release; my $release = Module::Release->new( %params ); # call methods to automate your release process $release->check_vcs; .... Module::Release looks at several sources for configuration information. Module::Release looks at Config to get the values it needs for certain operations. The name of the program to run for the make steps Use this value as the perl interpreter, otherwise use the value in $^X. Do you want debugging output? Set this to a true value: The name of the file to run as Makefile.PL. The default is "Makefile.PL", but you can set it to "Build.PL" to use a Module::Build-based system. The name of the file created by makefile_PL above. The default is "Makefile", but you can set it to "Build" for Module::Build-based systems. Your PAUSE user id. A whitespace separated list of modules for Test::Prereq to ignore. If you don't like what any of these methods do, override them in a subclass. Create the Module::Release object. It reads the configuration and initializes everything. Set up the Module::Release object. preference by setting the makefile_PL and make configuration values. EXPERIMENTAL!! Load MODULE through require (so no importing), without caring what it does. My intent is that MODULE adds methods to the Module::Release namespace so a release object can see it. This should probably be some sort of delegation. Added in 1.21 Returns a list of the loaded mixins Added in 1.21 Returns true if the mixin class is loaded Get the configuration object. By default this is a ConfigReader::Simple object; Returns or sets the name of the local distribution file. You can use the literal argument undef to clear the value. Returns the name of the file on the remote side. You can use the literal argument undef to clear the value. Set the current path for the perl binary that Module::Release should use for general tasks. This is not related to the list of perls used to test multiple binaries unless you use one of those binaries to set a new value. If PATH looks like a perl binary, set_perl uses it as the new setting for perl and returns the previous value. Added in 1.21. Returns the current path for the perl binary that Module::Release should use for general tasks. This is not related to the list of perls used to test multiple binaries. Added in 1.21. Return the list of perl binaries Module::Release will use to test the distribution. Added in 1.21.. Delete PATH from the list of perls used for testing Added in 1.21. Reset the list of perl interpreters to just the one running release. Added in 1.21. If quiet is off, return the value of output_fh. If output_fh is not set, return STDOUT. If quiet is on, return the value of null_fh. Return the null filehandle. So far that's something set up in new and I haven't provided a way to set it. Any subclass can make their null_fh return whatever they like. Get the value of queit mode (true or false). Turn on quiet mode Turn off quiet mode Get the value of the debugging flag (true or false). Turn on debugging Turn off debugging If debugging is on, return the value of debug_fh. If debug_fh is not set, return STDERR. If debugging is off, return the value of null_fh. Run `make realclean` Run `make distclean` Runs `perl Makefile.PL 2>&1`. This step ensures that we start off fresh and pick up any changes in Makefile.PL. Run a plain old `make`. Run `make test`. If any tests fail, it dies. Run `make dist`. As a side effect determines the distribution name if not set on the command line. Run `make disttest`. If the tests fail, it dies. This was the old name for the method, but was inconsistent with other method names. It still works, but is deprecated and will give a warning. Return the distribution version ( set in dist() ) Return the distribution version ( set in dist() ) # XXX make this configurable Run `make manifest` and report anything it finds. If it gives output, die. You should check MANIFEST to ensure it has the things it needs. If files that shouldn't show up do, put them in MANIFEST.SKIP. Since `make manifest` takes care of things for you, you might just have to re-run your release script. Return the name of the manifes file, probably MANIFEST. This is the old name for manifest_name. It still works but is deprecated. Return the filenames in the manifest file as a list. release script does this for you by checking for the special directories for those source systems. Previous to version 1.24, these methods were implemented in this module to support CVS. They are now in Module::Release::CVS as a separate module.. Runs touch on all of the files in MANIFEST. Should I upload to PAUSE? If cpan_user and cpan_pass are set, go for it. Get passwords for CPAN. Read and parse the README file. This is pretty specific, so you may well want to overload it. Read and parse the Changes file. This is pretty specific, so you may well want to overload it. Run a command in the shell. Returns true if the command ran successfully, and false otherwise. Use this function in any other method that calls run to figure out what to do when a command doesn't work. You may want to handle that yourself. Get an environment variable or prompt for it Send the LIST to whatever is in output_fh, or to STDOUT. If you set output_fh to a null filehandle, output goes nowhere. Use this for a string representing a line in the output. Since it's a method you can override it if you like. Send the LIST to whatever is in debug_fh, or to STDERR. If you aren't debugging, debug_fh should return a null filehandle. * What happened to my Changes munging?. This source is in Github: brian d foy, <bdfoy@cpan.org> This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
http://search.cpan.org/~bdfoy/Module-Release/lib/Module/Release.pm
CC-MAIN-2015-06
en
refinedweb
19 November 2013 23:00 [Source: ICIS news] Correction: In the ICIS EVENING SNAPSHOT - Americas Markets Summary dated 19 November 2013, for crude, please read ... Dec WTI: $93.34/bbl, up 31 cents ... instead of ... Dec WTI: $93.54/bbl, up 31 cents .... A corrected version follows. HOUSTON (ICIS)--Here is Tuesday's end of day ?xml:namespace> CRUDE: Dec WTI: $93.34/bbl, up 31 cents; Jan Brent: $106.92/bbl, down $1.55/bbl NYMEX WTI crude futures were mixed in search of near-term direction and ahead of the December contract going off the board on Wednesday. Brent and WTI fell out of step, narrowing the negative trans-Atlantic Brent-WTI arbitrage. RBOB: Dec: $2.6395/gal, down 1.73 cents Reformulated blendstock for oxygen blending (RBOB) gasoline futures settled lower on Tuesday despite some build-up during morning trading on support from Europe and a higher WTI crude futures settlement. NATURAL GAS: Dec: $3.556/MMBtu, down 6.1 cents The December front month closed down for a second session in a row in a day marked by lower-than-average trading volumes. Traders focused on slack near-term demand due to above-average temperatures and returning nuclear power plants, rather than the forecasts for below-average temperatures from the weekend onward, which are expected to encourage strong heating demand over late November. ETHANE: steady at 25.00 cents/gal Ethane spot prices were steady as demand from ethylene plants remains stable. AROMATICS: mixed xylenes up at $3.92-4.15/gal Prompt mixed xylenes (MX) spot prices were discussed at $3.92-4.15/gal FOB (free on board) on Tuesday, sources said. The range was up from $3.85-3.95/gal FOB the previous session. OLEFINS: ethylene offered higher at 57.5 cents/lb, RGP steady at 54.75 cents/lb US November ethylene offer levels moved up to 57.5 cents/lb from 56.5 cents/lb at the close of the previous day against no fresh bids. US November refinery-grade propylene (RGP) was steady at 54.75 cents/lb, based on the most recent reported trade done a day ago.
http://www.icis.com/Articles/2013/11/19/9727170/corrected-evening-snapshot---americas-markets-summary.html
CC-MAIN-2015-06
en
refinedweb
Avoiding traps version 7.0.2 Groovy script variables For users of the Groovy DSL it is important to understand how Groovy deals with script variables. Groovy has two types of script variables. One with a local scope and one with a script-wide scope. Example:() { try { localScope1 } catch (MissingPropertyException e) { println 'localScope1NotAvailable' } try { localScope2 } catch(MissingPropertyException e) { println 'localScope2NotAvailable' } println scriptScope } closure.call() method() Output of groovy scope.groovy > groovy scope.groovy. Configuration and execution phase It is important to keep in mind that Gradle has a distinct configuration and execution phase (see Build Lifecycle). Example 1. Distinct configuration and execution phase build.gradle def classesDir = file('build/classes') classesDir.mkdirs() tasks.register('clean', Delete) { delete 'build' } tasks.register('compile') { dependsOn 'clean' doLast { if (!classesDir.isDirectory()) { println 'The class directory does not exist. I can not operate' // do something } // do something } } build.gradle.kts val classesDir = file("build/classes") classesDir.mkdirs() tasks.register<Delete>("clean") { delete("build") } tasks.register("compile") { dependsOn("clean") doLast { if (!classesDir.isDirectory) { println("The class directory does not exist. I can not operate") // do something } // do something } } Output of gradle -q compile > gradle -q compile The class directory does not exist. I can not operate As the creation of the directory happens during the configuration phase, the clean task removes the directory during the execution phase.
https://docs.gradle.org/current/userguide/potential_traps.html
CC-MAIN-2021-25
en
refinedweb
security_load_policy(3) SELinux API documentationsecurity_load_policy(3) security_load_policy - load a new SELinux policy #include <selinux/selinux.h> int security_load_policy(void *data, size_t len); int selinux_mkload_policy(int preservebools); int selinux_init_load_policy(int *enforce); security_load_policy() loads a new policy, returns 0 for success and -1 for error. selinux_mkload_policy() makes a policy image and loads it. This function provides a higher level interface for loading policy than security_load_policy(), internally determining the right policy version, locating and opening the policy file, mapping it into memory, manipulating it as needed for current boolean settings and/or local definitions, and then calling security_load_policy to load it. preservebools is a boolean flag indicating whether current policy boolean values should be preserved into the new policy (if 1) or reset to the saved policy settings (if 0). The former case is the default for policy reloads, while the latter case is an option for policy reloads but is primarily used for the initial policy load. selinux_init_load_policy() performs the initial policy load. This function determines the desired enforcing mode, sets the enforce argument accordingly for the caller to use, sets the SELinux kernel enforcing status to match it, and loads the policy. It also internally handles the initial selinuxfs mount required to perform these actions. It should also be noted that after the initial policy load, the SELinux kernel code cannot anymore be disabled and the selinuxfs cannot be unmounted using a call to security_disable(3). Therefore, after the initial policy load, the only operational changes are those permitted by security_setenforce(3) (i.e. eventually setting the framework in permissive mode rather than in enforcing one). Returns zero on success or -1 on error. This manual page has been written by Guido Trentalancia <guido@trentalancia.com> selinux(8), security_disable(3), guido@trentalancia.com 3 November 2009 security_load_policy(3) Pages that refer to this page: selinux_config(5)
https://man7.org/linux/man-pages/man3/security_load_policy.3.html
CC-MAIN-2021-25
en
refinedweb
!foldl (a, b, expr, start, lst) - Fold over a list. More... #include "llvm/TableGen/Record.h" Definition at line 1018 of file Record.h. References I, and llvm::Init::IK_FoldOpInit. Definition at line 1504 of file Record.cpp. References llvm::Init::resolveReferences(). Referenced by resolveReferences(). Definition at line 1484 of file Record.cpp. Referenced by resolveReferences(). Convert this value to a literal form. Implements llvm::Init. Definition at line 1537 of file Record.cpp. References llvm::Init::getAsString(), and llvm::Init::getAsUnquotedString(). Get the Init value of the specified bit. Implements llvm::Init. Definition at line 1533 of file Record.cpp. References llvm::tgtok::Bit, and llvm::VarBitInit::get(). Is this a complete value with no unset (uninitialized) subvalues? Reimplemented from llvm::Init. Definition at line 1029 of file Record.h. Definition at line 1500 of file Record.cpp. References llvm::TypedInit::getType(), and ProfileFoldOpInit(). This function 1518 of file Record.cpp. References llvm::ShadowResolver::addShadow(), Fold(), get(), llvm::TypedInit::getType(), and llvm::Init::resolveReferences().
https://www.llvm.org/doxygen/classllvm_1_1FoldOpInit.html
CC-MAIN-2021-25
en
refinedweb
Hello, > > > Checkasm result (Kaby Lake, os 10.12) > > restore_rgb_planes_c: 8371.0 > > restore_rgb_planes_sse2: 6583.7 > > restore_rgb_planes_avx2: 3596.5 > > > > restore_rgb_planes10_c: 16735.7 > > restore_rgb_planes10_sse2: 11478.5 > > restore_rgb_planes10_avx2: 7193.7 > > Curious, on my Haswell (mingw-w64 Win10) i get > > restore_rgb_planes_c: 79500.7 > restore_rgb_planes_sse2: 6872.7 > restore_rgb_planes_avx2: 6715.7 > > restore_rgb_planes10_c: 91394.7 > restore_rgb_planes10_sse2: 14494.0 > restore_rgb_planes10_avx2: 13468.7 > > I check again, i have the same kind of result, than before Strange, that the speed improvment is so small in Haswell > > > > > > Pass fate test for me > > > > > > 0001-checkasm-add-utvideodsp-test : > > I'm not entirely sure of mine, for this checkasm, > > > > 0002-libavcodec-x86-utvideodsp-make-macro-for-func > > Code reorganization > > > > 0003-libavcodec-utvideodsp-add-avx2-version-for-the-dsp > > AVX2 version > > > > 0004-libavcodec-x86-utvideodsp.asm-cosmetic > > Cosmetic > > > > Martin > > Jokyo Images > > Sorry i missed this set. The asm changes look simple and good. Only > thing I'd have done was making sure the constants were wide enough to > avoid having to use vpbroadcast instructions. > I noticed for that matter that said constants already exist in > constants.c, so i just made it use them instead. > Thanks for all the fix. Your comments, for the use of vpbroadcast for constantes load, seems similar to a previous comment by James Darnley (in discussion libavcodec/bswapdsp : add AVX2 for bswap_buf) I use here the same way use by Henrik Gramner in exr_dsp.predictor func (but i'm ok to modify that part if need) Do you think we need to replace all %if cpuflag(avx2) vbroadcasti128 mm, [constantes] %else mova mm, [constantes] %endif by your method ? (for exr_dsp, the answer is probably yes, because it's also use pb_80 (i will send a patch for that)) If yes, is it better to use in asm (for example for bswapdsp) SECTION_RODATA 32 pb_bswap32: times 2 db 3, 2, 1, 0, 7, 6, 5, 4, 11, 10, 9, 8, 15, 14, 13, 12 or adding a constantes (if not exists), in constant.c/h ? Seems like this case will be common for AVX2 version of dsp func. > > The checkasm test is a bit ugly and could use some cosmetics, though. > > Except one thing, (WIDTH_PADDED calc is strange (doesn't remember why i write this, and only works by "luck"), need to be WIDTH + 16 Do you think, it's need more modification (considering your recent patchs) ? Martin
https://ffmpeg.org/pipermail/ffmpeg-devel/2017-November/220524.html
CC-MAIN-2021-25
en
refinedweb
Has anyone tried to mock whole classes (instead of mocking only the objects)? Classes are, like everything else in Ruby, just objects. This allows us to mock them just like we would mock any other object. I have been working on a Rails application (that will be shared with you as soon as I get it translated, I promise) in which I needed to do exactly that. The class I wanted to mock is responsible for communicating to my back-end database and fetching the appropriate objects (instances of itself). The object-finding functionality is implemented as class level methods. If I need the N latest headlines from a reporter named ‘johnson’, I just need to call `Headline.latest(N, “johnson”). If I need to test a class that needs to do a Headline.latest call as part of its job, I don’t want to populate the database with real data (and slow down my tests as I wait for a connection) because I mostly trust that Headline works. It has its own tests to assure that. So I mock the Headline class to make sure my class under test makes the correct calls. I have come up with a simple way to that in Ruby and I am mostly satisfied with the results, but I would like to get some feedback from the community. The source is here: class CacheReporter < MotiroReporter def initialize(headlines_source=Headline) @headlines_source = headlines_source end def latest_headlines return @headlines_source.latest(3, 'mail_list') end end class CacheReporterTest < Test::Unit::TestCase def test_reads_from_database FlexMock.use do |mock_headline_class| mock_headline_class.should_receive(:latest). with(3, ‘mail_list’). once reporter = CacheReporter.new(mock_headline_class) reporter.latest_headlines end end end The main change here can be seen on the CacheReporter constructor. If I were not testing, it wouldn’t even be written, I would just use the Headline class wherever I wanted. But instead of directly using the Headline class inside its methods, it receives it on the constructor. This is what allows us to mock the behavior. Has anyone done anything similar? Can the code be made simpler? Cheers, Thiago A.
https://www.ruby-forum.com/t/mocking-whole-classes/56750
CC-MAIN-2021-25
en
refinedweb
Adversarial DrivingAdversarial Driving This package provides MDP models for safety validation of autonomous vehicles. It is built on top of AutomotiveSimulator.jl. The actions of the AdversarialDrivingMDP represent disturbances to adversarial agents on the road. The reward is designed to encourage critical scenarios for the ego vehicle. For a quick start, look at the two scenarios in examples/. The output of those examples are shown in the following gifs. InstallationInstallation Install with import Pkg; Pkg.add(url="") UsageUsage The AdversarialDrivingMDP is very versatile in its construction and supports the following arguments sut::Agent- The system under test (SUT) or ego-vehicle. This is the entity that the safety validation is being performed upon. adversaries::Array{Agent}- The list of adversaries on the road with the SUT. road::Roadway- The roadway defined in AutonomotiveSimulator.jl. dt::Float64- Simulation timestep. other_agents::Array{Agent}- Other agents on the road that are not being controlled as adversaries and are not the SUT. Default: Agent[]. γ- Discount factor on future rewards. Default: 1. ast_reward- Whether or not to use the AST reward, defined here. Default: false. no_collision_penalty- Penalty (in AST reward) for not finding a collision. Default: 1e3. scale_reward- Whether or not to scale the reward so it is in the range [-1,1]. Default: true. end_of_road- Define an early end of the road. Default: Inf. Maintained by Anthony Corso (acorso@stanford.edu)
https://juliapackages.com/p/adversarialdriving
CC-MAIN-2021-25
en
refinedweb
Recordsets New in version 8.0: This page documents the New API added in Odoo 8.0 which should be the primary development API going forward. It also provides information about porting from or bridging with the "old API" of versions 7 and earlier, but does not explicitly document that API. See the old documentation for that. Interaction with models and records is performed through recordsets, a sorted set of records of the same model. Warning contrary to what the name implies, it is currently possible for recordsets to contain duplicates. This may change in the future. Methods defined on a model are executed on a recordset, and their self is a recordset: class AModel(models.Model): _name = 'a.model' def a_method(self): # self can be anywhere between 0 records and all records in the # database self.do_operation() Iterating on a recordset will yield new sets of a single record ("singletons"), much like iterating on a Python string yields strings of a single characters: def do_operation(self): print self # => a.model(1, 2, 3, 4, 5) for record in self: print record # => a.model(1), then a.model(2), then a.model(3), ... Field access = "Bob" Trying to read or write a field on multiple records will raise an error. Accessing a relational field ( Many2one, One2many, Many2many) always returns a recordset, empty if the field is not set. Danger each assignment to a field triggers a database update, when setting multiple fields at the same time or setting fields on multiple records (to the same value), use write(): # 3 * len(records) database updates for record in records: record.a = 1 record.b = 2 record.c = 3 # len(records) database updates for record in records: record.write({'a': 1, 'b': 2, 'c': 3}) # 1 database update records.write({'a': 1, 'b': 2, 'c': 3}) Set operations Recordsets are immutable, but sets of the same model can be combined using various set operations, returning new recordsets. Set operations do not preserve order. record in setreturns whether record(which must be a 1-element recordset) is present in set. record not in setis the inverse operation set1 <= set2and set1 < set2return whether set1is a subset of set2(resp. strict) set1 >= set2and set1 > set2return whether set1is a superset of set2(resp. strict) set1 | set2returns the union of the two recordsets, a new recordset containing all records present in either source set1 & set2returns the intersection of two recordsets, a new recordset containing only records present in both sources set1 - set2returns a new recordset containing only records of set1which are not in set2 Other recordset operations Recordsets are iterable so the usual Python tools are available for transformation ( map(), sorted(), ifilter(), ...) however these return either a list or an iterator, removing the ability to call methods on their result, or to use set operations. Recordsets therefore provide these operations returning recordsets themselves (when possible): filtered() returns a recordset containing only records satisfying the provided predicate function. The predicate can also be a string to filter by a field being true or false: # only keep records whose company is the current user's records.filtered(lambda r: r.company_id == user.company_id) # only keep records whose partner is a company records.filtered("partner_id.is_company") sorted() returns a recordset sorted by the provided key function. If no key is provided, use the model's default sort order: # sort records by name records.sorted(key=lambda r: r.name) mapped() applies the provided function to each record in the recordset, returns a recordset if the results are recordsets: # returns a list of summing two fields for each record in the set records.mapped(lambda r: r.field1 + r.field2) The provided function can be a string to get field values: # returns a list of names records.mapped('name') # returns a recordset of partners record.mapped('partner_id') # returns the union of all partner banks, with duplicates removed record.mapped('partner_id.bank_ids') Environment The Environment stores various contextual data used by the ORM: the database cursor (for database queries), the current user (for access rights checking) and the current context (storing arbitrary metadata). The environment also stores caches. All recordsets have an environment, which is immutable, can be accessed using env and gives access to the current user ( user), the cursor ( cr) or the context ( context): >>> records.env <Environment object ...> >>> records.env.user res.user(3) >>> records.env.cr <Cursor object ...) When creating a recordset from an other recordset, the environment is inherited. The environment can be used to get an empty recordset in an other model, and query that model: >>> self.env['res.partner'] res.partner >>> self.env['res.partner'].search([['is_company', '=', True], ['customer', '=', True]]) res.partner(7, 18, 12, 14, 17, 19, 8, 31, 26, 16, 13, 20, 30, 22, 29, 15, 23, 28, 74) Altering the environment The environment can be customized from a recordset. This returns a new version of the recordset using the altered environment. sudo() creates a new environment with the provided user set, uses the administrator if none is provided (to bypass access rights/rules in safe contexts), returns a copy of the recordset it is called on using the new environment: # create partner object as administrator env['res.partner'].sudo().create({'name': "A Partner"}) # list partners visible by the "public" user public = env.ref('base.public_user') env['res.partner'].sudo(public).search([]) with_context() - can take a single positional parameter, which replaces the current environment's context - can take any number of parameters by keyword, which are added to either the current environment's context or the context set during step 1 # look for partner, or create one with specified timezone if none is # found env['res.partner'].with_context(tz=a_tz).find_or_create(email_address) with_env() - replaces the existing environment entirely Common ORM methods search() Takes a search domain, returns a recordset of matching records. Can return a subset of matching records ( offsetand limitparameters) and be ordered ( orderparameter): >>> # searches the current model >>> self.search([('is_company', '=', True), ('customer', '=', True)]) res.partner(7, 18, 12, 14, 17, 19, 8, 31, 26, 16, 13, 20, 30, 22, 29, 15, 23, 28, 74) >>> self.search([('is_company', '=', True)], limit=1).name 'Agrolait' Tip to just check if any record matches a domain, or count the number of records which do, use search_count() create() Takes a number of field values, and returns a recordset containing the record created: >>> self.create({'name': "New Name"}) res.partner(78) write() Takes a number of field values, writes them to all the records in its recordset. Does not return anything: self.write({'name': "Newer Name"}) Takes a database id or a list of ids and returns a recordset, useful when record ids are obtained from outside Odoo (e.g. round-trip through external system) or when calling methods in the old API: >>> self.browse([7, 18, 12]) res.partner(7, 18, 12) exists() Returns a new recordset containing only the records which exist in the database. Can be used to check whether a record (e.g. obtained externally) still exists: if not record.exists(): raise Exception("The record has been deleted") or after calling a method which could have removed some records: records.may_remove_some() # only keep records which were not deleted records = records.exists() ref() Environment method returning the record matching a provided external id: >>> env.ref('base.group_public') res.groups(2) ensure_one() checks that the recordset is a singleton (only contains a single record), raises an error otherwise: records.ensure_one() # is equivalent to but clearer than: assert len(records) == 1, "Expected singleton" Creating Models Model fields are defined as attributes on the model itself: from openerp import models, fields class AModel(models.Model): _name = 'a.model.name' field1 = fields.Char() Warning this means you can not define a field and a method with the same name, they will conflict By default, the field's label (user-visible name) is a capitalized version of the field name, this can be overridden with the string parameter: field2 = fields.Integer(string="an other field") For the various field types and parameters, see the fields reference. Default values are defined as parameters on fields, either a value: a_field = fields.Char(default="a value") or a function called to compute the default value, which should return that value: def compute_default_value(self): return self.get_value() a_field = fields.Char(default=compute_default_value) Computed fields Fields can be computed (instead of read straight from the database) using the compute parameter. It must assign the computed value to the field. If it uses the values of other fields, it should specify those fields using depends(): from openerp import api total = fields.Float(compute='_compute_total') @api.depends('value', 'tax') def _compute_total(self): for record in self: record.total = record.value + record.value * record.tax dependencies can be dotted paths when using sub-fields: @api.depends('line_ids.value') def _compute_total(self): for record in self: record.total = sum(line.value for line in record.line_ids) - computed fields are not stored by default, they are computed and returned when requested. Setting store=Truewill store them in the database and automatically enable searching searching on a computed field can also be enabled by setting the searchparameter. The value is a method name returning a Domains: upper_name = field.Char(compute='_compute_upper', search='_search_upper') def _search_upper(self, operator, value): if operator == 'like': operator = 'ilike' return [('name', operator, value)] to allow setting values on a computed field, use the inverseparameter. It is the name of a function reversing the computation and setting the relevant fields: document = fields.Char(compute='_get_document', inverse='_set_document') def _get_document(self): for record in self: with open(record.get_document_path) as f: record.document = f.read() def _set_document(self): for record in self: if not record.document: continue with open(record.get_document_path()) as f: f.write(record.document) multiple fields can be computed at the same time by the same method, just use the same method on all fields and set all of them: discount_value = fields.Float(compute='_apply_discount') total = fields.Float(compute='_apply_discount') @depends('value', 'discount') def _apply_discount(self): for record in self: # compute actual discount from discount percentage discount = record.value * record.discount record.discount_value = discount record.total = record.value - discount onchange: updating UI on the fly When a user changes a field's value in a form (but hasn't saved the form yet), it can be useful to automatically update other fields based on that value e.g. updating a final total when the tax is changed or a new invoice line is added. - computed fields are automatically checked and recomputed, they do not need an onchange for non-computed fields, the onchange()decorator is used to provide new field values: @api.onchange('field1', 'field2') # if these fields are changed, call method def check_change(self): if self.field1 < self.field2: self.field3 = True the changes performed during the method are then sent to the client program and become visible to the user - Both computed fields and new-API onchanges are automatically called by the client without having to add them in views It is possible to suppress the trigger from a specific field by adding on_change="0"in a view: <field name="name" on_change="0"/> will not trigger any interface update when the field is edited by the user, even if there are function fields or explicit onchange depending on that field. Note onchange methods work on virtual records assignment on these records is not written to the database, just used to know which value to send back to the client Low-level SQL The cr attribute on environments is the cursor for the current database transaction and allows executing SQL directly, either for queries which are difficult to express using the ORM (e.g. complex joins) or for performance reasons: self.env.cr.execute("some_sql", param1, param2, param3) Because models use the same cursor and the Environment holds various caches, these caches must be invalidated when altering the database in raw SQL, or further uses of models may become incoherent. It is necessary to clear caches when using CREATE, UPDATE or DELETE in SQL, but not SELECT (which simply reads the database). Clearing caches can be performed using the invalidate_all() method of the Environment object. Compatibility between new API and old API Odoo is currently transitioning from an older (less regular) API, it can be necessary to manually bridge from one to the other manually: - RPC layers (both XML-RPC and JSON-RPC) are expressed in terms of the old API, methods expressed purely in the new API are not available over RPC - overridable methods may be called from older pieces of code still written in the old API style The big differences between the old and new APIs are: - values of the Environment(cursor, user id and context) are passed explicitly to methods instead - record data ( ids) are passed explicitly to methods, and possibly not passed at all - methods tend to work on lists of ids instead of recordsets By default, methods are assumed to use the new API style and are not callable from the old API style. Tip calls from the new API to the old API are bridged when using the new API style, calls to methods defined using the old API are automatically converted on-the-fly, there should be no need to do anything special: >>> # method in the old API style >>> def old_method(self, cr, uid, ids, context=None): ... print ids >>> # method in the new API style >>> def new_method(self): ... # system automatically infers how to call the old-style ... # method from the new-style method ... self.old_method() >>> env[model].browse([1, 2, 3, 4]).new_method() [1, 2, 3, 4] Two decorators can expose a new-style method to the old API: model() the method is exposed as not using ids, its recordset will generally be empty. Its "old API" signature is cr, uid, *arguments, context: @api.model def some_method(self, a_value): pass # can be called as old_style_model.some_method(cr, uid, a_value, context=context) multi() the method is exposed as taking a list of ids (possibly empty), its "old API" signature is cr, uid, ids, *arguments, context: @api.multi def some_method(self, a_value): pass # can be called as old_style_model.some_method(cr, uid, [id1, id2], a_value, context=context) Because new-style APIs tend to return recordsets and old-style APIs tend to return lists of ids, there is also a decorator managing this: returns() the function is assumed to return a recordset, the first parameter should be the name of the recordset's model or self(for the current model). No effect if the method is called in new API style, but transforms the recordset into a list of ids when called from the old API style: >>> @api.multi ... @api.returns('self') ... def some_method(self): ... return self >>> new_style_model = env['a.model'].browse(1, 2, 3) >>> new_style_model.some_method() a.model(1, 2, 3) >>> old_style_model = pool['a.model'] >>> old_style_model.some_method(cr, uid, [1, 2, 3], context=context) [1, 2, 3] Model Reference class openerp.models.Model(pool, cr)[source] Main super-class for regular database-persisted OpenERP models. OpenERP models are created by inheriting from this class: class user(Model): ... The system will later instantiate the class once per database (on which the class' module is installed). Structural attributes _name business object name, in dot-notation (in module namespace) _rec_name Alternative field to use as name, used by osv’s name_get() (default: 'name') _inherit - If _nameis set, names of parent models to inherit from. Can be a strif inheriting from a single parent - If _nameis unset, name of a single model to extend in-place See Inheritance and extension. _order Ordering field when searching without an ordering specified (default: 'id') _auto Whether a database table should be created (default: True) If set to False, override init() to create the database table _table Name of the table backing the model created when _auto, automatically generated by default. _inherits dictionary mapping the _name of the parent business objects to the names of the corresponding foreign key fields to use: _inherits = { 'a.model': 'a_field_id', 'b.model': 'b_field_id' } implements composition-based inheritance: the new model exposes all the fields of the _inherits-ed model but stores none of them: the values themselves remain stored on the linked record. _constraints list of (constraint_function, message, fields) defining Python constraints. The fields list is indicative Deprecated since version 8.0: use constrains() _sql_constraints list of (name, sql_definition, message) triples defining SQL constraints to execute when generating the backing table _parent_store Alongside parent_left and parent_right, sets up a nested set to enable fast hierarchical queries on the records of the current model (default: False) CRUD create(vals) → record[source] Creates a new record for the model. The new record is initialized using the values from vals and if necessary those from default_get(). - AccessError -- - if user has no create rights on the requested object - if user tries to bypass access rules for create on the requested object - ValidateError -- if user tries to enter invalid value for a field that is not in selection - UserError -- if a loop would be created in a hierarchy of objects a result of the operation (such as setting an object as its own parent) browse([ids]) → records[source] Returns a recordset for the ids provided as parameter in the current environment. Can take no ids, a single id or a sequence of ids. unlink()[source] Deletes the records of the current set - AccessError -- - if user has no unlink rights on the requested object - if user tries to bypass access rules for unlink on the requested object - UserError -- if the record is default property for other records write(vals)[source] Updates all records in the current set with the provided values. - AccessError -- - if user has no write rights on the requested object - if user tries to bypass access rules for write on the requested object - ValidateError -- if user tries to enter invalid value for a field that is not in selection - UserError -- if a loop would be created in a hierarchy of objects a result of the operation (such as setting an object as its own parent) - For numeric fields ( Integer, Float) the value should be of the corresponding type - For Boolean, the value should be a bool - For Selection, the value should match the selection values (generally str, sometimes int) - For Many2one, the value should be the database identifier of the record to set Other non-relational fields use a string for value Danger for historical and compatibility reasons, Dateand Datetimefields use strings as values (written and read) rather than dateor datetime. These date strings are UTC-only and formatted according to openerp.tools.misc.DEFAULT_SERVER_DATE_FORMATand openerp.tools.misc.DEFAULT_SERVER_DATETIME_FORMAT One2manyand Many2manyuse a special "commands" format to manipulate the set of records stored in/associated with the field. This format is a list of triplets executed sequentially, where each triplet is a command to execute on the set of records. Not all commands apply in all situations. Possible commands are: (0, _, values) - adds a new record created from the provided valuedict. (1, id, values) - updates an existing record of id idwith the values in values. Can not be used in create(). (2, id, _) - removes the record of id idfrom the set, then deletes it (from the database). Can not be used in create(). (3, id, _) - removes the record of id idfrom the set, but does not delete it. Can not be used on One2many. Can not be used in create(). (4, id, _) - adds an existing record of id idto the set. Can not be used on One2many. (5, _, _) - removes all records from the set, equivalent to using the command 3on every record explicitly. Can not be used on One2many. Can not be used in create(). (6, _, ids) - replaces all existing records in the set by the idslist, equivalent to using the command 5followed by a command 4for each idin ids. Note Values marked as _in the list above are ignored and can be anything, generally 0or False. read([fields])[source] Reads the requested fields for the records in self, low-level/RPC method. In Python code, prefer read_group(*args, **kwargs)[source] Get the list of records in list view grouped by the given groupby fields - cr -- database cursor - uid -- current user id - domain -- list specifying search criteria [['field_name', 'operator', 'value'], ...] - fields ( list) -- list of fields present in the list view specified on the object - groupby ( list) -- list of groupby descriptions by which the records will be grouped. A groupby description is either a field (then it will be grouped by that field) or a string 'field:groupby_function'. Right now, the only functions supported are 'day', 'week', 'month', 'quarter' or 'year', and they only make sense for date/datetime fields. - offset ( int) -- optional number of records to skip - limit ( int) -- optional max number of records to return - context ( dict) -- context arguments, like lang, time zone. - orderby ( list) -- optional order byspecification, for overriding the natural sort ordering of the groups, see also search()(supported only for many2one fields currently) - lazy ( bool) -- if true, the results are only grouped by the first groupby and the remaining groupbys are put in the __context key. If false, all the groupbys are done in one call. list of dictionaries(one dictionary for each record) containing: - the values of fields grouped by the fields in groupbyargument - __domain: list of tuples specifying the search criteria - __context: dictionary with argument like groupby - if user has no read rights on the requested object - if user tries to bypass access rules for read on the requested object Searching search(args[, offset=0][, limit=None][, order=None][, count=False])[source] Searches for records based on the args search domain. - args -- A search domain. Use an empty list to match all records. - offset ( int) -- number of results to ignore (default: none) - limit ( int) -- maximum number of records to return (default: all) - order ( str) -- sort string - count ( bool) -- if True, only counts and returns the number of matching records (default: False) limitrecords matching the search criteria - if user tries to bypass access rules for read on the requested object. search_count(args) → int[source] Returns the number of records in the current model matching the provided domain. name_search(name='', args=None, operator='ilike', limit=100) → records[source] Search for records that have a display name matching the given name pattern when compared with the given operator, while also matching the optional search domain ( args). This is used for example to provide suggestions based on a partial value for a relational field. Sometimes be seen as the inverse function of name_get(), but it is not guaranteed to be. This method is equivalent to calling search() with a search domain based on display_name and then name_get() on the result of the search. (id, text_repr)for all matching records. Recordset operations ids List of actual record ids in this recordset (ignores placeholder ids for records to create) ensure_one()[source] Verifies that the current recorset holds a single record. Raises an exception otherwise. exists() → records[source] Returns the subset of records in self that exist, and marks deleted records as such in cache. It can be used as a test on records: if record.exists(): ... By convention, new records are returned as existing. filtered(func)[source] Select the records in self such that func(rec) is true, and return them as a recordset. sorted(key=None, reverse=False)[source] Return the recordset self ordered by key. - key -- either a function of one argument that returns a comparison key for each record, or None, in which case records are ordered according the default model's order - reverse -- if True, return the result in reverse order mapped(func)[source] Apply func on all records in self, and return the result as a list or a recordset (if func return recordsets). In the latter case, the order of the returned recordset is arbitrary. Environment swapping sudo([user=SUPERUSER])[source] Returns a new version of this recordset attached to the provided user. By default this returns a SUPERUSER recordset, where access control and record rules are bypassed. Note Using sudo could cause data access to cross the boundaries of record rules, possibly mixing records that are meant to be isolated (e.g. records from different companies in multi-company environments). It may lead to un-intuitive results in methods which select one record among many - for example getting the default company, or selecting a Bill of Materials. Note Because the record rules and access control will have to be re-evaluated, the new recordset will not benefit from the current environment's data cache, so later data access may incur extra delays while re-fetching from the database. with_context([context][, **overrides]) → records[source] Returns a new version of this recordset attached to an extended context. The extended context is either the provided context in which overrides are merged or the current context in which overrides are merged e.g.: # current context is {'key1': True} r2 = records.with_context({}, key2=True) # -> r2._context is {'key2': True} r2 = records.with_context(key2=True) # -> r2._context is {'key1': True, 'key2': True} with_env(env)[source] Returns a new version of this recordset attached to the provided environment Warning The new environment will not benefit from the current environment's data cache, so later data access may incur extra delays while re-fetching from the database. Fields and views querying fields_get([fields][, attributes])[source] Return the definition of each field. The returned value is a dictionary (indiced by field name) of dictionaries. The _inherits'd fields are included. The string, help, and selection (if present) attributes are translated. - allfields -- list of fields to document, all if empty or not provided - attributes -- list of description attributes to return for each field, all if empty or not provided fields_view_get([view_id | view_type='form'])[source] Get the detailed composition of the requested view like fields, model, view architecture - view_id -- id of the view or None - view_type -- type of the view to return if view_id is None ('form', 'tree', ...) - toolbar -- true to include contextual actions - submenu -- deprecated - AttributeError -- - if the inherited view has unknown position to work with other than 'before', 'after', 'inside', 'replace' - if some tag other than 'position' is found in parent view - Invalid ArchitectureError -- if there is view type other than form, tree, calendar, search etc defined on the structure Miscellaneous methods default_get(fields) → default_values[source] Return default values for the fields in fields_list. Default values are determined by the context, user defaults, and the model itself. copy(default=None)[source] Duplicate record with given id updating it with default values name_get() → [(id, name), ...][source] Returns a textual representation for the records in self. By default this is the value of the display_name field. (id, text_repr)for each records name_create(name) → record[source] Create a new record by calling create() with only one value provided: the display name of the new record. The new record will be initialized with any default values applicable to this model, or provided through the context. The usual behavior of create() applies. name_get()pair value of the created record Automatic fields id _log_access Whether log access fields ( create_date, write_uid, ...) should be generated (default: True) create_date Date at which the record was created Datetime create_uid Relational field to the user who created the record res.users write_date Date at which the record was last modified Datetime write_uid Relational field to the last user who modified the record res.users Reserved field names A few field names are reserved for pre-defined behaviors beyond that of automated fields. They should be defined on a model when the related behavior is desired: name default value for _rec_name, used to display records in context where a representative "naming" is necessary. active toggles the global visibility of the record, if active is set to False the record is invisible in most searches and listing sequence Alterable ordering criteria, allows drag-and-drop reordering of models in list views state lifecycle stages of the object, used by the states attribute on fields parent_id used to order records in a tree structure and enables the child_of operator in domains parent_left used with _parent_store, allows faster tree structure access parent_right see parent_left Method decorators This module provides the elements for managing two different API styles, namely the "traditional" and "record" styles. In the "traditional" style, parameters like the database cursor, user id, context dictionary and record ids (usually denoted as cr, uid, context, ids) are passed explicitly to all methods. In the "record" style, those parameters are hidden into model instances, which gives it a more object-oriented feel. For instance, the statements: model = self.pool.get(MODEL) ids = model.search(cr, uid, DOMAIN, context=context) for rec in model.browse(cr, uid, ids, context=context): print rec.name model.write(cr, uid, ids, VALUES, context=context) may also be written as: env = Environment(cr, uid, context) # cr, uid, context wrapped in env model = env[MODEL] # retrieve an instance of MODEL recs = model.search(DOMAIN) # search returns a recordset for rec in recs: # iterate over the records print rec.name recs.write(VALUES) # update all records in recs Methods written in the "traditional" style are automatically decorated, following some heuristics based on parameter names. openerp.api.multi(method)[source] Decorate a record-style method where self is a recordset. The method typically defines an operation on records. Such a method: @api.multi def method(self, args): ... may be called in both record and traditional styles, like: # recs = model.browse(cr, uid, ids, context) recs.method(args) model.method(cr, uid, ids, args, context=context) openerp.api.model(method)[source] Decorate a record-style method where self is a recordset, but its contents is not relevant, only the model is. Such a method: @api.model def method(self, args): ... may be called in both record and traditional styles, like: # recs = model.browse(cr, uid, ids, context) recs.method(args) model.method(cr, uid, args, context=context) Notice that no ids are passed to the method in the traditional style. openerp.api.depends(*args)[source] Return a decorator that specifies the field dependencies of a "compute" method (for new-style function fields). Each argument must be a string that consists in a dot-separated sequence of field names: pname = fields.Char(compute='_compute_pname') @api.one @api.depends('partner_id.name', 'partner_id.is_company') def _compute_pname(self): if self.partner_id.is_company: self.pname = (self.partner_id.name or "").upper() else: self.pname = self.partner_id.name One may also pass a single function as argument. In that case, the dependencies are given by calling the function with the field's model. openerp.api.constrains(*args)[source] Decorates a constraint checker. Each argument must be a field name used in the check: @api.one @api.constrains('name', 'description') def _check_description(self): if self.name == self.description: raise ValidationError("Fields name and description must be different") Invoked on the records on which one of the named fields has been modified. Should raise ValidationError if the validation failed. Warning @constrains only supports simple field names, dotted names (fields of relational fields e.g. partner_id.customer) are not supported and will be ignored openerp.api.onchange(*args)[source] Return a decorator to decorate an onchange method for given fields. Each argument must be a field name: @api.onchange('partner_id') def _onchange_partner(self): self.message = "Dear %s" % (self.partner_id.name or "") In the form views where the field appears, the method will be called when one of the given fields is modified. The method is invoked on a pseudo-record that contains the values present in the form. Field assignments on that record are automatically sent back to the client. The method may return a dictionary for changing field domains and pop up a warning message, like in the old API: return { 'domain': {'other_id': [('partner_id', '=', partner_id)]}, 'warning': {'title': "Warning", 'message': "What is this?"}, } Warning @onchange only supports simple field names, dotted names (fields of relational fields e.g. partner_id.tz) are not supported and will be ignored openerp.api.returns(model, downgrade=None, upgrade=None)[source] Return a decorator for methods that return instances of model. - model -- a model name, or 'self'for the current model - downgrade -- a function downgrade(self, value, *args, **kwargs)to convert the record-style valueto a traditional-style output - upgrade -- a function upgrade(self, value, *args, **kwargs)to convert the traditional-style valueto a record-style output The arguments self, *args and **kwargs are the ones passed to the method in the record-style. The decorator adapts the method output to the api style: id, ids or False for the traditional style, and recordset for the record style: @model @returns('res.partner') def find_partner(self, arg): ... # return some record # output depends on call style: traditional vs record style partner_id = model.find_partner(cr, uid, arg, context=context) # recs = model.browse(cr, uid, ids, context) partner_record = recs.find_partner(arg) Note that the decorated method must satisfy that convention. Those decorators are automatically inherited: a method that overrides a decorated existing method will be decorated with the same @returns(model). openerp.api.one(method)[source] Decorate a record-style method where self is expected to be a singleton instance. The decorated method automatically loops on records, and makes a list with the results. In case the method is decorated with returns(), it concatenates the resulting instances. Such a method: @api.one def method(self, args): return self.name may be called in both record and traditional styles, like: # recs = model.browse(cr, uid, ids, context) names = recs.method(args) names = model.method(cr, uid, ids, args, context=context) Deprecated since version 9.0: one() often makes the code less clear and behaves in ways developers and readers may not expect. It is strongly recommended to use multi() and either iterate on the self recordset or ensure that the recordset is a single record with ensure_one(). openerp.api.v7(method_v7)[source] Decorate a method that supports the old-style api only. A new-style api may be provided by redefining a method with the same name and decorated with v8(): @api.v7 def foo(self, cr, uid, ids, context=None): ... @api.v8 def foo(self): ... Special care must be taken if one method calls the other one, because the method may be overridden! In that case, one should call the method from the current class (say MyClass), for instance: @api.v7 def foo(self, cr, uid, ids, context=None): # Beware: records.foo() may call an overriding of foo() records = self.browse(cr, uid, ids, context) return MyClass.foo(records) Note that the wrapper method uses the docstring of the first method. openerp.api.v8(method_v8)[source] Decorate a method that supports the new-style api only. An old-style api may be provided by redefining a method with the same name and decorated with v7(): @api.v8 def foo(self): ... @api.v7 def foo(self, cr, uid, ids, context=None): ... Note that the wrapper method uses the docstring of the first method. Fields Basic fields class openerp.fields.Field(string=None, **kwargs)[source] The field descriptor contains the field definition, and manages accesses and assignments of the corresponding field on records. The following attributes may be provided when instanciating a field: - string -- the label of the field seen by users (string); if not set, the ORM takes the field name in the class (capitalized). - help -- the tooltip of the field seen by users (string) - readonly -- whether the field is readonly (boolean, by default False) - required -- whether the value of the field is required (boolean, by default False) - index -- whether the field is indexed in database (boolean, by default False) - default -- the default value for the field; this is either a static value, or a function taking a recordset and returning a value - states -- a dictionary mapping state values to lists of UI attribute-value pairs; possible attributes are: 'readonly', 'required', 'invisible'. Note: Any state-based condition requires the statefield value to be available on the client-side UI. This is typically done by including it in the relevant views, possibly made invisible if not relevant for the end-user. - groups -- comma-separated list of group xml ids (string); this restricts the field access to the users of the given groups only - copy ( bool) -- whether the field value should be copied when the record is duplicated (default: Truefor normal fields, Falsefor one2manyand computed fields, including property fields and related fields) - oldname ( string) -- the previous name of this field, so that ORM can rename it automatically at migration Computed fields One can define a field whose value is computed instead of simply being read from the database. The attributes that are specific to computed fields are given below. To define such a field, simply provide a value for the attribute compute. - compute -- name of a method that computes the field - inverse -- name of a method that inverses the field (optional) - search -- name of a method that implement search on the field (optional) - store -- whether the field is stored in database (boolean, by default Falseon computed fields) - compute_sudo -- whether the field should be recomputed as superuser to bypass access rights (boolean, by default False) The methods given for compute, inverse and search are model methods. Their signature is shown in the following example: upper = fields.Char(compute='_compute_upper', inverse='_inverse_upper', search='_search_upper') @api.depends('name') def _compute_upper(self): for rec in self: rec.upper = rec.name.upper() if rec.name else False def _inverse_upper(self): for rec in self: rec.name = rec.upper.lower() if rec.upper else False def _search_upper(self, operator, value): if operator == 'like': operator = 'ilike' return [('name', operator, value)] The compute method has to assign the field on all records of the invoked recordset. The decorator openerp.api.depends() must be applied on the compute method to specify the field dependencies; those dependencies are used to determine when to recompute the field; recomputation is automatic and guarantees cache/database consistency. Note that the same method can be used for several fields, you simply have to assign all the given fields in the method; the method will be invoked once for all those fields. By default, a computed field is not stored to the database, and is computed on-the-fly. Adding the attribute store=True will store the field's values in the database. The advantage of a stored field is that searching on that field is done by the database itself. The disadvantage is that it requires database updates when the field must be recomputed. The inverse method, as its name says, does the inverse of the compute method: the invoked records have a value for the field, and you must apply the necessary changes on the field dependencies such that the computation gives the expected value. Note that a computed field without an inverse method is readonly by default. The search method is invoked when processing domains before doing an actual search on the model. It must return a domain equivalent to the condition: field operator value. Related fields The value of a related field is given by following a sequence of relational fields and reading a field on the reached model. The complete sequence of fields to traverse is specified by the attribute Some field attributes are automatically copied from the source field if they are not redefined: string, readonly, required (only if all fields in the sequence are required), groups, digits, size, translate, sanitize, selection, comodel_name, domain, context. All semantic-free attributes are copied from the source field. By default, the values of related fields are not stored to the database. Add the attribute store=True to make it stored, just like computed fields. Related fields are automatically recomputed when their dependencies are modified. Company-dependent fields Formerly known as 'property' fields, the value of those fields depends on the company. In other words, users that belong to different companies may see different values for the field on a given record. Incremental definition A field is defined as class attribute on a model class. If the model is extended (see Model), one can also extend the field definition by redefining a field with the same name and same type on the subclass. In that case, the attributes of the field are taken from the parent class and overridden by the ones given in subclasses. For instance, the second class below only adds a tooltip on the field state: class First(models.Model): _name = 'foo' state = fields.Selection([...], required=True) class Second(models.Model): _inherit = 'foo' state = fields.Selection(help="Blah blah blah") class openerp.fields.Char(string=None, **kwargs)[source] Bases: openerp.fields._String Basic string field, can be length-limited, usually displayed as a single-line string in clients. - size ( int) -- the maximum size of values stored for that field - translate -- enable the translation of the field's values; use translate=Trueto translate field values as a whole; translatemay also be a callable such that translate(callback, value)translates valueby using callback(term)to retrieve the translation of terms. class openerp.fields.Boolean(string=None, **kwargs)[source] Bases: openerp.fields.Field class openerp.fields.Integer(string=None, **kwargs)[source] Bases: openerp.fields.Field class openerp.fields.Float(string=None, digits=None, **kwargs)[source] Bases: openerp.fields.Field The precision digits are given by the attribute class openerp.fields.Text(string=None, **kwargs)[source] Bases: openerp.fields._String Very similar to Char but used for longer contents, does not have a size and usually displayed as a multiline text box. translate=Trueto translate field values as a whole; translatemay also be a callable such that translate(callback, value)translates valueby using callback(term)to retrieve the translation of terms. class openerp.fields.Selection(selection=None, string=None, **kwargs)[source] Bases: openerp.fields.Field - selection -- specifies the possible values for this field. It is given as either a list of pairs ( value, string), or a model method, or a method name. - selection_add -- provides an extension of the selection in the case of an overridden field. It is a list of pairs ( value, string). The attribute selection is mandatory except in the case of related fields or field extensions. class openerp.fields.Html(string=None, **kwargs)[source] Bases: openerp.fields._String class openerp.fields.Date(string=None, **kwargs)[source] Bases: openerp.fields.Field static context_today(record, timestamp=None)[source] Return the current date as seen in the client's timezone in a format fit for date fields. This method may be used to compute default values. static from_string(value)[source] Convert an ORM value into a date value. static to_string(value)[source] Convert a date value into the format expected by the ORM. static today(*args)[source] Return the current day in the format expected by the ORM. This function may be used to compute default values. class openerp.fields.Datetime(string=None, **kwargs)[source] Bases: openerp.fields.Field static context_timestamp(record, timestamp)[source] Returns the given timestamp converted to the client's timezone. This method is not meant for use as a _defaults initializer, because datetime fields are automatically converted upon display on client side. For _defaults you fields.datetime.now() should be used instead. static from_string(value)[source] Convert an ORM value into a datetime value. static now(*args)[source] Return the current day and time in the format expected by the ORM. This function may be used to compute default values. static to_string(value)[source] Convert a datetime value into the format expected by the ORM. Relational fields class openerp.fields.Many2one(comodel_name=None, string=None, **kwargs)[source] Bases: openerp.fields._Relational The value of such a field is a recordset of size 0 (no record) or 1 (a single record). - comodel_name -- name of the target model (string) - domain -- an optional domain to set on candidate values on the client side (domain or string) - context -- an optional context to use on the client side when handling that field (dictionary) - ondelete -- what to do when the referred record is deleted; possible values are: 'set null', 'restrict', 'cascade' - auto_join -- whether JOINs are generated upon search through that field (boolean, by default False) - delegate -- set it to Trueto make fields of the target model accessible from the current model (corresponds to _inherits) The attribute comodel_name is mandatory except in the case of related fields or field extensions. class openerp.fields.One2many(comodel_name=None, inverse_name=None, string=None, **kwargs)[source] Bases: openerp.fields._RelationalMulti One2many field; the value of such a field is the recordset of all the records in comodel_name such that the field inverse_name is equal to the current record. - comodel_name -- name of the target model (string) - inverse_name -- name of the inverse Many2onefield in comodel_name(string) - domain -- an optional domain to set on candidate values on the client side (domain or string) - context -- an optional context to use on the client side when handling that field (dictionary) - auto_join -- whether JOINs are generated upon search through that field (boolean, by default False) - limit -- optional limit to use upon read (integer) The attributes comodel_name and inverse_name are mandatory except in the case of related fields or field extensions. class openerp.fields.Many2many(comodel_name=None, relation=None, column1=None, column2=None, string=None, **kwargs)[source] Bases: openerp.fields._RelationalMulti Many2many field; the value of such a field is the recordset. The attribute comodel_name is mandatory except in the case of related fields or field extensions. - relation -- optional name of the table that stores the relation in the database (string) - column1 -- optional name of the column referring to "these" records in the table relation(string) - column2 -- optional name of the column referring to "those" records in the table relation(string) The attributes relation, column1 and column2 are optional. If not given, names are automatically generated from model names, provided model_name and comodel_name are different! - domain -- an optional domain to set on candidate values on the client side (domain or string) - context -- an optional context to use on the client side when handling that field (dictionary) - limit -- optional limit to use upon read (integer) class openerp.fields.Reference(selection=None, string=None, **kwargs)[source] Bases: openerp.fields.Selection Inheritance and extension Odoo provides three different mechanisms to extend models in a modular way: - creating a new model from an existing one, adding new information to the copy but leaving the original module as-is - extending models defined in other modules in-place, replacing the previous version - delegating some of the model's fields to records it contains Classical inheritance When using the _inherit and _name attributes together, Odoo creates a new model using the existing one (provided via _inherit) as a base. The new model gets all the fields, methods and meta-information (defaults & al) from its base. class Inheritance0(models.Model): _name = 'inheritance.0' name = fields.Char() def call(self): return self.check("model 0") def check(self, s): return "This is {} record {}".format(s, self.name) class Inheritance1(models.Model): _name = 'inheritance.1' _inherit = 'inheritance.0' def call(self): return self.check("model 1") and using them: a = env['inheritance.0'].create({'name': 'A'}). Extension When using _inherit but leaving out _name, the new model replaces the existing one, essentially extending it in-place. This is useful to add new fields or methods to existing models (created in other modules), or to customize or reconfigure them (e.g. to change their default sort order): class Extension0(models.Model): _name = 'extension.0' name = fields.Char(default="A") class Extension1(models.Model): _inherit = 'extension.0' description = fields.Char(default="Extended") record = env['extension.0'].create({}) record.read()[0] will yield: {'name': "A", 'description': "Extended"} Note it will also yield the various automatic fields unless they've been disabled Delegation The third inheritance mechanism provides more flexibility (it can be altered at runtime) but less power: using the _inherits a model delegates the lookup of any field not found on the current model to "children" models. The delegation is performed via Reference fields automatically set up on the parent model: class Child0(models.Model): _name = 'delegation.child0' field_0 = fields.Integer() class Child1(models.Model): _name = 'delegation.child1' field_1 = fields.Integer() class Delegating(models.Model): _name = 'delegation.parent' _inherits = { 'delegation.child0': 'child0_id', 'delegation.child1': 'child1_id', } child0_id = fields.Many2one('delegation.child0', required=True, ondelete='cascade') child1_id = fields.Many2one('delegation.child1', required=True, ondelete='cascade') record = env['delegation.parent'].create({ 'child0_id': env['delegation.child0'].create({'field_0': 0}).id, 'child1_id': env['delegation.child1'].create({'field_1': 1}).id, }) record.field_0 record.field_1 will result in: 0 1 and it's possible to write directly on the delegated field: record.write({'field_1': 4}) Warning when using delegation inheritance, methods are not inherited, only fields Domains A domain is a list of criteria, each criterion being a triple (either a list or a tuple) of (field_name, operator, value) where: field_name( str) - a field name of the current model, or a relationship traversal through a Many2oneusing dot-notation e.g. 'street'or 'partner_id.country' operator( str) an operator used to compare the field_namewith the value. Valid operators are: = - equals to != - not equals to > - greater than >= - greater than or equal to < - less than <= - less than or equal to =? - unset or equals to (returns true if valueis either Noneor False, otherwise behaves like =) =like - matches field_nameagainst the valuepattern. An underscore _in the pattern stands for (matches) any single character; a percent sign %matches any string of zero or more characters. like - matches field_nameagainst the %value%pattern. Similar to =likebut wraps valuewith '%' before matching not like - doesn't match against the %value%pattern ilike - case insensitive like not ilike - case insensitive not like =ilike - case insensitive =like in - is equal to any of the items from value, valueshould be a list of items not in - is unequal to all of the items from value child_of is a child (descendant) of a valuerecord. Takes the semantics of the model into account (i.e following the relationship field named by _parent_name). value - variable type, must be comparable (through operator) to the named field Domain criteria can be combined using logical operators in prefix form: '&' - logical AND, default operation to combine criteria following one another. Arity 2 (uses the next 2 criteria or combinations). '|' - logical OR, arity 2. '!' logical NOT, arity 1. Tip Mostly to negate combinations of criteria Individual criterion generally have a negative form (e.g. =-> !=, <-> >=) which is simpler than negating the positive. Example To search for partners named ABC, from belgium or germany, whose language is not english: [('name','=','ABC'), ('language.code','!=','en_US'), '|',('country_id.code','=','be'), ('country_id.code','=','de')] This domain is interpreted as: (name is 'ABC') AND (language is NOT english) AND (country is Belgium OR Germany) Porting from the old API to the new API - bare lists of ids are to be avoided in the new API, use recordsets instead - methods still written in the old API should be automatically bridged by the ORM, no need to switch to the old API, just call them as if they were a new API method. See Automatic bridging of old API methods for more details. search()returns a recordset, no point in e.g. browsing its result fields.relatedand fields.functionare replaced by using a normal field type with either a related=or a compute=parameter depends()on compute=methods must be complete, it must list all the fields and sub-fields which the compute method uses. It is better to have too many dependencies (will recompute the field in cases where that is not needed) than not enough (will forget to recompute the field and then values will be incorrect) - remove all onchangemethods on computed fields. Computed fields are automatically re-computed when one of their dependencies is changed, and that is used to auto-generate onchangeby the client - the decorators model()and multi()are for bridging when calling from the old API context, for internal or pure new-api (e.g. compute) they are useless - remove _default, replace by default=parameter on corresponding fields if a field's string=is the titlecased version of the field name: name = fields.Char(string="Name") it is useless and should be removed - the multi=parameter does not do anything on new API fields use the same compute=methods on all relevant fields for the same result - provide compute=, inverse=and search=methods by name (as a string), this makes them overridable (removes the need for an intermediate "trampoline" function) - double check that all fields and methods have different names, there is no warning in case of collision (because Python handles it before Odoo sees anything) - the normal new-api import is from openerp import fields, models. If compatibility decorators are necessary, use from openerp import api, fields, models - avoid the one()decorator, it probably does not do what you expect - remove explicit definition of create_uid, create_date, write_uidand write_datefields: they are now created as regular "legitimate" fields, and can be read and written like any other field out-of-the-box when straight conversion is impossible (semantics can not be bridged) or the "old API" version is not desirable and could be improved for the new API, it is possible to use completely different "old API" and "new API" implementations for the same method name using v7()and v8(). The method should first be defined using the old-API style and decorated with v7(), it should then be re-defined using the exact same name but the new-API style and decorated with v8(). Calls from an old-API context will be dispatched to the first implementation and calls from a new-API context will be dispatched to the second implementation. One implementation can call (and frequently does) call the other by switching context. Danger using these decorators makes methods extremely difficult to override and harder to understand and document uses of _columnsor _all_columnsshould be replaced by _fields, which provides access to instances of new-style openerp.fields.Fieldinstances (rather than old-style openerp.osv.fields._column). Non-stored computed fields created using the new API style are not available in _columnsand can only be inspected through _fields - reassigning selfin a method is probably unnecessary and may break translation introspection Environmentobjects,) Automatic bridging of old API methods When models are initialized, all methods are automatically scanned and bridged if they look like models declared in the old API style. This bridging makes them transparently callable from new-API-style methods. Methods are matched as "old-API style" if their second positional parameter (after self) is called either cr or cursor. The system also recognizes the third positional parameter being called uid or user and the fourth being called id or ids. It also recognizes the presence of any parameter called context. When calling such methods from a new API context, the system will automatically fill matched parameters from the current Environment (for cr, user and context) or the current recordset (for id and ids). In the rare cases where it is necessary, the bridging can be customized by decorating the old-style method: - disabling it entirely, by decorating a method with noguess()there will be no bridging and methods will be called the exact same way from the new and old API styles defining the bridge explicitly, this is mostly for methods which are matched incorrectly (because parameters are named in unexpected ways): cr() - will automatically prepend the current cursor to explicitly provided parameters, positionally cr_uid() - will automatically prepend the current cursor and user's id to explictly provided parameters cr_uid_ids() - will automatically prepend the current cursor, user's id and recordset's ids to explicitly provided parameters cr_uid_id() will loop over the current recordset and call the method once for each record, prepending the current cursor, user's id and record's id to explicitly provided parameters. Danger the result of this wrapper is always a list when calling from a new-API context All of these methods have a _context-suffixed version (e.g. cr_uid_context()) which also passes the current context by keyword. - dual implementations using v7()and v8()will be ignored as they provide their own "bridging"
https://www.odoo.com/documentation/9.0/reference/orm.html
CC-MAIN-2021-25
en
refinedweb
Hi, I am making a simple algo to try to get familiar with how options work on QC. I have been following the short tutorial here: Please see the attached algo. Option chains aren't even in the data slice every day, and when there is an option chain it shows up as empty. Here is a sample log output. Here is are my main bits. I was assuming I would have options chain data for every time OnData was called. def Initialize(self): '''Initialise the data and resolution required, as well as the cash and start-end dates for your algorithm. All algorithms must initialized.''' self.SetStartDate(2016,10,01) #Set Start Date self.SetEndDate(2016,11,16) #Set End Date self.SetCash(25000) #Set Strategy Cash # Find more symbols here: # self.AddUniverse(self.CoarseSelectionFunction) self.AddEquity("GOOG", Resolution.Daily) option = self.AddOption("GOOG", Resolution.Daily) option.SetFilter(-10, +10, timedelta(0), timedelta(180)) def OnData(self,slice): self.Log("ondata") for i in slice.OptionChains: optionchain = i.Value self.Log("underlying price:" + str(optionchain.Underlying.Price)) df = pd.DataFrame([[x.Right,float(x.Strike),x.Expiry,float(x.BidPrice),float(x.AskPrice)] for x in optionchain], index=[x.Symbol.Value for x in optionchain], columns=['type(call 0, put 1)', 'strike', 'expiry', 'ask price', 'bid price']) self.Log(str(df))
https://www.quantconnect.com/forum/discussion/2925/getting-option-chains-in-python/
CC-MAIN-2018-39
en
refinedweb
An XSLT stylesheet is made up of a single top-level xsl:stylesheet element that contains one or more xsl:template elements. These templates can contain literal elements that become part of the generated result, functional elements from the XSLT grammar that control such things as which parts of the source document to process, and often a combination of the two. The contents of the source XML document are accessed and evaluated from within the stylesheet's templates and other function elements using the XPath language. The following shows a few XSLT elements (the associated XPath expression is highlighted): <xsl:value-of <xsl:apply-templates <xsl:copy-of XPath is a language used to select and evaluate various properties of the elements, attributes, and other types of nodes found in an XML document. In the context of XSLT, XPath is used to provide access to the various nodes contained by the source XML document being transformed. The XPath expressions used to access those nodes is based on the relationships between the nodes themselves . Nodes are selected using either the shortcut syntax that is somewhat reminiscent of the file and directory paths used to describe the structure of a Unix filesystem, or one that describes the abstract relationship axes between nodes (parent, child, sibling, ancestor , descendant, etc.). In addition, XPath provides a number of useful built-in functions that allow you to evaluate certain properties of the nodes selected from a given document tree. The most common components of an XPath expression are location paths and relationship axes, function calls, and predicate expressions. Borrowing from the Document Object Model, XPath visualizes the contents of an XML document as an abstract tree of nodes. At the top level of that tree is the root node represented by the string / . The root node is not the same as the top-level element (often called the document element) in the XML document, but is rather an abstract node above that level, which contains the document element and any special nodes, such as processing instructions: <?xml version="1.0"?> <?xml-stylesheet href="mystyle.xsl" type="text/xsl"?> <page> <para>I ੩ the XML Infoset</para> </page> In this document, the xml-stylesheet processing instruction is a meaningful part of the document as a whole, but it is not contained by the page document element. Were it not for the abstract root node floating above the document element, you would have no way to access the processing instruction from within your stylesheets. The practical result of having a root node above the top-level document element is that all XPath expressions that attempt to select nodes using an absolute path from root node to a node contained in the document must include both an / for the root node and the name of the document element. Here are a few examples of absolute location paths: / /html/body /book/chapter/sect1/title /recordset/row/order-quantity Relative location paths are resolved within the context of the current node (element, attribute, etc.) being processed . The following are functionally identical: chapter/sect1 child::chapter/sect1 Attribute nodes are accessed by using the attribute: : axis (or the shortcut, @ ), followed by the name of the attribute: attribute::class @class sect1/attribute::id sect1/@id /html/body/@bgcolor Relationship axes provide a way to access nodes in the document based on relative relationships to the context node that often cannot be captured by a simple location path. For example, you can look back up the node tree from the current node using the ancestor or parent axis: ancestor::chapter/title parent::chapter/title or across the tree at the same level using the preceding -sibling or following-sibling axes: preceding-sibling::product/@id following-sibling::div/@class XPath also provides several useful shortcuts to help make things easier. The . (dot) is an alias for the self axis, and . . is an alias for the parent axis. The // path abbreviation is an alias for the descendant-or-self axis. While using // can be expensive to process, it is hard to fault the simplicity it offers. For example, collecting all the hyperlinks in an XHTML document into a single nodeset, regardless of the current context or where the links may appear in the document, is as simple as: //a Similarly, you can select all the para descendants of the context node (the node currently being processed) using: .//para If all these relationship axes are confusing, recall that XPath visualizes the document as a hierarchical tree of nodes in much the same way that the filesystem is presented in most Unix-like operating systems. (Parents and ancestors are "up" towards the "root," siblings are at the same level on a given branch, and children and descendants are contents of the current node.) XPath provides a nice list of built-in functions to help with node selection, evaluation, and processing. This list includes functions for accessing the properties of nodesets (e.g., position() and count() ), functions for accessing the abstract components of a given node (e.g., namespace-uri() and name() ), string processing functions (e.g., substring-before () , concat() , and translate() ), number processing (e.g., round() , sum() , and ceiling() ) and Boolean functions (e.g., true() , false() , and not() ). A detailed reference covering each function is not appropriate here, but a few useful examples are. Get the number of para elements in the document: count(//para) Create a fully qualified URL based on the relative_url attribute of the context node: concat('', @relative_url) Replace dashes with underscores in the text of the context node: translate(., '-', '_') Get the scheme name from a fully qualified URL: substring-before(@url, '://') Quickly, get the total number of items ordered: sum(/order/products/item/quantity) In addition to the core functions provided by XPath, XSLT adds several additional functions to make the task of transforming documents easier, or more robust. Two are especially useful; the document() (which provides a way to include all or part of separate XML documents into the current result), and the key() function (which offers an easy way to select specific nodes from larger, regularly-shaped sets by using part of the individual nodes as a lookup). You can find a complete listing of XSLT's functions on the Web at. Many functions provided by XPath and XSLT can be useful for transforming the source document's content; others are only useful when combined with XPath predicate expressions. Predicate expressions provide the ability to examine the properties of a nodeset in a way similar to the WHERE clause in SQL. When a predicate expression is used, only those nodes that meet the criteria established by the predicate are selected. Predicates are set off from the rest of the expression using square brackets and can appear after any node identifier in the larger XPath expression. When the predicate expression evaluates to an integer, the node at that position is returned. The following all select the second div child of the context node: div[2] div[1 + 1] div[position() = 2] div[position() > 1 and position < 3] More than one predicate can be used in a given expression. The following returns a nodeset containing the title of the first section of the fifth chapter of a book in DocBook XML format. /book/chapter[5]/sect1[1]/title Predicate expressions may also contain function calls. The following selects all descendants of the current node that contain significant text data but no child elements: .//*[string-length(normalize-space(text())) > 0 and count(child::*) = 0] The basic building block of an XSLT stylesheet is the xsl:template element. The xsl:template element is used to create the output of the transformation. A template is invoked either by matching nodes in the source document against a pattern contained by the template element's match attribute, or by giving the template a name via the name attribute and calling it explicitly. The xsl:template element's match attribute takes a pattern expression that determines to which nodes in the source tree the template will be applied. These match rules can be evaluated and invoked from within other templates using the xsl:apply-templates element. The pattern expressions take the form of an XPath location expression that may also include predicate expressions. For example, the following tells the XSLT processor to apply that template to all div elements that have body parents: <xsl:template . . . </xsl:template> You can also add predicates to your patterns for more fine- tuned matching: <xsl:template . . . </xsl:template> By adding the @class='special ' predicate, this template would only be applied to the subset of those elements matched by the previous example that have a class attribute with the value special . Templates that contain match rules are invoked using the xsl:apply-templates element. If the xsl:apply-templates element's optional select attribute is present, the nodes returned by its pattern expression are evaluated one at a time against each pattern expression in the stylesheet's templates. When a match is found, the nodes are processed by the matching template. (If no match is found, a built-in template is used, and the children of the selected nodes are processed, and so on, recursively.) If the select attribute is not present, all child nodes of the current node are evaluated. This allows straightforward recursive processing of tree-shaped data. <?xml version="1.0"?> <xsl:stylesheet xmlns: <xsl:template <!-- root template, always matches --> <html> <xsl:apply-templates/> </html> </xsl:template> <xsl:template <!-- matches element nodes named 'article' --> <body> <xsl:apply-templates/> </body> </xsl:template> <xsl:template <!-- matches element nodes named 'para' --> <p> <xsl:apply-templates/> </p> </xsl:template> <xsl:template <!-- matches element nodes named 'emphasis' --> <em> <xsl:apply-templates/> </em> </xsl:template> </xsl:stylesheet> Applying this stylesheet to the following XML document: <?xml version="1.0"?> <article> <para> I was <emphasis>not</emphasis> pleased with the result. </para> </article> gives the following result: <?xml version="1.0"?> <html> <body> <p> I was <em>not</em> pleased with the result. </p> </body> </html> The name attribute provides a way to explicitly invoke a given template from within other templates using the xsl:call-template element. For example, the following inserts the standard disclaimer contained in the template named document_footer at the bottom of an HTML page: <xsl:template . . . more processing here . . . <xsl:call-template </body> </html> </xsl:template> <xsl:template <div class="footer"> <p>copyright 2001 Initech LLC. All rights reserved.</p> </div> </xsl:template> The xsl:template element's mode attribute provides a way to have template rules that have the same match expression (match the same nodeset) but process the matched nodes in a very different way. For example, the following snippet shows two templates whose expressions match the element nodes named section . One displays the main view of the document (the default), and the other builds a table of contents: <xsl:template <!-- create the table of contents first --> <div class="toc"> <xsl:apply-templates </div> <!-- then process the document as usual --> <xsl:apply-templates </xsl:template> <xsl:template <div class="section"> <h2> <a name="{generate-id(title)}"> <xsl:value-of </a> </h2> <xsl:apply-templates/> </div> </xsl:template> <xsl:template <a href="#{generate-id(title)}"> <xsl:value-of </a> <br /> <xsl:apply-templates </xsl:template> Adding to its flexibility, XSLT borrows the concepts of iterative loops and conditional processing from traditional programming languages. The xsl:for-each element is used for iterating over the nodes in the nodeset returned by the expression contained in its select attribute. In spirit, this corresponds to Perl's foreach (@some_list) loop. <xsl:template <xsl:for-each . . . do something with each xlink child of the current para element </xsl:foreach> </xsl:template> In the earlier example showing how template modes are used, you created two templates for the section elements: one for processing those nodes in the context of the main body of the document, and one for building the table of contents. You could just as easily have used an xsl:for-each element to create the table of contents: <xsl:template <!-- create the table of contents first --> <div class="toc"> <xsl:for-each <a href="#{generate-id(title)}"> <xsl:value-of </a> <br /> </xsl:for-each> </div> <!-- then process the document as usual --> <xsl:apply-templates </xsl:template> So, which type of processing is better: iteration or recursion? There is no hard-and-fast rule. The answer depends largely on the shape of the node trees being processed. Generally, an iterative approach using xsl:for-each is appropriate for nodesets that contain regularly shaped data structures (such as line items in a product list), while recursion tends to work better for irregular trees containing mixed content (elements that can have both text data and child elements, such as articles or books). XSLT's processing model is founded on the notion of recursion (process the current node, apply templates to all or some of the current node's children, and so on). The point is that one size does not fit all. Having a working understanding of both styles of processing is key to the efficient and professional use of XSLT. Conditional "if/then" processing is available in XSLT using xsl:if , xsl:choose , and their associated child elements. The xsl:if element offers an all-or-nothing approach to conditional processing. If the expression passed to the processor through the test attribute evaluates to true , the block is processed; otherwise , it is skipped altogether. Consider the following template. It prints a list of an employee's coworkers, adding the appropriate commas in between the coworker's names (plus an and just before the final name) by testing the position() of each coworker child: <xsl:template <p> Our employee <xsl:value-of works with: <xsl:for-each <xsl:value-of <xsl:if, </xsl:if> <xsl:if and </xsl:if> </xsl:for-each> on a daily basis. </p> </xsl:template> In cases in which you need to emulate the if-then-else block or switch statement found in most programming languages, use the xsl:choose element. An xsl:choose must contain one or more xsl:when elements and may contain an optional xsl:otherwise element. The test attribute of each xsl:when is evaluated in turn and contents processed for the first expression that evaluates to true. If none of the test conditions return a true value and an xsl:otherwise element is included, the contents of that element are processed: <xsl:template <xsl:choose> <xsl:when <xsl:apply-templates </xsl:when> <xsl:when <xsl:apply-templates </xsl:when> <xsl:otherwise> <xsl:apply-templates </xsl:otherwise> </xsl:choose> </xsl:template> XSLT offers a way to capture and reuse arbitrary chunks of data during processing via the xsl:param and xsl:variable elements. In both cases, a unique name is given to the parameter or variable using a name attribute, and the contents can be accessed elsewhere in the stylesheet by prepending a $ (dollar sign) to that name. Therefore, the value of a variable declaration whose name attribute is myVar will be accessible later as $myVar . The xsl:param element serves two purposes: it provides a mechanism for passing simple key/value data to the stylesheet from the outside, and it offers a way to pass information between templates within the stylesheet. One benefit of using XSLT in an environment such as AxKit is that all HTTP parameters are available from within your stylesheets via xsl:param elements: <?xml version="1.0"?> <xsl:stylesheet xmlns: <xsl:param <xsl:template . . . <p>Greetings, <xsl:value-of. Welcome to our site.</p> . . . </xsl:template> . . . The result of a request to a document that is transformed by this stylesheet, such as (or a POST request that contains a defined " user " parameter), contains the following: <p>Greetings, ahclem. Welcome to our site.</p> Default values can be set by adding text content to the xsl:param element: <xsl:paramMystery Guest</xsl:param> or by passing a valid XPath expression to its select attribute: <xsl:param <xsl:param Parameters can also be used to pass data to other templates during the transformation process. In this second form, a template rule is invoked using the xsl:call-template element, whose required name attribute must correspond to the name attribute of an xsl:template , or by using an xsl:apply-templates that is expected to match one of the template's match attributes. In both cases, data can be passed to the template via one or more xsl: with-param elements, which are then available inside its invoked template using xsl:param elements. The result returned by the called template is inserted into the transformation's output at the point that the template is called. Calling named templates and passing in parameters is the closest thing that XSLT has to the user-defined functions or subroutines provided in traditional programming languages. It can be employed to create reusable pseudofunctions: <xsl:template <xsl:for-each <xsl:call-template <xsl:with-param </xsl:call-template> </xsl:for-each> . . . </xsl:template> . . . <xsl:template <xsl:param <xsl:value-of <xsl:text>, </xsl:text> <xsl:value-of <xsl:if <xsl:text> </xsl:text> <xsl:value-of <xsl:text>.</xsl:text> </xsl:if> </xsl:template> The xsl:variable element is similar to xsl:param in that it can be assigned an arbitrary value such as a nodeset or string. Unlike parameters, though, variables only provide a way to store chunks of data; they are not used to pass information in from the environment or between templates. They can be useful for such things as creating shortcuts to complex nodesets: <xsl:varable or setting default values for data that may be missing from a given part of the document: <xsl:varable <xsl:choose> <xsl:when <xsl:value-of </xsl:when> <xsl:otherwise>Unknown User</xsl:otherwise> </xsl:choose> </xsl:variable> Once a variable or parameter has been assigned a value, it becomes read-only. This behavior trips most web developers who are used to doing such things as: my $grand_total = 0; foreach my $row (@order_data) { $grand_total += $row->{quantity} * $row->{price}; } print "Order Total: $grand_total"; There is a way around this limitation, but it requires creating a template that recursively consumes a nodeset while passing the sum of the previous value and current value back to itself through a parameter as each node is processed: <xsl:template <root> . . . Order total: <xsl:call-template <xsl:with-param </xsl:call-template> </root> </xsl:template> <xsl:template <xsl:param <xsl:param0</xsl:param> <xsl:choose> <xsl:when <xsl:call-template <xsl:with-param <xsl:with-param </xsl:call-template> </xsl:when> <xsl:otherwise> <xsl:value-of </xsl:otherwise> </xsl:choose> </xsl:template> If you are more used to thinking in Perl, the following snippet illustrates the same principle: my $order_total = &price_total('0', @order_items); sub price_total { my ($total, @items) = @_; if (@items) { my $data = shift (@items); &price_total($total + $data->{price} * $data->{quantity}, @items); } else { return $total; } } This short introduction to XSLT's syntax and features really only touches the surface of what it can achieve. If what you read here intrigues you, I strongly recommend picking up one of the many fine books that cover the language in much greater depth.
https://flylib.com/books/en/1.117.1.36/1/
CC-MAIN-2018-39
en
refinedweb
Application wide settings can be a long conversation when starting a new application. Here are just a few questions when some of my teams started this conversation: - Do we use ENV variables? - What about manual bootstrap? - Configuration files? - Should we get them from the server? Using angular-cli & environment.ts The new angular-cli have the concept of different environments like development (dev) and production (prod). When creating a new application with the cli ng my-app and /environments folder is a part of the scaffold which contains the environment files. . ├── environment.ts ├── environment.prod.ts . . and then within the /src/app folder is and environment.ts file. Here are the contents: export const environment = { production: false }; As you might imagine, the *.production.ts file would have production: true. When the application is built (ng build) or served (ng serve), the environment.{env}.ts file from /environments is pulled and replaces the file within /src/app. By default this is dev. In order to grab the production version, set the environment to production using the following: #build $ ng build --environment=production #shorthand $ ng b -prod #serve $ ng serve --environment=production #shorthand $ ng s -prod Adding additional environments - not yet supported If there are additional environments your build process needs to support, you can add more files to the /environments folder with the name of said environment such as environment.qa.ts and then use. #build $ ng build --environment=environment One drawback here is there is no -qa shorthand supported. * Although the file is picked up by the CLI, there is a bug that the environments supported are only **prod or dev Adding a "qa" environment. Create a new file called environment.qa.ts in the /environment folder with the following contents: export const environment = { production: false, envName: 'qa' }; Add the qa entry to the .angular.cli.json config: "environments": { "dev": "environments/environment.ts", "prod": "environments/environment.prod.ts", "qa": "environments/environment.qa.ts" } Now the new environment is ready to use. Putting it together First, add a new property to each of the environment.{env}.ts files. export const environment = { production: false, envName: 'dev' }; Then in the myapp.component.ts file import settings, and set the binding. import { Component } from '@angular/core'; import { environment } from './environment'; @Component({ moduleId: module.id, selector: 'myapp-app', templateUrl: 'myapp.component.html', styleUrls: ['myapp.component.css'] }) export class MyappAppComponent { title = 'myapp works!'; environmentName = environment.envName; } Finally, in the html template add the h2 tag. <h1> {{title}} </h1> <h2> {{environmentName}} </h2> Now if you serve the app with each of the --environment={envName}, the binding will display accordingly. Note that the short hand ng s -prod will not work for custom environments i.e. ng s -qa, however you can use ng s -e qa Notes Although this is a nice feature there are some things to point out as shortcomings, challenges etc. only production or development are supported - For every property that is added to the /src/app/environment.ts, it must be added to the files in /config/environment.{env}.ts, this is a disruptive workflow. - Using this as a solution only works if you are building your application from source and not once then moving to each environment. this may change! There is a current issue on the angular-cli repo addressing this very issue : Closed - the /src/app/environment.tsis only a stub per se and there to support the typescript compiler and serves no other purpose. Enjoy, share, comment...
http://tattoocoder.azurewebsites.net/angular-cli-using-the-environment-option/
CC-MAIN-2018-39
en
refinedweb
. NOTE: This example is based on the XML Events specification [XML Events], which is proceeding independently from XForms, and thus might be slightly incorrect. <xforms:button> <xforms:caption>Reset</xforms:caption> <xforms:resetInstance ev: </xforms:button> This example recreates the behavior of the HTML reset button, which this specification does not define as an independent form control. For each built-in XForms action, this chapter lists the following: Name Description of behavior XML Representation Sample usage All elements defined in this chapter explicitly allow global attributes from the XML Events namespace, and apply the processing defined in that specification in section 2.3 [XML Events]. This action dispatches an XForms Event to a specific element identified by the target attribute. Two kinds of event can be dispatched: One of the predefined XForms events (i.e., xforms:event-name), in which case the bubbles and cancelable attributes are ignored and the standard semantics as defined in the Processing model apply. An event created by the XForms author with no predefined XForms semantics and as such not handled by default by the XForms processor. dispatch> <dispatch name = xsd:NMTOKEN target = xsd:IDREF bubbles = xsd:boolean : true cancelable = xsd:boolean : true /> name = xsd:NMTOKEN - required name of the event to dispatch. target = xsd:IDREF - required reference to the event target. bubbles = xsd:boolean : true - boolean indicating if this event bubbles—as defined in DOM2 events. cancelable = xsd:boolean : true - boolean indicating if this event is cancelable—as defined in DOM2 events. This action dispatches an xforms:refresh event. This action results in the XForms user interface being refreshed, and the presentation of user interface controls being updated to reflect the state of the underlying instance data --see 4.3.15 xforms:refresh refresh> <refresh/> This action dispatches an xforms:recalculate event. As a result, instance data nodes whose values need to be recomputed are updated as specified in the processing model --see 4.3.17 xforms:recalculate. recalculate> <recalculate/> This action dispatches an xforms:revalidate event. This results in the instance data being revalidated as specified by the processing model --see 4.3.16 xforms:revalidate revalidate> <revalidate/> This action sets focus to the form control referenced by the idref attribute by dispatching an xforms:focus event. Note that this event is implicitly invoked to implement XForms accessibility features such as accessKey. setFocus> <setFocus idref = xsd:IDREF /> idref = xsd:IDREF - required reference to a form control Setting focus to a repeating structure sets the focus to the member represented by the repeat cursor. This action traverses the specified XLink. loadURI> <loadURI (single node binding attributes) xlink:href = xsd:anyURI xlink:show = ("new" | "replace" | "embed" | "other" | "none") /> (single node binding attributes) - Selects the instance data node containing the URI. xlink:href - optional URI to load. xlink:show - optional link behavior specifier. Either the single node binding attributes, pointing to a URI in the instance data, or the attribute xlink:href are required. If both are present, the action has no effect. Possible values for attribute xlink. The document is incorporated into the current window in an application-specific manner. Form processing continues. The document is loaded in an application-specific manner. The application should look for other markup present in the link to determine the appropriate behavior. The document is loaded in an application-specific manner. The application should not look for other markup present in the link to determine the appropriate behavior. This action explicitly sets the value of the specified instance data node. setValue> <setValue (single node binding attributes) value = XPath expression > <!-- literal value --> </setValue> (single node binding attributes) - Selects the instance data node where the value is to be stored. value = XPath expression - (""). This action initiates submit processing by dispatching an xforms:submit event. Processing of event xforms:submit is defined in the processing model—see 4.4.1 xforms:submit. submitInstance> <submitInstance submitInfo = xsd:IDREF /> id = xsd:ID - optional unique identifier. submitInfo = xsd:IDREF - optional reference to a submitInfoelement. Note: This XForms Action is a convenient way of expressing the following: <dispatch target="mysubmitinfo" name="submitInstance"/> This action initiates reset processing by dispatching an xforms:reset event to the specified model. Processing of event xforms:reset is defined in the processing model—see 4.3.18 xforms:reset. resetInstance> <resetInstance model = xsd:IDREF /> model = xsd:IDREF - Selection of instance data for reset, defined in 8.12.3 Nodeset Binding Attributes This action marks a specific item as current in a repeating sequence (within 9.3 repeat). setRepeatCursor> <setRepeatCursor repeat = xsd:IDREF cursor = XPath expression that evaluates to number /> repeat = xsd:IDREF - required reference to a repeat cursor = XPath expression that evaluates to number - required 1-based offset into the sequence. This action is used to insert new entries into a homogeneous collection, e.g., a set of items in a shopping cart. Attributes of action insert specify the insertion in terms of the collection in which a new entry is to be inserted, and the position within that collection where the new node will appear. The new node is created by cloning the final member of the homogeneous collection specified by the initialization instance data. In this process, nodes of type xsd:ID are not copied. position operation has no effect. If the result is not a valid index for the node-set, it is clipped to either 1 or the size of the node-set, whichever is closer. Finally, the cursor for any repeat that is bound to the homogeneous collection where the node was added is updated to point to the newly added node. This action results in the insertion of newly created data nodes into the XForms data instance. Such nodes are constructed as defined in the initialization section of the processing model—see 4.2 Initialization Events. Following the insertion of the newly created node into the instance data, events xforms:recalculate, xforms:revalidate and xforms:refresh are triggered in sequence. As an example, this causes the instantiation of the necessary user interface for populating a new entry in the underlying collection when used in conjunction with repeating structures 9.3 repeat. insert> <insert (node-set binding attributes) at = XPath expression (nodeset binding attributes) - Selection of instance data nodes, defined in 8.12.3 Nodeset Binding Attributes at - required XPath expression evaluated to determine insert location. position - required selector if insert before/after behavior. An example of using insert with a repeating structure is located at 9.3 repeat. Note that XForms Action setValue can be used in conjunction with insert to provide initial values for the newly inserted nodes. This action deletes nodes from the instance data. The rules for delete processing are as follows: The homogeneous collection to be updated is determined by evaluating binding attribute nodeset. If the collection is empy, the delete action has no effect. The n-th node is deleted from the instance data, where n represents the number returned from node-set index evaluation, defined in 10.11 insert. If the last item in the collection is removed, the cursor position becomes 0. Otherwise, the cursor will point to the new n-th item. This action results in deletion of nodes in the instance data. Following the specified deletion, events xforms:recalculate, xforms:revalidate and xforms:refresh are triggered in sequence. As an example, this causes the destruction of the necessary user interface for populating a deleted entry in the underlying collection when used in conjunction with repeating structures 9.3 repeat. delete> <delete (node-set binding attributes) at = XPath expression /> (nodeset binding attributes) - Selection of instance data nodes, defined in 8.12.3 Nodeset Binding Attributes at - XPath expression evaluated to determine insert location. An example of using delete with a repeating structure is located at 9.3 repeat. This action selects one possible case from an exclusive list of choices e.g., encapsulated by switch see 9.2 switch, by: Dispatching an xforms:deselect event to the currently selected item. Dispatching an xform:select event to the item to be selected. toggle> <toggle case = xsd:IDREF /> case = xsd:IDREF - required reference to a case section inside the conditional construct The toggle action adjusts all selected attributes on the affected cases to reflect the new state. This action encapsulates an event handler authored in the specified scripting language. The handler may be inline, i.e., as PCDATA content of element script; alternatively it may be contained in an external resource and referred to via XML-events attribute ev:handler. Optional attribute role serves as documentation for the handler. script> <script type = xsd:string role=xsd:string > <!-- #CDATA --> </script> type = xsd:string - required mime-type identifier of scripting language. role = xsd:string - Optional descriptive text documenting the contained script. This action encapsulates a message to be displayed to the user. <message (single node binding attributes) xlink:href = xsd:anyURI <!-- mixed content --> </message> (single node binding attributes) - optional attributes that point to the instance data for a string message. xlink:href = xsd:anyURI - optional specifier of an external resource for the message. level - required message level identifier. The message specified can exist in instance data, in a remote document, or as inline text. If multiple captions are specified in this element, the order of preference is: ref, xlink:href, inline. A graphical browser might render an ephemeral message as follows: A graphical browser might render a modeless message as follows: A graphical browser might render a modal message as follows: Action action is used to group multiple actions. action> <action > <!-- Action handlers --> </action> When using element action to group actions, care should be taken to list the event on element action, rather than on the contained actions. <button> <caption>Click me</caption> <action ev: <resetInstance/> <setValue/> </action> </button> Notice that in the above example, ev:event="xforms:activate" occurs on element action. Placing ev:event="xforms:activate" on either or both of the contained actions will have no effect. This is because the above example relies on the defaulting of XML-Event button, not element action.
http://www.w3.org/TR/2002/WD-xforms-20020118/slice10.html
CC-MAIN-2015-27
en
refinedweb
SqlCommand.BeginExecuteXmlReader Method () Assembly: System.Data (in system.data.dll) Return ValueAn IAsyncResult that can be used to poll or wait for results, or both; this value is also needed when invoking EndExecuteXmlReader, which returns a single XML value. The BeginExecuteXmlReader method starts the process of asynchronously executing a Transact-SQL statement that returns rows as XML, so that other tasks can run concurrently while the statement is executing. When the statement has completed, developers must call the EndExecuteXmlReader method to finish the operation and retrieve the XML returned by the command. ntext data that contains valid XML. A typical BeginExecuteXmlReader query can be formatted as in the following C# example: When used with SQL Server 2005, lets multiple actions use the same connection.. Although command execution is asynchronous, value fetching is still synchronous. Because this overload does not support a callback procedure, developers need to either poll to determine whether the command has completed, using the IsCompleted property of the IAsyncResult returned by the BeginExecuteXmlReader XML data asynchronously. While waiting for the results, this simple application sits in a loop, investigating the IsCompleted property value. Once the process has completed, the code retrieves the XML and displays its contents. using System.Data.SqlClient; using System.Xml; class Class1 { static void Main() { // This example is not terribly effective, but it proves a point. // The WAITFOR statement simply adds enough time to prove the // asynchronous nature of the command. string commandText = "WAITFOR DELAY '00:00:03';" + "SELECT Name, ListPrice FROM Production.Product " + "WHERE ListPrice < 100 " + "FOR XML AUTO, XMLDATA";)) { SqlCommand command = new SqlCommand(commandText, connection); connection.Open(); IAsyncResult result = command.BeginExecuteXmlReader(); // Although it is not necessary, the following procedure //); XmlReader reader = command.EndExecuteXmlReader(result); DisplayProductInfo(reader); private static void DisplayProductInfo(XmlReader reader) { // Display the data within the reader. while (reader.Read()) { // Skip past items that are not from the correct table. if (reader.LocalName.ToString() == "Production.Product") { Console.WriteLine("{0: {1:C", reader["Name"], Convert.ToSingle(reader["ListPrice"]));.
https://msdn.microsoft.com/en-us/library/x76a3b72(v=vs.80).aspx
CC-MAIN-2015-27
en
refinedweb
Hi Claudius, >> - Crypto / x-crypt module (we can link to the spec at >>) > > I am not sure, but I dont think this is the implementation of that > spec, maybe check with Claudius. I think this is actually an early > alpha version, before the spec was written and the namespaces etc > changed. For the eXist-db documentation update I would like to be able to describe the relationship between the x-crypt module in eXist-db trunk and the EXPath Crypto spec (10 August 2011 edition, at). I recall that the x-crypt module predated the spec, and I know the x-crypt module works well (I use it on history.state.gov). But could you confirm that there is not an eXist-db implementation that corresponds to the 10 August 2011 version of the spec somewhere else? Also, if you have any other notes about either the x-crypt module or the EXPath Crypto spec's future direction, this info would be helpful to include. Thanks! Joe View entire thread
http://sourceforge.net/p/exist/mailman/message/29914360/
CC-MAIN-2015-27
en
refinedweb