Columns: text (string, 20 to 1.01M chars); url (string, 14 to 1.25k chars); dump (string, 9 to 15 chars); lang (4 classes); source (4 classes)
Basic Toggle Control (For Basic PRO Compiler)

    @ device PIC16F877
    define osc 4
    define adc_bits 8
    main:
        trisb = $00
        trisa = $ff
        adcon1 = 7
        lamp var portb.0
        sw1 var porta.0
        lamp = 0
        while(1)
            if(!sw1) then
                pause 10
                toggle lamp
                while(!sw1) : wend
            endif
            pause 10
        wend
    end

Basic input-output (For Basic PRO Compiler)

    define osc 4
    define adc_bits 8
    trisb = $00
    trisa = $ff
    adcon1 = 7
    lamp var portb
    start var porta.0
    stops var porta.1
    lamp = 0
    loop:
        if start = 0 then
            repeat
                pause 10
                lamp = $ff
            until stops = 0    ' if the condition is true, exit the loop
        endif
        lamp = $00
        pause 10
        goto loop
    end

Output bit (For Basic PRO Compiler)

    define osc 4
    define adc_bits 8
    trisb = $00
    lamp var portb.0
    low lamp
    loop:
        high lamp
        pause 1000
        low lamp
        pause 1000
        goto loop
    end

How to interface Servo Motor with PIC18F4550: Servo systems use the error-sensing negative feedback method to provide precise angular motion. Servo motors are used where precise control on ...

PIC18F4550 Programming and Tutorial Hardware C455 ...

A Video on Basic Introduction to PIC Microcontrollers [embedded YouTube video x5C4nYkIkZw]

Macro Techniques using PIC16C711 Microcontroller: Lots of news before getting into this article on macros and other ways to simplify your application development. The first is that I have finished going through the page proofs of "PC PhD" and it should be ready to go to the printers later this week. The book is about interfacing hardware to the PC, with a focus on MS-DOS software; Windows operation is also presented. The book contains four PICMicro p...

PIC12F675 Tutorial 3 : PIC Serial Port: The crea...
http://pic-microcontroller.com/tutorials/page/5/
CC-MAIN-2017-51
en
refinedweb
System.arraycopy(java.lang.Object, int, java.lang.Object, int, int)

It is not classwork but a skill evaluation question which I did not do very well on. So I would like to see different solutions. But thanks for your direction. RD

A linked list is a data structure in which a series of node objects, each holding a "Node nextNode" field, point to the next node in the list. Check that out for some decent info.

--------------------------
public class MyArray {
    private int max = 10;
    private Object[] objs = new Object[max];
    private int increment = 10;
    private int size = 0;

    public void add(Object o) {
        if (size == max) {
            // Grow the backing array before appending.
            max = max + increment;
            Object[] temp = new Object[max];
            System.arraycopy(objs, 0, temp, 0, size);
            objs = temp;
        }
        objs[size++] = o;
    }

    public int size() {
        return size;
    }

    public void remove(int index) {
        // Shift the elements after 'index' one slot to the left.
        System.arraycopy(objs, index + 1, objs, index, size - index - 1);
        objs[--size] = null;
    }

    public Object get(int index) {
        if (index >= size || index < 0)
            throw new ArrayIndexOutOfBoundsException(index);
        return objs[index];
    }
}
--------------------------

kimboon

If you HAVE to create your own, the one above looks about right EXCEPT that the size you should return is the number of items that have been added, not the physical size of the array; if you try to return an uninitialised cell you'll get an unwanted exception. (java.util.Vector, for instance, returns the number of items added, not the physical size of the internal storage array.) Also, wherever possible, you should try to implement applicable interfaces. In this case, implementing the Collection interface would make the class very useful (and powerful).

I agree with you that we should always consider the use of existing APIs. I'm sure rdong is aware of the Vector class (since he mentioned it himself), but I believe his purpose is to understand how we can implement a Vector class using a basic array... or maybe just to know how a Java array can be made "expandable"? I don't know, I'll leave it to rdong to answer :) BTW, to make my code posted above complete, I forgot to include the checking of the index in the remove() method. Thanks.
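One minimal way to act on that Collection suggestion is to extend java.util.AbstractList, which supplies iterators, contains(), addAll() and the rest of the Collection/List contract on top of get() and size(). The sketch below is only an illustration (the class name and growth increment are arbitrary), not code from the thread:

import java.util.AbstractList;

// Sketch only: AbstractList provides the Collection/List machinery
// once get(int) and size() are implemented; add(Object) is overridden
// so the list is growable.
public class GrowableArray extends AbstractList {
    private Object[] objs = new Object[10];
    private int size = 0;

    public boolean add(Object o) {
        if (size == objs.length) {               // grow before appending
            Object[] temp = new Object[objs.length + 10];
            System.arraycopy(objs, 0, temp, 0, size);
            objs = temp;
        }
        objs[size++] = o;
        return true;
    }

    public Object get(int index) {
        if (index < 0 || index >= size)
            throw new ArrayIndexOutOfBoundsException(index);
        return objs[index];
    }

    public int size() {
        return size;
    }
}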
https://www.experts-exchange.com/questions/20949711/how-to-implement-an-expandable-array.html
CC-MAIN-2017-51
en
refinedweb
Then each class needs a constructor named so that there won't be a collision in the global namespace. The RAII_CTOR macro is used to invoke an object constructor; the real work is done by RAII_CTOR_WITH_LINE. Functions such as malloc/free and fopen/fclose don't fit my macro requirements directly. Don't worry, it is quite straightforward to hammer malloc/free and fopen/fclose into the _ctor/_dtor schema. Here is an example of how code that employs my RAII macros could look. This code has some great advantages over the solutions I presented in my old post. First, it has no explicit goto (the goto is hidden, as much as it is in any other structured statement). Then you don't have to care about the construction order or about an explicit begin. I think that the function pointer and argument pointer juggling is a bit on (or maybe beyond) the edge of standard compliance. I tested the code on a PC, but maybe it fails on more exotic architectures. So, what do you think?
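The post's own wrapper code is not reproduced above, but the _ctor/_dtor idea it describes amounts to something like the following sketch in C (the names and signatures here are illustrative assumptions, not the post's macros):

#include <stdio.h>
#include <stdlib.h>

/* Illustrative _ctor/_dtor pairs for two standard resources, so that
   RAII-style macros can treat them like any other constructed object. */
static FILE *file_ctor(const char *name, const char *mode) { return fopen(name, mode); }
static void  file_dtor(FILE *f)                            { if (f != NULL) fclose(f); }

static void *buffer_ctor(size_t n) { return malloc(n); }
static void  buffer_dtor(void *p)  { free(p); }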
http://www.maxpagani.org/2010/03/
CC-MAIN-2017-51
en
refinedweb
I am facing the task of making a dynamic spreadsheet in Java that has a grid of cells and does operations between them. I have to "put" all my cells in some kind of container and get an integer index for every cell. I could only think of an array inside an array, or maybe a JTable. I should mention that later on I will have to implement this in an Android app. Please help me. I tried something like this (it's totally wrong):

public class BaseCell {
    public double value;
    int i = 0, j = 0;
    int row = 20, column = 5;
    BaseCell[] PrimaryCellArray = new BaseCell[i];
    for (i = 0; i < column; i++) {
        BaseCell[] SecondaryCellArray = new BaseCell[j];
        SecondaryCellArray.addObj();
    }
    PrimaryBaseCell.addObj.S // then add the secondary cell to the primary cell
    }
}

I might be misunderstanding the question, but IMO the best approach would be to use one of the list types:

int rows = 20, cols = 10;
List<BaseCell[]> myList = new ArrayList<BaseCell[]>();
for (int i = 0; i < rows; i++) {
    BaseCell[] secondaryCellArray = new BaseCell[cols];
    myList.add(secondaryCellArray);
}

To get access to some cell, call the row you need with myList.get(rowNum); it will return the row array you stored there before, and you can index into it for the column.
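If the single integer index per cell from the question is really needed, a slightly fuller sketch of the same idea is to keep the cells in one flat list and compute index = row * cols + col. BaseCell here is assumed to be the question's cell class with a no-argument constructor; the class name CellGrid is made up for the sketch:

import java.util.ArrayList;
import java.util.List;

public class CellGrid {
    private final int rows, cols;
    private final List<BaseCell> cells;

    public CellGrid(int rows, int cols) {
        this.rows = rows;
        this.cols = cols;
        this.cells = new ArrayList<BaseCell>(rows * cols);
        for (int i = 0; i < rows * cols; i++) {
            cells.add(new BaseCell());   // pre-fill the grid
        }
    }

    // Every cell gets a single integer index.
    public int indexOf(int row, int col) {
        return row * cols + col;
    }

    public BaseCell get(int row, int col) {
        return cells.get(indexOf(row, col));
    }
}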
https://codedump.io/share/L7aI2JNYqw1m/1/dynamical-java-spreadsheettable
CC-MAIN-2017-51
en
refinedweb
I've been trying to fix this error for a while now, I could really use some help.

C:\Users\Clayton\Documents\CH4P11.CPP(71) : error C2676: binary '>>' : 'class std::basic_ostream<char,struct std::char_traits<char> >' does not define this operator or a conversion to a type acceptable to the predefined operator

code:

#include <iostream>
#include <iomanip>
using namespace std;

int main()
{
    double baseSalary;
    double noOfServiceYears;
    double bonus;
    double totalSale;
    double additionalBonus;
    double payCheck;

    cout<<fixed<<showpoint;
    cout<<setprecision(2);

    cout<<"To calculate the salesperson's paycheck, please enter all data"
        <<" as accurately as possible."<<endl;
    cout<<"Please enter the base salary."<<endl;
    cin>>baseSalary;

    cout<<"Please enter the number of years that the salesperson has been"
        <<" with the company."<<endl;
    cin>>noOfServiceYears;

    if(noOfServiceYears <= 5)
        bonus = 10 * noOfServiceYears;
    else
        bonus = 20 * noOfServiceYears;

    cout<<"Finally, please enter the total sale made by the salesperson"
        <<" for the month."<<endl;
    cin>>totalSale;

    if(totalSale < 5000)
        additionalBonus = 0;
    else if(totalSale >= 5000 && totalSale < 10000)
        additionalBonus = totalSale * (0.03);
    else
        additionalBonus = totalSale * (0.06);

    payCheck = baseSalary + bonus + additionalBonus;

    cout<<"The paycheck for the salesperson will be approximately ">>payCheck>>" "
        <<" . Thank you for using this program."<<endl;

    return 0;
}
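For reference, the error points at the final cout statement, which mixes the extraction operator >> into an insertion chain; ostream only defines operator<<, so written consistently the line would read:

    cout<<"The paycheck for the salesperson will be approximately "<<payCheck
        <<". Thank you for using this program."<<endl;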
https://www.daniweb.com/programming/software-development/threads/258232/error-c2676
CC-MAIN-2017-51
en
refinedweb
Visual C# IntelliSense

Visual C# IntelliSense is available when coding in the editor, and while debugging in the Immediate Mode command window.

Completion lists

The IntelliSense completion lists in Visual C# contain tokens from List Members, Complete Word, and more. They provide quick access to:

- Members of a type or namespace
- Variables, commands, and function names
- Code snippets
- Language keywords
- Extension methods

The completion list in C# is also smart enough to filter out irrelevant tokens and pre-select a token based on context. For more information, see Filtered Completion Lists.

Code snippets in completion lists

In Visual C#, the completion list includes code snippets to help you easily insert predefined bodies of code into your program. Code snippets appear in the completion list as the snippet's shortcut text.

enum keyword: When you press the SPACEBAR after an equal sign for an enum assignment, a completion list appears. An item is automatically selected in the list, based on the context in your code. For example, items are automatically selected in the completion list after you type the keyword return and when you make a declaration; the list similarly pre-selects the most recently used members and completes override declarations.

Automatic code generation

Add using

The Add using IntelliSense operation automatically adds the required using directive to your code file. This feature applies when a type reference cannot be resolved: a red squiggle appears on that line of code, and you can then invoke Add using through the Quick Action. The Quick Action is only visible when the cursor is positioned on the unbound type. Click the light bulb icon, and then choose using System.Xml; to automatically add the using directive.

Remove and sort usings

The Remove and Sort Usings option sorts and removes using and extern declarations without changing the behavior of the source code. Over time, source files may become bloated and difficult to read because of unnecessary and unorganized using directives. The Remove and Sort Usings option compacts source code by removing unused using directives and improves readability by sorting them. On the Edit menu, choose IntelliSense, and then choose Organize Usings.

Generate from usage

A red wavy underline appears under each undefined identifier, and a Quick Actions light bulb is displayed for it. When you rest the mouse pointer on the identifier, an error message appears in a tooltip. To display the appropriate options, you can use one of the following procedures:

- Click the undefined identifier. A Quick Actions light bulb appears under the identifier. Click the light bulb.
- Click the undefined identifier, and then press Ctrl + . (Ctrl + period).
- Right-click the undefined identifier, and then click Quick Actions and Refactorings.

The options that appear can include the following:

- Generate property
- Generate field
- Generate method
- Generate class
- Generate new type... (for a class, struct, interface, or enum)
- Generate event handlers

Note: If a new delegate that is created by IntelliSense references an existing event handler, IntelliSense communicates this information in the tooltip. You can then modify this reference; the text is already selected in the Code Editor. Otherwise, automatic event hookup is complete at this point. If you press Tab, IntelliSense stubs out a method with the correct signature and puts the cursor in the body of your event handler.

Note: Use the Navigate Backward command on the View menu (Ctrl + -) to go back to the event hookup statement.
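As a small illustration of the Add using behavior described above (the class and file names here are just examples, not from the documentation):

using System.Xml;   // added automatically when the "using System.Xml;" Quick Action is accepted

class Example
{
    void Load()
    {
        // Before the directive above is added, XmlDocument shows a red squiggle
        // because the type reference cannot be resolved.
        XmlDocument doc = new XmlDocument();
        doc.Load("data.xml");
    }
}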
See also Using IntelliSense Visual Studio IDE
https://docs.microsoft.com/en-us/visualstudio/ide/visual-csharp-intellisense
CC-MAIN-2017-51
en
refinedweb
public class serverz extends Frame implements Runnable

You can then create a thread with:

new Thread(new serverz());
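A slightly fuller sketch of the same pattern (the run() body and the window setup below are placeholders, not part of the original answer):

import java.awt.Frame;

// The class extends Frame for the UI and implements Runnable,
// so the server work can still run on its own thread.
public class serverz extends Frame implements Runnable {
    public void run() {
        // server loop goes here (placeholder)
    }

    public static void main(String[] args) {
        serverz s = new serverz();
        s.setSize(400, 300);
        s.setVisible(true);
        new Thread(s).start();   // start the Runnable on a new thread
    }
}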
https://www.experts-exchange.com/questions/20807618/extend-thread-and-frame.html
CC-MAIN-2017-51
en
refinedweb
sbt server reboot

inside the sbt shell

Before we talk about the server, I want to take a quick detour. When I think of sbt, I mostly think of it in terms of the task dependency graph and its parallel-processing engine. There's actually a sequential loop above that layer, which processes a command stored in State as a Seq[String] and calls itself again with the new State. An interesting thing to note is that the new State object may contain additional commands beyond what it started with, and it could even block on an IO device that waits for a new command. That is how the sbt shell works, and the IO device that it blocks on is me, the human. The sbt shell is a command in sbt called shell. Its implementation is short enough that it's helpful to read it:

def shell = Command.command(Shell, Help.more(Shell, ShellDetailed)) { s =>
  val history = (s get historyPath) getOrElse Some(new File(s.baseDir, ".history"))
  val prompt = (s get shellPrompt) match { case Some(pf) => pf(s); case None => "> " }
  val reader = new FullReader(history, s.combinedParser)
  val line = reader.readLine(prompt)
  line match {
    case Some(line) =>
      val newState = s.copy(onFailure = Some(Shell),
        remainingCommands = line +: Shell +: s.remainingCommands).setInteractive(true)
      if (line.trim.isEmpty) newState else newState.clearGlobalLog
    case None => s.setInteractive(false)
  }
}

The key is this line right here: val newState = s.copy(onFailure = Some(Shell), remainingCommands = line +: Shell +: s.remainingCommands). It prepends the command it just asked from the human, plus the shell command, onto remainingCommands, and returns the new state back to the command engine. Here's an example scenario to illustrate what's going on.

1. sbt starts. A gnome prepends the shell command to the remainingCommands sequence.
2. The main loop takes the first command from remainingCommands.
3. The command engine processes the shell command. It waits for the human to type something in.
4. I type in "compile". The shell command turns remainingCommands into Seq("compile", "shell").
5. The main loop takes the first command from remainingCommands.
6. The command engine processes whatever "compile" means (it could mean multiple compile in Compile tasks aggregated across all subprojects). The remainingCommands sequence becomes Seq("shell").
7. Go to step 2.

multiplexing on a queue

To support inputs from multiple IO devices (the human and the network), we need to block on a queue instead of JLine. To mediate these devices, we will create a concept called CommandExchange.

private[sbt] final class CommandExchange {
  def subscribe(c: CommandChannel): Unit = ....
  @tailrec def blockUntilNextExec: Exec = ....
  ....
}

To represent the devices, we will create another concept called CommandChannel. A command channel is a duplex message bus that can issue command executions and receive events in return.

what events?

To design CommandChannel, we need to step back and observe how we interact with the sbt shell currently. When you type in something like "compile", what happens next is that the compile task will print out warnings and error messages on the terminal window, and finish with either [success] or [error]. The return value of the compile task is not useful to the build user. As a side effect, the task also happens to produce some *.class files on the filesystem. The same can be said of the assembly task or the test task. When you run the tests, the results are printed on the terminal window. The messages that are displayed on the terminal contain the useful information for IDEs, such as compilation errors and test results.
Again, it's important to remember that these events are a completely different thing from the return type of the task: test's return type is Unit. For now, we'll have only one event called CommandStatus, which tells you whether the command engine is currently processing something or is listening on the command exchange.

network channel

For the sake of simplicity, suppose that we are going to deal with one network client for now. The wire protocol is going to be UTF-8 JSON delimited by a newline character over a TCP socket. Here's the Exec format:

{ "type": "exec", "command_line": "compile" }

An exec describes each round of command execution. When a JSON message is received, it is written into the channel's own queue. Here's the Status event format for now:

{ "type": "status_event", "status": "processing", "command_queue": ["compile", "server"] }

Finally, we will introduce a new Int setting called serverPort that can be used to configure the port number. By default this will be calculated automatically using the hash of the build's path. Here's a common interface for command channels:

abstract class CommandChannel {
  private val commandQueue: ConcurrentLinkedQueue[Exec] = new ConcurrentLinkedQueue()
  def append(exec: Exec): Boolean = commandQueue.add(exec)
  def poll: Option[Exec] = Option(commandQueue.poll)
  def publishStatus(status: CommandStatus, lastSource: Option[CommandSource]): Unit
}

server command

Now that we have a better idea of the command exchange and command channel, we can implement the server as a command.

def server = Command.command(Server, Help.more(Server, ServerDetailed)) { s0 =>
  val exchange = State.exchange
  val s1 = exchange.run(s0)
  exchange.publishStatus(CommandStatus(s0, true), None)
  val Exec(source, line) = exchange.blockUntilNextExec
  val newState = s1.copy(onFailure = Some(Server),
    remainingCommands = line +: Server +: s1.remainingCommands).setInteractive(true)
  exchange.publishStatus(CommandStatus(newState, false), Some(source))
  if (line.trim.isEmpty) newState else newState.clearGlobalLog
}

This is more or less the same as what the shell command is doing, except now we are blocking on the command exchange. In the above, exchange.run(s0) starts a background thread that listens on the TCP socket. When an Exec is available, it prepends the line and the "server" command. One benefit of implementing this as a command is that it has zero effect on batch mode running in a CI environment. You run sbt compile, and the sbt server will not start.

Let's look at this in action. Suppose we have a build that looks something like this:

lazy val root = (project in file(".")).
  settings(inThisBuild(List(
      scalaVersion := "2.11.7"
    )),
    name := "hello"
  )

Navigate to the build in your terminal, and run sbt server (using a custom build of 1.0.x):

$ sbt server
[info] Loading project definition from /private/tmp/minimal-scala/project
....
[info] Set current project to hello (in build file:/private/tmp/minimal-scala/)
[info] sbt server started at 127.0.0.1:4574
>

As you can see, the server has started on port 4574, which is unique to the build path. Now open another terminal, and run telnet 127.0.0.1 4574:

$ telnet 127.0.0.1 4574
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.

Type in the Exec JSON as follows, with a newline:

{ "type": "exec", "command_line": "compile" }

On the sbt server you should now see:

> compile
[info] Updating {file:/private/tmp/minimal-scala/}root...
[info] Resolving jline#jline;2.12.1 ...
[info] Done updating.
[info] Compiling 1 Scala source to /private/tmp/minimal-scala/target/scala-2.11/classes...
[success] Total time: 4 s, completed Mar 21, 2016 3:00:00 PM

and on the telnet window you should see:

{ "type": "exec", "command_line": "compile" }
{"type":"status_event","status":"processing","command_queue":["compile","server"]}
{"type":"status_event","status":"ready","command_queue":[]}

Here's a screenshot: Note that this API is defined in terms of the wire representation, not case classes etc.

IntelliJ plugin

While Johan and I were working on the server side, Martin researched how to make an IntelliJ plugin. Since the plugin is currently hardcoded to hit port 12700, let's add that to our build:

lazy val root = (project in file(".")).
  settings(inThisBuild(List(
      scalaVersion := "2.11.7",
      serverPort := 12700
    )),
    name := "hello"
  )

The IntelliJ plugin has three buttons: "Build on sbt server", "Clean on sbt server", and "Connect to sbt server". First run sbt server from the terminal, and then connect to the server. Next, hitting "Build on sbt server" should start compilation. It worked. Similar to the telnet session, the plugin currently just prints out the raw JSON events, but we can imagine this could contain more relevant information such as the compiler warnings.

console channel

The next piece of the puzzle was making a non-blocking readLine. We want to start a thread that listens to the human, but using JLine would make a blocking call that prevents anything else. I have a solution that seems to work for Mac, but I have not yet tested it on Linux or Windows. I am wrapping new FileInputStream(FileDescriptor.in) with the following:

private[sbt] class InputStreamWrapper(is: InputStream, val poll: Duration) extends FilterInputStream(is) {
  @tailrec final override def read(): Int =
    if (is.available() != 0) is.read()
    else {
      Thread.sleep(poll.toMillis)
      read()
    }
}

Now when I call readLine from a thread, it will spend most of its time sleeping instead of blocking on the IO. Similar to the shell command, this thread will read a single line and exit. When the console channel receives a status event from the CommandExchange, it will print out the next command. This emulates someone typing in a command and indicates that there has been an exec command from outside. If this works out, sbt server should function more or less like a normal sbt shell, except for the added feature that it will also accept input from the network.

summary and future works

- sbt server can be implemented as a plain command without changing the existing architecture much.
- A JSON-based socket API will allow IDEs to drive sbt from the outside safely.

Now that we can change the sbt code, we should allow Exec to have a unique ID that gets included into the associated events. I am thinking of something like the following:

{ "type": "exec", "command_line": "compile", "id": "29bc9b" } // I write this
{"type":"problem_event","message":"not found: value printl","severity":"error","position":{"lineContent":"printl","sourcePath":"\/temp\/minimal-scala","sourceFile":"file:\/temp\/minimal-scala\/Hello.scals","line":2,"offset":2},"exec_id":"29bc9b"}

While command execution needs to be batched to avoid conflicts, we can allow querying of the latest State information, such as the current project reference, and setting values. The query-response communication could also go into the socket. The source is available from:
http://eed3si9n.com/sbt-server-reboot
CC-MAIN-2017-51
en
refinedweb
There are many ways to start Gazebo, open world models and spawn robot models into the simulated environment. In this tutorial we cover the ROS way of doing things: using rosrun and roslaunch. This includes storing your URDF files in ROS packages and keeping your various resource paths relative to your ROS workspace.

Using roslaunch to Open World Models

The roslaunch tool is the standard method for starting ROS nodes and bringing up robots in ROS. To start an empty Gazebo world similar to the rosrun command in the previous tutorial, simply run:

roslaunch gazebo_ros empty_world.launch

roslaunch Arguments

You can append the following arguments to the launch files to change the behavior of Gazebo:

paused: Start Gazebo in a paused state (default false)
use_sim_time: Tells ROS nodes asking for time to get the Gazebo-published simulation time, published over the ROS topic /clock (default true)
gui: Launch the user interface window of Gazebo (default true)
headless (deprecated), recording (previously called headless): Enable gazebo state log recording
debug: Start gzserver (Gazebo Server) in debug mode using gdb (default false)

roslaunch command

Normally the default values for these arguments are all you need, but just as an example:

roslaunch gazebo_ros empty_world.launch paused:=true use_sim_time:=false gui:=true throttled:=false recording:=false debug:=true

Other demo worlds are already included in the gazebo_ros package, including:

roslaunch gazebo_ros willowgarage_world.launch
roslaunch gazebo_ros mud_world.launch
roslaunch gazebo_ros shapes_world.launch
roslaunch gazebo_ros rubble_world.launch

Notice that in mud_world.launch a simple jointed mechanism is launched. The launch file for mud_world.launch contains the following:

<launch>
  <!-- We resume the logic in empty_world.launch, changing only the name of the world to be launched -->
  <include file="$(find gazebo_ros)/launch/empty_world.launch">
    <arg name="world_name" value="worlds/mud.world"/> <!-- Note: the world_name is with respect to GAZEBO_RESOURCE_PATH environmental variable -->
    <arg name="paused" value="false"/>
    <arg name="use_sim_time" value="true"/>
    <arg name="gui" value="true"/>
    <arg name="recording" value="false"/>
    <arg name="debug" value="false"/>
  </include>
</launch>

In this launch file we inherit most of the necessary functionality from empty_world.launch. The only parameter we need to change is the world_name parameter, substituting the empty.world world file with the mud.world file. The other arguments are simply set to their default values.

Continuing with our examination of the mud_world.launch file, we will now look at the contents of the mud.world file. The first several components of the mud world are shown below:

<sdf version="1.4">
  <world name="default">
    <include>
      <uri>model://sun</uri>
    </include>
    <include>
      <uri>model://ground_plane</uri>
    </include>
    <include>
      <uri>model://double_pendulum_with_base</uri>
      <name>pendulum_thick_mud</name>
      <pose>-2.0 0 0 0 0 0</pose>
    </include>
    ...
  </world>
</sdf>

See the section below to view this full world file on your computer. In this world file snippet you can see that three models are referenced. The three models are searched for within your local Gazebo Model Database. If not found there, they are automatically pulled from Gazebo's online database. You can learn more about world files in the Build A World tutorial. World files are found within the /worlds directory of your Gazebo resource path. The location of this path depends on how you installed Gazebo and the type of system you are on.
To find the location of your Gazebo resources, use the following command:

env | grep GAZEBO_RESOURCE_PATH

A typical path might be something like /usr/local/share/gazebo-1.9. Add /worlds to the end of the path and you should have the directory containing the world files Gazebo uses, including the mud.world file.

Before continuing on how to spawn robots into Gazebo, we will first go over file hierarchy standards for using ROS with Gazebo, so that we can make later assumptions. For now, we will assume your catkin workspace is named catkin_ws, though you can name it whatever you want. Thus, your catkin workspace might be located on your computer at something like:

/home/user/catkin_ws/src

Everything concerning your robot's model and description is located, as per ROS standards, in a package named /MYROBOT_description, and all the world files and launch files used with Gazebo are located in a ROS package named /MYROBOT_gazebo. Replace 'MYROBOT' with the name of your bot in lower case letters. With these two packages, your hierarchy should be as follows:

../catkin_ws/src
    /MYROBOT_description
        package.xml
        CMakeLists.txt
        /urdf
            MYROBOT.urdf
        /meshes
            mesh1.dae
            mesh2.dae
            ...
        /materials
        /cad
    /MYROBOT_gazebo
        /launch
            MYROBOT.launch
        /worlds
            MYROBOT.world
        /models
            world_object1.dae
            world_object2.stl
            world_object3.urdf
        /materials
        /plugins

Remember that the command catkin_create_pkg is used for creating new packages, though this can also easily be adapted for rosbuild if you must. Most of these folders and files should be self explanatory. The next section will walk you through making some of this setup for use with a custom world file.

Creating a world file

You can create custom .world files within your own ROS packages that are specific to your robots and packages. In this mini tutorial we'll make an empty world with a ground, a sun, and a gas station. The following is our recommended convention. Be sure to replace MYROBOT with the name of your bot, or if you don't have a robot to test with, just replace it with something like 'test'.

In the launch folder, create a YOUROBOT.launch file with the following contents (default arguments excluded):

<launch>
  <!-- We resume the logic in empty_world.launch, changing only the name of the world to be launched -->
  <include file="$(find gazebo_ros)/launch/empty_world.launch">
    <arg name="world_name" value="$(find MYROBOT_gazebo)/worlds/MYROBOT.world"/>
    <!-- more default parameters can be changed here -->
  </include>
</launch>

In the worlds folder, create a MYROBOT.world file containing the sun, ground plane and gas station models:

<?xml version="1.0" ?>
<sdf version="1.4">
  <world name="default">
    <include>
      <uri>model://sun</uri>
    </include>
    <include>
      <uri>model://ground_plane</uri>
    </include>
    <include>
      <uri>model://gas_station</uri>
    </include>
  </world>
</sdf>

Then source your workspace and launch the world:

. ~/catkin_ws/devel/setup.bash
roslaunch MYROBOT_gazebo MYROBOT.launch

You should see the following world model (zoom out with the scroll wheel on your mouse): You can insert additional models into your robot's world file and use the File->Save As command to export your edited world back into your ROS package.

Using roslaunch to Spawn URDF Robots

There are two ways to launch your URDF-based robot into Gazebo using roslaunch:

ROS Service Call Spawn Method: The first method keeps your robot's ROS packages more portable between computers and repository check-outs. It allows you to keep your robot's location relative to a ROS package path, but also requires you to make a ROS service call using a small (python) script.

Model Database Method: The second method allows you to include your robot within the .world file, which seems cleaner and more convenient but requires you to add your robot to the Gazebo model database by setting an environment variable.

We will go over both methods.
Overall, our recommended method is the ROS Service Call Spawn Method. This method uses a small python script called spawn_model to make a service call request to the gazebo_ros ROS node (named simply "gazebo" in the rostopic namespace) to add a custom URDF into Gazebo. The spawn_model script is located within the gazebo_ros package. You can use this script in the following way:

rosrun gazebo_ros spawn_model -file `rospack find MYROBOT_description`/urdf/MYROBOT.urdf -urdf -x 0 -y 0 -z 1 -model MYROBOT

To see all of the available arguments for spawn_model, including namespaces, trimesh properties, joint positions and RPY orientation, run:

rosrun gazebo_ros spawn_model -h

If you do not yet have a URDF to test, as an example you can download the baxter_description package from Rethink Robotics's baxter_common repo. Put this package into your catkin workspace by running git clone on that repository. You should now have a URDF file named baxter.urdf located within baxter_description/urdf/, and you can run:

rosrun gazebo_ros spawn_model -file `rospack find baxter_description`/urdf/baxter.urdf -urdf -z 1 -model baxter

You should then see something similar to: To integrate this directly into a ROS launch file, reopen the file MYROBOT_gazebo/launch/YOUROBOT.launch and add the following before the </launch> tag:

<!-- Spawn a robot into Gazebo -->
<node name="spawn_urdf" pkg="gazebo_ros" type="spawn_model" args="-file $(find baxter_description)/urdf/baxter.urdf -urdf -z 1 -model baxter" />

Launching this file, you should see the same results as when using rosrun.

If your URDF is not in XML format but rather in XACRO format, you can make a similar modification to your launch file. You can run this PR2 example by installing this package:

ROS Jade: sudo apt-get install ros-jade-pr2-common

Then add the corresponding spawn snippet to the launch file created previously in this tutorial. Launching this file, you should see the PR2 in the gas station as pictured. Note: at this writing there are still a lot of errors and warnings in the console output that need to be fixed in the PR2's URDF due to Gazebo API changes.

The second method of spawning robots into Gazebo allows you to include your robot within the .world file, which seems cleaner and more convenient but also requires you to add your robot to the Gazebo model database by setting an environment variable. This environment variable is required because of the separation of ROS dependencies from Gazebo; URDF package paths cannot be used directly inside .world files because Gazebo does not have a notion of ROS packages. To accomplish this method, you must make a new model database that contains just your single robot. This isn't the cleanest way to load your URDF into Gazebo but accomplishes the goal of not having to keep two copies of your robot URDF on your computer. If the following instructions are confusing, refer back to the Gazebo Model Database documentation to understand why these steps are required.

We will assume your ROS workspace file hierarchy is set up as described in the above sections. The only difference is that now a model.config file is added to your MYROBOT_description package like so:

../catkin_ws/src
    /MYROBOT_description
        package.xml
        CMakeLists.txt
        model.config
        /urdf
            MYROBOT.urdf
        /meshes
            mesh1.dae
            mesh2.dae
            ...
        /materials
        /plugins
        /cad

This hierarchy is specially adapted for use as a Gazebo model database by means of the following folders/files. Each model must have a model.config file in the model's root directory that contains meta information about the model. Basically, copy this into a model.config file, replacing model.urdf with your file name:

<?xml version="1.0"?>
<model>
  <name>MYROBOT</name>
  <version>1.0</version>
  <sdf>urdf/MYROBOT.urdf</sdf>
  <author>
    <name>My name</name>
    <email>name@email.address</email>
  </author>
  <description>
    A description of the model
  </description>
</model>

Unlike for SDFs, no version is required for the URDF.

Finally, you need to add an environment variable to your .bashrc file that tells Gazebo where to look for model databases. Using the editor of your choice, edit "~/.bashrc". Check if you already have a GAZEBO_MODEL_PATH defined. If you already have one, append to it using a semi-colon; otherwise add the new export. Assuming your catkin workspace is in ~/catkin_ws/, your path should look something like:

export GAZEBO_MODEL_PATH=/home/user/catkin_ws/src/

Now test to see if your new Gazebo Model Database is properly configured by launching Gazebo:

gazebo

and clicking the "Insert" tab on the left. You will probably see several different drop-down lists that represent different model databases available on your system, including the online database. Find the database corresponding to your robot, open the sub menu, click on the name of your robot and then choose a location within Gazebo to place the robot, using your mouse.

Using roslaunch with the Model Database

The advantage of the model database method is that now you can include your robot directly within your world files, without using a ROS package path. We'll use the same setup from the section "Creating a world file" but modify the world file so that it also includes your robot from the model database, just before the closing tags:

    <include>
      <uri>model://MYROBOT</uri>
    </include>
  </world>
</sdf>

roslaunch MYROBOT_gazebo MYROBOT.launch

The disadvantage of this method is that your packaged MYROBOT_description and MYROBOT_gazebo are not as easily portable between computers: you first have to set the GAZEBO_MODEL_PATH on any new system before being able to use these ROS packages.

Also useful is the format for exporting model paths from a package.xml:

<export>
  <gazebo_ros gazebo_model_path="${prefix}/models"/>
  <gazebo_ros gazebo_media_path="${prefix}/models"/>
</export>

The ${prefix} is something that new users might not immediately know about either, and it is necessary here. It would also be useful to have some info on how to debug these paths from the ROS side, e.g. that you can use

rospack plugins --attrib="gazebo_media_path" gazebo_ros

to check the media path that will be picked up by Gazebo.

Now that you know how to create roslaunch files that open Gazebo, world files and URDF models, you are ready to create your own Gazebo-ready URDF model in the tutorial Using A URDF In Gazebo.
http://gazebosim.org/tutorials/?tut=ros_roslaunch
CC-MAIN-2017-51
en
refinedweb
Aspect-oriented programming (AOP) and the .NET Framework

The article explores the details of utilizing the intrinsic features of the .NET Framework to implement an aspect-oriented programming model. There has been a lot of talk about Aspect Oriented Programming (AOP) and how this new programming paradigm will revolutionize the way we develop software, much like Object Oriented Programming (OOP) did about 15 years ago. The AOP model allows a developer to implement individual concerns of a system in a loosely coupled fashion and combine them into a whole. This enables one to easily modify the behavior of a system dynamically at runtime and thus better enables a system design to meet growing requirements. In this article, we will look in depth at incorporating AOP semantics into components developed using the .NET Framework. There is a plethora of articles written about the concepts and elements of AOP, and how AOP frameworks implement these concepts. Hence, this article will not discuss AOP concepts per se in detail but will focus exclusively on how existing capabilities offered by the .NET Framework enable us to develop components that are aspect aware. The gist of the AOP model is to identify concerns such as logging, exception handling, etc. that cross-cut several layers in a system, modularize them into orthogonal units called aspects, and inject these aspects at selected execution points in a system dynamically at runtime. One could leverage the interception mechanisms built into the Common Language Runtime (CLR) in conjunction with metadata to accomplish this task without having to resort to third-party AOP frameworks and tools. We will employ a trivial example that uses the most banal and ubiquitous of all cross-cutting concerns, namely logging, and delve into the details of the various .NET programming artifacts that need to be developed in order to inject this aspect at runtime. All examples and code listings are shown in C#.

Working with metadata

When a C# compiler compiles the source code, it creates a PE (portable executable) file which has four main parts, namely, the PE header, CLR header, metadata and IL. Metadata is a block of binary data that consists of three categories of tables, namely, definition tables, reference tables and manifest tables. As a compiler compiles the source code, entries are created in the definition tables corresponding to each artifact such as the types, methods, fields, properties and events defined in the source code. Anybody who has done any non-trivial development with the .NET Framework is well aware of the usage of attributes. Attributes allow a developer to define information that is emitted by the C# compiler into the metadata. Attributes can be used to annotate any programming artifact for which information resides in a metadata definition table. This information can be queried by the CLR at runtime so as to dynamically alter the manner in which the code executes. In addition to the CLR-defined attributes such as Serializable, WebMethod, PrincipalPermission etc., developers may create their own custom attributes. A custom attribute is merely an instance of a type which must be derived directly from System.Attribute. We can also specify what target (assembly, class, interface, method, field or property) the attribute applies to. Following is an example of defining a custom attribute and using it to annotate a method.
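A minimal example along those lines, using the MyAttribute and SomeClass.someMethod names referred to below (this is a sketch rather than the article's own listing):

using System;

[AttributeUsage(AttributeTargets.Method)]
public class MyAttribute : Attribute
{
}

public class SomeClass
{
    [My]
    public void someMethod()
    {
        // The compiler emits metadata recording that someMethod is annotated with MyAttribute.
    }
}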
Our goal is to develop an aspect-oriented programming model wherein, by merely annotating a method, field or property in a class with a custom attribute, we force the runtime to trigger an event whenever that method, field or property is called or accessed. The event thus triggered executes the logic that we have isolated as an aspect. Merely defining an attribute type and using it as illustrated above is useless by itself. All that we accomplish with a custom attribute such as MyAttribute is to make the C# compiler emit corresponding metadata when it compiles the source code for SomeClass.someMethod. It does not have any meaning beyond that. At runtime, the CLR merely ignores this metadata information. In order to affect runtime behavior, we have to force the CLR to check for the presence of MyAttribute and subsequently execute code that injects the desired behavior if the attribute is present. The key to accomplishing this goal is to piggyback on the existing interception mechanisms in the CLR.

Context-bound object activation

The central concept in the interception mechanism employed by the CLR is the notion of a context associated with a target object. A context is a set of properties or usage rules that define an environment. One or more target objects may share the same context. The rules defined in a context are enforced by the CLR at runtime when the objects are entering or leaving the context. A context is an object of type System.Runtime.Remoting.Contexts.Context. Objects that reside in a context and are bound to the context rules are called context-bound objects. Objects that are not context-bound are called agile objects. The context associated with the current thread of execution is obtained using the property System.Threading.Thread.CurrentContext. While executing within a method or accessing a field or property of a context-bound object, the CLR will ensure that this thread property always returns a reference to the same context object that the context-bound object was originally bound to, thus guaranteeing that the latter executes in the same environment every time. In order to make an object context-bound, its type must be derived from System.ContextBoundObject, which in turn derives from System.MarshalByRefObject. An important distinction between a context-bound and an agile object is that a client can never obtain a direct reference to an instance of a context-bound object. When the new operator is used to instantiate a context-bound object, the CLR intervenes and executes a series of steps which together constitute the object activation phase. During this phase, the CLR generates two proxies, namely, an instance of TransparentProxy and an instance of RealProxy. Both these types are defined in the System.Runtime.Remoting.Proxies namespace. The TransparentProxy is merely a stand-in for the target object, and it is this proxy that a client always gets a reference to whenever it tries to communicate with a context-bound object. The injection of these proxies thus enables the CLR to always intercept any type of access to the context-bound object. In addition to these proxies, the CLR also provides an extensible mechanism for inserting message sinks between the proxies and the target object. A message sink is an object that implements the type System.Runtime.Remoting.Messaging.IMessageSink. After the object activation phase is completed, the objects that appear in the invocation chain between the client and the target object may be visualized as illustrated in Figure 1 below.
The method invocation by the client is delegated by the TransparentProxy to the RealProxy, which in turn sends it through a chain of IMessageSink instances before it reaches the target object. These message sinks provide us an entry point in order to inject the desired aspect at runtime.

Where do custom attributes fit in?

During the object activation phase of a context-bound object, the CLR inspects whether the target object type has been annotated with a special category of custom attributes which implement the interface System.Runtime.Remoting.Contexts.IContextAttribute. Custom attribute types that implement the IContextAttribute interface are called context-attributes. The CLR provides an opportunity for each such context-attribute to add zero or more context-properties into the context that the target object will be associated with. It does so by first creating an instance of the context-attribute and invoking the IsContextOK method, passing in the Context associated with the caller. Each context-attribute votes on whether the calling context is acceptable to the target object. This step determines whether the Context associated with the caller is shared or a new Context is created for the target object. Subsequently, the CLR invokes the GetPropertiesForNewContext method on each context-attribute. The IConstructionCallMessage argument represents the construction call request for the target object. It is in this method that the context-properties are added to the context. A context-property is an object that implements the interface System.Runtime.Remoting.Contexts.IContextProperty. After the Context associated with the target object has been properly initialized with context-properties, the CLR provides each context-property an opportunity to inject a message sink between the RealProxy and the target object as illustrated in Figure 1 above. Subsequently, when a method is called on the context-bound object, the CLR routes the invocation through the SyncProcessMessage method of the message sink. An object of type System.Runtime.Remoting.Messaging.IMessage is passed into this method which represents the invocation request that originated from the client. Inside this method, one could add code that implements the runtime behavior associated with the corresponding context-attribute. Custom attributes thus facilitate a mechanism with which we can inject aspects at runtime.

Interception with custom sinks

Message sinks are categorized into four logical groups, namely, envoy, client, server and object sinks. A context-property can indicate to the CLR the type(s) of message sink(s) that it wishes to inject into the invocation chain by implementing one or more of the four interfaces, namely, IContributeEnvoySink, IContributeClientContextSink, IContributeServerContextSink, and IContributeObjectSink. These interfaces are defined in the System.Runtime.Remoting.Contexts namespace. The implementation of the GetXXXSink method in these interfaces typically involves creating a message sink that implements the desired aspect and adding to it a reference to the next downstream sink so that they are all chained together. It was mentioned earlier that, while executing a method or accessing a field or property of a context-bound object, the thread property System.Threading.Thread.CurrentContext always points to the context object that the context-bound object was originally bound to.
In order to make this happen, whenever a client calls a context-bound object, the CLR has to switch the thread context somewhere in the invocation chain between the client and the target. It is important to note that this switch occurs after the envoy sinks and client sinks have intercepted the call and before the server sinks and object sinks get a chance to perform their interception. Also, note that when the CLR is assembling these sinks in the invocation chain, it obtains the envoy, server and object sinks from the context associated with the target object. The client sinks, on the other hand, are obtained from the caller's context. These concepts are illustrated below. Knowledge of these concepts is essential to understand the execution environment under which an aspect is injected using any or all of these message sinks. For most scenarios, it would suffice to inject server sinks into the invocation chain. A server sink is associated with the type of the target object.

Putting it all together

We have covered quite a bit of ground about the innards of the interception mechanism. So, let's recapitulate the key points before we move on to an example.

- Instances of ContextBoundObject are always associated with a Context.
- A Context can have zero or more properties which are instances of IContextProperty.
- The properties are created and added to the Context by custom attributes that annotate the ContextBoundObject. These custom attributes must be instances of IContextAttribute.
- A context-property can also perform the role of a message sink factory by implementing one or more of the four interfaces, namely, IContributeEnvoySink, IContributeClientContextSink, IContributeServerContextSink, and IContributeObjectSink. A message sink is an instance of IMessageSink.
- Access to a ContextBoundObject is always intercepted by the CLR using proxies and message sinks. A message sink is inserted into the invocation chain by the CLR during object activation and can inject the desired aspect at runtime.

The relationships between these entities are illustrated with the two UML diagrams below.

And now, a practical example

We will use the same example that has already been bludgeoned to death by numerous articles on AOP, namely, using aspect-oriented programming to log diagnostics of a method call. First, we define our custom attribute type LogAttribute, shown in Listing 1. This context-attribute ensures that the target object is always instantiated in its own context by returning false from the IsContextOK method. The GetPropertiesForNewContext method of the context-attribute creates and adds a context-property LogProperty to the context via the construction call request passed in to the method. The implementation of the context-property is shown in Listing 2. The context-property should return true from the IsNewContextOK method. The CLR invokes this method on each context-property after they have all been added to the target context during the object activation phase. It gives each property a chance to inspect whether the context is in a properly initialized state or not. If this method returns false, the CLR will raise a System.Runtime.Remoting.RemotingException exception. LogProperty inserts a message sink into the invocation chain by implementing the IContributeServerContextSink interface. To keep things simple in this example, we will not insert any envoy, client or object sinks.
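A rough sketch of what Listings 1 and 2 describe might look like the following; this is not the article's own code, and the pass-through LoggingServerSink shown here is only a stand-in for the real sink of Listing 3, which is discussed next:

using System;
using System.Runtime.Remoting.Activation;
using System.Runtime.Remoting.Contexts;
using System.Runtime.Remoting.Messaging;

[AttributeUsage(AttributeTargets.Class)]
public class LogAttribute : Attribute, IContextAttribute
{
    // Returning false forces the target object into its own, new context.
    public bool IsContextOK(Context ctx, IConstructionCallMessage ctorMsg)
    {
        return false;
    }

    // Add the context-property that will later contribute the server sink.
    public void GetPropertiesForNewContext(IConstructionCallMessage ctorMsg)
    {
        ctorMsg.ContextProperties.Add(new LogProperty());
    }
}

public class LogProperty : IContextProperty, IContributeServerContextSink
{
    public string Name
    {
        get { return "LogProperty"; }
    }

    // The fully initialized context is acceptable; nothing to veto here.
    public bool IsNewContextOK(Context newCtx)
    {
        return true;
    }

    public void Freeze(Context newContext)
    {
    }

    // Chain the logging sink in front of the next server sink.
    public IMessageSink GetServerContextSink(IMessageSink nextSink)
    {
        return new LoggingServerSink(nextSink);
    }
}

// Minimal stand-in for the sink from Listing 3: it simply passes calls through.
public class LoggingServerSink : IMessageSink
{
    private readonly IMessageSink next;

    public LoggingServerSink(IMessageSink next)
    {
        this.next = next;
    }

    public IMessageSink NextSink
    {
        get { return next; }
    }

    public IMessage SyncProcessMessage(IMessage msg)
    {
        // Logging of the invocation would happen here.
        return next.SyncProcessMessage(msg);
    }

    public IMessageCtrl AsyncProcessMessage(IMessage msg, IMessageSink replySink)
    {
        return next.AsyncProcessMessage(msg, replySink);
    }
}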
The GetServerContextSink method creates an instance of LoggingServerSink, which implements the desired behavior. The implementation of the message sink is shown in Listing 3. Note that a server sink such as LoggingServerSink gets a chance to intercept the construction as well as any method invocations on a target object. In this example, we are only interested in intercepting a method invocation. The SyncProcessMessage method is where the message sink injects the desired aspect, namely, that of logging information about the invocation parameters and the time taken to execute the target method. The context-bound object Aspects.Target as well as a simple calling client are shown in Listing 4. Note that the target object type is annotated with the context-attribute LogAttribute. Usage of this attribute triggers the interception of method invocations on the type Target by the server sink LoggingServerSink and hence generates the logging information. Voila! We have made our .NET component Aspects.Target aspect-aware.

We aren't done yet!

There is still a little more ground to cover. In the above example, we applied the LogAttribute custom attribute to the Target class declaration. Because the declaration scope of this attribute was defined as AttributeTargets.Class, this causes the aspect injection to be applied to all the methods in the Aspects.Target class. So, if we wish to alter the runtime behavior of only the add method with an aspect, what do we do? One would guess that the declaration scope of the LogAttribute would merely need to be changed to AttributeTargets.Method and the attribute moved to the top of the add method declaration, as shown in the pseudo-code below. Unfortunately, it is not as straightforward as that. You see, when the CLR inspects the type of a context-bound object during the object activation phase, it checks only for context-attributes that are defined at the class scope. Attributes declared at any other scope are ignored. Thus, if we move the LogAttribute custom attribute to method scope, we would go back to square one, where all that we accomplish is to make the C# compiler emit some metadata which is blissfully ignored by the CLR at runtime. The workaround for this problem is as follows:

- We retain the context-attribute at the class scope, as this provides us an entry point into the object activation phase and allows us to insert our custom message sink into the invocation chain.
- The custom message sink, however, does not inject the desired aspect, as the latter is specific to the method, field or property that has been invoked or accessed. Instead, it delegates this task to another .NET component. This component could be the custom attribute itself that annotates the method, field or property in question, or it could be just any plain .NET component.

The modified implementation of the server sink, ModifiedLoggingServerSink, is shown in Listing 5. As before, this sink intercepts only method invocations. Using APIs in the System.Reflection namespace, it checks whether the method in question has been annotated with a custom attribute of type LogMethodAttribute before injecting the aspect. The Log method in the declaring class of this attribute injects the logging aspect that originally resided in the message sink. Note that LogMethodAttribute, shown in Listing 6, has method scope and is not a context-attribute, which is fine, as the CLR allows only context-attributes at the class scope to participate in object activation. And, that's it!
To inject the logging aspect into any method invocation, all that we have to do is annotate the method in question with the custom attribute LogMethodAttribute and annotate the method's class with the context-attribute LogAttribute. The modified version of the advised target object, namely, Aspects.Target, is shown in Listing 7.

Design Improvements

Before talking about improvements to the AOP model discussed in this article, let's quickly run down some of the key terms in AOP parlance.

- Join-point: a specific point of execution in the code, such as the call of a method, access of a field or property, throwing of an exception etc.
- Advice: a particular action taken when a join-point is executed. An advice encapsulates the behavior that we want to inject into the code at runtime.
- Aspect: a concern that cuts across multiple layers in an application. In an AOP framework, it is typically a class that encapsulates one or more advices.
- Point-cut: an expression language used by AOP frameworks, much akin to regular expressions. A point-cut is used to represent a set of join-points specifying when an advice is to be executed.

The sample code could be generalized and improved in several ways. In the example, the message sink ModifiedLoggingServerSink is hard-coded to inject only one advice, namely, the Log method, and it considers only one join-point, namely, methods that have been annotated with the LogMethodAttribute custom attribute. In order to have maximum flexibility, we would like to be able to turn the advice on/off, control which advice is triggered, as well as specify the join-points where the injection occurs, using one or more of the following:

- Name of the method, field or property being invoked or accessed
- Types of parameters passed into a method, and/or return value of a method
- Type of custom attribute, if any, decorating the method, field or property
- Declaring type of the method, field or property

And, we would, of course, want to do all of this without having to recompile the code. This requirement is addressed by loading such information from an external configuration file and making that information available to the message sink so that it can determine which aspect/advice to inject at a given join-point. The details of such an implementation are not presented here. For example, using the configuration syntax of the JBoss AOP framework as a reference, this could be done with a configuration that specifies two separate bindings. The first one causes the advice Log to be injected whenever the execution is at the join-point of calling the method Aspects.Target.add. The aspect that provides this advice is the class Aspects.Attributes.LogMethodAttribute. The second binding triggers the advice SomeMethod to be injected when the execution is at the join-point of accessing any field that has been annotated with the attribute LogMethodAttribute. Here, the aspect is implemented by the class Aspects.SomeClass. The AOP model outlined in this article is not capable of injecting aspects at join-points of a plain .NET component. We must have a context-attribute annotating the declaring class of the component whose access we want to advise.

Concluding remarks

The article explored the details of utilizing the intrinsic features of the .NET Framework to implement an aspect-oriented programming model. Custom attributes were used in conjunction with the extensible interception mechanism provided by the CLR to implement this model.
Even though the capabilities of this AOP model are a far cry from those of full-fledged AOP frameworks, it is nonetheless adequate for implementing simple aspect-injection semantics without having to resort to any third-party tools. Download the source code for this article. About the author Dr. Viji Sarathy has worked for several years in the design of object-oriented, distributed software systems. More recently, he has been specializing in the design and implementation of software architectures with both .NET Framework and J2EE technologies. He is an MCAD for .NET and a Sun Certified Enterprise Architect for J2EE. He is currently a Senior Software Architect at ComFrame Software Corp. Founded in 1997 in Birmingham, Ala., ComFrame delivers a wide range of custom software solutions to its clients. ComFrame has offices in Birmingham and Huntsville, Ala. and Nashville, Tenn.
http://searchwindevelopment.techtarget.com/tip/Aspect-oriented-programming-AOP-and-the-NET-Framework
CC-MAIN-2017-51
en
refinedweb
Posted 30 Mar 2015 Link to this post Posted 02 Apr 2015 Link to this post See What's Next in App Development. Register for TelerikNEXT. Posted 02 Jul 2015 Link to this post Hi, I am experiencing the same problem. Here is how I have defined my RadAsyncUpload <telerik:RadAsyncUpload <telerik:RadButton But it's failing for all except PDF? Do I need to add something? Please see attached output after Files have been selected. Thanks in advance Posted 06 Jul 2015 Link to this post Posted 22 Oct 2015 in reply to Konstantin Dikov Link to this post Hello, I am experiencing the same thing, however I cannot explain it. It is in the javascript client-side validation that it is happening. So far, it is happening just for "JPG" vs "jpg". For instance both "txt" and "TXT" work and I only have "txt" in the allowed file extensions list. If the file's extension is "JPG" and I only have "jpg" in the allowed file extensions, the javascript function detects -1 and throws the "This file type is not supported." error. Here is the js function that I am using: function getErrorMessage) { return ("This file type is not supported."); } else { return ("This file exceeds the maximum allowed size of 80 KB."); } } else { return ("not correct extension."); } } I am using Telerik.Web.UI v2015.2.623.45. If I set the allowed file extensions like this: AllowedFileExtensions="txt,jpg,JPG" it does not throw the error. Any suggestions as to why this is happening would be appreciated. Thanks, Ray Goforth Posted 26 Oct 2015 Link to this post if (sender.get_allowedFileExtensions().toLowerCase().indexOf(fileExtention.toLowerCase()) == -1) { return ( "This file type is not supported." ); } Posted 26 Oct 2015 in reply to Konstantin Dikov Link to this post Hi Konstantin, Thanks very much for the code. This does resolve the issue but I see why some people are experiencing this case sensitivity when others are not. The issue only happens when the maximum size has been exceeded. For instance: Lets say that my allowed file types is "jpg" and my max size is 80kb. If I try to upload a file "test.jpg" that is 85kb, the error says "This file exceeds the maximum allowed size of 80 KB.", which makes sense. If I rename the file to "test.JPG" and try to upload it, the error says "This file type is not supported." Thanks for your help! Ray Goforth Posted 25 Jan Link to this post Is there any progress on this issue? Ray's post in October 2015 is still happening in the demos. (see attached image). It is easy to resolve the error in aspx by just duplicating our 20+ allowed types, but this seems like a workaround the bug. Also, is there any interest in adding a DISallowed file extensions property? Andrew Posted 30
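Pulling the thread together, a fuller version of the client-side handler with the case-insensitive comparison applied might look like the sketch below. This is illustrative only: the extension-extraction logic is reconstructed, the 80 KB figure is the one quoted in the posts above, and it assumes the handler is wired to the control's OnClientValidationFailed event as in the online demos.

function getErrorMessage(sender, args) {
    // Derive the extension from the rejected file's name.
    var fileName = args.get_fileName();
    var dot = fileName.lastIndexOf(".");
    var fileExtension = dot >= 0 ? fileName.substring(dot + 1) : "";

    if (fileExtension !== "") {
        // Compare case-insensitively so "JPG" is treated the same as "jpg".
        var allowed = sender.get_allowedFileExtensions().toLowerCase();
        if (allowed.indexOf(fileExtension.toLowerCase()) === -1) {
            return "This file type is not supported.";
        }
        // The extension is allowed, so the failure must be the size limit.
        return "This file exceeds the maximum allowed size of 80 KB.";
    }
    return "not correct extension.";
}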
http://www.telerik.com/forums/allowedfileextensions-case-insensitive
CC-MAIN-2017-13
en
refinedweb
I'd like to use Python and appscript to set a given Google tab as the foremost tab. I can acquire the tab thusly: from appscript import * chrome = app('Google Chrome') tab_title = 'New York Times' win_tabs = [] for win in chrome.windows.get(): win_tabs.append([]) tnames = map(lambda x: x.title(), win.tabs()) rel_t = [t for t in win.tabs() if tab_title in t.title()] if len(rel_t): rel_t = rel_t[0] break se = app('System Events') You can use win.active_tab_index.set(number) to change the active tab. No need for system events. But the tab object doesn't have a reference to its own index, so change it to this (edit: rewrote to avoid a bug) def activate_chrome_tab(title): for window in chrome.windows.get(): for index, tab in enumerate(window.tabs()): if title in tab.title(): window.active_tab_index.set(index + 1) window.activate() window.index.set(1) return activate_chrome_tab(tab_title) (If there are multiple windows with the same tab name it'll focus each one in turn - this is how the original (Source: Chrome applescript dictionary) (But note appscript is not maintained)
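A small variation on the answer above, sketched here, also brings the Chrome application itself to the foreground rather than only raising the window within Chrome. The chrome.activate() call is the generic appscript activation command; since appscript is unmaintained, its behaviour on current macOS/Chrome versions is an assumption worth verifying.

from appscript import app

chrome = app('Google Chrome')

def focus_chrome_tab(title):
    """Raise the first tab whose title contains `title` and bring Chrome forward."""
    for window in chrome.windows.get():
        for index, tab in enumerate(window.tabs()):
            if title in tab.title():
                window.active_tab_index.set(index + 1)  # tab indices are 1-based
                window.index.set(1)                     # make this the frontmost Chrome window
                chrome.activate()                       # bring Chrome itself to the front
                return True
    return False

focus_chrome_tab('New York Times')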
https://codedump.io/share/5Xj48RdCWMTo/1/set-active-tab-in-chrome-and-bring-it-to-the-front
CC-MAIN-2017-13
en
refinedweb
Medgen-prime is a public fork of Invitae’s “medgen”, a toolkit and API into medgen-mysql. Medgen-prime improves upon medgen in the following main ways:

* prevention of SQL-injection attacks
* more fine-grained control over GeneDB
* an enhanced test framework

Medgen-prime has been extensively tested in production environments, particularly for Text2Gene ().

To install: pip install medgen-prime

To use: import medgen

This is a drop-in replacement for Invitae’s medgen; as such, the two cannot be installed at the same time. You’ll want to pip uninstall medgen before installing medgen-prime.
https://pypi.org/project/medgen-prime/
CC-MAIN-2017-13
en
refinedweb
The new ASP.NET Identity project has brought some useful code and interfaces for website security. To implement a custom system using the interfaces (instead of using the standard Entity Framework implementation included in the MVC 5 template) an IPasswordHasher is required:

namespace Microsoft.AspNet.Identity
{
    public interface IPasswordHasher
    {
        string HashPassword(string password);
        PasswordVerificationResult VerifyHashedPassword(string hashedPassword, string providedPassword);
    }
}

"Is it possible to use password salting for more secure encryption in ASP.NET Identity and via this interface?"

Yes. The interface is the abstraction behind the PasswordHasher implementation already present in the core framework, and note that the default implementation already hashes with a salt. After creating a custom password hasher (say MyPasswordHasher), you assign it to the UserManager instance like this: userManager.PasswordHasher = new MyPasswordHasher(). See one example of such an IPasswordHasher.

"To implement a custom system using the interfaces (instead of using the standard Entity Framework implementation included in the MVC 5 template) an IPasswordHasher is required."

For implementing a system that is an alternative to the EF one:

- You would implement all of the core interfaces.
- An IPasswordHasher implementation is not strictly required, because PasswordHasher is already provided by the core framework as the default implementation of this interface.
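For illustration, a minimal salted implementation might look like the sketch below. The class name MyPasswordHasher, the PBKDF2 parameters (16-byte salt, 32-byte hash, 10,000 iterations) and the storage format are assumptions made for this example rather than anything prescribed by ASP.NET Identity; the built-in PasswordHasher already performs salted PBKDF2 hashing, so a custom hasher is only needed when a specific format is required.

using System;
using System.Linq;
using System.Security.Cryptography;
using Microsoft.AspNet.Identity;

// Hypothetical salted hasher; parameter choices are illustrative only.
public class MyPasswordHasher : IPasswordHasher
{
    private const int SaltSize = 16, HashSize = 32, Iterations = 10000;

    public string HashPassword(string password)
    {
        using (var pbkdf2 = new Rfc2898DeriveBytes(password, SaltSize, Iterations))
        {
            byte[] salt = pbkdf2.Salt;
            byte[] hash = pbkdf2.GetBytes(HashSize);
            // Store salt and hash together so VerifyHashedPassword can recover the salt.
            return Convert.ToBase64String(salt.Concat(hash).ToArray());
        }
    }

    public PasswordVerificationResult VerifyHashedPassword(string hashedPassword, string providedPassword)
    {
        byte[] bytes = Convert.FromBase64String(hashedPassword);
        byte[] salt = bytes.Take(SaltSize).ToArray();
        byte[] expected = bytes.Skip(SaltSize).ToArray();

        using (var pbkdf2 = new Rfc2898DeriveBytes(providedPassword, salt, Iterations))
        {
            byte[] actual = pbkdf2.GetBytes(HashSize);
            // Note: SequenceEqual is not a constant-time comparison; fine for a sketch.
            return actual.SequenceEqual(expected)
                ? PasswordVerificationResult.Success
                : PasswordVerificationResult.Failed;
        }
    }
}

It would then be wired in exactly as the answer describes, with userManager.PasswordHasher = new MyPasswordHasher().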
https://codedump.io/share/ebmmwGxArGOk/1/aspnet-identity-password-hashing
CC-MAIN-2017-13
en
refinedweb
/* * Form.io.BufferedWriter; import java.io.Writer; /** * The <code>Formatter</code> object is used to format output as XML * indented with a configurable indent level. This is used to write * start and end tags, as well as attributes and values to the given * writer. The output is written directly to the stream with and * indentation for each element appropriate to its position in the * document hierarchy. If the indent is set to zero then no indent * is performed and all XML will appear on the same line. * @see org.simpleframework.xml.stream.Indenter */ class Formatter { /** * Represents the prefix used when declaring an XML namespace. */ private static final char[] NAMESPACE = { 'x', 'm', 'l', 'n', 's' }; * Represents the XML escape sequence for the less than sign. */ private static final char[] LESS = { '&', 'l', 't', ';'}; * Represents the XML escape sequence for the greater than sign. private static final char[] GREATER = { '&', 'g', 't', ';' }; * Represents the XML escape sequence for the double quote. private static final char[] DOUBLE = { '&', 'q', 'u', 'o', 't', ';' }; * Represents the XML escape sequence for the single quote. private static final char[] SINGLE = { '&', 'a', 'p', 'o', 's', ';' }; * Represents the XML escape sequence for the ampersand sign. private static final char[] AND = { '&', 'a', 'm', 'p', ';' }; * This is used to open a comment section within the document. private static final char[] OPEN = { '<', '!', '-', '-', ' ' }; * This is used to close a comment section within the document. private static final char[] CLOSE = { ' ', '-', '-', '>' }; * Output buffer used to write the generated XML result to. private OutputBuffer buffer; * Creates the indentations that are used buffer the XML file. */ private Indenter indenter; * This is the writer that is used to write the XML document. private Writer result; * Represents the prolog to insert at the start of the document. private String prolog; * Represents the last type of content that was written. private Tag last; * Constructor for the <code>Formatter</code> object. This creates * an object that can be used to write XML in an indented format * to the specified writer. The XML written will be well formed. * * @param result this is where the XML should be written to * @param format this is the format object to use public Formatter(Writer result, Format format){ this.result = new BufferedWriter(result, 1024); this.indenter = new Indenter(format); this.buffer = new OutputBuffer(); this.prolog = format.getProlog(); } * This is used to write a prolog to the specified output. This is * only written if the specified <code>Format</code> object has * been given a non null prolog. If no prolog is specified then no * prolog is written to the generated XML. * @throws Exception thrown if there is an I/O problem public void writeProlog() throws Exception { if(prolog != null) { write(prolog); write("\n"); } * This is used to write any comments that have been set. The * comment will typically be written at the start of an element * to describe the purpose of the element or include debug data * that can be used to determine any issues in serialization. * * @param comment this is the comment that is to be written */ public void writeComment(String comment) throws Exception { String text = indenter.top(); if(last == Tag.START) { append('>'); if(text != null) { append(text); append(OPEN); append(comment); append(CLOSE); last = Tag.COMMENT; * This method is used to write a start tag for an element. 
If a * start tag was written before this then it is closed. Before * the start tag is written an indent is generated and placed in * front of the tag, this is done for all but the first start tag. * @param name this is the name of the start tag to be written * @throws Exception thrown if there is an I/O exception public void writeStart(String name, String prefix) throws Exception{ String text = indenter.push(); append('>'); } flush(); append(text); append('<'); if(!isEmpty(prefix)) { append(prefix); append(':'); append(name); last = Tag.START; * This is used to write a name value attribute pair. If the last * tag written was not a start tag then this throws an exception. * All attribute values written are enclosed in double quotes. * @param name this is the name of the attribute to be written * @param value this is the value to assigne to the attribute public void writeAttribute(String name, String value, String prefix) throws Exception{ if(last != Tag.START) { throw new NodeException("Start element required"); } write(' '); write(name, prefix); write('='); write('"'); escape(value); write('"'); * This is used to write the namespace to the element. This will * write the special attribute using the prefix and reference * specified. This will escape the reference if it is required. * @param reference this is the namespace URI reference to use * @param prefix this is the prefix to used for the namespace public void writeNamespace(String reference, String prefix) throws Exception{ write(NAMESPACE); write(':'); write(prefix); escape(reference); * This is used to write the specified text value to the writer. * If the last tag written was a start tag then it is closed. * By default this will escape any illegal XML characters. * @param text this is the text to write to the output public void writeText(String text) throws Exception{ writeText(text, Mode.ESCAPE); * This will use the output mode specified. public void writeText(String text, Mode mode) throws Exception{ write('>'); } if(mode == Mode.DATA) { data(text); } else { escape(text); last = Tag.TEXT; * This is used to write an end element tag to the writer. This * will close the element with a short <code>/></code> if the * last tag written was a start tag. However if an end tag or * some text was written then a full end tag is written. * @param name this is the name of the element to be closed public void writeEnd(String name, String prefix) throws Exception { String text = indenter.pop(); write('/'); } else { if(last != Tag.TEXT) { write(text); } if(last != Tag.START) { write('<'); write('/'); write(name, prefix); write('>'); } } last = Tag.END; * This is used to write a character to the output stream without * any translation. This is used when writing the start tags and * end tags, this is also used to write attribute names. * @param ch this is the character to be written to the output private void write(char ch) throws Exception { buffer.write(result); buffer.clear(); result.write(ch); * This is used to write plain text to the output stream without * @param plain this is the text to be written to the output */ private void write(char[] plain) throws Exception { result.write(plain); private void write(String plain) throws Exception{ * @param prefix this is the namespace prefix to be written private void write(String plain, String prefix) throws Exception{ result.write(prefix); result.write(':'); * This is used to buffer a character to the output stream without * any translation. 
This is used when buffering the start tags so * that they can be reset without affecting the resulting document. private void append(char ch) throws Exception { buffer.append(ch); * This is used to buffer characters to the output stream without * @param plain this is the string that is to be buffered */ private void append(char[] plain) throws Exception { buffer.append(plain); private void append(String plain) throws Exception{ buffer.append(plain); * This method is used to write the specified text as a CDATA block * within the XML element. This is typically used when the value is * large or if it must be preserved in a format that will not be * affected by other XML parsers. For large text values this is * also faster than performing a character by character escaping. * @param value this is the text value to be written as CDATA private void data(String value) throws Exception { write("<![CDATA["); write(value); write("]]>"); * This is used to write the specified value to the output with * translation to any symbol characters or non text characters. * This will translate the symbol characters such as "&", * ">", "<", and """. This also writes any non text * and non symbol characters as integer values like "{". * @param value the text value to be escaped and written private void escape(String value) throws Exception { int size = value.length(); for(int i = 0; i < size; i++){ escape(value.charAt(i)); * @param ch the text character to be escaped and written private void escape(char ch) throws Exception { char[] text = symbol(ch); write(text); write(ch); } * This is used to flush the writer when the XML if it has been * buffered. The flush method is used by the node writer after an * end element has been written. Flushing ensures that buffering * does not affect the result of the node writer. public void flush() throws Exception{ result.flush(); * This is used to convert the the specified character to unicode. * This will simply get the decimal representation of the given * character as a string so it can be written as an escape. * @param ch this is the character that is to be converted * @return this is the decimal value of the given character private String unicode(char ch) { return Integer.toString(ch); * private boolean isEmpty(String value) { if(value != null) { return value.length() == 0; return true; * This is used to determine if the specified character is a text * character. If the character specified is not a text value then * this returns true, otherwise this returns false. * @param ch this is the character to be evaluated as text * @return this returns the true if the character is textual private boolean isText(char ch) { switch(ch) { case ' ': case '\n': case '\r': case '\t': return true; } if(ch > ' ' && ch <= 0x7E){ return ch != 0xF7; return false; * This is used to convert the specified character to an XML text * symbol if the specified character can be converted. If the * character cannot be converted to a symbol null is returned. * @return this is the symbol character that has been resolved private char[] symbol(char ch) { case '<': return LESS; case '>': return GREATER; case '"': return DOUBLE; case '\'': return SINGLE; case '&': return AND; return null; } * This is used to enumerate the different types of tag that can * be written. Each tag represents a state for the writer. After * a specific tag type has been written the state of the writer * is updated. This is needed to write well formed XML text. private enum Tag { COMMENT, START, TEXT, END } }
http://simple.sourceforge.net/download/stream/report/cobertura/org.simpleframework.xml.stream.Formatter.html
CC-MAIN-2017-13
en
refinedweb
GameFromScratch.com This is part 3, the following are links for part one and part two in this series. Alright, we've installed the tools, got an editor up and going and now how to run the generated code, both on our computers and on device ( well… Android anyways… I don't currently have an iOS developer license from Apple ), so now the obvious next step is to take a look at code. Let's get one thing clear right away… I know nothing about ActionScript, never used it, and I didn't bother taking the time to learn how. As unfair as that sounds, frankly when it comes to scripting languages, I rarely bother learning them in advance… I jump in with both feet and if they are good scripting languages, you can generally puzzle them out with minimal effort. This is frankly the entire point of using a scripting language. So today is no different. This may mean I do some stupid stuff, or get impressed by stuff that makes you go… well duh. Just making that clear before we continue… now, lets continue... Apparently LoomScript is ActionScript with a mashup of C# and a smattering of CSS. ActionScript is itself derived or based on JavaScript. I know and like JavaScript and know and like C#, so we should get along fabulously. Let's look at the specific changes from ActionScript. First are delegates, a wonderful feature of C#. What exactly is a delegate? In simple terms it's a function object, or in C++ terms, a function pointer. It's basically a variable that is also a function. This allows you to easily create dynamic event handlers or even call multiple functions at once. Next was type inference, think the var keyword in C# or auto keyword in C++ 11. They added support for the struct data type. This is a pre-initialized and copy by value (as opposed to reference) class. I am assuming this is to work around an annoyance in ActionScript programming that I've never encountered. They also added C# style Reflection libraries. I assume this is confined to the System.Reflection namespaces. If you are unfamiliar with Reflection in C# land, it's a darned handy feature. In a nutshell, it lets you know a heck of a lot about objects at runtime, allowing you to query information about what "object" you are currently working with and what it can do. It also enables you load assemblies and execute code at runtime. Some incredibly powerful coding techniques are enabled using reflection. If you come from a C++ background, it's kinda like RTTI, just far better with less of an overall performance hit. Finally they added operator overloading. Some people absolutely love this feature… I am not one of those people. I understand the appeal, I just think it's abused more often than used well. This is an old argument and I generally am in the minority on this one. Now let's take a look at creating the iconic Hello World example. First is the loom.config file, it was created for us: { "sdk_version": "1.0.782", "executable": "Main.loom", "display": { "width": 480, "height": 320, "title": "Hello Loom", "stats": true, "orientation": "landscape" }, "app_id": "com.gamefromscratch.HelloLoom", "app_name": "HelloWorld" } This is basically the run characteristics of your application. This is where you set application dimensions, the title, the application name, etc. Initially you don't really even have to touch this file, but it's good to know where it is and to understand where the app details are set. Pretty much every application has a main function of some sort, the entry point of your application and Loom is no exception. 
Here is ours in main.ls:

package

import cocos2d.Cocos2DApplication;

static class Main extends Cocos2DApplication
{
    protected static var game:HelloWorld = new HelloWorld();

    public static function main()
    {
        initialize();
        onStart += game.run;
    }
}

Here we are creating a Cocos2DApplication class Main, with one member, our (soon to be created) Cocos2DGame derived class HelloWorld. We have one function, main(), which is our app entry point, and is called when the application is started. Here you can see the first use of a delegate in LoomScript, where you assign the function game.run to the delegate onStart, which is a property of Cocos2DApplication. In a nutshell, this is the function that is going to be called when our app is run. We will look at HelloWorld's run() function now. Speaking of HelloWorld, let's take a look at HelloWorld.ls:

import cocos2d.Cocos2DGame;
import cocos2d.Cocos2D;
import UI.Label;

public class HelloWorld extends Cocos2DGame
{
    override public function run():void
    {
        super.run();

        var label = new Label("assets/Curse-hd.fnt");
        label.text = "Hello World";
        label.x = Cocos2D.getDisplayWidth()/2;
        label.y = Cocos2D.getDisplayHeight()/2;

        System.Console.print("Hello World! printed to console");

        //Gratuitous delegate example!
        layer.onTouchEnded += function(){
            label.text = "Touched";
        }

        layer.addChild(label);
    }
}

We start off with a series of imports… these tell Loom what libraries/namespaces we need to access. We added cocos2d.Cocos2D to have access to Cocos2D.getDisplayWidth() and Cocos2D.getDisplayHeight(). Without this import, these methods would fail. We similarly import UI.Label to have access to the label control. Remember about 20 seconds ago ( if not btw… you may wish to get that looked into… ) when we assigned game.run to the Cocos2DApplication's onStart delegate? Well, this is where we define the run method. The very first thing it does is call the parent's run() method to perform the default behaviour. Next we create a Label widget using the font file Curse-hd.fnt (that was automatically added to our project when it was created). We set the text to "Hello World" and (mostly) centre the label on the screen by setting its x and y properties. You may notice something odd here, depending on your background… the coordinate system. When working with Cocos2D, there are a couple of things to keep in mind. First, things are positioned relative to the bottom left corner of the screen/window/layer by default, not the top left. Second, nodes within the world are by default positioned relative to their centre. It takes a bit of getting used to, and can be overridden if needed. Next we print "Hello World! printed to console" to demonstrate how to print to the console. Then we follow with another bit of demonstrative code. This is wiring a delegate to the layer.onTouchEnded property. This function is going to be called when the screen is released; as you can see, this is an anonymous function, unlike run which we used earlier. When a touch happens, we simply change the label's text to Touched. Finally we add the label to our layer, inherited from Cocos2DGame. Run the code and you will see: While if you check out your terminal window, you will see: As you can see, Hello World is also displayed to the terminal. Now let's take a look at one of the cool features of Loom. Simply edit an .ls file in your text editor of choice and, if you are currently running your project, flip back to the terminal window and you will see: Loom is automatically updating the code live as you make changes.
This is a very cool feature. Ironically though, in this particular case it's a useless one, as all of our code runs only when the application is first started. However, in more complicated apps this will be a massive time saver. On top of that, this is also how you can easily detect errors… let's go about creating one right now. Instead of label.text, we are going to make an error: label.txt. Save your code and see what happens in the Terminal window: As you can see, the error and line number are reported live in the Terminal without you having to stop and run your application again. Pretty cool overall. In the next part, we will look at more real world code examples. You can read the next part dealing with graphics right here.
http://www.gamefromscratch.com/post/2013/03/12/A-closer-look-at-the-Loom-game-engine-Part-Three-Hello-World%E2%80%A6-and-a-bit-more.aspx
CC-MAIN-2017-13
en
refinedweb
tag:blogger.com,1999:blog-45623293669432757832014-10-07T01:07:37.307+01:00Chronological ThoughtA collection of notes and musings on technical issues by David SavageDavid Savage & The Cloud (Part 2)This is the second blog entry in a series documenting the underlying points I made in my recent talk at the <a href="">OSGi Community Event</a> in London. Entitled "OSGi And Private Cloud", the slides are available <a href="">here</a> and the agenda is as follows:<br /><ul><li>Where is Cloud computing today? (<a href="">Part 1</a>)</li><li>Where does OSGi fit in the Cloud architecture?</li><li>What are the challenges of using OSGi in the Cloud?</li><li>What does an OSGi Cloud platform look like?</li></ul>In this section of the talk I look at where OSGi fits into the Cloud architecture. However, as the community event was co-hosted with <a href="">JAX London</a>.<br /><ul></ul><h3 class="Header">OSGi A Quick Review</h3><img align="right" border="0" height="221" src="" width="320" />I've been working with Richard Hall, Karl Pauls and Stuart McCulloch on writing <a href="">OSGi In Action</a> which explains OSGi from first principles to advanced use cases, so if you want to know more that's a good place to look. However, here I'd like to give my elevator pitch for OSGi which would be something like as follows...<br /><br /:<br /><ul><li>Modules - the building blocks from which to create applications</li><li>Life cycle - control when modules are installed or uninstalled and customise their behaviour when they are activated </li><li>Services - minimal coupling between modules</li></ul>You might say that none of these are new ideas, so why is OSGi important? The key is in the standardisation of these fundamental axioms of Java applications. Instead of every software stack having a new and inventive way of wiring classloaders together, booting components, or connecting component A to component B, OSGi provides a minimal flexible specification that allows us to get interoperability between modules and let developers get on with the interesting part of building applications.<br /><h3 class="Header">An Uncomfortable Truth </h3><img align="right" border="0" height="137" src="" width="320" />To see where OSGi fits into the Cloud story it's worth taking a brief segue to consider a point made by <a href="">Kirk Knoernschild</a> at the OSGi community event in February this year. Namely that we are generating more and more code with every passing day:<br /><ul><li>Lines of code double every 7 years</li><li>50% of development time spent understanding code</li><li>90% of software cost is maintenance and evolution</li></ul>By 2017, we'll have written not only double the amount of code written in the past 7 years but more than the total amount of code ever written combined! Object Orientation has helped in encapsulating our code so that changes in private implementation details do not effect consumers. But in fact OO turns out to be just a stop gap and it is reaching the limits of its capabilities. 
If you refactor public objects or methods you still need to worry about who is consuming these and without modules this can be a hard question to answer.<br /><br /.<br /><br /?<br /><h3 class="Header">Types of Scale</h3><img align="right" border="0" height="320" src="" width="320" /><br />There are three measures of scale that I think are of relevance to this discussion of OSGi and the Cloud:<br /><ul><li> Operational scale - the number of processors, network interfaces, storage options required to perform a function</li><li>Architectural scale - the number and diversity of software components required to make up a system to perform a function</li><li>Administrative scale - the number of configuration options that our architectures and our algorithms generate</li></ul>In fact, I think we've got pretty good patterns by now for dealing with the operational scale. As we increase the number of physical resources at our disposal, this drives the class of software algorithms required to perform a function. To pick a random selection <a href="">Actors</a>, <a href="">CEP</a>, <a href="">DHTs</a> and <a href="">Grid</a> are just some of the useful software patterns for use in the Cloud. However, I think architectural and administrative scale is often less well managed.<br /><br / <a href="">paradox of choice</a>.<br /><br /.<br /> <br /?<br /><br />All this brings me to...<br /><h3 class="Header">OSGi Cloud Benefits </h3><img align="right" border="0" height="320" src="" width="240" />In <a href="">Part 1</a> of this series of blogs I mentioned that the <a href="">Nist</a><i> </i>definition of a cloud includes the statement that: <i>"Cloud software takes full advantage of the cloud paradigm by being service oriented with a focus on statelessness, low coupling, modularity and semantic interoperability"</i>, to my mind OSGi has these bases covered.<br /><br /.<br /><br />But why should cloud software have these features? <br /><br /. <br /><br />OK interesting, but you might say that "TechnologyX (pick your favourite) can also provide these features, so really sell me on the OSGi cloud benefits". In which case I propose that there are <a href="">four</a> additional benefits of OSGi with respect to Cloud software which I'll deal with in turn:<br /><br /><b>Dynamic</b>:.<br /><br /><b>Extensible</b>:.<br /><br /><b>Lightweight</b>::<br /><ul><li>if you need to get diagnostics information out of the software, only deploy the diagnostics components for the time that they are needed - for the rest of the time run lean</li><li>if you need to scale up a certain component's processing power, swap an in-memory job queue for a distributed processing queue and when you're done swap it back again. </li></ul><br /><b>Self describing</b>:; <a href="">Nimble</a>, <a href="">OBR</a> and <a href="">P2</a>. 
This simplifies deployments by allowing software engineers to focus on what they <i>want</i> to deploy instead of what they <i>need</i> to deploy.<br /><br /.<br /><h3 class="Header">OSGi Cloud Services</h3>To conclude this post, assuming I've managed to convince you of the benefits of OSGi in Cloud architectures, here are some ideas for potential cloud OSGi services (definitely non exhaustive): <br /><ul><li>MapReduce services - Hadoop or Bigtable implementations?</li><li>Batch services - Plug and play Grids?</li><li>NoSQL services - Scalable data for the Cloud!</li><li>Communications services - Email, Calendars, IM, Twitter?</li><li>Social networking services - Cross platform widgets?</li><li>Billing services - Making money in the Cloud! </li><li>AJAX/HTML 5.0 services - Pluggable UI architectures? </li></ul>These would enable developers to start building modular, dynamic, scalable applications for the Cloud and are in fact pretty simple to achieve if there's the will power to make it happen.<br /><br /.<br /><br />So all good right? Well there are still of course challenges, so in the next post I'll look at some of these and discuss how to overcome these. In the meantime, I'm very interested in any feedback on the ideas found in this post.<br /><br />Laters<img src="" height="1" width="1" alt=""/>David Savage & The Cloud (Part 1)<a href=""><img align="right" border="0" height="130" src="" style="margin-left: auto; margin-right: auto;" width="400" /></a><br />I recently attended the <a href="">OSGi Community Event</a> where I gave a talk entitled "OSGi And Private Cloud" the slides for which are available <a href="">here</a>. However as has been pointed <a href="">out</a>, if you watch the slide deck they're a little on the zen side, so if you weren't at the event then it's a bit difficult to guess the underlying points I was trying to make.<br /><br />To address this I've decided to create a couple of blog entries that discuss the ideas I was trying to get across. Hopefully this will be of interest to others.<br /><br />In the talk the agenda was as follows:<br /><ul><li>Where is Cloud computing today?</li><li>Where does OSGi fit in the Cloud architecture? (<a href="">Part 2</a>)</li><li>What are the challenges of using OSGi in the Cloud?</li><li>What does an OSGi cloud platform look like?</li></ul>I'll stick to this flow but break these sections up into separate blog entries. So here goes with the first section...<br /><h3 class="Header">Where is Cloud computing today?</h3>Ironically Cloud computing is viewed by many as a pretty nebulous technology so before even describing <i>where</i> Cloud computing is, it's possibly useful to define <i>what</i> Cloud computing is.<br /><ul><li>Wikipedia <a href="">defines</a> a Cloud as: "Internet-based computing, whereby shared resources, software, and information are provided to computers and other devices on demand, like the electricity grid".</li><li>InfoWorld <a href="">defines</a>".</li><li>NIST <a href="">defines</a> "</li></ul>For me all of these definitions seem pretty similar to Utility computing. Again wikipedia <a href="">defines</a> Utility computing as "[sic] the packaging of computing resources, such as computation, storage and services, as a metered service similar to a traditional public utility (such as electricity, water, natural gas or telephone network)". So what is the boundary between Utility computing and Cloud computing? Others have attempted to define Cloud by what it is <a href="">not</a>. 
I tend to agree with some of these points but not others.<br /><br />So where does this leave us…?<br /><br />Actually I think the NIST definition I referred to above does a good job of describing what Cloud is as long as you read past the first sentence. For me the summary of this document is that:<br /><br /><i>Cloud is computation that is: on demand; easily accessed from a network; with pooled resources; and rapid elasticity.</i><br /><br />As a final foot note in the NIST document it mentions that:<br /><br /><i>"Cloud software takes full advantage of the cloud paradigm by being service oriented with a focus on statelessness, low coupling, modularity and semantic interoperability".</i><br /><br />This last sentence for me is of fundamental importance when considering the relevance OSGi to the Cloud.<br /><br /.<br /><h3 class="Header">Why Cloud?</h3>So as the previous discussion suggests, the reason for using a Cloud model is it gives users just in time:<br /><ul><li>Processing power</li><li>Storage capacity</li><li>Network capacity. </li></ul>Clouds models are great for small, medium and large organisations.<br /><ul><li>Small organisations benefit from the reduced startup costs of Clouds compared with setting up and provisioning home grown infrastructure for web sites, email, accounting software, etc. </li><li>Medium sized organisations benefit from the on demand nature of Clouds - as their business grows so can their infrastructure </li><li>Large organisations benefit from Cloud due to their shared resources - instead of having to maintain silos of computing infrastructure for different departments they can get cost savings via economy of scale.</li></ul>There are a large number of vendors touting Cloud products, including <a href="">Amazon</a>, <a href="">Google</a>, <a href="">Salesforce</a>, <a href="">Rackspace</a>, <a href="">Microsoft</a>, <a href="">IBM</a>, <a href="">VMware</a> and <a href="">Paremus</a>. These products fit into various categories of Cloud, IAAS (Infrastructure as a Service), PAAS (Platform as a Service), SAAS (Software as a Service) and Public or Private Cloud.<br /><h3 class="Header">Cloud Realities</h3><a href=""><img align="right" border="0" height="225" src="" width="400" /></a><br />So Cloud seems pretty utopian right? In fact despite promise of Cloud the realities it delivers are somewhat different.<br /><ul><li>As there are so many vendors, there are also multiple APIs that developers need to code to for simple things they used to do like loading resources </li><li>Depending on the vendor the sort of things you can do in a Cloud are often limited (in Google App Engine you can't create Threads for example) </li><li.</li></ul>Finally, and this is a factor that effects all Clouds, they are <i>not</i> - despite marketing - <a href="">infinite resources</a> (pdf). Contention and latency are real problems in Cloud environments. The the shared nature of Cloud architectures means that SLAs can be severely impacted by seemingly random processing spikes by other tenants. Cloud providers employ many different tactics to minimise these problems but running an application in the Cloud and running it on dedicated hardware is not a seamless transition.<br /><h3 class="Header">Why Private Cloud?</h3:<br /><ul><li>Data ownership risks – A bank for example is often extremely reluctant to host private customer details on infrastructure they don't own. 
This can be for legal/regulatory reasons or business intelligence reasons</li><li>Data inertia – I've heard one horror story at a previous <a href="">Cloud Camp</a></li><li</li><li>SLA – The contention and latency issues of Clouds can mean that for those businesses that are are in a competitive compute-intensive business then any downtime or latency outside of your control can have a major effect on your bottom line.</li></ul>Private Cloud implies all of the on-demand, dynamic, network accessible goodness, but in a controlled environment where the business has direct control of the cloud tentants, so can better control their SLA. A bit like owning a well, or growing your own food, there are costs but also benefits.<br /><br /...<br /><h3 class="Header">How Do We Get Here?</h3><a href=""><img align="right" border="0" height="277" src="" style="margin-left: auto; margin-right: auto;" width="400" /></a>This is M51a “The whirlpool galaxy” discovered by Charles Messier in 1774 (and its companion galaxy NGC 5195). <br /><br />I came at computing from the physics angle and when I think of computer software/architecture I tend to think in terms of patterns. A galaxy is just a cloud of gas after all - but there is structure, dynamicity and mechanics that describe their overall behaviour! <br /><ul><li</li><li>Clouds are dynamic, resources come and go, their can be gaps in communication caused by latency, their can even be large scale events like data centre collapse. Software that is deployed on them must be able to cope with these dynamics</li><li.</li></ul><br /.<br /><br />In the mean time, I'm very interested in any comments or feedback on any of the ideas discussed here.<br /><br />Laters,<img src="" height="1" width="1" alt=""/>David Savage Tatooine<p>So in my last couple of posts I've been showing the power of Nimble. You will have noticed that it is primarily a console environment. As such you may be wondering how you can provide your own commands to execute in the <a href="">Nimble</a> shell - <a href="">Posh</a> (Paremus OSGi Shell).</p><p>Posh is an implementation of the command line interface specified in <a href="">RFC-147</a>.<br /></p><p <a href="">Karaf</a> container and the Nimble container from Paremus.</p><p>This gives you some background, so now the standard thing for me to do would be to write a trivial hello world application. But that's no fun, so instead of conforming to the norm I thought it would be more interesting to port the <a href="">Starwars Asciimation</a> work to run in OSGi as an RFC 147 command line interface.</p><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img 0="" 10px="" src="" alt="" id="Gogo Starwars" border="0" /></a><br /><br /><p>Yep this is very probably the geekiest post you will ever see... 
:)</p><p.</p><p>The first thing we need to do to define our cli is define a class that implements the core functionality as shown below:</p><pre>package org.chronologicalthought;<br /><br />import java.io.BufferedInputStream;<br />import java.io.IOException;<br />import java.io.InputStream;<br />import java.io.InputStreamReader;<br />import java.io.PrintStream;<br />import java.io.Reader;<br />import java.net.URL;<br /><br />public class Starwars {<br /> public void starwars() throws IOException, InterruptedException {<br /> play(67);<br /> }<br /><br /> public void starwars(int frameLength) throws IOException, InterruptedException {<br /> URL res = Starwars.class.getResource("/starwars.txt");<br /> if (res == null)<br /> throw new IllegalStateException("Missing resource");<br /> InputStream in = res.openStream();<br /> try {<br /> InputStreamReader reader = new InputStreamReader(new BufferedInputStream(in));<br /> render(reader, System.out, frameLength);<br /> } finally {<br /> in.close();<br /> }<br /> }<br /><br /> private void render(Reader reader, PrintStream out, int frameLength) {<br /> // ...<br /> }<br />}</pre><p>Here the command provides two methods, play and play(int) and prints the individual frames from the "starwars.txt" file embedded in our bundle to System.out.</p><p>Wait a minute you might be thinking. Where's the API to the CLI? Well this is one of the neat things about RFC 147 you don't need to write your code to <em>any</em>.</p><p>The next step is to define an activator that publishes our cli class to the OSGi bundle context.<br /></p><pre>package org.chronologicalthought;<br /><br />import java.util.Hashtable;<br />import org.osgi.service.command.CommandProcessor;<br /><br />import org.osgi.framework.BundleActivator;<br />import org.osgi.framework.BundleContext;<br /><br />public class Activator implements BundleActivator {<br /><br /> public void start(BundleContext ctx) throws Exception {<br /> Hashtable props = new Hashtable();<br /> props.put(CommandProcessor.COMMAND_SCOPE, "ct");<br /> props.put(CommandProcessor.COMMAND_FUNCTION, new String[] { "starwars" });<br /> ctx.registerService(Starwars.class.getName(), new Starwars(), props);<br /> }<br /><br /> public void stop(BundleContext ctx) throws Exception {<br /> }<br />}</pre><p>This activator publishes the Starwars class with two attributes:</p><ul><li>CommandProcessor.COMMAND_SCOPE - a unique namespace for our command</li><li>CommandProcessor.COMMAND_FUNCTION - the names of the methods to expose as commands in the cli</li></ul><p>The code is available from <a href="">here</a> for those who want to take a look around:</p><p <a href="">here</a>. 
So finally let the show commence:<pre>$ svn co<br />$ cd starwars<br />$ ant<br />$ posh<br />Paremus Nimble 30-day license, expires Wed Dec 30 23:59:59 GMT 2009.<br />________________________________________<br />Welcome to Paremus Nimble!<br />Type 'help' for help.<br />[feynman.local.0]% source<br />[feynman.local.0]% installAndStart file:build/lib/org.chronologicalthought.starwars.jar<br />[feynman.local.0]% starwars<br /><br /><br /><br /><br /><br /><br /><br /><br /> presents<br /><br /><br /><br /><br /></pre><p:</p><pre>% starwars 20</pre><p>To set the frame length as 20 milliseconds.</p><p>Enjoy the show.</p><p>Laters.</p><img src="" height="1" width="1" alt=""/>David Savage for my next trickJust for fun and to demonstrate the power of the Posh (sh)ell environment I decided to knock together the following trivial script to do a "traditional" OSGi bundle file install from a directory:<br /><br /><pre>// create a temporary array for storing ids<br />array = new java.util.ArrayList;<br /><br />// iterate over the files passed <br />// in as arguement 1 to this script<br />each (glob $1/*) {<br /><br /> // use the BundleContext.installBundle <br /> // method to install each bundle<br /> id=osgi:installBundle $it;<br /><br /> // store the bundle id for start later<br /> $array add $id;<br />};<br /><br />// iterate over our installed bundles<br />each ($array) {<br /> // use the BundleContext.start method<br /> //to start it<br /> osgi:start $it;<br />};</pre><br />To try this out for yourself or to find out more about Nimble you look <a href="">here</a> once installed you can run the above script using the following command:<br /><br />posh -k <your bundles dir><br /><br />Where you should replace <your bundles dir> with a path to a directory on your local file system that contains bundles.<br /><br />Hmmm what to blog next...ponders...<br /><br />Laters,<img src="" height="1" width="1" alt=""/>David Savage OSGiSo I just sent a rather cryptic <a href="">twitter</a> message with the instructions:<br /><br /><span style="font-family: courier new;">posh -kc "repos -l springdm;add org.springframework.osgi.samples.simplewebapp@active"</span><br /><br />I figure it's probably worth a short note to explain what this is doing given the narrowband aspect of twitter communications.<br /><br />This command is running an instance of the posh (sh)ell which ships with <a href="">Nimble</a>. There are two switch parameters parsed to the shell:<br /><br /><span style="font-family:courier new;">-c</span>: Tells posh to execute the command passed in from the unix shell in the posh (sh)ell environment<br /><span style="font-family:courier new;">-k</span>: Tells posh to remain running after the command has completed and open a tty session for user input<br /><br />Now we come to the actual commands:<br /><br /><span style="font-family:courier new;">repos -l springdm</span>: tells posh to load the spring dm repository index into the nimble resolver<br /><br /><span style="font-family:courier new;">add org.springframework.osgi.samples.simplewebapp@active</span>: tells nimble to resolve all dependencies for the spring simplewebapp from it's configured repositories.<br /><br />The interesting thing about nimble resolution is that it doesn't just figure out the bundles that need to be installed. It also figures out what <em>state</em> these bundles should be in. 
If you look at the bundles in the nimble container using the command<span style="font-family:monospace;"> </span><span style="font-family:courier new;">lsb</span> you will see that not only are all the bundles installed but certain key bundles have also been activated:<br /><br /><pre>lsb<br />*nimble/com.paremus.util.cmds-1.0.4.jar 00:00 59Kb<br />0 ACTIVE org.eclipse.osgi:3.5.1.R35x_v20090827<br />1 ACTIVE com.paremus.posh.runtime:1.0.4<br />2 ACTIVE com.paremus.posh.shell:1.0.4<br />3 RESOLVED com.paremus.util.types:1.0.4<br />4 ACTIVE com.paremus.nimble.core:1.0.4<br />5 ACTIVE com.paremus.nimble.repos:1.0.4<br />6 ACTIVE com.paremus.nimble.cli:1.0.4<br />7 RESOLVED javax.servlet:2.5.0.v200806031605<br />8 RESOLVED com.springsource.slf4j.api:1.5.6<br />9 RESOLVED com.springsource.slf4j.nop:1.5.6<br />10 RESOLVED com.springsource.net.sf.cglib:2.1.3<br />11 RESOLVED com.springsource.edu.emory.mathcs.backport:3.1.0<br />12 RESOLVED org.springframework.osgi.log4j.osgi:1.2.15.SNAPSHOT<br />13 RESOLVED com.springsource.org.aopalliance:1.0.0<br />14 RESOLVED org.springframework.osgi.jsp-api.osgi:2.0.0.SNAPSHOT<br />15 RESOLVED com.springsource.slf4j.org.apache.commons.logging:1.5.6<br />16 RESOLVED osgi.cmpn:4.2.0.200908310645<br />17 RESOLVED org.mortbay.jetty.util:6.1.9<br />18 RESOLVED org.springframework.osgi.jstl.osgi:1.1.2.SNAPSHOT<br />19 RESOLVED org.springframework.core:2.5.6.A<br />20 RESOLVED org.springframework.osgi.commons-el.osgi:1.0.0.SNAPSHOT<br />21 RESOLVED org.mortbay.jetty.server:6.1.9<br />22 ACTIVE org.springframework.osgi.samples.simplewebapp:0.0.0<br />23 RESOLVED org.springframework.beans:2.5.6.A<br />24 RESOLVED org.springframework.osgi.io:1.2.0<br />25 RESOLVED org.springframework.osgi.jasper.osgi:5.5.23.SNAPSHOT<br />26 RESOLVED org.springframework.aop:2.5.6.A<br />27 RESOLVED org.springframework.osgi.catalina.osgi:5.5.23.SNAPSHOT<br />28 RESOLVED org.springframework.context:2.5.6.A<br />29 ACTIVE org.springframework.osgi.catalina.start.osgi:1.0.0<br />30 RESOLVED org.springframework.osgi.core:1.2.0<br />31 RESOLVED org.springframework.web:2.5.6.A<br />32 RESOLVED org.springframework.osgi.web:1.2.0<br />33 ACTIVE org.springframework.osgi.web.extender:1.2.0<br />34 ACTIVE com.paremus.posh.readline:1.0.4<br />35 ACTIVE com.paremus.util.cmds:1.0.4</pre><br />This listing also demonstates another key feature of nimble. Typing<span style="font-family:monospace;"> </span><span style="font-family:courier new;">lsb</span> resulted in the following log line:<br /><br /><pre>*nimble/com.paremus.util.cmds-1.0.4.jar 00:00 59Kb</pre><br />This demonstrates that the nimble container resolved the<span style="font-family:monospace;"> </span><span style="font-family:courier new;">lsb</span> command from its repository index and installed it on the fly. In fact if you look at the Nimble download it is only 55K in size. All of the extra functionality is automatically downloaded based on information provided via the nimble index files and traversing package and service level dependencies!<br /><br />To complete this blog post you can browse the simple web app running from nimble by opening:<br /><br /><pre></pre><br />Nimble is available for download <a href="">here</a>.<img src="" height="1" width="1" alt=""/>David Savage Dev Con 2009 & OSGi Tooling Summit Roundup.<br /><br /.<br /><br />Even better for me people really seemed to get what we (<a href="">Paremus</a>) are about. Last year we were the "RMI guys". 
This year people we talked to seemed to get genuinely excited about what our product is a really about: a flexible, scalable solution to provisioning and managing dynamic distributed OSGi based applications in enterprise environments.<br /><br /.<br /><br />I think good tooling solutions that work right the way though the stack are crucial to help new developers though the pitfalls of the new classloader space. Unfortunately, tooling support for new developers is pretty disjointed.<br /><br />Hence, the next part of my post...<br /><br / <a href="">Sigil</a>. Prior to OSGi Dev Con Paremus chose to licence Sigil under the Apache licence, as we recognise that tooling is an area where we need support from the community in order to help the community as a whole.<br /><br />On the Friday, after the end of the conference, I and a number of other representatives with interests in the area of development tooling met at an OSGi Tooling Summit hosted by Yan Pujante at <a href="">LinkedIn</a>'s Mountain View offices. The group was pretty large and diverse (as you can see <a href="">here</a>)..<br /><br /.<br /><br />I had a number of really encouraging conversations with Chris Aniszczyk and Peter Kriens who work on <a href="">PDE</a> and <a href="">BND</a> respectively, both of which have a lot of cross over with the work I've been doing on Sigil.<br /><br />Chris has just <a href="">twittered</a>.<br /><br />I guess in a perfect world I'd like to be able to support Maven, Netbeans and IntelliJ users as well. Hopefully I'll be able to update you in the next couple of months on progress in this area.<br /><br />Laters,<img src="" height="1" width="1" alt=""/>David Savage Behaviours<a href=""><img src="" alt="I'm speaking at EclipseCon 2009" border="0" height="100" width="100" /></a><br /><br / <a href="">here</a>.<br /><br />I guess <em>my</em> main focus for calling the BOF is that I'm very interested in talking to other OSGi developers to see what it is we on the tooling side can do to make our collective jobs easier. <br /><br />What is it about OSGi development that really frustrates you - and what can tools do to make it easier?<br /><br /.<br /><br />Hope to see you there.<br /><br />Laters,<img src="" height="1" width="1" alt=""/>David Savage toolingThere's a really interesting conversation going on at TSS about <a href="">OSGi and future directions for Enterprise Java</a>.<br /><br />I've posted a reply which I thought it was worth reposting here:<br /><br />I think there are two issues with [the approach of repackaging existing modules as OSGi bundles and simply importing/exporting all packages] which really cause headaches going forward; Module vs API dependencies and complex "Uses" graphs.<br /><br />Firstly module vs api; in most dependency tools such as Maven and Ivy the developer specifies dependencies at the module layer - i.e.<br /><br /><dependency org="org.apache.log4j" name="org.apache.log4j" rev="1.2.15" /><br /><br />But then spring have added an OSGi version of the module which has a different module id.<br /><br /><dependency org="org.apache.log4j" name="com.springsource.org.apache.log4j" rev="1.2.15" /><br /><br /.<br /><br /.<br /><br /).<br /><br />Secondly the major thorn in the approach of naively exporting all packages in a module is the complex "uses" information it generates. 
"Uses" is a flag provided by OSGi on exports to assert that the class space is coherent across multiple bundle import/export hops.<br /><br /.<br /><br />I've referred to this as placing barbed wire around sand castles (in most cases). If the modules were more sensibly designed i.e. only exporting the "really" public code then this problem is much reduced.<br /><br /.<br /><br />[1]<br />[2]<img src="" height="1" width="1" alt=""/>David Savage up for airOk so it's been a while since I last post anything here, mainly because I've been working on a number of different big projects for the last couple of months and this left no time for blog posting.<br /><br />So having come up for air for a few hours at least I thought it'd be a good idea to blog about them see if I can drum up some interest ;)<br /><br />The major addition to my task list over the past few months has been the <a href="">Sigil project</a>. This is a set of tools to help developers build OSGi bundles. It started off as an eclipse plugin but about two or three months ago my team mate came along with some really great code to integrate it with Ivy.<br /><br />I believe Sigil is the first tool out there to unify OSGi development in the IDE and server side build in a common configuration file (this being a good thing as it saves the messy job of keeping other systems in sync).<br /><br />The IDE supports tools such as code complete, quick fixes and tools to analyse OSGi dependencies. I've also built in the idea of repositories (currently either file system or <a href="">OBR</a> but extensible via plugins) which allow the developer to download dependencies on the fly whilst developing by simply adding an import-package or require bundle statement to their code. Oh and the same repositories can be used in eclipse and ivy :)<br /><br />The other big piece of code I've been working on is of course <a href="">Newton</a>. There are no big feature announcements for this release as we've been focussing on making the current platform more and more robust. But we've just made available the <a href="">1.3.1 release</a> :)<br /><br />Anyway that seems like a good amount of detail for the time being. I'll try to blog some more about some of this stuff soon...<br /><br />Laters,<img src="" height="1" width="1" alt=""/>David Savage this an application which I see before me?This post has been triggered by two <a href="">interesting</a> <a href="">posts</a> on the topic of what it is to be an application in an OSGi environment. This is something I felt I just had to get stuck in with as its exactly what we've been working on in the <a href="">Newton</a> project.<br /><br />The way we've tackled this is very similar to the approach suggested by Mirko, except instead of using Spring-DM as the top level element in the model we've picked <a href="">SCA</a>.<br /><br />SCA is a relatively new specification but it gives a vendor neutral way of describing the service dependencies and architecture of an application running in an enterprise environment.<br /><br /.<br /><br />In Newton we associate a composite with a top level (or root) bundle. This bundle then provides the class space for the composite to instantiate it's implementation, services and references. 
Importantly the bundle does not have to contain <span style="font-style: italic;">all</span> of the classes that it needs to function but can use OSGi tools such as Import-Package to achieve modularity at the deployment level.<br /><br />When an SCA composite is installed in a Newton runtime we go through a series of steps:<br /><ol><li>Resolve the root bundle that supplies the class space for the composite. If this is not the first time the root bundle has been resolved we increment an internal counter<br /></li><li>Resolve and optionally download the bundle dependencies required to satisfy the root bundle against a runtime repository (this includes ensuring that we reuse existing bundles within the runtime - if they were installed for other composites)</li><li>Build a runtime model around the SCA document that controls the lifecycle of the component as references come and go</li><li>Finally when all required references are satisfied (a dynamic process) we publish the services to be consumed by other components in the enterprise.</li></ol>When an SCA composite is uninstalled we go through the reverse process:<br /><ol><li>Unpublish the services and release any references.</li><li>Shut down the implementation and discard our runtime model.<br /></li><li>The bundle root counter is decremented, if the bundle root counter reaches zero then it is no longer required in the runtime and is marked as garbage.<br /></li><li>Finally garbage collect all bundles that are no longer in use, so clearing down the environment.</li></ol>This pattern then forms the building blocks of our distributed provisioning framework that is able to deploy instances of SCA composites across what we term a "fabric" of newton runtime instances.<br /><br /.<br /><br /:<br /><ul><li>how implementations and interfaces are connected<br /></li><li>how remote services should interact via bindings</li><li>how they should scale</li><li>where they should install</li></ul>Hope that's been of interest,<br /><br />Laters,<img src="" height="1" width="1" alt=""/>David Savage New HopeA long time ago in a galaxy far far away...<br /><br /.<br /><br /.<br /><br />Inevitably these powers drew the attention of those still living within the empire. Initially some sought to discredit the Jedi powers, either through misunderstanding or fear. But soon others became envious and wanted these powers for themselves.<br /><br /.<br /><br />Those in the Jedi council found themselves torn between the short term promises of wealth offered by the empire or sticking to their ideals and holding out for the long term riches of a truely flexible Java virtual machine architecture.<br /><br />It is at this point in the story that we find our selves. Peter Kriens (Obi Wan?) has recently <a href="">blogged</a> about the choices facing the Jedi alliance and argues for the purist ideals to be upheld.<br /><br />Myself I find myself acting as a trader (or possibly a smuggler - gonna argue for Han but you make your own judgements...) 
between these two worlds.<br /><br />As a developer and architect of <a href="">Infiniflow</a> I have been directly involved with building a distributed computing infrastructure that seeks to embrace ways of the Force as championed by the Jedi alliance for a single JVM but across the enterprise.<br /><br /.<br /><br />Whether you believe a word of this, having walked the boundary between the Jedi world and that of the Empire I am acutely aware of the problems associated with integrating tools built by these two communities.<br /><br /.<br /><br />When it is impossible to integrate a legacy pattern I'd argue that this is a point when we have to admit that Gödel was right - it is not always possible to please all of the people all of the time (I <a href="">paraphrase</a> but you get the point). You can always delegate legacy cases to a separate jvm and communicate remotely to the old world.<br /><br />If the Jedi council compromise their core ideals for ill conceived or temporary solutions they risk sending out a mixed and confusing message to those who are new to this technology.<br /><br /.<br /><br />Once stepping onto the path to the dark side it is very difficult to turn back and ultimately leads to ruin. (Or cool lightning powers - you decide)<br /><br />I have to give credit to the fantastic blog posts of <a href="">Hal Hildebrand</a> for the Star Wars theme to this blog entry, whether this will become a common theme for my own posts I'm unsure but it was certainly fun to write.<br /><br />Laters,<img src="" height="1" width="1" alt=""/>David Savage Be(*) Or Not To Be(*) That Is The Question(*) Included in an API bundle.<br /><br />There's been a lively <a href="">debate</a> on the OSGi mailing list over the past couple of weeks surrounding the issue of whether an API should be packaged with it's implementation in one bundle or whether it should be split up into two bundles (one for API and one for implementation).<br /><br /.<br /><br /:<br /><ul><li>installation simplicity - minimizing the number of bundles needed to run an OSGi application</li><li>runtime simplicity - minimizing the connectivity between bundles in a running OSGi application.</li><li>architectural simplicity - minimizing duplicated packages between bundles used to build an OSGi application</li></ul.<br /><br />I'll try to give a quick overview.<br /><br /.<br /><br /.<br /><br />Another important detail to be considered is that in OSGi it is possible for many bundles to export the same API package and only one will be picked by the runtime to actually provide the classes.<br /><br /.<br /><br />My own advice would be to start by assuming that API and implementation are packaged in separate bundles. 
The reasoning behind this is based on the following criteria:<br /><ul><li>In general an implementation is likely to depend on more packages than it's API<br /></li><li>You can always collapse back to one bundle later if you use import-package vs require-bundle</li><li>If you use a provisioning mechanism such as <a href="">Newton</a> or <a href="">P2</a> (when it's ready) downloading two bundles vs one is handled automagically</li></ul>The benefits of splitting API and implementation are the following:<br /><ul><li>If you are considering making your application distributed or want it to run in a constrained environment you can install the API without having to resolve the implementation dependencies (possibly a <span style="font-style: italic;">big deal</span> in a client/server architecture)<br /></li><li>If you want to restart or reinstall the implementation bundle then this doesn't automatically mean the restart of all client bundles that are using the API from that bundle<br /></li><li.</li></ul>If you start by assuming the API and implementation are separate then you can use the following logic to assess whether you can condense them back to one bundle for the purposes of your architecture:<br /><ol><li>Start by designing your API to depend on as little as possible.</li><li>Make your implementation depend on API packages and any other dependencies it needs to function.<br /></li><li>If after doing this the implementation depends only on API consider whether the implementation is ever likely to get more complicated.</li><li>If it isn't then you can probably collapse back to one bundle.</li></ol>Of course this can always be done prior to building <i class="moz-txt-slash"><span class="moz-txt-tag"></span>any<span class="moz-txt-tag"></span></i> bundles if you are good at modelling in you head or on paper etc.<br /><br />Hopefully that's helped some people understand the issues.<br /><br />Laters<img src="" height="1" width="1" alt=""/>David Savage Versioning Is Complex Or Am I Imagining It?So it seems everyone and his dog are talking about versioning at the moment. Specifically the proposed version numbering systems used in <a href="">OSGi</a> and <a href="">JSR 277</a> and their ability to coexist (or not).<br /><br />For my own part I personally favor the OSGi scheme because to my mind it is simpler and better defined.<br /><br /.<br /><br />When a module consumer wishes to specify a version they wish to use they specify either a single version number.<br /><br />1.0.0<br /><br />or a version range<br /><br />[1.0.0,2.0.0)<br /><br />The first version means any version greater or equal to 1.0.0. The second means any version between 1.0.0 and any version prior to 2.0.0.<br /><br />However anyone who has worked with software development for any non trivial amount of time will know there are still inherent problems in versioning schemes in that they require developers to correctly markup when changes to their code effect the compatibility of that code.<br /><br />This is a <span style="font-style: italic;">big deal</span> as depending on the consumer of the code even changes as innocuous as a java doc comment update can effect the version number. 
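To make the range notation concrete, this is roughly how it ends up looking in a bundle manifest (the package names and numbers here are only illustrative):

Import-Package: org.apache.log4j;version="[1.2.15,2.0.0)",
 com.acme.api;version="1.0.0"

The bare "1.0.0" on the second package means 1.0.0 or anything newer, which is exactly the behaviour described above.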
There's a really good discussion of the issues surrounding this <a href="">here</a>.<br /><br />As we all know developers are mostly human and so; make mistakes, are lazy or are just plain forgetful so it is inevitable that version numbers will sometimes not be updated when they should have been.<br /><br />After release these badly versioned modules (poison pills) will sit around in software repositories for potentially a very long time causing instabilities in the systems that use them.<br /><br />Stanly Ho's <a href="">suggestion</a> for combating this issue is to allow consumers of modules to specify complex lists which select certain version ranges but exclude certain known bad versions.<br /><br /.<br /><br /.<br /><br />This is a good thing as if it is easier for developers to handle version numbering there should be fewer bad versions.<br /><br /.<br /><br />I'll prefix the rest of this blog entry by saying my idea is a little crazy so please bare with me, pat me on the head and say there, there back to the grind stone you'll get over it :)<br /><br />So enough pontificating, <span style="font-weight: bold;">the idea</span>:<br /><br />It occurred to me that software versions should really carry an imaginary coefficient (i.e. square root of minus one).<br /><br />[Sound of various blog readers falling off their seats]<br /><br />...<br /><br />What the hell does that mean I here you ask.<br /><br />I said it was crazy and I'm only half proposing it as a real (chuckle) solution. However to my mind it seems more natural to label versions intended as release candidates or as development code as having an imaginary coefficient.<br /><br /.<br /><br />Ok so just because it has a nice analogy doesn't make it valid, how does this help in real world software development?<br /><br /.<br /><br />Imagine a case where software producer A (providing module A) is building release candidates and software consumer B is building module B that depends on module A:<br /><br />ModuleA:1.0.0_RC1<br />ModuleA:1.0.0_RC2<br />ModuleA:1.0.0_RC3<br />etc.<br /><br />ModuleB:import ModuleA:1.0.0<br /><br />In the current scheme there is no way to distinguish release candidates from final code and incremental patches so when producer A builds his release candidate consumer B sees each release candidate as actually being more advanced than the final 1.0.0 release.<br /><br />This is clearly wrong.<br /><br /.<br /><br / <span style="font-style: italic;">after</span> we have decided to make 0.9 the non backwards compatible internal release.<br /><br />Instead I propose that when producer A starts work on a set of changes to module A he increments the appropriate version element (depending on the scale of the changes he/she is making) early and adds an imaginary coefficient to mark it in development.<br /><br />Therefore from the previous example we have<br /><br />ModuleA:1.0.0_3i (RC1)<br />ModuleA:1.0.0_2i (RC2)<br />ModuleA:1.0.0_1i (RC3)<br /><br />As an external consumer of module A we are then able to use [1.0.0,2.0.0) to mean all compatible increments. An internal consumer prior to release can say [1.0.0_2i,2.0.0)<br />to mean all releases of module A after RC2. Importantly this will continue to work after release with no need to update existing imports.<br /><br /.<br /><br />We <span style="font-style: italic;">could</span> come up with a scheme where by the major.minor.micro elements of the imaginary coefficient denote degrees of confidence as to how closely the code matches the proposed design - i.e. 
major = zero -> developer tested, minor = zero -> system tested, micro = zero -> beta tested etc.<br /><br />The notion of complex version numbers applies equally to the JSR 277 four number version scheme which to my thinking is a completely pointless addition to a spec which does nothing to address the actual problem and breaks lots of existing code that was previously working fine.<br /><br />I'd be very happy for someone to come along and state why imaginary version numbers are not needed as in general I prefer to reuse existing tools where possible and ideally the simplest tool that does the job.<br /><br />So if nothing else this is a vote for OSGi versioning but with a couple of notes on how we <span style="font-style: italic;">may</span> be able to improve it.<br /><br />Laters<br /><br />Update: 09/06/2008<br />Apologies I linked to the second part of the eclipse article on version updates by mistake the correct link is <a href="">here</a>.<img src="" height="1" width="1" alt=""/>David Savage JSR To Far?So there's a lot of conversation going on around the both <a href="">the</a> <a href="">politics</a> and the <a href="">technological</a> <a href="">issues</a> surrounding JSR 277. Personally I think the spec is doomed if it doesn't start working much more closely with the OSGi community - and here's why.<br /><br /. <br /><br />If JSR 277 makes it into the Java 7 release then it seems entirely plausible that the major vendors <span style="font-style:italic;">could</span> choose not to certify their products on Java 7 (especially if the JSR gets in the way of their existing investment in OSGi).<br /><br />Where would this leave Sun if the likes of Websphere, Weblogic, Spring, etc, etc are not certified to run in an enterprise environment? Also where does it leave Java?<br /><br /. <br /><br />Please guys, set egos aside and come to a sensible decision that allows us all to get on with our day jobs.<img src="" height="1" width="1" alt=""/>David Savage
http://feeds.feedburner.com/ChronologicalThought
CC-MAIN-2017-13
en
refinedweb
source: A weakness has been reported in Java implementations that may allow Java applets unauthorized access to floppy devices. This weakness appears to present a flaw in the Java security model. The issue was reported in Java Plug-in 1.4.x versions on Microsoft Windows operating systems when run with Internet Explorer. Other environments and versions may also be affected.

import java.awt.Label;

public class MyFloppySucks extends java.applet.Applet {

    private Label m_labVersionVendor;

    // constructor
    public MyFloppySucks() {
        m_labVersionVendor = new Label("Java Floppy Stress Testing Applet, (2003)"
                + " / Java Version: " + System.getProperty("java.version")
                + " from " + System.getProperty("java.vendor"));
        this.add(m_labVersionVendor);
    }

    public void paint(java.awt.Graphics g) {
        // Repeatedly parse an XML document to keep the floppy drive busy;
        // the source URL argument is blank in this copy of the listing.
        while (1 == 1)
            try {
                org.apache.crimson.tree.XmlDocument.createXmlDocument("", false);
            } catch (Exception e) {
                System.out.println("Java Floppy Stress Testing Applet, (2003)");
            }
    }
}

CVE: CVE-2003-1521
OSVDB: 60428
https://www.exploit-db.com/exploits/23270/
CC-MAIN-2017-39
en
refinedweb
Curiosity. Worker Role Requirements A standard C# worker roles requires three functions: OnStart, OnStop, and Run: public class PyWorkerRole : RoleEntryPoint { public override bool OnStart() { /* ... */ } public override void OnStop() { /* ... */ } public override void Run() { /* ... */ } } This can be mapped to a Python module in a fairly straightforward fashion: def start(): return True def run(): pass def stop(): pass This has some advantages and disadvantages compared to using a class, but I like it for its simplicity. The Implementation Azure requires an actual .NET class to implement a worker role, so we create one that hosts the IronPython engine. This is a good example of how to embed IronPython to run very simple scripts. The core IronPython hosting function is shown here; for the rest, see the files linked below. private void InitScripting(string scriptName) { this.engine = Python.CreateEngine(); this.engine.Runtime.LoadAssembly(typeof(string).Assembly); this.engine.Runtime.LoadAssembly(typeof(DiagnosticMonitor).Assembly); this.engine.Runtime.LoadAssembly(typeof(RoleEnvironment).Assembly); this.engine.Runtime.LoadAssembly(typeof(Microsoft.WindowsAzure.CloudStorageAccount).Assembly); this.scope = this.engine.CreateScope(); engine.CreateScriptSourceFromFile(scriptName).Execute(scope); if(scope.ContainsVariable("start")) this.start = scope.GetVariable<Func<bool>>("start"); this.run = scope.GetVariable<Action>("run"); if(scope.ContainsVariable("stop")) this.stop = scope.GetVariable<Action>("stop"); } First, we create a ScriptEngine and add some useful assemblies; then we create a Scope to execute in; then we actually execute the script. Finally, we try to pull out the functions and convert them to C# delegates; run is required but start and stop are optional. Those delegates are called from the C# wrapper (from Run, OnStart, and OnStop, as appropriate). The rest of the file is pretty much taken from the worker role template, so I'll leave it out. Doing Actual Work Now, a worker role needs some actual work to do – usually, reading items from a queue and processing them. Happily, the Azure StorageClient library is perfectly usable from IronPython. from Microsoft.WindowsAzure import CloudStorageAccount from Microsoft.WindowsAzure.StorageClient import CloudQueueMessage, CloudStorageAccountStorageClientExtensions def run(): account = CloudStorageAccount.FromConfigurationSetting("DataConnectionString") queueClient = CloudStorageAccountStorageClientExtensions.CreateCloudQueueClient(account) queue = queueClient.GetQueueReference("messagequeue") while True: Thread.Sleep(10000) if queue.Exists(): msg = queue.GetMessage() if msg: Trace.TraceInformation("Message '%s' processed." % msg.AsString) queue.DeleteMessage(msg) The only catch is that CreateCloudQueueClient is an extension method, so it must be called as a static method on the CloudStorageAccountStorageClientExtensions class. Using the Code To actually use the code, create a C# worker role as per usual, but replace the generated class file with PythonWorkerRole.cs (see below). Next, add the IronPython assemblies as references to the project. Then, create a string setting for the role (under the Cloud project's Roles folder) called ScriptName and set it to the name of the script file. Finally, add a .py file to the worker role and ensure that (under 'Properties') its 'Build Action' is 'Content' and 'Copy to Output' is 'Copy if Newer'. The code can be downloaded from my PyAzureExamples repository, including zip archives of it. 
It includes the PyWorkerRole project and the Cloud Service project.
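For anyone who doesn't want to download the project just to see the omitted wrapper methods, here is a rough sketch of how the overrides can forward to the stored delegates. The delegate fields and InitScripting come from the snippet above; the ScriptName lookup is an assumption based on the configuration step described earlier:

public override bool OnStart()
{
    InitScripting(RoleEnvironment.GetConfigurationSettingValue("ScriptName"));
    // Fall back to the base behaviour when the script doesn't define start()
    return this.start != null ? this.start() : base.OnStart();
}

public override void Run()
{
    this.run(); // the Python module's run() is expected to loop forever
}

public override void OnStop()
{
    if (this.stop != null) this.stop();
    base.OnStop();
}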
http://blog.jdhardy.ca/2010/01/ironpython-worker-roles-for-windows.html
CC-MAIN-2017-39
en
refinedweb
A Swift iOS 8 iOS application based on the Single View Application template. Enter CollectionDemo as the product name, choose Swift as the programming language and set the device menu to Universal. ViewController.swift file in the project navigator and pressing the keyboard delete key. When prompted, click the button to move the file to trash. Next, select the Main.storyboard file and, in the storyboard canvas, select the view controller so that it is outlined in blue (the view controller is selected by clicking on the toolbar area at the top of the layout view) and tap the keyboard delete key to remove the controller and leave a blank storyboard. Adding a Collection View Controller to the Storyboard The first element that needs to be added to the project is a UICollectionViewController subclass. To add this, select the Xcode File -> New -> File… menu option. In the resulting panel, select Source listed under iOS in the left hand panel, followed by Cocoa Touch Class in the main panel before clicking Next. On the subsequent screen, name the new class MyCollectionViewController and, from the Subclass of drop down menu, choose UICollectionViewController. Verify that the option to create an XIB file is switched off before clicking Next. Choose a location for the files and then click Create. The project now has a new file named MyCollectionViewController.swift which represents a new class named MyCollectionViewController which is itself a subclass of UICollectionViewController. The next step is to add a UICollectionViewController instance to the storyboard and then associate it with the newly created class. Select the Main.storyboard file and drag and drop a Collection View Controller object from the Object Library panel onto the storyboard canvas as illustrated in Figure 55-1. Note that the UICollectionViewController added to the storyboard has also brought with it a UICollectionView instance (indicated by the black background) and a prototype cell (represented by the white square outline located in the top left hand corner of the collection view). Figure 55-1 With the new view controller selected in the storyboard, display the Identity Inspector either by selecting the toolbar item in the Utilities panel or via the View -> Utilities -> Show Identity Inspector menu option and change the Class setting (Figure 55-2) from the generic UICollectionViewController class to the newly added MyCollectionViewController class. Figure 55-2 With the view controller scene still selected, display the Attributes Inspector and enable the Is Initial View Controller option so that the view controller is loaded when the app starts. Adding the Collection View Cell Class to the ProjectWith a subclass of UICollectionViewController added to the project, a new class must now be added which subclasses UICollectionViewCell. Once again, select the Xcode File -> New -> File… menu option. In the resulting panel, select the iOS Source option from the left hand panel, followed by Cocoa Touch.storyboard file and select the white square in the collection view. This is the prototype cell for the collection view and needs to be assigned a reuse identifier and associated with the new class. With the cell selected, open the Identity Inspector panel and change the Class to MyCollectionViewCell. Remaining in the Utilities panel, display the Attributes Inspector (Figure 55-3) and enter MyCell as the reuse identifier. 
Figure 55 slightly larger size, locating the Image View object in the Object Library panel and dragging and dropping it into the prototype cell as shown in Figure 55-4. Figure 55-4 With the Image View object selected in the storyboard scene, select the Pin menu from the toolbar located in the lower right hand corner of the storyboard panel and set up Spacing to nearest neighbor constraints on all four sides of the view with the Constrain to margins option turned off as illustrated in Figure 55-5: Figure 55-5 Once configured, click on the Add 4 Constraints button to establish the constraints. Since it will be necessary to assign an image when a cell is configured, an outlet to the Image View will be needed. Display the Assistant Editor, make sure it"] } Implementing the.swift, implement this method to identify the size of the current image and return the result to the collection view: // MARK: UICollectionViewFlowLayoutDelegate func collectionView(collectionView: UICollectionView, layout collectionViewLayout: UICollectionViewLayout, sizeForItemAtIndexPath indexPath: NSIndexPath) -> CGSize { let image = UIImage(named: carImages[indexPath.row]) return image!.size } Run the application once again and note that, as shown in Figure 55-8, the images are now displayed at their original sizes. Figure 55-8.swift file, implement the didSelectItemAtIndexPath delegate method as follows: // MARK: UICollectionViewDelegate override func collectionView(collectionView: UICollectionView, didSelectItemAtIndexPath indexPath: NSIndexPath) { let myLayout = UICollectionViewFlowLayout() myLayout.scrollDirection = UICollectionViewScrollDirection.Horizontal self.collectionView.setCollectionViewLayout(myLayout, animated: true) } Compile and run the application and select an image once the collection view appears. Notice how the layout smoothly animates the transition from vertical to horizontal scrolling. Note also that the layout adjusts automatically when the orientation of the device is rotated. Figure 55-9, for example shows the collection view in landscape orientation with horizontal scrolling enabled. Figure 55.storyboard file and click on the black background of the collection view controller canvas representing the UICollectionView object. Display the Attributes Inspector in the Utilities panel, locate the Accessories section listed under Collection View and set the Section Header check box as illustrated in Figure 55-10. Figure 55 guidelines appear. Using the Auto Layout Align menu, set constraints on the label so that it is positioned in the horizontal and vertical center of the containing view as shown in Figure 55-11: Figure 55-11 By default, labels have a black foreground so the text in the label will not be visible until a different color is selected. With the label still selected, move to the Attributes Inspector and change the Color property to white. Once completed, the layout should now resemble that of Figure 55-12. Figure 55-12 With a header prototype added to the storyboard, a new class needs to be added to the project to serve as the class for the header. Select File -> New -> File… and add a new iOS Cocoa Touch.swift. With the MySupplementaryView.swift file displayed in the Assistant Editor, establish an outlet connection from the label in the header and name it headerLabel. On completion of the connection, the content of the file should read as follows: import UIKit class MySupplementaryView: UICollectionReusableView { @IBOutlet weak var headerLabel: UILabel! 
} use a dequeuing mechanism similar to cells. For this example, implement the viewForSupplementaryElementOfKind method as follows in the MyCollectionViewController.swift file: override func collectionView(collectionView: UICollectionView, viewForSupplementaryElementOfKind kind: String, atIndexPath indexPath: NSIndexPath) -> UICollectionReusableView { var header: MySupplementaryView? if kind == UICollectionElementKindSectionHeader { header = collectionView.dequeueReusableSupplementaryViewOfKind(kind, withReuseIdentifier: "MyHeader", forIndexPath: indexPath) as? MySupplementaryView header?.headerLabel.text = "Car Image Gallery" } return header! } Compile and run the application once again and note that the header supplementary view is now visible in the collection view. Summary The previous chapter covered a considerable amount of ground in terms of the theory behind collection views in iOS 8. This chapter has put much of this theory into practice through the implementation of an example application that uses a collection view to display a gallery of images.
http://www.techotopia.com/index.php/A_Swift_iOS_8_Storyboard-based_Collection_View_Tutorial
CC-MAIN-2017-39
en
refinedweb
In python, how would I go about making a http request but not waiting for a response? I don't care about getting any data back, I just need the server to register a page request. Right now I use this code:

urllib2.urlopen("COOL WEBSITE")

What you want here is called Threading or Asynchronous.

Threading: wrap urllib2.urlopen() in a threading.Thread()

Example:

from threading import Thread
import urllib2

def open_website(url):
    return urllib2.urlopen(url)

Thread(target=open_website, args=[""]).start()

Asynchronous: Use the requests library which has this support (note: in recent versions of requests this module was split out into the separate grequests package).

Example:

from requests import async
async.get("")

There is also a 3rd option using the restclient library, which has had built-in asynchronous support for some time:

from restclient import GET
res = GET("", async=True, resp=True)
https://codedump.io/share/46C7XipFhwFz/1/python-amp-urllib2---request-webpage-but-don39t-wait-for-response
CC-MAIN-2017-39
en
refinedweb
I'm trying to use the News API in a python program, and for some reason I can't get a 200 response no matter what. I'm pretty unfamiliar with the requests library, so maybe I'm not doing something right, but here's what my code looks like: api = XXXXXXXXXX def get_json_response(apiKey, resource='google-news', sortBy='latest'): url = '' headers = { 'source': resource, 'apiKey': apiKey, 'sortBy': sortBy} r = requests.get(url, headers=headers) print(r.status_code) get_json_response(api) r = requests.get(url + '/?source=' + resource + '&sortBy=' + sortBy + '&apiKey=' + apiKey) Based on the 'working' link provided, it expects URL parameters, not headers on its request, so: def get_json_response(apiKey, resource='google-news'): url = '' params = {'source': resource, 'apiKey': apiKey} r = requests.get(url, params=params) print(r.status_code) # etc.
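For completeness, the body of the fixed function could go on to consume the response like this; the 'articles' and 'title' keys are recalled from the v1 News API JSON, so double-check them against the current docs:

    r = requests.get(url, params=params)
    if r.ok:
        data = r.json()
        for article in data.get('articles', []):
            print(article.get('title'))
    else:
        print(r.status_code, r.text)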
https://codedump.io/share/9JrHiRef7Ap9/1/python-requests-for-newsapi-reponding-401-every-time
CC-MAIN-2017-39
en
refinedweb
SoComplexity.3coin3 man page SoComplexity —. Synopsis #include <Inventor/nodes/SoComplexity.h> Inherits SoNode. Public Types enum Type { OBJECT_SPACE = SoComplexityTypeElement::OBJECT_SPACE, SCREEN_SPACE = SoComplexityTypeElement::SCREEN_SPACE, BOUNDING_BOX = SoComplexityTypeElement::BOUNDING_BOX } Public Member Functions virtual SoType getTypeId (void) const SoComplexity (void) virtual void doAction (SoAction *action) virtual void callback (SoCallbackAction *action) virtual void getBoundingBox (SoGetBoundingBoxAction *action) virtual void GLRender (SoGLRenderAction *action) virtual void pick (SoPickAction *action) virtual void getPrimitiveCount (SoGetPrimitiveCountAction *action) Static Public Member Functions static SoType getClassTypeId (void) static void initClass (void) Public Attributes SoSFEnum type SoSFFloat value SoSFFloat textureQuality Protected Member Functions virtual const SoFieldData * getFieldData (void) const virtual ~SoComplexity () Static Protected Member Functions static const SoFieldData ** getFieldDataPtr (void) Additional Inherited Members Detailed Description. Shape nodes like SoCone, SoSphere, SoCylinder and others, will render with fewer polygons and thereby improve performance, if the complexity value of the traversal state is set to a lower value. By using the SoComplexity::type field, you may also choose to render the scene graph (or parts of it) just as wireframe bounding boxes. This will improve rendering performance a lot, and can sometimes be used in particular situations where responsiveness is more important than appearance. Texture mapping can be done in an expensive but attractive looking manner, or in a quick way which doesn't look as appealing by modifying the value of the SoComplexity::textureQuality field. By setting the SoComplexity::textureQuality field to a value of 0.0, you can also turn texturemapping completely off. FILE FORMAT/DEFAULTS: Complexity { type OBJECT_SPACE value 0.5 textureQuality 0.5 } Member Enumeration Documentation enum SoComplexity::Type The available values for the SoComplexity::type field. Enumerator - OBJECT_SPACE Use the SoComplexity::value in calculations based on the geometry's size in world-space 3D. - SCREEN_SPACE Use the SoComplexity::value in calculations based on the geometry's size when projected onto the rendering area. This is often a good way to make sure that objects are rendered with as low complexity as possible while still retaining their appearance for the user. - BOUNDING_BOX Render further geometry in the scene graph as bounding boxes only for superfast rendering. Constructor & Destructor Documentation SoComplexity::SoComplexity (void) Constructor. SoComplexity::~SoComplexity () [protected], [virtual] Destructor. Member Function Documentation SoType SoComplexplexity::getFieldData (void) const [protected], [virtual] Returns a pointer to the class-wide field data storage object for this instance. If no fields are present, returns NULL. Reimplemented from SoFieldContainer. void SoComplexity::doAction (SoAction * action) [virtual] This function performs the typical operation of a node for any action. Reimplemented from SoNode. void SoComplexity: SoComplex SoComplex SoComplexity::pick (SoPickAction * action) [virtual] Action method for SoPickAction. Does common processing for SoPickAction action instances. Reimplemented from SoNode. void SoComplex. Member Data Documentation SoSFEnum SoComplexity::type Set rendering type. Default value is SoComplexity::OBJECT_SPACE. 
SoSFFloat SoComplexity::value Complexity value, valid settings range from 0.0 (worst appearance, best performance) to 1.0 (optimal appearance, lowest rendering speed). Default value for the field is 0.5. Note that without any SoComplexity nodes in the scene graph, geometry will render as if there was a SoComplexity node present with SoComplexity::value set to 1.0. SoSFFloat SoComplexity::textureQuality Sets the quality value for texturemapping. Valid range is from 0.0 (texturemapping off, rendering will be much faster for most platforms) to 1.0 (best quality, rendering might be slow). The same value for this field on different platforms can yield varying results, depending on the quality of the underlying rendering hardware. Note that this field influences the behavior of the SoTexture2 node, not the shape nodes. There is an important consequence of this that the application programmer need to know about: you need to insert your SoComplexity node(s) before the SoTexture2 node(s) in the scenegraph for them to have any influence on the textured shapes. Author Generated automatically by Doxygen for Coin from the source code. Referenced By The man page SoComplexity.3coin2(3) is an alias of SoComplexity.3coin3(3).
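The man page itself shows no usage, so as a rough illustration of how these fields are typically set in a Coin/Open Inventor scene graph (the node and field names come from the page above; the rest of the scene is assumed):

#include <Inventor/nodes/SoComplexity.h>
#include <Inventor/nodes/SoSeparator.h>
#include <Inventor/nodes/SoSphere.h>

// Build a small scene where the sphere is tessellated coarsely and
// textured at reduced quality.
SoSeparator * makeScene(void)
{
  SoSeparator * root = new SoSeparator;
  root->ref();

  SoComplexity * complexity = new SoComplexity;
  complexity->type = SoComplexity::OBJECT_SPACE; // default, shown for clarity
  complexity->value = 0.2f;            // fewer polygons, faster rendering
  complexity->textureQuality = 0.3f;   // cheaper texture filtering
  root->addChild(complexity);

  root->addChild(new SoSphere);        // affected by the complexity node above it
  return root;
}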
https://www.mankier.com/3/SoComplexity.3coin3
CC-MAIN-2017-39
en
refinedweb
Bummer! This is just a preview. You need to be signed in with a Basic account to view the entire video. Dependency Management8:44 with Chris Ramacciotti One of the main purposes of a build tool is dependency management. This video discusses how to add external libraries to your projects with Maven. - 0:00 A dependency is a Java library that this project depends on. - 0:04 While developing Java applications, you'll almost always be using other developers - 0:08 libraries, in addition to your own. - 0:10 So if you want to use code that isn't a part of the Java core library, - 0:14 you'll need to add that library as a dependency. - 0:17 Dependencies and Maven are declared inside this dependencies element. - 0:20 With each dependency represented by its own dependency element. - 0:25 Naming follows the GAV convention which stands for Group Artifact and Version. - 0:31 Remember, this application is going to spy on a directory in our file system. - 0:35 And alert us when files of a certain type are added to the directory. - 0:39 Detecting the file's type can be a cumbersome process. - 0:41 And at the very least, it's a problem that other developers have already tackled, so - 0:45 let's use their code. - 0:47 Hey, remember that Apache Tika library we found on Maven central in this last video? - 0:51 Guess what? - 0:52 We're gonna use it. - 0:53 So let me click on this version, I'll copy this dependency element here. - 0:59 And I'm going to paste it into my palm, under the dependencies element, great. - 1:05 Now, when we write our code and compile with Maven, - 1:07 we'll be able to reference the classes provided by Apache Tika. - 1:11 Let's write our code now in app.java. - 1:15 I'm gonna wipe out most of this class and start from scratch. - 1:19 Okay, let's start with a couple import statements. - 1:21 First, we're gonna import java.nio.file and - 1:25 I'm gonna import all classes in that file package. - 1:30 And this java.nio package stands for - 1:32 Java's non-blocking input/output API, more on that in just a bit. - 1:38 The next thing I wanna import is org.apache.tika.Tika; class. - 1:42 Now, you'll notice that Intellige can't find this Apache package. - 1:50 Though this won't be a problem for Maven when we compile, - 1:53 my IDE will keep complaining. - 1:55 To fix this and more fully utilize my IDE for coding, - 1:58 I'll pop open the Maven Projects window here. - 2:02 And then, I'll click that refresh icon. - 2:06 After that, my IDE is happy as you can see. - 2:10 Be sure to check out the options in your own IDE's Maven - 2:12 tool window to see what's available. - 2:14 But for this workshop, we'll stick with the command line for all Maven commands. - 2:19 Okay, let's continue coding our class. - 2:21 This class is called, public class App. - 2:27 And I'm gonna drop two constants at the top of the class here for - 2:30 the file type and the directory we want to watch. - 2:33 So private static final String and I'll call it FILE_TYPE, - 2:38 and I'll say text/csv files, cool. - 2:42 And then, I'll do the same for the directory to watch - 2:47 private static file String and all say DIR_TO_WATCH. - 2:52 And also a /Users/Chris/Downloads and I made a tmp directory there. - 2:59 You can change this to any empty folder on your own system, cool. - 3:04 And then, I wanna public static void main method here, awesome. - 3:08 Now ,the code we're gonna write here using the Java NIO, might look a little cryptic. 
- 3:13 I'm certainly not asking you to have a complete understanding of how to use this - 3:16 non-blocking input/output API provided by the Java core library. - 3:22 Our work here is more about understanding how to package the project - 3:25 into a distributable jar using Maven. - 3:27 In any case, it's a nice opportunity for - 3:29 us to look at Java features you may not have seen. - 3:32 So we'll start by defining a path object that contains the directory to watch, - 3:37 as well as a Tika object for detecting files. - 3:39 So I say, path dir = Paths.get and then, - 3:43 I'll say, (DIR_TO_WATCH), cool. - 3:47 And I'll define a Tika object and call its default constructor. - 3:53 Next, lets add our watch service which will allow us to spy on the directory, so - 3:58 WatchService. - 3:59 I'll just say watchService = FileSystems.getDefault. - 4:07 It gets the current file system, newWatchService, cool. - 4:11 And then, I'm going to register this watch service with a directory. - 4:15 So dir.register(watchService,) and - 4:19 I want to register it for events, for creating new files. - 4:25 So I only wanna detect events for when new files are added to this directory. - 4:30 So I'll say, ENTRY_CREATE. - 4:34 Now, this is a constant that comes from a certain class. - 4:39 So let's import that, - 4:41 .StandardWatchEventKinds.ENTRY_CREATE;, cool. - 4:46 Now, what you're probably going to see at this point are a bunch of warnings saying, - 4:51 that you have on caught exceptions. - 4:54 So I'm gonna do something here which I would normally do on a distributed Java - 4:58 application. - 4:58 But I'll do it here for purposes of brevity. - 5:00 I'm just going to say throws Exception and that will silence the compiler, cool. - 5:06 Now, we can move on. - 5:08 Okay, let's start our event loop which will continually run until we receive - 5:12 an invalid watch key. - 5:13 So I'll define a WatchKey called key, and then, I'll define as do while loop. - 5:20 while(key.reset());, this will loop as long as the key is valid. - 5:28 Now, you'll see this little red squiggly here, until we assign key a value. - 5:32 But we're gonna do that inside the loop. - 5:35 Let's go ahead and do that. - 5:37 So now, this watch key is an object that - 5:40 represents the registration of our path object with the watch service. - 5:43 When an event occurs, a key will be available through the watch service, - 5:47 through a call to its take method. - 5:49 So that's what I will assign this key variable, watchService.take(). - 5:55 Now, at this point in our code, we need to loop through any events that come through. - 6:00 And instead of using a for loop here which we could do. - 6:02 We'll use streams to access the events, we'll call the poll of events method and - 6:07 examine the stream there. - 6:08 So I'll say, key.pollEvents().stream(). - 6:12 And since, we only care about the create events for a certain file type. - 6:15 Let's filter our stream().filter and that will accept an event object. - 6:21 And it will return true, if we want the item to be included in our stream or - 6:25 false, if we don't. - 6:28 So in general, what we need to do here is return true when - 6:30 the file type equals our constant FILE_TYPE, up here. - 6:35 Looks like a misspelled it, FILE, there we go. - 6:40 So we wanna return true, when the file associated with - 6:44 this event is of this file type and false, otherwise. - 6:49 So let's start with that code and work backwards. 
- 6:53 So I will say, return FILE_TYPE.equals, and I'll say, (type). - 6:59 Well, this must mean, we need a variable declared as type before this line of code, - 7:04 let's do that. - 7:05 How are we going to get that, well, String type = tika., - 7:09 this is where that library comes in handy. - 7:12 (filename.toString()) but I don't have a filename yet, - 7:17 so how am I gonna get the file name of the file associated with this event e? - 7:22 Well, here's how you can do that. - 7:23 We'll say, Path filename =, I'm gonna cast to - 7:28 a path object to e.context();, just like that. - 7:35 All right, now, you might get a warning like I am, - 7:38 that Lambda Expressions are not supported it's language level. - 7:41 Well, let's change the language level to eight in our IDE. - 7:47 Okay, now that error is gone. - 7:49 That's not needed for Maven but just needed for our IDE. - 7:53 So now, we have a filtered stream, cool. - 7:55 Now, what do we wanna do with each one of these events? - 7:57 Let's use the forEach method to perform an action on each one of these events. - 8:02 So we'll say, forEach and then an event will be the parameter. - 8:07 And what do I wanna do, I'll do a single line here. - 8:11 That way, I don't have to enclose it in curly braces. - 8:13 I'll do System.out.println, now, I'll say printf. - 8:18 And I'll save file found, and I'll drop the name of the file. - 8:22 And then, I'll drop a new line there, cool. - 8:25 And e.context()), will give me that filename. - 8:30 Oops, don't need a semicolon because I haven't used the curly braces here, - 8:32 Eexcellent. - 8:34 Now, with our code in place, - 8:35 we're ready to start running Maven commands from the terminal. - 8:38 So next, we'll learn about Maven build life cycles. - 8:41 And we'll begin to build our app through Maven commands.
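For reference, here are the pieces from the transcript pulled together. The Tika dependency goes into the pom.xml dependencies element (the coordinates below are the usual tika-core GAV; the version is only an example), and the finished watcher class looks roughly like this — treat it as a reconstruction of what is dictated above rather than the official course code, and adjust the directory path for your own machine:

<dependencies>
    <dependency>
        <groupId>org.apache.tika</groupId>
        <artifactId>tika-core</artifactId>
        <version>1.14</version>
    </dependency>
</dependencies>

import java.nio.file.*;
import static java.nio.file.StandardWatchEventKinds.ENTRY_CREATE;
import org.apache.tika.Tika;

public class App {

    private static final String FILE_TYPE = "text/csv";
    private static final String DIR_TO_WATCH = "/Users/Chris/Downloads/tmp";

    public static void main(String[] args) throws Exception {
        Path dir = Paths.get(DIR_TO_WATCH);
        Tika tika = new Tika();

        WatchService watchService = FileSystems.getDefault().newWatchService();
        dir.register(watchService, ENTRY_CREATE);

        WatchKey key;
        do {
            key = watchService.take();   // blocks until an event arrives
            key.pollEvents().stream()
                .filter(e -> {
                    Path filename = (Path) e.context();
                    String type = tika.detect(filename.toString());
                    return FILE_TYPE.equals(type);
                })
                .forEach(e -> System.out.printf("File found: %s%n", e.context()));
        } while (key.reset());          // loop as long as the key stays valid
    }
}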
https://teamtreehouse.com/library/dependency-management
CC-MAIN-2017-39
en
refinedweb
Review this list to help avoid issues that frequently prevent apps from getting certified, or that might be identified during a spot check after the app is published. Note Be sure to review the Windows Store Policies to ensure your app meets all of the requirements listed there.. Ensure that your app doesn't crash without network connectivity. Even if a connection is required to actually use your app, it needs to perform appropriately when no connection is present. Provide any necessary info required to use your app, such as the user name and password for a test account if your app requires users to log in to a service, or any steps required to access hidden or locked features. Include a privacy policy if your app requires one; for example, if your app accesses any kind of personal information in any way or is otherwise required by law. To help determine if your app requires a privacy policy, review the App Developer Agreement and the Windows Store Policies. Make sure that your app's description clearly represents what your app does. For help, see our guidance on writing a great app description. Provide complete and accurate responses to all of the questions in the Age ratings section. Don't declare your app as accessible unless you have specifically engineered and tested it for accessibility scenarios. If your app uses the commerce APIs from the Windows.ApplicationModel.Store namespace, make sure to test the app and verify that it handles typical exceptions. Also, make sure that your app uses the CurrentApp class and not the CurrentAppSimulator class, which is for testing purposes only. (Note that if your app targets Windows 10, version 1607 or later, we recommend that you use members of the Windows.Services.Store namespace instead of the Windows.ApplicationModel.Store namespace.)
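On that last point, one common way to keep the simulator out of release builds is a conditional type alias — a minimal sketch, assuming the standard Windows.ApplicationModel.Store types:

#if DEBUG
using Store = Windows.ApplicationModel.Store.CurrentAppSimulator;
#else
using Store = Windows.ApplicationModel.Store.CurrentApp;
#endif

// Later, the same call compiles in both configurations:
// var listing = await Store.LoadListingInformationAsync();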
https://docs.microsoft.com/en-us/windows/uwp/publish/avoid-common-certification-failures
CC-MAIN-2017-39
en
refinedweb
Hello everyone. Would anyone know why I can't do the below? I'm not the best programmer in the world so I'm wondering why the below won't work. I get an error 'A template declaration cannot appear in block scope'. No doubt it's due to my lack of understanding of the topic. Thank you.

#include <iostream>
#include <cstdlib>
#include <string>
using namespace std;

int main()
{
    template<typename T>
    T one = "test";
    T two = 255;
    T three = 287.52;
    T four = 'x';
    cout << one << " " << two << " " << three << " " << four << endl;
    system(pause>nul);
}
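For context, the compiler is objecting because a template declaration has to appear at namespace (or class) scope, not inside main(), and a single type parameter T can only stand for one type per instantiation. A sketch of something the compiler will accept, keeping the same values:

#include <iostream>
#include <cstdlib>
#include <string>
using namespace std;

// declared outside main(); each call deduces its own T
template<typename T>
void show(const T& value)
{
    cout << value << " ";
}

int main()
{
    show(string("test"));
    show(255);
    show(287.52);
    show('x');
    cout << endl;
    system("pause>nul"); // note: the command also has to be a quoted string
}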
https://www.daniweb.com/programming/software-development/threads/461923/templates-c
CC-MAIN-2017-39
en
refinedweb
Overview Atlassian Sourcetree is a free Git and Mercurial client for Windows. Atlassian Sourcetree is a free Git and Mercurial client for Mac. Crouching Alchemist Hidden Panda (CALCHIPAN) What is it? A SQLAlchemy dialect which will consider a series of Pandas DataFrame objects as relational database tables, allowing them to be queried and manipulated in all the usual SQLAlchemy ways. Why is it? Me: Naive Pandas idea #1: map SQLAlchemy declarative classes to pandas dataframes, use ORM or core queries to query it. Pointless? Wes McKinney: @chriswithers13 Michael Bayer integration layers = good From the man himself, and a Pycon sprint project was born! Example from sqlalchemy import create_engine, MetaData, Table, Column, \ Integer, select, String engine = create_engine("pandas://") m = MetaData() employee = Table('emp', m, Column('emp_id', Integer, primary_key=True), Column('name', String) ) m.create_all(engine) engine.execute(employee.insert().values(emp_id=1, name='ed')) result = engine.execute( select([employee]).where(employee.c.name == 'ed') ) print(result.fetchall()) Note we didn't need to import Pandas at all above; we created and populated the table entirely using SQLAlchemy means, and Pandas remained hidden as a storage engine - hence the name! Of course, you can and in most cases probably should start with regular Pandas dataframes, and send them off into an engine to be queried. Using this approach, the Table objects can be built explicitly or more easily just reflected as below; obviously primary/foreign key constraints, useful for when using the ORM, would still have to be established manually: import pandas as pd dept_df = pd.DataFrame([{"dep_id": 1, "name": "Engineering"}, {"dep_id": 2, "name": "Accounting"}]) emp_df = pd.DataFrame([ {"emp_id": 1, "name": "ed", "fullname": "Ed Jones", "dep_id": 1}, {"emp_id": 2, "name": "wendy", "fullname": "Wendy Wharton", "dep_id": 1} ]) engine = create_engine("pandas://", namespace={"dept": dept_df, "emp": emp_df}) m = MetaData() department = Table('dept', m, autoload=True, autoload_with=engine) employee = Table('emp', m, autoload=True, autoload_with=engine) result = engine.execute( select([employee.c.name, department.c.name]). select_from( employee.join(department, employee.c.dep_id == department.c.dep_id)) ) print(result.fetchall()) Add Any Python Function Since we're totally in Python now, you can make any kind of Python function into a SQL function, most handily the various numpy functions: from calchipan import aggregate_fn @aggregate_fn(package='numpy') def stddev(values): return values.std() Above, stddev is now a SQLAlchemy aggregate function: result = engine.execute(select([stddev(mytable.c.data)])) And it is also available from the func namespace (note we also put it into a sub-"package" called numpy: from sqlalchemy import func result = engine.execute(select([func.numpy.stddev(mytable.c.data)])) Autoincrement The Table object can be configured to deliver the index of the data frame as the "primary key" of the table, which will also work as an "autoincrement" field when rows are inserted. This feature requires SQLAlchemy 0.8.1 (or latest tip) to work: my_table = Table('my_dataframe', metadata, Column('id', Integer, primary_key=True), Column('data', Integer), pandas_index_pk=True ) result = engine.execute(my_table.insert().values(data=5)) pk = result.inserted_primary_key() Great, so Pandas is totally SQL-capable now right? Well let's just slow down there. 
The library here is actually working quite nicely, and yes, you can do a pretty decent range of SQL operations here, noting the caveats that this is super duper alpha stuff I just started a week ago. Some SQL operations that we normally take for granted will perform pretty badly (guide is below), so at the moment it's not entirely clear how much speed will be an issue. There's a good number of tests for all the SQL functionality that's been implemented, though these are all rudimentary "does it work at all" tests dealing with at most only three or four rows and two tables. Additional functional tests with real world ORM examples have shown very good results, illustrating queries with fairly complex geometries (lots of subqueries, aliases, and joins) working very well with no errors. The performance of some operations, particularly data mutation operations, are fairly slow, but Pandas is not oriented towards manipulation of DataFrames in a CRUD-style way in any case. For straight up SELECTs that stay close to primary Pandas use cases, results should be pretty decent. Can I just type "select * from table" and it will work? No, we're dealing here strictly with SQLAlchemy expression constructs as the source of the SQL parse tree. So while the ORM works just fine, there's no facility here to actually receive a SQL string itself. However, the (more) ambitious (than me) programmer should be able to take a product like sqlparse and use that product's parse tree to deliver the same command objects that the compiler does here, the calchipan.compiler (SQLAlchemy compiler) and calchipan.resolver (command objects understood by the Pandas DBAPI) are entirely separate, and the resolver has minimal dependencies on SQLAlchemy. All your caveats and excuses are making me sad. Here's the pandasql package, which does basically the same thing that sqldf does for R, which is copies data out to SQLite databases as needed and lets you run SQL against that. So if you want straight up SQL queries delivered perfectly, use that. You just have to wait while it copies all your dataframes out to the database for each table (which might not be a problem at all). pandasql also doesn't provide easy hooks for usage with packages like SQLAlchemy, though the whole thing is only 50 lines so hacking its approach might be worth it. Will CALCHIPAN at least return the right results to me? As noted before, initial testing looks very good. But note that this is half of a relational database implementation written in Python; if you look at sqlite's changelog you can see they are still fixing "I got the wrong answer" types of bugs after nine years of development, which is 46800% the number of weeks versus Calchipans one week of development time as of March 25, 2013. So as a rule of thumb I'd say Calchipan is way too new to be trusted with anything at all. Feel free to use the bugtracker here to report on early usage experiences and issues, the latter should absolutely be expected. Performance Notes The SQL operations are all implemented with an emphasis on relying upon Pandas in the simplest and most idiomatic way possible for any query given. Two common SQL operations, implicit joins and correlated subqueries, work fully, but are not optimized at all - an implicit join (that is, selecting from more than one table without using join()) relies internally on producing a cartesian product, which you aren't going to like for large (or even a few thousand rows) datasets. 
Correlated subqueries involve running the subquery individually on every row, so these will also make the speed-hungry user sad (but the "holy crap correlated subqueries are possible with Pandas?" user should be really happy!). A join using join() or outerjoin() will internally make use of Pandas' merge() function directly for simple criteria, so if you stay within the lines, you should get pretty good Pandas-like performance, but if you try non-simple criteria like joinining on "x > y", you'll be back in cartesian land. The libary also does a little bit of restatement of dataframes internally which has a modest performance hit, which is more significant if one is using the "index as primary key" feature, which involves making copies of the DataFrame's index into a column. What's Implemented - WHERE criterion - column expressions, functions - implicit joins (where multiple tables are specified without using JOIN) - explicit joins (i.e. using join()), on simple criteria (fast) and custom criteria (slower) - explicit outerjoins (using outerjoin()), on simple criteria (sort of fast) and custom criteria (slower) - subqueries in the FROM clause - subqueries in the columns and WHERE clause which can be correlated; note that column/where queries are not very performant however as they invoke explicitly for every row in the parent result - ORDER BY - GROUP BY - aggregate functions, including custom user-defined aggregate functions - HAVING, including comparison of aggregate function values - LIMIT, using select().limit() - OFFSET, using select().offset() - UNION ALL, using union_all() - A few SQL functions are implemented so far, including count(), max(), min(), and now() - Table reflection - Only gets the names of columns, and at best only the "String", "Integer", "Float" types based on a dataframe. There's no primary key, foreign key constraints, defaults, indexes or anything like that. Primary and FK constraints would need to be specified to the Table() explicitly if one is using the ORM and wishes these constructs to be present. - CRUD operations - Note that Pandas is not optimized for modifications of dataframes, and dataframes should normally be populated ahead of time using normal Pandas APIs, unless SQL-specific or ORM-specific functionality is needed in order to produce initial schemas and/or populate data. CRUD operations here work correctly but are not by any means fast, nor is there any notion of thread safety or anything like that. ORM models can be fully persisted to dataframes using this functionality. - insert() - Plain inserts - multi-valued inserts, i.e. table.insert().values([{"a": 1, "b": 2}, {"a": 3, "b": 4}]) - Note that inserts here must create a new dataframe for each statement invoked! Generally, dataframes should be populated using Pandas standard methods; INSERT here is only a utility - cursor.lastrowid - if the table is set up to use the Pandas "index" as the primary key, this value will function. The library is less efficient when used in this mode, however, as it needs to copy the index column every time the table is accessed. SQLAlchemy returns this value as result.inserted_primary_key(). - update() - Plain updates - Expression updates, i.e. set the value of a column to an expression possibly deriving from other columns in the row - Correlated subquery updates, i.e. set the value of a column to the result of a correlated subquery - Full WHERE criterion including correlated subqueries - cursor.rowcount, number of rows matched. 
- delete() - Plain deletes - Full WHERE criterion including correlated subqueries - cursor.rowcount, number of rows matched - ORM - The SQLAlchemy ORM builds entirely on top of the Core SQL constructs above, so it works fully. What's Egregiously Missing - Other set ops besides UNION ALL - UNION, EXCEPT, INTERSECTION, etc., these should be easy to implement - RETURNING for insert, update, delete, also should be straightforward to implement - Lots of obvious functions are missing, only a few are present so far - Coercion/testing of Python date and time values. Pandas seems to use an internal Timestamp format, so SQLAlchemy types that coerce to/from Python datetime() objects and such need to be added. - EXISTS, needs to be evaluated - CASE statements (should be very easy) - anything fancy, window functions, CTEs, etc. - ANY KIND OF INPUT SANITIZING - I've no idea if Pandas and/or numpy have any kind of remote code execution vulnerabilities, but if they do, they are here as well. This library has no security features of any kind, please do not send untrusted data into it. Thanks, and have a nice day!
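As a quick illustration of the ORM point under "What's Implemented" above, a minimal sketch — the model and field names are made up, it assumes the pandas:// dialect is registered as in the earlier examples, and it uses the SQLAlchemy 0.8-era declarative API:

from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import Session

Base = declarative_base()

class Employee(Base):
    __tablename__ = 'emp'
    emp_id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("pandas://")
Base.metadata.create_all(engine)   # a backing DataFrame is created for 'emp'

session = Session(engine)
session.add_all([Employee(emp_id=1, name='ed'),
                 Employee(emp_id=2, name='wendy')])
session.commit()

print(session.query(Employee).filter(Employee.name == 'ed').all())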
https://bitbucket.org/zzzeek/calchipan.git
CC-MAIN-2017-39
en
refinedweb
Symptom When trying to save a new Bank Account with Manage Bank Accounts there is the error: Activate function returned an entity which is still a draft instance - IsActiveEntity = false - The following problem occurred: HTTP request failed400,Bad Request,{"error":{"code":"SY/530","message":{"lang":"en","value":"An exception was raised."},"innererror":{"application":{"component_id":"FIN-FSCM-CLM","service_namespace":"/SAP/","service_id":"FCLM_BAM_ACCOUNTWD_SRV","service_version":"0001"},"transactionid":"B5312D7761214DCDB30F175FE49CC829","timestamp":"","Error_Resolution":{"SAP_Transaction":"","SAP_Note":"See SAP Note 1797736 for error analysis ()","Batch_SAP_Note":"See SAP Note 1869434 for details abou [TRUNCATED] Environment SAP Fiori Front End 1709 Manage Bank Accounts Keywords Cash, Basic Setting, FCLM_BAM_ACCOUNTWD_SRV, Manage Bank Accounts, FIORI, Bank Account, Save, KBA, FIN-FSCM-CLM-BAM, Bank Account Management, Problem
https://apps.support.sap.com/sap/support/knowledge/preview/en/2689860
CC-MAIN-2018-47
en
refinedweb
Ever needed to create an icon for your Windows Desktop project? Well, I did, and many times. Nowadays, we can find some nice websites that provide easy ways to create a Windows Icon file from a PNG image. But, in the past, things were different. Some years ago, before the explosion of the mobile development world, finding a good, and more importantly, free tool to help create an icon was a really tough deal! That's why I began the development of this tool some years ago. And now, I've just updated and uploaded it to GITHUB, so that everyone can make use of it and, if they want, help in the development process. In fact, it might not seem this way, but the icon of a piece of software is like its profile picture. Our mind associates that tiny picture with our program. So, you should spend more time creating a particular icon that defines your application. Think about it! With this cool open source tool, you will be able to design your application's icon in your favorite program (like GIMP, Illustrator, Corel), export it to a PNG image or even an SVG vector, and just open Icon Pro and import that image. It will do all the background work, generating all the multiple size frames from that single image. With a few clicks, your icon will be there for you! Please note that, even though I've done lots of research and studied a lot about icon files, this software might still not be perfect. If you find a bug, help me out to fix it. Write a reply, tell me what's wrong, let's share some code. Please, don't vote down; the main goal here is to build a simple, free and open source tool to help everyone. So, this is a WPF application, written in C# (7.1) for Microsoft .NET Framework 4.6.1. This tool is, basically, a complement for my project described here: There, you can understand a little bit more about its background. But, for now, I'm going to tell you the most important things about its background: It's divided into layers, 3 layers: CursorBitmapDecoder CursorBitmapEncoder IconBitmapEncoder IconBitmapDecoder This tool allows you to extract icons from executables (.exe and .dll). This very cool functionality is based on the great article below by Tsuda Kageyu: This tool can open Windows Cursor files (.cur), but still, there's no implemented functionality that allows you to create one, even though that's already possible with the CursorBitmapEncoder class. With the use of the great API SVG.NET, this application lets you create an Icon from an SVG vector file. You can, as well, create an Icon file from a PNG image! In short words, the main functionalities of this program are: Now, after lots of research I've successfully implemented very experimental support for opening Animated Cursors. I would like to mention the great article series by Jeff Friesen that helped me a lot in understanding the structure of the cursor files! Introduce Animated Cursors to Java GUIs, Part 1 | Let There Be Animated Cursors On the latest updates, I've included a piece of code from the great article below: IconLib - Icons Unfolded (MultiIcon and Windows Vista supported) - CodeProject The code I've included is related to the AND mask creation.
Actually, you don't need to write even one single line of code to create an icon, but, in the background, this tool and the libraries behind it allow you to code things like this: IconPro.Lib.Wpf.IconBitmapEncoder encoder = new Lib.Wpf.IconBitmapEncoder(); encoder.UsePngCompression = compression; foreach (Models.IconFrameModel fr in _Frames) { encoder.Frames.Add(System.Windows.Media.Imaging.BitmapFrame.Create(fr.Frame)); } encoder.Save(Output); encoder = null; Okay, I know, it's 2017 and now everything is about Mobile Development, and it doesn't make much sense to publish a project to handle files from the past century. But I found the development of this tool very fun, and, as it can be useful for someone, I'm publishing it for everyone on GITHUB. I would like to thank asiwel for pointing out some issues and bugs. Thanks to him, I've made some changes and fixed some bugs. There you can find the source code and the release: I'd like to present the road-map for this tool. Currently, I'm doing improvements on PHASE 2. But there is already some progress towards PHASE 3. For example, the Core Lib already contains a namespace, called Motion, with some classes to handle Animated Cursor files. Even though it is at a very experimental stage, this code lets you open animated cursor files and view their frames. In fact, the core library is already on version 2.2.x while the rest of the code is still on version 1.x.x. This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL) I guess I thought using 8 BPP just meant a smaller icon file with fewer colors available? I tend to use indexed color files anyway for icons and simple images. Maybe that is bad? Why that makes a difference in the frames sizes, I really have no idea. (All of those "hand-made" icon files I mentioned above contain all the frames sizes and were created from 8 BPP images, I think ... and seem to work fine. Maybe they would be poor or fail on certain hardware???) PS / Oddly, "exporting" (apparently SAVE saves icon files and EXPORT saves icon images) opens a "Browse for Folder" dialog that shows folders/places to save such as Libraries, Network, Control Panel, and my User Name folder (!?!) as well as ordinary desktop folders and the This PC folder, etc. This seems to me to be a "poor design" or else a bug in that no one would want to accidentally drop a bunch of image files (or anything else) into those system folders. I suggest this be fixed. Incidentally I think I just noticed another "bug." When you click EXPORT and the "Browse for Folder" window is open, it of course has the focus. But if you then click on your app window, giving it the focus, and then click on the X box to close the app, it does close ... but the Browse for Folder window stays open. That is not good. When you close your application, all of the windows and other stuff associated with that running process should close properly as well.
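To round out the encoder snippet quoted in the article, here is a hedged sketch of feeding it from a single PNG file. The file paths are placeholders, only the members already shown above (UsePngCompression, Frames, Save) are relied on, and the assumption that Save accepts a writable stream (as the "Output" variable above suggests) is exactly that, an assumption:

// Load a source image with standard WPF imaging (not part of IconPro itself).
var source = System.Windows.Media.Imaging.BitmapFrame.Create(new System.Uri(@"C:\temp\app-256.png"));

var encoder = new IconPro.Lib.Wpf.IconBitmapEncoder();
encoder.UsePngCompression = true;   // store frames with PNG compression, as supported by the tool
encoder.Frames.Add(source);         // the real tool adds one resized frame per icon size

using (var output = System.IO.File.Create(@"C:\temp\app.ico"))
{
    encoder.Save(output);           // assumed signature: a writable stream, mirroring "Output" above
}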
https://www.codeproject.com/Tips/1213799/Icon-Pro-Create-icons-for-Windows-Desktop
CC-MAIN-2018-47
en
refinedweb
Represents four segments that form a loop, and might be a tag. More... #include <Quad.h> Represents four segments that form a loop, and might be a tag. Definition at line 22 of file Quad.h. List of all members. Constructor. (x,y) are the optical center of the camera, which is needed to correctly compute the homography. Definition at line 11 of file Quad.cc. Interpolate given that the lower left corner of the lower left cell is at (-1,-1) and the upper right corner of the upper right cell is at (1,1). Definition at line 36 of file Quad.cc. Referenced by interpolate01(). Same as interpolate, except that the coordinates are interpreted between 0 and 1, instead of -1 and 1. Definition at line 36 of file Quad.h. Referenced by AprilTags::TagDetector::extractTags(). [static] Searches through a vector of Segments to form Quads. Definition at line 48 of file Quad.cc. Given that the whole quad spans from (0,0) to (1,1) in "quad space", compute the pixel coordinates for a given point within that quad. Note that for most of the Quad's existence, we will not know the correct orientation of the tag. Definition at line 53 of file Quad.h. Referenced by AprilTags::TagDetector::extractTags(), interpolate(), and Quad(). Early pruning of quads with insane ratios. Definition at line 25 of file Quad.h. Referenced by search(). Minimum size of a tag (in pixels) as measured along edges and diagonals. Definition at line 24 of file Quad.h. Total length (in pixels) of the actual perimeter observed for the quad. This is in contrast to the geometric perimeter, some of which may not have been directly observed but rather inferred by intersecting segments. Quads with more observed perimeter are preferred over others. Definition at line 49 of file Quad.h. [private] Definition at line 66 of file Quad.h. Referenced by interpolate(), and Quad(). Points for the quad (in pixel coordinates), in counter clockwise order. These points are the intersections of segments. Definition at line 39 of file Quad.h. Referenced by AprilTags::TagDetector::extractTags(), and Quad(). Segments composing this quad. Definition at line 42 of file Quad.h.
http://tekkotsu.org/dox/classAprilTags_1_1Quad.html
CC-MAIN-2018-47
en
refinedweb
Note: This method computes everything by hand, step by step. For most people, the new API for fuzzy systems will be preferable. The same problem is solved with the new API in this example. The ‘tipping problem’ is commonly used to illustrate the power of fuzzy logic principles to generate complex behavior from a compact, intuitive set of expert rules. A number of variables play into the decision about how much to tip while dining. Consider two of them: quality: Quality of the food service: Quality of the service The output variable is simply the tip amount, in percentage points: tip: Percent of bill to add as tip For the purposes of discussion, let’s say we need ‘high’, ‘medium’, and ‘low’ membership functions for both input variables and our output variable. These are defined in scikit-fuzzy as follows import numpy as np import skfuzzy as fuzz import matplotlib.pyplot as plt # Generate universe variables # * Quality and service on subjective ranges [0, 10] # * Tip has a range of [0, 25] in units of percentage points x_qual = np.arange(0, 11, 1) x_serv = np.arange(0, 11, 1) x_tip = np.arange(0, 26, 1) # Generate fuzzy membership functions qual_lo = fuzz.trimf(x_qual, [0, 0, 5]) qual_md = fuzz.trimf(x_qual, [0, 5, 10]) qual_hi = fuzz.trimf(x_qual, [5, 10, 10]) serv_lo = fuzz.trimf(x_serv, [0, 0, 5]) serv_md = fuzz.trimf(x_serv, [0, 5, 10]) serv_hi = fuzz.trimf(x_serv, [5, 10, 10]) tip_lo = fuzz.trimf(x_tip, [0, 0, 13]) tip_md = fuzz.trimf(x_tip, [0, 13, 25]) tip_hi = fuzz.trimf(x_tip, [13, 25, 25]) # Visualize these universes and membership functions fig, (ax0, ax1, ax2) = plt.subplots(nrows=3, figsize=(8, 9)) ax0.plot(x_qual, qual_lo, 'b', linewidth=1.5, label='Bad') ax0.plot(x_qual, qual_md, 'g', linewidth=1.5, label='Decent') ax0.plot(x_qual, qual_hi, 'r', linewidth=1.5, label='Great') ax0.set_title('Food quality') ax0.legend() ax1.plot(x_serv, serv_lo, 'b', linewidth=1.5, label='Poor') ax1.plot(x_serv, serv_md, 'g', linewidth=1.5, label='Acceptable') ax1.plot(x_serv, serv_hi, 'r', linewidth=1.5, label='Amazing') ax1.set_title('Service quality') ax1.legend() ax2.plot(x_tip, tip_lo, 'b', linewidth=1.5, label='Low') ax2.plot(x_tip, tip_md, 'g', linewidth=1.5, label='Medium') ax2.plot(x_tip, tip_hi, 'r', linewidth=1.5, label='High') ax2.set_title('Tip amount') ax2.legend() # Turn off top/right axes for ax in (ax0, ax1, ax2): ax.spines['top'].set_visible(False) ax.spines['right'].set_visible(False) ax.get_xaxis().tick_bottom() ax.get_yaxis().tick_left() plt.tight_layout() Now, to make these triangles useful, we define the fuzzy relationship between input and output variables. For the purposes of our example, consider three simple rules: Most people would agree on these rules, but the rules are fuzzy. Mapping the imprecise rules into a defined, actionable tip is a challenge. This is the kind of task at which fuzzy logic excels. What would the tip be in the following circumstance: # We need the activation of our fuzzy membership functions at these values. # The exact values 6.5 and 9.8 do not exist on our universes... # This is what fuzz.interp_membership exists for! qual_level_lo = fuzz.interp_membership(x_qual, qual_lo, 6.5) qual_level_md = fuzz.interp_membership(x_qual, qual_md, 6.5) qual_level_hi = fuzz.interp_membership(x_qual, qual_hi, 6.5) serv_level_lo = fuzz.interp_membership(x_serv, serv_lo, 9.8) serv_level_md = fuzz.interp_membership(x_serv, serv_md, 9.8) serv_level_hi = fuzz.interp_membership(x_serv, serv_hi, 9.8) # Now we take our rules and apply them. 
Rule 1 concerns bad food OR service. # The OR operator means we take the maximum of these two. active_rule1 = np.fmax(qual_level_lo, serv_level_lo) # Now we apply this by clipping the top off the corresponding output # membership function with `np.fmin` tip_activation_lo = np.fmin(active_rule1, tip_lo) # removed entirely to 0 # For rule 2 we connect acceptable service to medium tipping tip_activation_md = np.fmin(serv_level_md, tip_md) # For rule 3 we connect high service OR high food with high tipping active_rule3 = np.fmax(qual_level_hi, serv_level_hi) tip_activation_hi = np.fmin(active_rule3, tip_hi) tip0 = np.zeros_like(x_tip) # Visualize this fig, ax0 = plt.subplots(figsize=(8, 3)) ax0.fill_between(x_tip, tip0, tip_activation_lo, facecolor='b', alpha=0.7) ax0.plot(x_tip, tip_lo, 'b', linewidth=0.5, linestyle='--', ) ax0.fill_between(x_tip, tip0, tip_activation_md, facecolor='g', alpha=0.7) ax0.plot(x_tip, tip_md, 'g', linewidth=0.5, linestyle='--') ax0.fill_between(x_tip, tip0, tip_activation_hi, facecolor='r', alpha=0.7) ax0.plot(x_tip, tip_hi, 'r', linewidth=0.5, linestyle='--') ax0.set_title('Output membership activity') # Turn off top/right axes for ax in (ax0,): ax.spines['top'].set_visible(False) ax.spines['right'].set_visible(False) ax.get_xaxis().tick_bottom() ax.get_yaxis().tick_left() plt.tight_layout() With the activity of each output membership function known, all output membership functions must be combined. This is typically done using a maximum operator. This step is also known as aggregation. Finally, to get a real world answer, we return to crisp logic from the world of fuzzy membership functions. For the purposes of this example the centroid method will be used. # Aggregate all three output membership functions together aggregated = np.fmax(tip_activation_lo, np.fmax(tip_activation_md, tip_activation_hi)) # Calculate defuzzified result tip = fuzz.defuzz(x_tip, aggregated, 'centroid') tip_activation = fuzz.interp_membership(x_tip, aggregated, tip) # for plot # Visualize this fig, ax0 = plt.subplots(figsize=(8, 3)) ax0.plot(x_tip, tip_lo, 'b', linewidth=0.5, linestyle='--', ) ax0.plot(x_tip, tip_md, 'g', linewidth=0.5, linestyle='--') ax0.plot(x_tip, tip_hi, 'r', linewidth=0.5, linestyle='--') ax0.fill_between(x_tip, tip0, aggregated, facecolor='Orange', alpha=0.7) ax0.plot([tip, tip], [0, tip_activation], 'k', linewidth=1.5, alpha=0.9) ax0.set_title('Aggregated membership and result (line)') # Turn off top/right axes for ax in (ax0,): ax.spines['top'].set_visible(False) ax.spines['right'].set_visible(False) ax.get_xaxis().tick_bottom() ax.get_yaxis().tick_left() plt.tight_layout() The power of fuzzy systems is allowing complicated, intuitive behavior based on a sparse system of rules with minimal overhead. Note our membership function universes were coarse, only defined at the integers, but fuzz.interp_membership allowed the effective resolution to increase on demand. This system can respond to arbitrarily small changes in inputs, and the processing burden is minimal. Python source code: download (generated using skimage 0.2)
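The note at the top of this example points to the newer control-system API as the preferred approach. For comparison, a hedged sketch of the same tipping problem in that style is below; the automf() term names and the control-module entry points are recalled from skfuzzy's documentation and should be checked against the linked "new API" example:

import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

quality = ctrl.Antecedent(np.arange(0, 11, 1), 'quality')
service = ctrl.Antecedent(np.arange(0, 11, 1), 'service')
tip = ctrl.Consequent(np.arange(0, 26, 1), 'tip')

quality.automf(3)   # generates 'poor', 'average', 'good' terms
service.automf(3)
tip['low'] = fuzz.trimf(tip.universe, [0, 0, 13])
tip['medium'] = fuzz.trimf(tip.universe, [0, 13, 25])
tip['high'] = fuzz.trimf(tip.universe, [13, 25, 25])

rule1 = ctrl.Rule(quality['poor'] | service['poor'], tip['low'])
rule2 = ctrl.Rule(service['average'], tip['medium'])
rule3 = ctrl.Rule(service['good'] | quality['good'], tip['high'])

tipping_ctrl = ctrl.ControlSystem([rule1, rule2, rule3])
tipping = ctrl.ControlSystemSimulation(tipping_ctrl)
tipping.input['quality'] = 6.5
tipping.input['service'] = 9.8
tipping.compute()
print(tipping.output['tip'])   # aggregation and centroid defuzzification happen internally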
https://pythonhosted.org/scikit-fuzzy/auto_examples/plot_tipping_problem.html
CC-MAIN-2018-47
en
refinedweb
Calculation of multinomial coefficients is often necessary in scientific and statistical computations. Look at this ball set: We could wonder how many different ways we can arrange these 10 balls in a row, regarding solely ball colors and not ball numbers. Thus, the following two arrangements would be considered equal and we would just count once: and We have 5 blue balls, 3 red balls and 2 green balls. The solution of this problem can be noted as: multi(5, 3, 2) = 2520 In a more general way, let multi(r1, r2,..., rm) be a multinomial coefficient with each ri non negative. Its calculation is defined as: multi(r1, r2,..., rm) = (r1 + r2 +...+ rm)! / (r1! r2!... rm!) The sign ! denotes the factorial operator. The sum r1 + r2 +...+ rm is said to be the order of the multinomial coefficient. Although conceptually very simple, its resolution can be tricky because both numerator and denominator may easily overflow even when the result itself does not. The aim of this article is to provide a fast and efficient way of computing multinomial coefficients. Possible overflows will be monitored during calculations. Thus, an exception will be thrown in case a given numeric type cannot hold the result. Because the problem of money change is inextricably related with the calculation of multinomial coefficients, I will also address it. The description of this adjacent problem could be like this: How many different ways can a given amount of money (rem) be gathered with coins of face value in the set {1, 2, 3,..., top}? The solution of this problem will be noted as: parti(rem, top). One of the main Internet resources on this topic is Dave Barber's Efficient Calculation of Multinomial Coefficients. His description on this subject is so comprehensive and accurate that I am just going to focus on the subtle differences between both implementations. Thus, consider this article as a complement of his (and thanks Dave for your support!). #include <cstdlib> #include <iostream> #include <stdexcept> #include "multinomial.h" int main(int argn, char *argc[]) { // How many different ways can I gather 21 euros with coins of face value // in the set: {1, 2, 3, 4 and 5 euros} (if all those coins existed)? try { size_t result_1=multinomial::parti<size_t>(21, 5); } catch (std::overflow_error& ex) { std::cout << ex.what() << std::endl; } // Solving the multinomial coefficient: multi(9, 8, 4). try { unsigned long long result_2=multinomial::multi<unsigned long long>({9, 8, 4}); } catch (std::overflow_error& ex) { std::cout << ex.what() << std::endl; } system("PAUSE"); return 0; } As you can see, a numeric type for the result must be specified in both cases as a template parameter. If such numeric types cannot hold the result, an overflow exception will be thrown. Whereas Dave Barber's implementation relies on the client to use an ad hoc numeric type with overflow control, this implementation can use whatever built-in numeric type you wish because the overflow control is hardcoded with little overhead. Calculations are carried out by recursion, storing the intermediate results in static vector caches.
These are the recursive governing rules: multi(r1, r2,..., rm) = multi(r1-1, r2,..., rm) + multi(r1, r2-1,..., rm) +...+ multi(r1, r2,..., rm-1) parti(rem, top) = parti(rem, top - 1) + parti(rem - top, top) Please, visit Dave Barber's page for more detailed information. The overflow control mechanism has been written in the file overflow.h. The code is as follows: overflow.h #ifndef OVERFLOW_H #define OVERFLOW_H #include <limits> #include <stdexcept> #include <type_traits> // Which numeric type is the greatest? template <typename numeric_type_A, typename numeric_type_B> struct greater_numeric_type { typedef typename std::conditional< (std::numeric_limits<numeric_type_A>::digits < std::numeric_limits<numeric_type_B>::digits), numeric_type_B, numeric_type_A >::type type; }; // Which numeric type is the smallest? template <typename numeric_type_A, typename numeric_type_B> struct lesser_numeric_type { typedef typename std::conditional< std::is_same< typename greater_numeric_type<numeric_type_A, numeric_type_B>::type, numeric_type_A >::value, numeric_type_B, numeric_type_A >::type type; }; // Does a + b overflow in the sum_numeric_type type? template < typename numeric_type_A, typename numeric_type_B, typename sum_numeric_type= typename greater_numeric_type<numeric_type_A, numeric_type_B>::type > bool is_sum_overflowed(const numeric_type_A& a, const numeric_type_B& b) { if (a<0 || b<0) throw std::invalid_argument( /* argument check reconstructed; exception type assumed */ "is_sum_overflowed<>(...). Arguments must be non negative." ); const sum_numeric_type& max_value= std::numeric_limits<sum_numeric_type>::max(); // If a or b are overflowed in sum_numeric_type type, true is returned. if(a>max_value || b>max_value) return true; return max_value-a<b; } // Does a * b overflow in the product_numeric_type type? // (This function is not used in the code. It is shown here for completeness). template < typename numeric_type_A, typename numeric_type_B, typename product_numeric_type= typename greater_numeric_type<numeric_type_A, numeric_type_B>::type > bool is_product_overflowed(const numeric_type_A& a, const numeric_type_B& b) { if (a<0 || b<0) throw std::invalid_argument( /* argument check reconstructed; exception type assumed */ "is_product_overflowed<>(...). Arguments must be non negative." ); const product_numeric_type& max_value= std::numeric_limits<product_numeric_type>::max(); // If a or b are overflowed in product_numeric_type type, true is returned. if(a>max_value || b>max_value) return true; return max_value/a<b; } #endif Overflow control can only be instantiated for numeric types supporting the std::numeric_limits<> trait. Fortunately all numeric built-in types support this trait. This overflow control is intended to be reused in other projects. As an example, if we need to know if the sum of two numbers a and b (both of them of type size_t) overflows in an int type, we simply write: bool overflowed_sum=is_sum_overflowed<size_t, size_t, int>(a, b); The recursive calculations are carried out in multinomial.h. Although comments are in Spanish, key comments and tables are in English so understanding is not prevented at all. I encourage you to visit Dave Barber's page for further in-depth knowledge. Every overflowable number is represented by a standard pair: the first element is the number itself and the second one is a boolean representing its state (true for overflowed, false otherwise). This state is updated after any operation (additions). Intermediate results are stored in static vector caches, so many subsequent calculations are immediate. Please find attached the source code above. This has been written with Code::Blocks and compiled with MinGW GCC g++ v.4.8.1-4 in C++.
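To make the recursive rules quoted at the top of this section concrete, here is a minimal illustrative sketch of parti() without the caching or overflow monitoring that the real multinomial.h adds; the base cases are the usual ones for counting coin combinations and are an assumption of this sketch, not copied from the library:

#include <iostream>

// parti(rem, top): number of ways to gather 'rem' with coins of face value 1..top
unsigned long long parti_naive(long long rem, long long top)
{
    if (rem == 0) return 1;             // nothing left to gather: exactly one way
    if (rem < 0 || top == 0) return 0;  // overshot, or no denominations left
    // governing rule: parti(rem, top) = parti(rem, top - 1) + parti(rem - top, top)
    return parti_naive(rem, top - 1) + parti_naive(rem - top, top);
}

int main()
{
    // same question as the article's first example: 21 euros with coins of 1..5
    std::cout << parti_naive(21, 5) << std::endl;
    return 0;
}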
https://www.codeproject.com/Articles/695644/Multinomial-coefficients-with-overflow-control
CC-MAIN-2018-47
en
refinedweb
- Advertisement MatsenMember Content Count142 Joined Last visited Community Reputation132 Neutral About Matsen - RankMember Personal Information - InterestsDevOps Programming - Quote: The switch statement would still be needed upon construction of the array, but that would be it. I hope that makes more sense. Let me know! You're on the right track. I think you should separate the construction of the concrete file classes from the class itself. For example, you could create a separate function which creates a concrete file for you. The rest of your code will only deal with the base class, FileObjBase. FileObjBase* CreateFileObj(const std::string& fileName) { HANDLE fileHandle; // open the file and fetch the type switch (theFileType) { case TYPE_INT: return new FileObj<int>(fileHandle); case TYPE_FLOAT: return new FileObj<float>(fileHandle); case TYPE_MYTYPE: return new FileObjMyType(fileHandle); case TYPE_WEIRDTYPE: return new FileObjWeirdType(fileHandle); } return 0; // or throw an exception for unknown type } I can't see any reason for why your solution would not work, basically you're doing the same thing but with the array class. But, I think writing your own special array class when std::vector would do the work for you, with a different design, feels more appealing. Also, as your code shows, it looks like you're stuck with switch statements all over. By separating construction from the class, you will only need one switch statement and need only to create subclasses for any type you wish to support. Hope it helps - Why use a base class for the array class? Looking at your previous question, it seems as if a base class for the file class would be better: class FileObjBase { public: virtual void foo() = 0; virtual void bar() = 0; }; template <typename T> class FileObj : public FileObjBase { std::vector<T> array; public: void foo() {} void bar() {} }; Please elaborate a little on your design, if I've misunderstood you =) hello world Matsen replied to jagguy's topic in For Beginners's ForumYou're mixing C++ and Managed C++ (.NET). Console::WriteLine is part of the .NET libraries while cin and cout is part of the standard C++ libraries. As Boder said, cin and cout are in the std namespace: std::cin, std::cout a design problem Matsen replied to derek7's topic in General and Gameplay ProgrammingIf it is correct to say that Material has a texture, then texture should be owned by Material. But, having a class where a texture is rarely used makes using the Material class a bit cumbersome, as you have to check if a texture is available. My suggestion would be to either create a sub class of Material which always have a texture, as "Material" is a very abstract concept. If this do not fit your design, you could always create some sort of a null-texture, so that the rest of the design can assume a texture always being present. Arrays with new operator Matsen replied to Rabbi's topic in For Beginners's Forumstd::string* layerOrder = new std::string[n]; ... delete [] layerOrder; Regards Mats Edit: I'm too slow Archiving files c++ Matsen replied to incin's topic in General and Gameplay ProgrammingIf compression is of no particular concern, the best might be to write your own simple compression routine. It's not very hard, search for Huffman or Adaptive Huffman encoding. Edit: Or RLE encoding. It depends on the data you wish to compress. Regards Mats. C++ help Matsen replied to namingway's topic in For Beginners's ForumSure there is. It seems as if game maker uses strings, but that seem unnecessary. 
The simplest way would be to use a class, with the different levels separated: case 1: player.hp += 30; player.mana += 3; case 2: player.hp += 32; player.mana += 4; . . etc If strings is what you want, you should take a look at std::string and std::stringstream. There are other ways, such as using masking for storing the levels in one 32 bit int (search for masking and bitwise operators). Your question is very general, so perhaps you should elaborate so we can give you a more precise answer. Regards Mats - Quote:Original post by Winegums are deleting the struct and deleting the thread two different things? if so, how do i delete each/both of them? i thought changing the struct instances 'running' variable to END would kill the thread, but it seems not. Yes. You have to delete the struct and let the thread return from doit(). The thread doesn't know about the struct and doesn't care either. It has a life of its own. Don't try to terminate the thread forcibly, that can cause havoc if you try to expand on this framwork of yours. As I said in my previous reply, changing the state to END probably does nothing, because the state variable is only read from memory once. Making it volatile might solve the problem, but this is a guess and even if it works, it's a very bad solution. Your solution will not allow for a 'theoretically limitless number of threads' because any computer will choke long before that. What you should do is use synchronization. An 'event' would be very useful in your case. Here's a symbolic example. The syntax is not correct. Replace the 'paused' state with an event. while (thread_properties->isAlive()) { thread_properties->event->wait(); // if paused, the thread is stopped here // do stuff } That will allow for as many threads and events as the OS can handle. As I also said, threads are tricky, so my suggestion is that you find some tutorials about threads AND synchronization before you continue. Edit: If this really is a school assignment or from an employer/co-worker you should run and never look back ;) - I don't understand what you're trying to do. Perhaps you should take a look at something called a thread pool, if that's what you want achieve. Anyway, first, you don't need the _endthread() call. It's called automatically when the thread returns from doit(). Second, don't use loops to prevent unused threads from dying. They will eat resources. Use some form of synchronization. Third, you're not deleting the thread entry, you simply assign the pointer a null value. Forth, this is a guess, but the reason why the thread doesn't die might be because the state variable is not read from memory each lap in the loop. Try making it volatile and see if that helps (volatile int state). Threads are tricky, so don't be discouraged! Regards Mats Or Suggestions on good *series* scifi novels! Matsen replied to Tape_Worm's topic in GDNet LoungeI recommend Peter F Hamilton's Night's Dawn Trilogy. Each book is about 1000+ pages, so you'll be busy for a while. operator overload sort problem Matsen replied to fireside's topic in For Beginners's Forumoperator< is not called because it's comparing pointers. Try this: bool less(const Obj* lhs, const Obj* rhs) { return lhs->getY() < rhs->getY(); } ... std::sort(drawObjects.begin(), drawObjects.end(), less); Regards Mats Access violation in dbgdel.cpp? Matsen replied to nickwinters's topic in General and Gameplay ProgrammingYou're probably deleting something twice, or maybe your trying to delete something that isn't dynamically allocated. 
Run a debug. Regards Mats The internet phenomenon (Consuming our lives) Matsen replied to MindWipe's topic in GDNet LoungeQuote:Original post by MindWipe But it's not really that easy. I think I want to study computer engineering, mostly because I've been coding for 10 years and it's been fun. BUT, I'm not so sure this is what will give me the most out of life, I was thinking becoming a teacher would really be much more rewarding. And maybe go somewhere, do something out of the blue, an impuslive action. It is easy. You can't foresee the future. You don't know what you want to do in 1, 5 or 10 years from now. So, just do what your heart/gut tells you to do and trust it. Put your head in a refrigerator for a while. Stop thinking and intellectualizing your situation. Worry about the future when the future is now. Until then, the future is an illusion that exists only in your head. You can't do something impulsive by trying to be impulsive. You have to stop thinking. Quote:People need to learn to be thankful for everything they take for granted. I agree 100% Regards Mats Use class variables between threads Matsen replied to Halsafar's topic in General and Gameplay ProgrammingQuote:...but is it also safe to use class instances between threads? It depends on your class, its expected behaviour and design. And it's not inherently safe to share variables between threads either. It all depends :) Look at this: struct A { int some_variable = 0; void f() { if (some_variable == 0) { some_variable++; } } }; If two threads access a shared instance of A and call f at the same time, some_variable can accidentally be incremented twice. So, in the case above you'll need some form of synchronization. Basically, the only scenario where you don't need synchronization is when there are only read operations. As soon as you start adding some behaviour to the class, as in the case above, you'll probably need to serialize access. In your example, using a global variable to pause the demo/game seems a bit silly. Why not let all objects that can be paused have pause-methods? Regards Mats - Advertisement
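Picking up the last reply's point that "you'll need some form of synchronization", a minimal illustrative sketch (not from the original thread) of serializing access to the shared instance of A with a mutex:

#include <mutex>

struct A {
    int some_variable = 0;
    std::mutex m;

    void f() {
        // only one thread at a time may run the check-and-increment,
        // so some_variable can no longer be incremented twice by accident
        std::lock_guard<std::mutex> lock(m);
        if (some_variable == 0) {
            some_variable++;
        }
    }
};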
https://www.gamedev.net/profile/30845-matsen/?tab=node_profile_projects_ProjectUserProfile&page=1&sortby=project_rating_total&sortdirection=desc
CC-MAIN-2018-47
en
refinedweb
A Google revelation Quite a while ago I was intrigued with the ability of to provide text suggestions for search strings AS I WAS TYPING the string … When I traced this interaction, I noticed that on each keystroke the site was in the background firing an event which eventuated in a RESTful call to a server using AJAX. Each call would return a limited number of text suggestions to the webpage, such as in the example above. The below trace illustrates the interaction when typing the 3 letters ‘S’ ‘A’ ‘P’ in succession … I wondered whether it would be possible to implement a similar form of interaction where the source data was an SAP system. This could be used, for instance, to search for employees, customers, cost centres etc. by name. And I really like this interaction pattern, because for the user it is simple and seamless (and anyone who has used Google or other sites that implement a similar approach should be familiar with it). At the same time, I have always believed that there is an untapped opportunity to combine the capabilities of the SAP ICM (Internet Communication Manager) and custom HTTP handlers, with the broader web community and technologies that have evolved in that realm. To put it more bluntly, often SAP departments are not inclined to work with corporate web or intranet teams, choosing instead to focus on the SAP supplied UI platforms of (primarily) SAPGUI or SAP Portal with which they are familiar. Similarly, corporate web and intranet teams are often not inclined to collaborate with their SAP counterparts, believing that they live on a different planet which lies somewhere in the Gamma quadrant. It seems there is a lost opportunity for these two groups to collaborate and build new ways to extract value from SAP data. Proof of concept So I set about building a scenario on my personal environment which would include the following: - Simple dummy website which might serve as a corporate intranet - Dynamic search in the website triggering AJAX calls to a local miniSAP ABAP system - The scenario here is a search for transaction codes in an SAP system (not a great scenario but bear in mind this demonstration was built on a miniSAP system) Custom HTTP handler on the ABAP system which returns results in JSON format - Website processes the return and displays the results, similar to Google Here is a YouTube video demonstration below …. If you cannot see the YouTube video, a Flash version is available here … I should also point out that this scenario is NOT to be confused with SAP NetWeaver Enterprise Search. That is a different thing altogether. You might wonder why I simply wouldn’t code something using WebDynpro ABAP or BSP? Well, in the case of WebDynpro ABAP that framework currently does not support stateless scenarios (I am told that is in the roadmap). I certainly need this to be stateless, because the service could simply sit on the webpage untouched for an indefinite period. BSP is a viable option, but I really wanted here to look at a scenario where your corporate intranet resides ON A DIFFERENT PLATFORM. What’s the point? Why might this be a useful architecture to consider? After all, it really is a non-standard approach when compared with standard SAP UI solutions. Well, I certainly wouldn’t consider this for every UI scenario. Rather, I think it is useful in certain edge cases where quick and simple access to SAP data might be needed (eg. integrating customer lookup into your corporate intranet homepage). 
Here are some benefits of this approach … - You are no longer tied to SAP’s browser support matrixes (which you are if you are using technologies such as WebDynpro etc.). Of course, since your web developers are coding the client-side, they will take responsibility for browser compatibility based on what they need to support in your organization. - You can implement this using a stateless approach, which can really scale. So for instance you could deploy this service to the corporate intranet homepage. The SAP server is only load-effected if someone starts typing in the search field. - You could RE-USE this service with other clients, such as iPhone, Blackberry or Android apps in your organisation. Let’s build it! As always, where possible I like to share my sample code so that you can try this yourself. You can build this in 15 minutes. For THIS scenario, I will implement a simple SAP transaction code lookup service. Of course, you could think of much more valuable scenarios for your own system (customer, employee lookups etc.), but since I am running this on a personal miniSAP system there is limited data available. The concept however can be readily applied to other scenarios. PART A: Enable RESTful service - Via transaction SE24, create a public class ZCL_TRANSACTION_CODES - Assign interface IF_HTTP_EXTENSION to this class. This should introduce an instance method ‘HANDLE_REQUEST’ to our class. - For our implementation of HANDLE_REQUEST, paste the following code* and activate the class … method IF_HTTP_EXTENSION~HANDLE_REQUEST. * John Moy, April 2011 * An SAP Community Contribution * * RESTful service to deliver transaction code details in JSON format * for a given search string. * * This code is has been simplified for illustrative purposes only. * For productive use, you should seek to implement a RESTful framework, * implement a JSON converter, and refactor to achieve appropriate * separation of concerns. * * Data definition * data: path_info type string, verb type string, action type string, attribute type string, rows type integer, json_string type string. * * Process request * path_info = server->request->get_header_field( name = ‘~path_info’ ). verb = server->request->get_header_field( name = ‘~request_method’ ). * * Determine if method is get. * if verb ne ‘GET’. call method server->response->set_header_field( name = ‘Allow’ value = ‘GET’ ). call method server->response->set_status( code = ‘405’ reason = ‘Method not allowed’ ). exit. endif. * * Determine the action and attribute * SHIFT path_info LEFT BY 1 PLACES. SPLIT path_info AT ‘/’ INTO action attribute. * Application logic. * (in reality this would be refactored into a separate class) data: lt_tcodes type table of tstct, lv_search_string type string, lv_search_string_upper type string, lv_search_string_upperlower type string, lv_search_firstchar(1) type c, exc_ref type ref to cx_sy_native_sql_error. field-symbols: <tcoderow> type tstct. * Version of search string that directly matches case concatenate ‘%’ attribute ‘%’ into lv_search_string. * Version of search string that is entirely upper case move lv_search_string to lv_search_string_upper. translate lv_search_string_upper to upper case. * Version of search string with first character in upper case move attribute(1) to lv_search_firstchar. translate lv_search_firstchar to upper case. move attribute to lv_search_string_upperlower. shift lv_search_string_upperlower by 1 places. 
concatenate ‘%’ lv_search_firstchar lv_search_string_upperlower ‘%’ into lv_search_string_upperlower. * Persistence access logic. * (in reality this would be refactored into a separate layer) * Note also inefficiences of this ‘select’ statement can and should * be addressed in real life implementations. select * from tstct into table lt_tcodes where sprsl eq sy-langu and ( tcode like lv_search_string_upper or ttext like lv_search_string_upper or ttext like lv_search_string_upperlower or ttext like lv_search_string ). * If error detected then abort if sy-subrc ne 0. call method server->response->set_status( code = ‘404’ reason = ‘ERROR’ ). call method server->response->set_cdata( data = json_string ). exit. endif. rows = lines( lt_tcodes ). * Now populate JSON string with the appropriate fields we need * (in reality it would be appropriate to implement and call a JSON converter) move ‘[ ‘ to json_string. loop at lt_tcodes assigning <tcoderow>. concatenate json_string ‘{ ‘ ‘”key”: “‘ <tcoderow>-tcode ‘”, ‘ ‘”desc”: “‘ <tcoderow>-ttext ‘” }’ into json_string. if ( sy-tabix < rows ). concatenate json_string ‘, ‘ into json_string. endif. * If we have reached 15 rows, then terminate the loop if ( sy-tabix eq 15 ). concatenate json_string ‘{ “key”: “”, “desc”: “… and more” }’ into json_string. exit. endif. endloop. concatenate json_string ‘ ]’ into json_string. * * Set the content type * server->response->set_header_field( name = ‘Content-Type’ value = ‘application/json; charset=utf-8’ ). * * Enable this service to be available to other sites * server->response->set_header_field( name = ‘Access-Control-Allow-Origin’ value = ‘*’ ). * * Return the results in a JSON string * call method server->response->set_cdata( data = json_string ). endmethod. * Note: As with prior blogs, I should mention that I cannibalised some code from this blog (Android and RESTFul web service instead of SOAP) by Michael Hardenbol to develop this service. I have also collapsed several layers of code into the one method simply for ease of cutting and pasting this exercise, however in reality you would refactor to build in a separation of concerns. You would probably want to implement a proper RESTful framework in your ICF if you intend to deploy a number of services (or wait for SAP’s Gateway product). SAP Mentor DJ Adams provides some ideas to accomplish this in his blog (A new REST handler / dispatcher for the ICF). Also, the code returns a result in JSON () format. Typically you would call a re-usable JSON converter (there are several available in the SCN community) but for the purposes of this exercise I have simply constructed the result with a crude CONCATENATE statement. - Via transaction SICF, create a new service ‘ztransactions’ under the path /default_host/sap - For the service, add a description - Go to the ‘Logon Data’ tab and enter service user credentials here (username / password) to make this an anonymous service (note that we could make this an authenticated service with some extra work) - Go to the ‘Handler List’ tab and add the following class into the handler list …. ZCL_TRANSACTION_CODES - Activate the service (from the SICF tree, right-click on the service and select ‘Activate Service’) PART B: Enable Web Application - This is actually the part you would expect your web team to accomplish - For simplicity however, I have provided some sample code that you can download here and implement on a web server, or alternatively you can simply utilize a hosted version here. 
DISCLOSURE: Under terms of use for the original website template, I created the hosted dummy site based on a sample provided by Free Website Templates. I incorporated my own images, themes, an open source accordian control and some custom javascript into the amended site. - In either case, to get this to work you will need to overtype the default fully qualified domain name for your own server that appears immediately above the ‘Transaction Codes’ accordian button. - The most important work that occurs here is in the javascript file ‘sapsearch.js’ which I replicate below … // // Sample dynamic search javascript // by John Moy // An SAP Community Contribution // March 2011 // // // Function to initiate dynamic search of SAP data // function showSAPSearchResult(sapserver, resource, queryString, targetElement) { if (queryString.length<=1) { document.getElementById(targetElement).innerHTML=””; return; } var url= “http://” + sapserver.value + “/sap/” + resource + “/search/” + queryString; // Initiate request var httpRequest = createCORSRequest(“get”, url); if (httpRequest) { // Register handler for processing successful search httpRequest.onload = function() { var result = JSON.parse(httpRequest.responseText); var htmlResponse = “”; for (var i = 0; i < result.length; i++) { var key = result[i][“key”]; var description = result[i][“desc”]; htmlResponse += “<ul>” + key + ” – ” + description + “</ul>”; } document.getElementById(targetElement).innerHTML=”<ul>” + htmlResponse + “</ul>”; } // Register handler for processing failed search httpRequest.onerror = function() { document.getElementById(targetElement).innerHTML=”<ul>No results returned</ul>”; } // Initiate query httpRequest.send(); } else { alert(‘Sorry, your browser does not support cross origin resource sharing’); } } // // Function to create Cross Origin Resource Sharing (CORS) request // for all modern browser types // Based on: // function createCORSRequest(method, url){ var xhr; // Checking for XDomainRequest determines if browser // implements IE-based proprietary solution if (typeof XDomainRequest != “undefined”){ xhr = new XDomainRequest(); xhr.open(method, url); } else { xhr = new XMLHttpRequest(); // Checking for ‘withCredentials’ property determines if browser supports CORS if (“withCredentials” in xhr){ xhr.open(method, url, true); } // Browser supports neither CORS or XDomainRequest else { xhr = null; } } return xhr; } - You can try this out. The easiest way is to launch the version I have hosted for your convenience. - Note that this is currently coded to work with IE8+ and recent versions of Firefox, Chrome and Safari. It is possible to code for earlier versions of IE (such as IE6) however the code for cross origin resource sharing (discussed below) would need to be changed. How does it Work? The interaction here is as follows … - User types into the input field for Transaction Codes - Website detects an ‘onKeyup’ event for theinput field and fires a request to the javascript function ‘showSAPSearchResult’, passing various parameters including the SAP server and the value entered. - The javascript function ‘showSAPSearchResult’ ignores any inputs that are only one character in length and exits in these situations. - The javascript function ‘showSAPSearchResult’ constructs a RESTful url call which looks something like this example (if typing ‘sql’) … - At this point I should point out that it is arguable whether this form of url request fully complies with ‘REST’ principles. 
Originally I thought not to include the verb ‘search’ in the url but when I saw that both Google and Yahoo APIs incorporated this, I decided to include it. - The javascript function ‘showSAPSearchResult’ then calls a function ‘createCORSRequest’ which creates an HTTP GET call to the SAP service we created earlier, passing the constructed url. - The ABAP service processes the request, and then returns a very lean JSON-formatted response that looks something like this … [{“key”: “ADA_SQLDBC”,”desc”: “SQLDBC_CONS” },{“key”: “DB6CST”,”desc”: “DB6: Analyze Cumulative SQL Trace” },{“key”: “DB6CST_LST”,”desc”: “DB6: Analyze Cumulative SQL Trace” },{“key”: “DB6EXPLAIN”,”desc”: “DB6: Explain SQL Statement” },{“key”: “DB6SQC”,”desc”: “DB6: Analyze SQL Cache” },{“key”: “SDBE”,”desc”: “Explain an SQL statement” },{“key”: “SQLR”,”desc”: “SQL Trace Interpreter” },{“key”: “ST04RFC”,”desc”: “SAP Remote DB Monitor for SQL Server” },{“key”: “ST04_MSS”,”desc”: “Monitoring SQL Server (remote/local)” } ] - A local callback javascript function ‘showSAPSearchResult’ receives the response from the ABAP server. This function parses the JSON result, transforms it into HTML and places it dynamically into the HTML Document Object Model so that it appears instantly for the user to see. A word about Cross Origin Resource Sharing (CORS) The solution here has been architected here such that the calling website can reside in a different domain to the SAP server. There might be instances where this may be useful (eg. mashups). It is also useful here because it means you can trial it using the hosted site I uploaded to . It is however not essential if the consuming website has the same domain as the RESTful service. I would like to thank Chris Paine for bringing this to my attention. You can read more about it here. A word about Performance For these situations, you should consider the impact on your server and whether the request / response cycle will occur quickly enough. Also in some cases the data you are looking up might involve onerous volumes that will not be viable for use with this solution. To be sure, the simple scenario I have provided here has NOT been performance optimized. Ideally, the table you lookup is memory resident (eg. fully buffered) and / or searches utilize database indexes. In my example I added some crude code to tackle case insensitive searches for text …. in a full ERP system for some tables you can leverage MATCHCODE columns which store data in full uppercase to avoid this issue. You can see why HANA would be a great accompaniment to this type of solution. Extending the Solution Whilst I don’t address it in this blog, you can very easily extend the solution to then permit you to click on a result item, invoking another RESTful service to receive a details display on your site. In the case of our transaction codes example, we could invoke the transaction via ITS or even launching SAPGUI (if we leverage a Portal service). Licensing A word about licensing …. Whilst the approach I have outlined here seems technically feasible, you may need to discuss any licensing implications with your friendly SAP account rep. I’m no licensing expert, so I won’t even begin to speculate about what the implications might be from a licensing perspective. Final Words In some respects, the vision for SAP’s new Gateway product is along the same lines as this blog post. That is, consumption of lean RESTful services by heterogeneous clients (eg. websites, mobile devices, etc). 
What Gateway provides is a formal framework to expose services. Gateway or not, if you take nothing else from this post, perhaps you should consider having a friendly chat with your neighbourhood web team? You might just accomplish some great things together. This is really cool stuff. Thanks for sharing! Sue My pleasure! It’s been a few months since I blogged because I was focussed on the Mastering SAP conference and my presentation for that. I have had the idea for this blog for quite a while though, so it’s good to finally release it. By the way, one of my biggest regrets from Mastering SAP was that I didn’t get to speak to you more. I have a soft spot for workflow (I have implemented workflow solutions on a few occasions in the past) but I wouldn’t consider myself an expert. It would have been great to discuss some of the use cases for workflow that we are using and what your opinion would be on those. Hopefully our paths will cross again in future! Rgds John Maybe there is a TechEd in your future? We’ll figure something out. Cheers, Sue Cheers Graham Robbo Thanks for sharing! If SAP Gateway will provide a generic data service, autocomplete functions may look like –('Joh‘,Name)%20eq%20true&$top=5&$format=json Netflex Rest service returning Top 5 people whose name starts with ‘Joh’. Cheers John P Thanks for sharing that! Looks like the URI is quite different to my approach, although I’m not surprised. I saw your tweet about datajs and bookmarked it … I’ll need to look into that also. Thanks again. John You explored interesting technologies like REST JSON and DataJS but speaking about your starting point I would like to highlight an interesting blog by colleagues of mine Sergio Cipolla. Blog is Introducing SAP Suggest at Introducing SAP Suggest and it is about a similar implementation bu in Adobe Flash Island. “suggestion” is definitely a feature well appreciated, if not expected, by end-users. Sergio Wow, I hadn’t seen that one! If I had l would have referenced it in my blog, since as you say the intent is similar. The two solutions do differ in use cases. I think my blog is targeted more at non-SAP web clients such as corporate intranets where you don’t necessarily wish to build upon WDA. Especially as I had a key aim to keep the interaction stateless and therefore highly scaleable (which you might need for the home page of a corporate intranet). That said, your Flex app could easily be implemented in a similar fashion and independently of WDA. Rgds John Sergio Thanks for sharing. Chris I’ve been following your blogs for quite a while, all these ideas and prototypes are extremely helpful, Thank you very much! Regards Luis
https://blogs.sap.com/2011/04/07/deliver-dynamic-search-of-sap-data-into-a-website-using-restful-services/
CC-MAIN-2017-47
en
refinedweb
#include "ltwrappr.h" L_INT LAnnotation::SetRotateOptions(pRotateOptions, uFlags) Sets the rotation options for the annotation object. Use this function to set the rotate options of any annotation object, including the automation object. To use this function, declare a variable of type ANNROTATEOPTIONS, and pass the address of this variable for pRotateOptions. For more information, refer to the documentation for the structure ANNROTATEOPTIONS. With the new rotate options, two rotate handles are displayed on the object. The following figure illustrates a rotate by dragging the "gripper" handle: The following figure illustrates moving the "center" handle: The rotate handles can be reset to a default location by right-clicking the annotation object, and selecting the Reset Rotate Handles option, as shown in the following figure: The rotate handles can be globally hidden or displayed by right-clicking on the image (not an annotation object), and selecting the Hide Rotate Handles For All Objects option, as shown in the following figure: This functionality can be enabled using the following code snippet: L_VOID ExampleEnableOption(LAnnAutomation& annAutomation) { L_UINT nRet, uOptions = 0; nRet = annAutomation.GetOptions( &uOptions); nRet = annAutomation.SetOptions( uOptions | OPTIONS_NEW_ROTATE); } Required DLLs and Libraries Win32, x64. For an example, refer to LAnnotation::GetRotateOptions.
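Building on the page's instruction to declare an ANNROTATEOPTIONS variable and pass its address for pRotateOptions, a hypothetical call sequence might look like the following. The individual ANNROTATEOPTIONS members and the meaning of uFlags are not documented on this page, so the structure is left to be filled in per its own documentation and uFlags is passed as 0 purely for illustration; annObject stands for some LAnnotation-derived object:

ANNROTATEOPTIONS RotateOptions;                         /* options structure described above */
memset(&RotateOptions, 0, sizeof(RotateOptions));
/* ...fill in the ANNROTATEOPTIONS members as documented for that structure... */
L_INT nRet = annObject.SetRotateOptions(&RotateOptions, 0);   /* uFlags value assumed */
if (nRet != SUCCESS)
{
    /* handle the error */
}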
https://www.leadtools.com/help/leadtools/v19/main/clib/lannotation-setrotateoptions.html
CC-MAIN-2017-47
en
refinedweb
the internal representation of logging events. #include <log4c/defs.h> #include <log4c/buffer.h> #include <log4c/location_info.h> #include <sys/time.h> the internal representation of logging events. When an affirmative logging decision is made, a log4c_logging_event instance is created. This instance is passed around the different log4c components. Destructor for a logging event.
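For context, a short sketch of the caller-side flow in which such an event is created implicitly, using the public category API rather than building the event by hand; the category name is arbitrary and initialization details depend on the local log4c configuration:

#include <log4c.h>

int main(void)
{
    if (log4c_init())                         /* load the log4c configuration */
        return 1;

    log4c_category_t* cat = log4c_category_get("demo.events");

    /* an affirmative logging decision here causes a log4c_logging_event
       to be created internally and handed to the configured appenders */
    log4c_category_log(cat, LOG4C_PRIORITY_ERROR, "something happened: %d", 42);

    return log4c_fini();
}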
http://log4c.sourceforge.net/logging__event_8h.html
CC-MAIN-2017-47
en
refinedweb
0 Hi Everyone :-) I'm writing a small Python script that invokes the Linux shell and runs some BASH commands in it. My problem is, I can't seem to use the output and store it in a Python variable. Here's what i'm trying: import subprocess value = subprocess.call("ls -la", shell=True) print(variable) What happens is that the "ls -la" command is executed just fine in the shell window, but the variable "value" is empty. what am i missing here? Thanks for the help :-)
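For reference, a short sketch of the usual fix (not part of the original thread): subprocess.call() returns only the command's exit status, so the output has to be captured explicitly, for example with check_output or Popen:

import subprocess

# check_output returns the command's stdout (as bytes) instead of just the exit code
value = subprocess.check_output("ls -la", shell=True)
print(value.decode())

# equivalent with Popen, if more control over the pipes is needed
proc = subprocess.Popen("ls -la", shell=True, stdout=subprocess.PIPE)
out, _ = proc.communicate()
print(out.decode())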
https://www.daniweb.com/programming/software-development/threads/281000/using-the-bash-output-in-python
CC-MAIN-2017-47
en
refinedweb
Hello, I'd like to know if it's possible to hide some content from user. My idea was to use something like the 'PermissionTag' for wiki pages. What I wish to have: I have restricted the permission to view content of my wiki to specific groups - since the NameSpace plugin doesn't come with great usability I use prefixes for my site, e.g. help-faq, help-XYZ,... So that each group can visit and edit only the pages for a specific "name space". Now I have a big "LeftMenu" that contains links to all "name spaces" .. so each user sees links that cannot be viewed with that account. My idea was to split the menu in several subpages that start with the prefix of the namespace - then I could use the "PermissionTag" to make content only visible if the user has the permission to view .. Unfortunately I then noticed, that the "tags" are only available for the jsp-sites .. is there an easy way to solve this problem? Cheers! -- Christian Rösch Application Development icon Systemhaus GmbH Tel. +49(711)806098-0 Sophienstraße 40 70178 Stuttgart Fax. +49(711)806098-299 Geschäftsführer: Uwe Seltmann HRB Stuttgart 17655 USt-IdNr.: DE 811944121 +++++++++++++++++++++++++++++++++++++++++ icon Events: +++++++++++++++++++++++++++++++++++++++++
http://mail-archives.apache.org/mod_mbox/incubator-jspwiki-user/201207.mbox/%3C06080D44730F41428BAF2168FA8BDD80069D64CC@icsrv02.icongmbh.de%3E
CC-MAIN-2017-47
en
refinedweb
In my opinion regions are a bad idea (and this seems to be a fairly common opinion). Everyone has their own take on how members should be grouped. Someone else's logic is not the same as mine, and therefore the benefit of grouping by an inconsistent means is flawed. A colleague also suggested grouping by logical program flow. This seems like a good idea at first, but in my experience modern software typically doesn't follow linear patterns; most applications are asynchronous and event driven, leading to many very different lines of execution through a class.

So the best way to consistently order members in a class is by accessor and then alphabetically by member name. (This coincides with Microsoft's Framework Guidelines book and the default configuration for StyleCop.) Quite often all a consumer knows is the method name, so that's what they are looking for. Also it's nice to have the code ordered in the same order as the member selector dropdown on the toolbar.

To achieve this I turn to Regionerate. As the name suggests it can be used to create regions in the code, but like all good tools it's configurable, and its sorting capabilities are great. I've been attempting to perfect my Regionerate config XML, and to do this ideally you need a test class with all manner of members. My test class is at the bottom of this post. And here's my resulting config file so far. (Check my Essential Tools page for the most up-to-date version of my config.)

There are a couple of annoying bugs in Regionerate that create unavoidable double line spacing between members. It also seems to change the alignment of closing braces by deleting their indenting tabs. Both of these issues can be quickly worked around by using Visual Studio's Format Document feature (Ctrl+K Ctrl+D), and then doing a quick replace (Ctrl+H) for \n:b*\n:b*\n with \n\n. Hit replace 2 or 3 times and it's good to go.
namespace Foo.Service.Console { using System; using System.Diagnostics.CodeAnalysis; public interface IClass1 { void TestMethod(); } public struct Struct1 { public int Member1; public int Member2; } public class Class1 { public static readonly int PublicStaticReadOnly1 = 5; public static readonly int PublicStaticReadOnly2 = 6; public const double Pi = 3.141; private const double Ei = 9.12335; private static readonly string Field1; private static readonly int Field2; private static readonly double Field3; private readonly string field4; private readonly int field5; private readonly double field6; private string field7; private int field8; private double field9; static Class1() { Field3 = Pi; } public Class1(int parameter) : this() { this.field8 = parameter; } internal Class1() { this.field7 = "Hello"; } public delegate bool Invocation1(int x, int y); internal delegate bool Invocation2(int x, int y, string position); protected delegate bool Invocation3(int x, int y, string position, double offset); private delegate bool Invocation4(double x, double y, string position, double offset); public event EventHandler IsChanging; public event EventHandler IsMorphing; public enum MyEnum { Cyan, Magenta, Yellow, Black, } public static int Property1 { get; set; } public static int Property2 { get; set; } public static int Property3 { get; set; } public int Property4 { get; set; } public int Property5 { get; set; } public int Property6 { get; set; } internal static int Property7 { get; set; } internal static int Property8 { get; set; } internal static int Property9 { get; set; } internal int Property10 { get; set; } internal int Property11 { get; set; } internal int Property12 { get; set; } protected static int Property13 { get; set; } protected static int Property14 { get; set; } protected static int Property15 { get; set; } protected int Property16 { get; set; } protected int Property17 { get; set; } protected int Property18 { get; set; } private static int Property19 { get; set; } private static int Property20 { get; set; } private static int Property21 { get; set; } private int Property22 { get; set; } private int Property23 { get; set; } private int Property24 { get; set; } public static void StaticMethod1() { } public static void StaticMethod2() { } public static void StaticMethod2(int x, int y) { } public static void StaticMethod2(int x, int y, string position) { } public void Method1() { } public void Method1(int x, int y) { } public void Method1(int x, int y, string position) { } public void Method2() { } internal static void Method3() { } internal static void Method4() { } internal static void Method4(int x) { } internal void Method5() { } internal void Method6() { } internal void Method7(int x) { } protected internal void Method8() { } protected internal void Method9() { } protected internal void Method9(int x) { } protected static void Method10() { } protected static void Method11() { } protected static void Method12(int x) { } protected void Method13() { } protected void Method14() { } protected void Method15(int x) { } private static void Method16() { } private static void Method17() { } private static void Method18(int x) { } private void Method19() { } private void Method20() { } private Class5 Method21(int x) { return new Class5(); } public class Class2 { } public class Class3 { } internal class Class4 { } private class Class5 { } } }
http://blog.rees.biz/2011/03/
CC-MAIN-2017-47
en
refinedweb
#include "request_properties.h" This class keeps track of the request properties of the client, which are for the most part learned from the UserAgent string and specific request headers that indicate what optimizations are supported; most properties are described in device_properties.h. It relies on DeviceProperties and DownstreamCachingDirectives objects for deciding on support for a given capability. Calls ParseCapabilityListFromRequestHeaders on the underlying DownstreamCachingDirectives object. Note that it's assumed that if the proxy cache SupportsWebp it also supports the Accept: image/webp header (since this represents a strict subset of the user agents for which SupportsWebpRewrittenUrls holds).
https://www.modpagespeed.com/psol/classnet__instaweb_1_1RequestProperties.html
CC-MAIN-2017-47
en
refinedweb
PCAP_SETDIRECTION(3PCAP)                            PCAP_SETDIRECTION(3PCAP)

NAME
       pcap_setdirection - set the direction for which packets will be
       captured

SYNOPSIS
       #include <pcap/pcap.h>

       int pcap_setdirection(pcap_t *p, pcap_direction_t d);

DESCRIPTION
       pcap_setdirection() sets the direction for which packets will be
       captured. This operation is not supported if a ``savefile'' is being
       read.

RETURN VALUE
       pcap_setdirection() returns 0 on success and -1 on failure. If -1 is
       returned, pcap_geterr() or pcap_perror() may be called with p as an
       argument to fetch or display the error text.

SEE ALSO
       pcap(3PCAP), pcap_geterr(3PCAP)

                                 8 March 2015        PCAP_SETDIRECTION(3PCAP)
http://man7.org/linux/man-pages/man3/pcap_setdirection.3pcap.html
CC-MAIN-2017-47
en
refinedweb
DBMI Library (base) - strip strings. More... #include <grass/dbmi.h> Go to the source code of this file. DBMI Library (base) - strip strings. (C) 1999-2009, 2011 by the GRASS Development Team This program is free software under the GNU General Public License (>=v2). Read the file COPYING that comes with GRASS for details. Definition in file strip.c.
http://grass.osgeo.org/programming7/strip_8c.html
CC-MAIN-2017-47
en
refinedweb
MooseX::Mangle - mangle the argument list or return values of your methods version 0.02 package Foo; use Moose; sub foo { "FOO" } sub bar { shift; join '-', @_ } package Foo::Sub; use Moose; use MooseX::Mangle; extends 'Foo'; mangle_return foo => sub { my $self = shift; my ($foo) = @_; return lc($foo) . 'BAR'; }; mangle_args bar => sub { my $self = shift; my ($a, $b, $c) = @_; return ($b, $c, $a); }; my $foo = Foo::Sub->new->foo # 'fooBAR' my $bar = Foo::Sub->new->bar(qw(a b c)) # 'b-c-a' MooseX::Mangle provides some simple sugar for common usages of around. Oftentimes all that is needed is to adjust the argument list or returned values of a method, but using around directly for this can be tedious. This module exports a few subroutines which make this a bit easier. Applies an around method modifier to METHOD_NAME, using CODE to mangle the argument list. CODE is called as a method, and additionally receives the arguments passed to the method; it should return the list of arguments to actually pass to the method. Applies an around method modifier to METHOD_NAME, using CODE to mangle the returned values. CODE is called as a method, and additionally receives the values returned by the method; it should return the list of values to actually return. Provides a requirement that must be satisfied in order for METHOD_NAME to be called. CODE is called as a method, receiving the arguments passed to the method. If CODE returns true, the method is called as normal, otherwise undef is returned without the original method being called at all. No known bugs. Suggestions for more modifiers are always welcome, though. Please report any bugs through RT: email bug-moosex-mangle at rt.cpan.org, or browse to. You can find this documentation for this module with the perldoc command. perldoc MooseX::Mangle You can also look for information at: Jesse Luehrs <doy at tozt dot net> This software is copyright (c) 2009 by Jesse Luehrs. This is free software; you can redistribute it and/or modify it under the same terms as perl itself.
http://search.cpan.org/dist/MooseX-Mangle/lib/MooseX/Mangle.pm
CC-MAIN-2017-30
en
refinedweb
The id of the author of a post receiving comments. The disqus.parent.* namespace is populated when an interaction is a reply to a comment. It contains details of the parent comment. Details of the comment itself are available in the disqus.* namespace. Note that if the post was created by an anonymous author the value of this target is 0. Examples - Filter for responses to posts or comments written by a specified author: disqus.parent.post.author.id == "105063750" Resource information Target service: Disqus Type: string Array: No Always exists: No
http://dev.datasift.com/docs/products/stream/features/sources/public-sources/disqus/disqus-parent-post-author-id
CC-MAIN-2017-30
en
refinedweb
Hi guys. I've been doing pretty good with my programming so its been awhile since my last post, but as the title suggests I am stuck on traversing my maze using recursion. I have a 2D array that is given a set value to represent my maze. My constructor creates a 2D array that is sized by given row and column values, and then the actual values themselves are passed after that. It appears that the problem lies when I come across the character that's supposed to represent the walls of the maze, which is a *. It gives me the following error: Code java: Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 5 at recursivemaze.Maze.traverse(Maze.java:41) at recursivemaze.Maze.traverse(Maze.java:52) at recursivemaze.Maze.traverse(Maze.java:52) at recursivemaze.Maze.traverse(Maze.java:52) at recursivemaze.Maze.traverse(Maze.java:52) at recursivemaze.Maze.traverse(Maze.java:51) at recursivemaze.Maze.traverse(Maze.java:51) at recursivemaze.RecursiveMaze.main(RecursiveMaze.java:26) Java Result: 1 I'm not sure what the problem is, because my code is supposed to return nothing when it comes across a * and keep continuing the recursive method calls until it reaches the last possible part of the maze that it can travel, all the while placing a 'P' in areas where it has traveled. Any help would be appreciated in solving this. My code is as followed: Code java: public class Maze { private char[][] primeMaze = null; private int rows = 0; private int columns = 0; private final char PATH = 'P'; private int total = 0; public Maze(){ } public Maze(int x, int y){ rows = x; columns = y; primeMaze = new char[x][y]; } public int getDimensions(){ return (rows * columns); } public void fillMaze(char[][] m){ for (int i = 0; i < primeMaze.length; i++){ for (int j = 0; j < primeMaze[i].length; j++){ primeMaze[i][j] = m[i][j]; } } } public void traverse(int r, int c){ if (primeMaze[r][c] == '*'){ return; ****This is the source of the error**** } if (primeMaze[r][c] == PATH){ return; } primeMaze[r][c] = PATH; traverse(r + 1, c); traverse(r, c + 1); traverse(r - 1, c); traverse(r, c - 1); primeMaze[r][c] = ' '; return; } public String toString(){ String answer = null; for (int i = 0; i < rows; i++){ for (int j = 0; j < columns; j++){ answer += primeMaze[i][j] + " "; } answer += "\n"; } return answer; } } Code java: public class RecursiveMaze { /** * @param args the command line arguments */ public static void main(String[] args) { char[][] test = new char[][]{{'*', '*', '*', '*', '*'}, {'*', ' ', ' ', '*', '*'}, {'*', ' ', '*', '*', '*'}, {'*', ' ', ' ', ' ', ' '}, {'*', '*', '*', '*', '*'}}; Maze m = new Maze(5, 5); m.fillMaze(test); m.traverse(1, 1); System.out.println(m.toString()); } }
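The exception comes from indexing before bounds-checking: traverse reads primeMaze[r][c] before verifying that r and c are inside the array, and the open corridor in row 3 of the test maze runs to the right edge, so the call traverse(3, 5) reads column index 5 of a 5-column array. A minimal sketch of the guard-first pattern (in Python here for brevity; the same guard applies to the Java traverse method above):

# Sketch: check bounds before indexing, then check walls and visited cells
def traverse(maze, r, c, path_mark='P'):
    if r < 0 or r >= len(maze) or c < 0 or c >= len(maze[r]):
        return                      # outside the grid, stop without indexing
    if maze[r][c] == '*' or maze[r][c] == path_mark:
        return                      # wall or already visited
    maze[r][c] = path_mark          # mark the cell as part of the path
    traverse(maze, r + 1, c)
    traverse(maze, r, c + 1)
    traverse(maze, r - 1, c)
    traverse(maze, r, c - 1)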
http://www.javaprogrammingforums.com/%20whats-wrong-my-code/24943-help-traversing-maze-recursively-printingthethread.html
CC-MAIN-2017-30
en
refinedweb
Memory Trainer in Python

Introduction: Memory Trainer in Python

This program starts off by creating a list of words. It then shuffles the list and asks you how many words you would like to get tested on and how many seconds you would like per word. Then it checks your results and you get a score :) If anyone out there wants to collaborate I will let you edit and upload new code!

Program starts here:

import random, time, sys, os

wordList = ['book','cook','table','car','pen','stand', 'carpet','computer','soap',
            'ball','van','tub', 'bat','bed','book','boy','bun','can','cake','cap','cat',
            'cow','cub','cup','dad','day','dog','doll','dust','fan','feet',
            'girl','gun','hall','hat','hen','jar','kite','man','map','men',
            'mom','pan','pet','pie','pig','pot','rat','son','sun','toe',
            'marshmallow','steak','butterfly', 'fire','chocolate','banana',
            'bunny','bubbles','rock','panda','kitten','meerkat', 'crazy', 'dinosaur', 'coffee']

random.shuffle(wordList)
points = 0

print("how many words do you want to test yourself on")
testLength = int(input())   # how many words do you want to test?
print("how many seconds per word")
timePerWord = int(input())  # how much time do you want per word

print("These are the words that you have to memorise:")

def spaceOut():
    for i in range(0, 14):
        print('*')

for i in range(0, testLength):
    spaceOut()
    print(wordList[i])
    spaceOut()
    time.sleep(timePerWord)

print("now it's your turn to guess all the words in the correct sequence")

for i in range(0, testLength):
    yourGuess = input()
    if yourGuess == wordList[i]:
        points = points + 1
        print("Correct!!")
    else:
        print("Nope, the answer is " + wordList[i])

print("you scored " + str(points) + " out of " + str(testLength))
time.sleep(6)

I checked out your code and tried to run it but it always gave me an error. I am by no means an expert but I tweaked the code just a bit to make it more aesthetically pleasing for me. I just started learning Python 2 weeks ago out of curiosity and passion. I don't want to come off as someone who is cocky or showboating so please do not be offended by what I have done. I changed line 36's code because that was what gave the error this whole time. Everything else I did was just for fun:

yourGuess = input() to yourGuess = raw_input().
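The change the commenter describes matters because the script above is written for Python 3; under Python 2, input() evaluates what you type, while raw_input() returns it as a plain string. A small compatibility shim (a sketch, not part of the original post) lets the same script run under both:

try:
    input = raw_input   # Python 2: use the string-returning version
except NameError:
    pass                # Python 3: input() already returns a string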
http://www.instructables.com/id/Memory-Trainer-in-Python/
CC-MAIN-2017-30
en
refinedweb
CV: QBvXdPmJxqpNVEha.0 Visual Studio 2015 RC fixed bugs and known issues This article lists the fixed bugs and known issues for the Microsoft Visual Studio 2015 Release Candidate (RC). detailsTo discover what is new in Visual Studio 2015 RC, see the Visual Studio RC release notes. Known issues Installation Before you install Upgrading from previous releases After you install Issues that affect installing from an ISO Windows App Development Tools Miscellaneous Issues Visual Studio IDE Web platform and Tools ASP.NET and Web Development Agile testing Unit testing Visual Studio Tools for Windows 10 Insider Preview Diagnostic Tools Others - During Visual Studio 2015 RC Setup, if you select a non-Windows drive on which to install Visual Studio, most of the program is still installed to the system drive. Only a small part of the program is installed to the non-Windows drive. - Remove Visual Studio Tools for Windows 10 Insider Preview before you install Visual Studio 2015 RC. If you have installed Visual Studio 2015 CTP6 and the Visual Studio Tools for Windows 10 Insider Preview, you must uninstall Tools for Windows 10 before you install Visual Studio 2015 RC. - When you install Visual Studio 2015 RC on a computer that is running Windows 7 SP1, the authenticode certificate in the Dotfuscator installer fails to verify whether the computer is disconnected from the Internet. To work around this issue, use one of the following methods: - Click Skip Package when this issue occurs to skip the Dotfuscator installation. Visual Studio 2015 Setup will continue as usual. However, Dotfuscator will not be installed. - Connect the computer to the Internet. - When you install Visual Studio 2015 RC, you do not see the progress bar move for some time in the Progress dialog box. Therefore, you might think that Visual Studio Setup is not responding. To work around this issue, use Task Manager to examine the activity behind the setup interface. Look for processes that are using lots of network bandwidth or hard disk time. Setup programs typically use lots of network bandwidth to download packages and lots of hard disk time to decompress and copy files. If programs have installation-related names, you can see the work occurring even though it is not sent to the setup interface. If these programs do not resemble installation programs, they could be competing with Setup for use of these computer resources. - When you upgrade from Visual Studio 2015 CTP to Visual Studio 2015 RC, the shortcuts for Microsoft Test Manager 2015 CTP and Feedback client 2015 CTP are not removed. However, the shortcuts for the RC counterparts are added. You can safely delete the shortcuts from CTP to avoid confusion. If you choose instead to ignore the CTP shortcuts, no functionality is lost because the CTP shortcuts also open the RC version of the product. - Assume that you upgrade to Visual Studio 2015 RC from a Visual Studio 2015 version. You select either the Typical or Custom installation type. After the upgrade, you may find that some features in the earlier Visual Studio version are not present in Visual Studio 2015 RC. To work around this issue, follow these steps: - In Control Panel, open the Programs and Features item before you start the RC installation. - Click the entry for the earlier Visual Studio 2015 release, and then click Custom. - In the maintenance mode dialog box, click Modify. - Note the selected check boxes in the UI. - In the RC installation, click Custom. 
- Use you notes to match the check box selections for the earlier release. - When a Visual Studio 2013 deployment agent is automatically upgraded to work against Visual Studio 2015 RC server, and the deployment agent is enabled to use Through Release Management Server over HTTP(s) for copying the components, you may experience one of the following issues: - Your release may fail at Deploy step and you receive the following error) - The following error is logged in the event viewer log on deployment agent computer:Timestamp: <DateTime): \r\n\r\n at Microsoft.TeamFoundation.Release.Data.Proxy.RestProxy.BaseDeploymentControllerServiceProxy.GetPackageFileInfos(String packageLocation) at Microsoft.TeamFoundation.Release.DeploymentAgent.Services.Deployer.HttpPackageDownloader.CopyPackageAndUnpackIt(String packageSourceLocation, String filesDestinationLocation) at Microsoft.TeamFoundation.Release.DeploymentAgent.Services.Deployer.ComponentProcessor.CopyComponentFilesImplementation(Action`2 copyFolder, Func`2 packageFileInfo, Func`4 downloadFile, Action`1 downloadCompleted) at Microsoft.TeamFoundation.Release.DeploymentAgent.Services.Deployer.ComponentProcessor.CopyComponentFiles() at Microsoft.TeamFoundation.Release.DeploymentAgent.Services.Deployer.ComponentProcessor.DeployComponent()Category: General Priority: -1 EventId: 0 Severity: Error Title: Machine: NVM30682 Application Domain: DeploymentAgent.exe Process Id: 4668 Process Name: C:\Program Files\Microsoft Visual Studio 12.0\Release Management\bin\DeploymentAgent.exe Win32 Thread Id: 4876 Thread Name: Extended Properties: To work around these issues, use one of the following methods: - Option 1: Copy Newtonsoft.Json.dll from any Visual Studio 2015 RC deployment agent to Visual Studio 2013 upgraded agent and restart Microsoft Deployment Agent service. Source path on Visual Studio 2015 RC deployment agent: "%ProgramFiles(x86)%\Microsoft Visual Studio 14.0\Release Management\bin\Newtonsoft.Json.dll" or "%ProgramFiles%\Microsoft Visual Studio 14.0\Release Management\bin\Newtonsoft.Json.dll" (according to the location of the deployment agent is installed). Destination path on Visual Studio 2013 deployment agent: "%ProgramFiles(x86)%\Microsoft Visual Studio 14.0\Release Management\bin\Newtonsoft.Json.dll" or "%ProgramFiles%\Microsoft Visual Studio 14.0\Release Management\bin\Newtonsoft.Json.dll" (according to the location of the deployment agent is installed). - Option 2: - Uninstall the existing Visual Studio 2013 deployment agent. - Install Visual Studio 2015 RC deployment agent. -. - If you choose to repair Visual Studio Test Professional installation as part of your install or repair process, you receive the following error message in a dialog box:VSTestConfig.exe has stopped workingYou can safely ignore this error message because it does not impact the repair operation. Click Close the program to resume the repair operation. If you encounter any issues with Microsoft Test Manager post repair, reach out to customer support. - Assume that you install Visual Studio 2015 RC from an ISO file, and then you select the Apache Cordova tools. After you repair Visual Studio, a dialog box appears and asks for the source for JSBreadcrumbRes.msi. If you skip the package, Setup will complete and return a "Tools for Apache Cordova - Templates. The system cannot find the file specified" warning message. To work around this issue, select Skip package during the repair. The package is not necessary, and the warning can be safely ignored. 
For more known issues that are related to Visual Studio Tools for Apache Cordova, see Tools for Apache Cordova Known Issues. - Visual Studio Setup file includes features that are not contained in the ISO file. To enable the acquisition of the latest versions of new platform features, and to enable more customization of your Visual Studio installation to minimize the installation time and size, Visual Studio Setup includes features that are not included in the ISO files. To work around this issue, create a folder with the files necessary for an offline install experience: - Save the Visual Studio installer to your local computer. - At a command prompt, run the .exe file by using the /layout switch. For example, run the following command: vs_community.exe /layout - Specify the folder to which the setup files should be downloaded. For example, specify the following folder: c:\Users\YourName\Downloads\VSCommunity - After the download is complete, run the .exe file from the specified folder location. For example, open the VSCommunity folder in your Downloads library, and run Vs_community.exe. Note Because of an issue in the RC release, the /layout download option does not download all the software that is required by some Visual Studio features. Some components require an Internet connection to install. -. - To install the development tool for Windows 10 universal applications in Visual Studio Setup, click Custom, click Next, and then select the Windows Universal App Development Tools feature. Windows 10 Insider Preview no longer uses a separate installer. - The Windows Emulators require a physical computer that runs Windows 8.1 (x64) Professional edition or a later version and a processor that supports Client Hyper-V and Second Level Address Translation (SLAT). The emulators do not run when Visual Studio is installed in a Virtual Machine (VM). - When you install Visual Studio 2015 RC, after you click Cancel and Yes in the Progress dialog box, the Progress bar continues for a long time. If you wait, the cancellation request is respected at the next package transition. However, this can take many minutes. Wait for the current processing to end in the usual manner to make sure that you have the best possible partial installation. If you must stop Setup immediately, you can use Task Manager to locate and end the setup process task. Warning This may the put the final setup operation into an indeterminate state. If you choose to terminate an installation before it is completed, we recommend you restart Setup as soon as it is convenient, and then either repair the installation or uninstall the program. - If you have disabled JavaScript in Internet Explorer, you cannot sign in to Visual Studio and sync settings together with other devices. To work around this issue, enable JavaScript for Internet Explorer. To do this, follow these steps: - On the Tools menu, click Internet Options, and then click the Security tab. - Click the Internet zone. - If you do not have to customize your Internet security settings, click Default Level. Then, go. - When you enable Enhanced Security in Internet Explorer, you cannot sign in to Visual Studio on Windows Server because Enhanced Security blocks the online service URI that is required by Visual Studio to sign in to the online service. Restrictions that disable JavaScript or cookies also prevent Visual Studio from signing in correctly. 
To work around this issue, click Add to add the required URLs into the Windows Server exclusion list if you see the following dialog box. Then, restart Visual Studio and try to sign in again. Another workaround is to turn off Internet Explorer Enhanced Security Configuration. - C++ and JavaScript applications that are referencing a managed Windows Runtime Metadata (.winmd) file is not optimized by having .NET Native because .NET Native is not enabled for those projects. To enable .NET Native, you must modify the JSProj or VCXProj file. To work around this issue, use one of the following methods: - For C++ applications: - Close the project that you want to modify. - Open the VCXProj file in a text editor. - Find the PropertyGroup element that does not contain a Condition attribute. - Add "<EnableDotNetNativeCompatibleProfile>true</EnableDotNetNativeCompatibleProfile>" in the PropertyGroup element. - Find the PropertyGroup elements that contain Condition="'$(Configuration)|$(Platform)'=='Release|<arch>'" for which <arch> is Win32, ARM, or x64. - Add "<UseDotNetNativeToolchain>true</UseDotNetNativeToolchain>" inside each PropertyGroup element. - Save the VCXProj file. - For JavaScript applications: - Close the project you want to modify. - Open the JSProj file in a text editor. - Find the PropertyGroup element that does not contain a Condition attribute. - Add "<EnableDotNetNativeCompatibleProfile>true</EnableDotNetNativeCompatibleProfile>" in the PropertyGroup element. - Find the ProjectConfiguration elements that contain Include="Release|<arch>" for which <arch> is ARM, x64, or x86. - Add "<UseDotNetNativeToolchain>true</UseDotNetNativeToolchain>" inside each ProjectConfiguration element. - Save the JSProj file. - The AnyCPU platform configuration is not supported for Windows 10 Insider Preview applications that are built by using C# and Visual Basic. This release of Visual Studio uses the .NET Native to build Windows 10 applications. The .NET Native compiles C# and Visual Basic code to native code and is not CPU-agnostic. - When you debug C# or Visual Basic Windows 8.1 applications, DataTips might not be displayed when you hover over expressions. Also, evaluating expressions in debugger windows may fail and return an error message that resembles the following: error CS0012: The type 'Windows.UI.Core.Dispatcher’ is defined in an assembly that is not referenced. You must add a reference to assembly 'Windows.UI.winmd, ... To work around the issue, enable the legacy C# and VB expression evaluators. - Static Code Coverage files are not collected if you use Visual Studio 2013 agents that are configured against Visual Studio 2015 or TFS 2015 and if you receive the following error message: System.DllNotFoundException: Unable to load DLL 'VSCover 140': The specified module could not be found. (Exception from HRESULT:0x8007007E) Note This issue occurs only when you are running tests on a remote computer and you try to collect static code coverage data. - When you reference a portable class library in a project, some operations may cause errors that are reported in the Visual Studio Error List, even though build succeeds. To work around this issue, follow these steps: - Right-click the project in which the errors are reported in Solution Explorer, and then click Unload Project. - Right-click again on the project in Solution Explorer, and then click Edit <ProjectName>. Note In this command, <ProjectName> represents the actual project name. 
- In the <PropertyGroup> entry at the top of the project file that has no Condition attribute, add the following: <CheckForSystemRuntimeDependency>true</CheckForSystemRuntimeDependency> - Save and close the file. - Right-click the project name in Solution Explorer, and then click Reload Project. Web platform and Tools - In Visual Studio 2015 RC, you cannot configure your build settings when you use TypeScript in an ASP.NET 5 or Cordova project. To work around this issue, follow these steps: - Right-click the project, and then click Unload project. - Right-click the project, and then click Edit <Project>. Note In this command, <Project> represents the actual project name. - In the XML file for your project, you can define the MSBuild settings that are used by the TypeScript editor by following these guidelines: - These settings are also used to configure how .ts files are built in a Visual Studio Apache Cordova project. - For an ASP.NET 5 project, see Using TypeScript in Visual Studio Code. <PropertyGroup> <TypeScriptTarget>ES5</TypeScriptTarget> <TypeScriptCompileOnSaveEnabled>true</TypeScriptCompileOnSaveEnabled> <TypeScriptNoImplicitAny>false</TypeScriptNoImplicitAny> <TypeScriptModuleKind>none</TypeScriptModuleKind> <TypeScriptRemoveComments>false</TypeScriptRemoveComments> <TypeScriptOutFile></TypeScriptOutFile> <TypeScriptOutDir></TypeScriptOutDir> <TypeScriptGeneratesDeclarations>false</TypeScriptGeneratesDeclarations> <TypeScriptSourceMap>true</TypeScriptSourceMap> <TypeScriptMapRoot></TypeScriptMapRoot> <TypeScriptSourceRoot></TypeScriptSourceRoot> <TypeScriptNoEmitOnError>true</TypeScriptNoEmitOnError> </PropertyGroup>See the values for TypeScript MSBuild settings. For more known issues that are related to Visual Studio Tools for Apache Cordova, see Tools for Apache Cordova Known Issues. - In the Visual Studio 2015 RC, Project Spartan (desktop or mobile) is not displayed as a debug target in the F5 list for web-based projects such as ASP.NET. To work around this issue, change the default browser in Windows 10 Insider Preview from Project Spartan to Internet Explorer. To do this, open the Start menu, point to Settings, point to System, and then click Default Apps. Under Web browser, click Project Spartan, and then click Internet Explorer. - Web Essentials CTP version is not disabled when you upgrade to Visual Studio 2015 RC. Older versions of Web Essentials are expected to be disabled when Visual Studio upgrades are installed, however, this mechanism is broken in Visual Studio 2015 RC. No incompatibilities with Web Essentials for CTP6 in Visual Studio 2015 RC. We strongly recommend that you uninstall or upgrade versions of Web Essentials other than the Visual Studio RC version. - Knockout IntelliSense is disabled by default. Knockout IntelliSense does not work in Visual Studio 2015 RC until a .jsx file is opened. There is no item template for JSX, just add a JavaScript file and change its extension for .js to .jsx. The file can be opened, closed, and ignored. Opening the .jsx file will trigger the code necessary to make KnockOut features work as in previous releases.Installing Web Essentials for Visual Studio 2015 RC will fix this issue. Therefore, you do not have to open a JSX file. - When you create a Web Forms 4.5 WAP and open a Web Form page, you receive the following errors in the Errors List window: The project will run without any issues. Error CS0234 The type or namespace name 'global_asax' does not exist in the namespace 'ASP' (are you missing an assembly reference?) 
Error CS0234 The type or namespace name 'Linq' does not exist in the namespace 'System' (are you missing an assembly reference?) - Assume that you use new language features for C# and VB in Visual Studio 2015 RC. You receive a runtime error when you use C# or VB in a Web Form page or in Razor Views. To work around this issue, install Microsoft.CodeDom.Providers.DotNetCompilerPlatform NuGet package. This package will substitute the Roslyn-based provider for the in-box CodeDom providers for ASP.NET. - The Coded UI Test (CUIT) does not work for UAP Phone applications on Windows 10 Insider Preview. - VSTest task in Build.VNext does not upload test results to the TFS server. Instead, you can retrieve the test results from a .trx file that is stored on the execution computer. - The Run All command in the Test Explorer does not work when you try to run all tests for a Universal applications by having the deployment target set to a phone device or the emulator. To work around this issue, select all the tests in Test Explorer, and then run them on these deployment targets. - Debugging a unit test by having the deployment target set to a phone device or the emulator is not supported in Visual Studio 2015 RC. - When you run the Create Unit Tests command from the context menu, and then you run the Save command from the IntelliTest Exploration Results window, an Android Unit Test project is created. To work around this issue, follow these steps: - Rename UnitTestProject.zip in "C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\IDE\Extensions\Xamarin\Xamarin\[Version]\T\PT\Android" to UnitTestProject2.zip. - Open the "Developer Command Prompt for Visual Studio 2015" as an administrator. - Run the following command: devenv /InstallVSTemplates - The Visual Studio Tools for Windows 10 Insider Preview will be available soon. To check for availability, see. - For known issues and installation instructions, see the Windows App Development Forums. - Express for Windows 10 Does Not Support Windows 8.1, Windows Phone 8.1, or Windows Phone 8.0. Although this RC release of Visual Studio Express 2015 for Windows 10 includes support for creating and maintaining apps for Windows 8.1, Windows Phone 8.1, and Windows Phone 8.0, the final release of Visual Studio Express 2015 for Windows 10 will not. To work around this issue: Use Visual Studio Community 2015 instead. Visual Studio Community, Visual Studio Professional, and Visual Studio Enterprise will continue to support Windows Store apps for Windows 8.1, Windows Phone 8.1, and Windows Phone 8.0. - When you start debugging for the first time, the Diagnostic Tools window fails to open, and you receive the following error message: The Diagnostic Tools failed unexpectedly However, the Diagnostic Tools window opens correctly in subsequent debugging sessions or after you restart Visual Studio. For more information about the tools, search on "Known Issues for Windows 10 SDK and Tools" at the following MSDN forums website: - Source control may show some characters in Chinese instead of English. In order to fix this problem please do a Repair for Visual Studio 2015 RC in Program and Features. - "Collection type must derive from ICollection<T>" error occurs when you execute some operations in the Visual Studio 2015 RC UI Assume that you upgrade from an earlier Windows 10 Preview Build to Flight 10122 or later versions. 
When you open, create projects, or perform other actions in Visual Studio 2015 RC UI, you receive the following error message:Collection type must derive from ICollection<T> To repair this issue, use the following method: - Open a Visual Studio Developer command prompt as administrator. - Type devenv.exe /setup into the command prompt, and press Enter. - Repair Visual Studio. Breaking changes Cloud platform Agent Visual C++ - Previously, you could version a Windows application by using four different numbers: major version, minor version, build version, and revision version. This version is specified in the AppxManifest.xml file. The revision number of Windows 10 Insider Preview applications is the fourth part of a x.x.x.x version string and is reserved for use by Microsoft use. Applications should always use "0" as the revision number. Windows App Certification Kit (WACK) and the Windows Store will reject an application that does not use a revision number of "0." - The .NET Native for Visual Studio 2015 no longer supports Windows 8.1 or Windows Phone 8.1 applications. Only Windows 10 Insider Preview applications are supported in this release and in future releases. - By using the Agents for Visual Studio 2015, you do not require a separate test controller because agents can handle the orchestration by communicating to the TFS 2015 or Visual Studio Online. In all the new Visual Studio 2015 and TFS 2015 scenarios, we recommend that you use Agents for Visual Studio 2015. For other Test Controller required scenarios, we recommend that you use Agents for Visual Studio 2013 Update 5. Test Controller is fully compatible with TFS 2013 and TFS 2015 products. The following table summarizes our recommendations. - The __declspec(align) struct is not allowed in functions in Visual Studio 2015 RC. - Exception objects have to be either copyable or movable. The following code compiles in Visual Studio 2013, but does not compile in Visual Studio 2015: struct S {public:S();private:S(const S &);}; int main(){throw S(); // error} or struct S {S();explicit S(const S &);}; int main(){throw S(); // error} - Capturing the exception by value requires the exception object to be copyable. The following code compiles in Visual Studio 2013, but does not compile in Visual Studio 2015: struct B {public:B();private:B(const B &);}; struct D : public B {}; int main(){try{}catch (D) // error{}} - The mutable specifier can be applied only to names of class data members(9.2). They cannot be applied to names that are declared const or static, and also cannot be applied to reference members. For example: class X { mutable const int* p; // OK mutable int* const q; // ill-formed };To work around this issue, simply remove the redundant "mutable" instance. - We used to generate a ctor or dtor for the anonymous union that is non-conformant in either C++03 or C++11. These are now deleted. -(). After Visual Studio 2015 RC, it prints nothing. Additionally, you receive the following warning message: warning C4588: 'U::s': behavior change: destructor is no longer implicitly called - The type of the explicit non-type template argument should match the type of the non-type template parameter. However, Visual Studio 2015 RC sometimes fails to validate this. For example, the following code is no longer allowed: struct S2{ void f(int); void f(int, int);};struct Sink{ template <class C, void (C::*Function)(int) const> void f();};void f(){ Sink sink; sink.f<S2, &S2::f>();} - Data members of unions can no longer have reference types. 
- When you use the /Zc:forScope- command in Visual Studio 2015 RC, you receive the following warning message: cl : Command line warning D9035: option 'Zc:forScope-' has been deprecated and will be removed in a future release - Macros that immediately follow a string without any white space between the string and the macro are now interpreted as user-defined literal suffixes. For example: //Before compiled#define _x "there"char* func() { return "hello"_x;}int main(){ char * p = func(); return 0;}When you compile the code, you receive the following error message: test.cpp(52): error C3688: invalid literal suffix '_x'; literal operator or literal operator template 'operator ""_x' not found test.cpp(52): note: Did you forget a space between the string literal and the prefix of the following string literal? -(). In Visual Studio 2015 RC, it prints nothing. Additionally, you receive the following warning message: warning C4587: 'U::s': behavior change: constructor is no longer implicitly called. - In Visual Studio 2015 RC, the implicitly-declared copy constructor is deleted if there is a user-defined move constructor or a move-assignment operator. - The concatenation of adjacent wide or raw string literals now requires a space to be inserted (L"Hello" L"World"), because the prefix for the second string is now treated as a user-defined literal suffix. For example: - const wchar_t *s = L"Hello"L"World"; // emits error C3688: invalid literal suffix 'L'; literal operator or literal operator template 'operator ""L' not found - const wchar_t *t = L”Hello” L”World”; // compiles without error. Restart requirementYou may have to restart your computer after you install this package. Supported architectures - 32-bit (x86) - 64-bit (x64) (WOW) - ARM Third-party applications - Visual Studio 2015 RC install lets you install third-party applications. For information on which third-party applications are required when you install Cross Platform Mobile Development tools from Visual Studio 2015 RC, check out KB Article 3060693. - Visual Studio 2015 RC uninstall does not uninstall the third-party applications. For information on how to uninstall third-party applications that installed with Visual Studio 2015 RC, check out KB Article 3060695. Third-party information disclaimer Third-party information disclaimer The third-party products that this article discusses are manufactured by companies that are independent of Microsoft. Microsoft makes no warranty, implied or otherwise, about the performance or reliability of these products. Properties Article ID: 3025133 - Last Review: May 15, 2017 - Revision: 3
https://support.microsoft.com/en-us/help/3025133/visual-studio-2015-rc-fixed-bugs-and-known-issues
CC-MAIN-2017-30
en
refinedweb
A question that pops up for many DSP-ers working with IIR and FIR filters, I think, is how to look at a filter’s frequency and phase response. For many, maybe they’ve calculated filter coefficients with something like the biquad calculator on this site, or maybe they’ve used a MATLAB, Octave, Python (with the scipy library) and functions like freqz to compute and plot responses. But what if you want to code your own, perhaps to plot within a plugin written in c++? You can find methods of calculating biquads, for instance, but here we’ll discuss a general solution. Fortunately, the general solution is easier to understand than starting with an equation that may have been optimized for a specific task, such as plotting biquad response. Plotting an impulse response One way we could approach it is to plot the impulse response of the filter. That works for any linear, time-invariant process, and a fixed filter qualifies. One problem is that we don’t know how long the impulse response might be, for an arbitrary filter. IIR (Infinite Impulse Response) filters can have a very long impulse response, as the name implies. We can feed a 1.0 sample followed by 0.0 samples to obtain the impulse response of the filter. While we don’t know how long it will be, we could take a long impulse response, perhaps windowing it, use an FFT to convert it to the frequency domain, and get a pretty good picture. But it’s not perfect. For an FIR (Finite Impulse Response) filter, though, the results are precise. And the impulse response is equal to the coefficients themselves. So: For the FIR, we simply run the coefficients through an FFT, and take the absolute value of the complex result to get the magnitude response. (The FFT requires a power-of-2 length, so we’d need to append zeros to fill, or use a DFT. But we probably want to append zeros anyway, to get more frequency points out for our graph.) Plotting the filter precisely Let’s look for a more precise way to plot an arbitrary filter’s response, which might be IIR. Fortunately, if we have the filter coefficients, we have everything we need, because we have the filter’s transfer function, from which we can calculate a response for any frequency. The transfer function of an IIR filter is given by \(H(z)=\frac{a_{0}z^{0}+a_{1}z^{-1}+a_{2}z^{-2}…}{b_{0}z^{0}+b_{1}z^{-1}+b_{2}z^{-2}…}\) z0 is 1, of course, as is any value raised to the power of 0. And for normalized biquads, b0 is always 1, but I’ll leave it here for generality—you’ll see why soon. To translate that to an analog response, we substitute ejω for z, where ω is 2π*freq, with freq being the normalized frequency, or frequency/samplerate: \(H(e^{j\omega})=\frac{a_{0}e^{0j\omega}+a_{1}e^{-1j\omega}+a_{2}e^{-2j\omega}…}{b_{0}e^{0j\omega}+b_{1}e^{-1j\omega}+b_{2}e^{-2j\omega}…}\) Again, e0jω is simply 1.0, but left so you can see the pattern. Here it is restated using summations of an arbitrary number of poles and zeros: \(H(e^{j\omega})=\frac{\sum_{n=0}^{N}a_{n}e^{-nj\omega}}{\sum_{m=0}^{M}b_{m}e^{-mj\omega}}\) For any angular frequency, ω, we can solve H(ejω). A normalized frequency of 0.5 is half the sample rate, so we probably want to step it from 0 to 0.5—ω from 0 to π—for however many points we want to evaluate and plot. Coding it From that last equation, we can see that a single FOR loop will handle the top or the bottom coefficient sets. Here, we’ll code that into a function that can evaluate either zeros (a terms) or poles (b terms). 
We’ll refer to this as our direct evaluation function, since it evaluates the coefficients directly (as opposed to evaluating an impulse response). You’ve probably noticed the j, meaning an imaginary part of a complex number—the output will be complex. That’s OK, the output of an FFT is complex too, and we know how to get magnitude and phase from it already. Some languages support complex arithmetic, and have no problem evaluating “ e**(-2*j*0.5)”—either directly, or with an “exp” (exponential) function. It’s pretty easy in Python, for instance. (Something like, coef[idx] * math.e**(-idx * w * 1j), as the variable idx steps through the coefficients array.) For languages that don’t, we can use Euler’s formula, ejx = cos(x) + j * sin(x); that is, the real part is the cosine of the argument, and the imaginary part is the sine of it. (Remember, j is the same as i—electrical engineers already used i to symbolize current, so they diverged from physicist and used j. Computer programming often use j, maybe because i is a commonly used index variable.) So, we create our function, run it on the numerator coefficients for a given frequency, run it again on the denominator coefficients, and divide the two. The result will be complex—taking the absolute value gives us the magnitude response at that frequency. Revisiting the FIR Since we already had a precise method of looking at FIR response via the FFT/DFT, let’s compare the two methods to see how similar they are. To use our new method for the case of an FIR, we note that the denominator is simply 1, so there is no denominator to evaluate, no need for division. So: For the FIR, we simply run the coefficients through our evaluation function, and take the absolute value of the complex result to get the magnitude response. Does that sound familiar? It’s the same process we outlined using the FFT. And back to IIR OK, we just showed that our new evaluation function and the FFT are equivalent. (There is a difference—our evaluation function can check the response at an arbitrary frequency, whereas the FFT frequency spacing is defined by the FFT size, but we’ll set that aside for the moment. For a given frequency, the two produce identical results.) Now, if the direct evaluation function and the FFT give the same results, for the same frequency point, and the numerator and denominator are evaluated by the same function, by extension we could also get a precise evaluation by substituting an FFT process for both the numerator and denominator, and dividing the two as before. Note that we’re no longer talking about the FFT of the impulse response, but the coefficients themselves. That means we no longer have the problem of getting the response of an impulse that can ring out for an unknown time—we have a known number of coefficients to run through the FFT. Which is better? In general, the answer is our direct evaluation method. Why? We can decide exactly where we want to evaluate each point. That means that we can just as easily plot with log frequency as we can linear. But, there may be times that the FFT is more suitable—it is extremely efficient for power-of-2 lengths. (And don’t forget that we can use a real FFT—the upper half of the general FFT results would mirror the lower half and not be needed.) An implementation We probably want to evaluate ω from 0 to π, corresponding to a range of half the sample rate. 
So, we’d call the evaluation function with the numerator coefficients and with the denominator coefficients, for every ω that we want to know (spacing can be linear or log), and divide the two. For frequency response, we’d take the absolute value (equivalently, the square root of the sum of the squared real and imaginary parts) of each complex result to obtain magnitude, and arc tangent of the imaginary part divided by the real part (specifically, we use the atan2 function, which takes into account quadrants). Note that this is the same conversion we use for FFT results, as you can see in my article, A gentle introduction to the FFT. \(magnitude:=\left |H \right |=abs(H)=\sqrt{H.real^2+H.imag^2}\) \(phase := atan2(H.imag,H.real)\) For now, I’ll leave you with some Python code, as it’s cleaner and leaner than a C or C++ implementation. It will make it easier to transfer to any language you might want (Python can be quite compact and elegant—I’m going for easy to understand and translate with this code). Here’s the direct evaluation routine corresponding to the summation part of the equation (you’ll also need to “import numpy” to have e available—also available in the math library, but we’ll use numpy later, so we’ll stick with numpy alone): import numpy as np # direct evaluation of coefficients at a given angular frequency def coefsEval(coefs, w): res = 0 idx = 0 for x in coefs: res += x * np.e**(-idx * 1j * w) idx += 1 return res Again, we call this with the coefficients for each frequency of interest. Once for the numerator coefficients (the a coefficients on this website, corresponding to zeros), once for the denominator coefficients (b, for the poles—and don’t forget that if there is no b0, the case for a normalized filter, insert a 1.0 in its place). Divide the first result by the second. Use use abs (or equivalent) for magnitude and atan2 for phase on the result. Repeat for every frequency of interest. Here’s a python function that evaluates numerator and denominator coefficients at an arbitrary number of points from 0 to π radians, with linear spacing, returning array of magnitude (in dB) and phase (in radian, between +/- π): # filter response, evaluated at numPoints from 0-pi, inclusive def filterEval(zeros, poles, numPoints): magdB = np.empty(0) phase = np.empty(0) for jdx in range(0, numPoints): w = jdx * math.pi / (numPoints - 1) resZeros = coefsEval(zeros, w) resPoles = coefsEval(poles, w) # output magnitude in dB, phase in radians Hw = resZeros / resPoles mag = abs(Hw) if mag == 0: mag = 0.0000000001 # limit to -200 dB for log magdB = np.append(magdB, 20 * np.log10(mag)) phase = np.append(phase, math.atan2(Hw.imag, Hw.real)) return (magdB, phase) Here’s an example of evaluating biquad coefficients at 64 evenly spaced frequencies from 0 Hz to half the sample rate (these coefficients are right out of the biquad calculator on this website—don’t forget to include b0 = 1.0): zeros = [ 0.2513643668578741, 0.5027287337157482, 0.2513643668578741 ] poles = [ 1.0, -0.17123074520885395, 0.1766882126403502 ] (magdB, phase) = filterEval(zeros, poles, 64) print("\nMagnitude:\n") for x in magdB: print(x) print("\nPhase:\n") for x in phase: print(x) Next up, a javascript widget to plot magnitude and phase of arbitrary filter coefficients. Extra credit The direct evaluation function performs a Fourier analysis at a frequency of interest. For better understanding, reconcile it with the discrete Fourier transform described in A gentle introduction to the FFT. 
In that article, I describe probing the signal with cosine and sine waves to obtain the response at a given frequency. Look again at Euler’s formula, which shows that ejω is cosine (real part) and sine (imaginary part), which the article alludes to this under the section “Getting complex”. You should understand that the direct evaluation function presented here could be used to produce a DFT (given complete evaluation of the signals at appropriately spaced frequencies). The main difference is that for this analysis, we need not do a complete and reversible transform—we need only analyze frequency response values that we want to graph. Nice sum up! This is actually what scipy.signal.freqz does with the polynomial. I would definitely change the Python code though as you could benefit greatly from vector instructions. The first would be np.exp() instead of np.e**/for coeffs loop and then in your second script, you can use array masking to vectorize the range call. Thanks for the input, Matthieu, good points. Yes, if I really needed a Python routine, I’d definitely code it differently, but going for easy translation. In fact, I wrote a Javascript widget to display frequency and phase response (will post tonight, probably, after I get a chance to add some text explanation and instructions) that was nearly a direct translation. I love Python, but it’s not so good for translatable examples unless you code “C-like”. 😉 Hi Nigel, I am always amazed at your ability to write easy to understand descriptions of things that are not so easy to understand, if that makes sense. 😉 Really love when you post new articles; they are always useful to me. Thank you very much! Hi Nigel, as usual, amazing job 🙂 Why did you choose nPoints = 64? What does it means? Somethings near “10 hz”? Or it is just random? I mean: which w must I need to supply for example if I want to catch magnitude at 100 and 2300 hz? The filterEval function shown specifically divides the frequency range into every spaced points and returns an array of readings. That’s what you’d want to do when plotting linear response, for instance. I was going for minimal code here, to keep the idea clear. But in practice you might want to break the inner loop out to evaluate a single point—to support log frequency spacing, for instance. You can also look at the JavaScript source code for the grapher widget in your browser; here’s the coefficient evaluation function for a single point (normalized frequency—frequency in Hz divided by sample rate); the function implements just the single summation operation in the equations in the article—the coefsEval Python function—so you’d use it on the numerator (zeros) coefficients, the denominator (poles) coefficients, and divide; also note I’m using math.js—it keeps the code a lot cleaner since it handles the complex math: Clear! Thank you very much. Ill try to convert it into C++ 😉 Not sure why, but I’ve a problem getting amp from freq. Check this code in JavaScript (mostly copy and paste from your code): It alerts the amp in dB of the freq 606Hz. I have those coefficients: a0 = 1.1311308700603495 a1 = -1.9742954081351101 a2 = 0.8512071363732295 b0 = 1.0000000000000000 b1 = -1.9742954081351101 b2 = 0.9823380064335793 If I show them on your tool , I can clearly see that near the freq 606 its up of +20db more or less. Thats correct, the filter is working weel (even in my C++ plugin). But when I try to plot from your JS code (the code used in this tutorial), I just get +7db at that Hz. 
The whole slope is slowed down in gain. Where am I wrong? If you take a look at some of your intermediate results, I think you’ll see the problem is that you’re not doing complex math. Note that I’m using “math.abs”, etc., not “Math.abs”—I’m using the js.math library. Otherwise, you need to use Euler’s identity, unroll the complex math (more code). What a mistake! I see “0” as real part and I didn’t figure out that exp will produce value as well 🙂 Thanks Just an arbitrary number of points to evaluate. Note that the function explains the parameter—but I added a few words to the text to make it clearer. It’s surprising to find on earlevel.com a resource so precious about equations. We will note your page as a benchmark for Evaluating filter frequency response . We also invite you to link and other web resources for equations like or. Thank you and good luck! Thanks for an awesome article! Helped a lot to understand how to plot these. Thanks, It was all I needed to check my coefficients by plotting then in a spreadsheet. Regards AlanC Here is the simply implementation in C# (without any “magic” Complex computation).
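Roughly sketching the same idea (in Python here, rather than C#, and as an illustration rather than the commenter's listing): the response can be evaluated without any complex-number type by accumulating real and imaginary parts separately via Euler's identity, e^(-jnω) = cos(nω) - j·sin(nω). It also shows taking the frequency in Hz, as asked in the comments above.

import math

# Evaluate one coefficient set at angular frequency w without complex math
def coefsEvalRealImag(coefs, w):
    re = 0.0
    im = 0.0
    for n, c in enumerate(coefs):
        re += c * math.cos(n * w)
        im -= c * math.sin(n * w)
    return re, im

# Magnitude in dB at an arbitrary frequency in Hz (w = 2*pi*freq/sampleRate)
def magnitudeAtHz(zeros, poles, freqHz, sampleRate):
    w = 2 * math.pi * freqHz / sampleRate
    zr, zi = coefsEvalRealImag(zeros, w)
    pr, pi_ = coefsEvalRealImag(poles, w)   # pi_ avoids shadowing math.pi
    # complex division (zr + j*zi) / (pr + j*pi_)
    denom = pr * pr + pi_ * pi_
    hr = (zr * pr + zi * pi_) / denom
    hi = (zi * pr - zr * pi_) / denom
    mag = math.sqrt(hr * hr + hi * hi)
    return 20 * math.log10(max(mag, 1e-10))

# e.g. magnitudeAtHz(zeros, poles, 606.0, 44100.0) for the coefficients above

Since only magnitude is needed here, one could equally take sqrt(zr*zr + zi*zi) / sqrt(pr*pr + pi_*pi_) and skip the complex division; the division form is kept because it also yields the phase via atan2(hi, hr).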
http://www.earlevel.com/main/2016/12/01/evaluating-filter-frequency-response/
CC-MAIN-2017-30
en
refinedweb
I want to make this program that acts as a bank, how do I make sure the correct ID number must be entered with the correct pin and have it depending on the id you entered print hello then their name and prompt how much money they have in the bank. attempts = 0 store_id = [1057, 2736, 4659, 5691, 1234, 4321] store_name = ["Jeremy Clarkson", "Suzanne Perry", "Vicki Butler-Henderson", "Jason Plato"] store_balance = [172.16, 15.62, 23.91, 62.17, 131.90, 231.58] store_pin = [1057, 2736, 4659, 5691] start = int(input("Are you a member of the Northern Frock Bank?\n1. Yes\n2. No\n")) if start == 1: idguess = "" pinguess = "" while (idguess not in store_id) or (pinguess not in store_pin): idguess = int(input("ID Number: ")) pinguess = int(input("PIN Number: ")) if (idguess not in store_id) or (pinguess not in store_pin): print("Invalid Login") attempts = attempts + 1 if attempts == 3: print("This ATM has been blocked for too many failed attempts.") break elif start == 2: name = str(input("What is your full name?: ")) pin = str(input("Please choose a 4 digit pin number for your bank account: ")) digits = len(pin) balance = 100 while digits != 4: print("That Pin is Invalid") pin = str(input("Please choose a 4 digit pin number for your bank account: ")) digits = len(pin) store_name.append(name) store_pin.append(pin) I'm very impressed by how much you've elaborated on your program. Here's how I would view your solution. So to create a login simulation, I would instead use a dictionary. That way you can assign an ID to a PIN. For example: credentials = { "403703": "121", "3900": "333", "39022": "900" } Where your ID is on the left side of the colon and the PIN is on the right. You would also have to assign the ID to a name that belongs to that ID using, you guessed it, a dictionary! bankIDs = { "403703": "Anna", "3900": "Jacob", "39022": "Kendrick" } Now that you've done that, you can create your virtual login system using if/else control flow. I've made my code like this: attempts = 0 try: while attempts < 3: id_num = raw_input("Enter your ID: ") PIN = raw_input("Password: ") if (id_num in credentials) and (PIN == credentials[id_num]): print "login success." login(id_num) else: print "Login fail. try again." attempts += 1 if attempts == 3: print "You have reached the maximum amount of tries." except KeyboardInterrupt: print "Now closing. Goodbye!" Note the try and except block is really optional. You could use the break operator like you did in your code if you wanted to, instead. I just like to put a little customization in there (Remember to break out of your program is CTRL-C). Finally, Python has a way of making life easier for people by using functions. Notice I used one where I put login(id_num). Above this while loop you'll want to define your login so that you can display a greeting message for that particular person. Here's what I did: def login(loginid): print "Hello, %s!" % bankIDs[loginid] Simple use of string formatting. And there you have it. The same can be done with displaying that person's balance. Just make the dictionary for it, then print the code in your login definition. The rest of the code is good as it is. Just make sure you've indented properly your while-loop inside the elif on the bottom of your code, and your last 2 lines as well. Hope I helped. Cheers!
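Pulling the answer's pieces together, here is one way the whole flow could look in Python 3. This is a sketch only: the IDs, PINs, names, and balances below are invented for the example, and the balance lookup is folded into the login function as the answer suggests.

credentials = {"1057": "1111", "2736": "2222"}
names = {"1057": "Jeremy Clarkson", "2736": "Suzanne Perry"}
balances = {"1057": 172.16, "2736": 15.62}

def login(user_id):
    # Greet the customer and show how much money they have in the bank
    print("Hello, {}!".format(names[user_id]))
    print("Your balance is {:.2f}".format(balances[user_id]))

attempts = 0
while attempts < 3:
    user_id = input("Enter your ID: ")
    pin = input("PIN: ")
    if credentials.get(user_id) == pin:
        login(user_id)
        break
    attempts += 1
    print("Invalid login.")
else:
    print("This ATM has been blocked for too many failed attempts.")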
https://codedump.io/share/h6U8YTCkwjRU/1/bank-atm-program-login
CC-MAIN-2017-30
en
refinedweb
Deploying Bokeh Apps ¶ import numpy as np import holoviews as hv hv.extension('bokeh') Purpose ¶ HoloViews is an incredible convenient way of working interactively and exploratively within a notebook or commandline context, however when you have implemented a polished interactive dashboard or some other complex interactive visualization you often want to deploy it outside the notebook. ¶ ¶ The most convenient way to work with HoloViews is to iteratively improve a visualization in the notebook. Once you have developed a visualization or dashboard that you would like to deploy you can use the BokehRenderer to save the visualization or deploy it as a bokeh server app. Here we will create a small interactive plot, using Linked Streams , which mirrors the points selected using box- and lasso-select tools in a second plot and computes some statistics: %%opts Points [tools=['box_select', 'lasso_select']] #)(style=dict(color='red')) # Combine points and DynamicMap layout = points + hv.DynamicMap(selected_info, streams=[selection]) layout Working with the BokehRenderer ¶01808> Using the state attribute on the HoloViews plot we can access the bokeh Column model, which we can then work with directly. hvplot.state Column ( id = '5a8b7949-decd-4a96-b1f8-8f77ec90e5bf', …)' Deployment from a script with bokeh serve is one of the most common ways to deploy a bokeh app. Any .py or .ipynb file that attaches a plot to bokeh's curdoc can be deployed using bokeh serve . The easiest way to do this is using the BokehRenderer.server_doc method, which accepts any HoloViews object generates the appropriate bokeh models and then attaches them to curdoc . See below to see a full standalone script: import numpy as np import holoviews as hv import holoviews.plotting.bokeh renderer = hv.renderers('bokeh') points = hv.Points(np.random.randn(1000,2 ))(plot=dict(tools=['box_select', 'lasso_select'])) selection = hv.streams.Selection1D(source=points) def selected_info(index): arr = points.array()[index] if index: label = 'Mean x, y: %.3f, %.3f' % tuple(arr.mean(axis=0)) else: label = 'No selection' return points.clone(arr, label=label)(style=dict(color='red')) layout = points + hv.DynamicMap(selected_info, streams=[selection]) doc = renderer.server_doc(layout) doc.title = 'HoloViews App' In just a few steps, i.e. by our plot to a Document renderer.server_doc we have gone from an interactive plot which we can iteratively refine in the notebook to a deployable bokeh app. Note also that we can also deploy an app directly from a notebook. By adding BokehRenderer.server_doc(holoviews_object) to the end of the notebook any regular .ipynb file can be made into a valid bokeh app, which can be served with bokeh serve example.ipynb . In addition to starting a server from a script we can also start up a server interactively, so let's do a quick deep dive into bokeh Application and Server objects and how we can work with them from within HoloViews. A bokeh Application encapsulates a Document and allows it to be deployed on a bokeh server. The BokehRenderer.app method provides an easy way to create an Application and either display it immediately in a notebook or manually include it in a server app. To let us try this out we'll define a slightly simpler plot to deploy as a server app. We'll define a DynamicMap of a sine Curve varying by frequency, phase and an offset. 
def sine(frequency, phase, amplitude): xs = np.linspace(0, np.pi*4) return hv.Curve((xs, np.sin(frequency*xs+phase)*amplitude))(plot=dict(width=800)) ranges = dict(frequency=(1, 5), phase=(-np.pi, np.pi), amplitude=(-2, 2), y=(-2, 2)) dmap = hv.DynamicMap(sine, kdims=['frequency', 'phase', 'amplitude']).redim.range(**ranges) app = renderer.app(dmap) print(app) <bokeh.application.application.Application object at 0x11c0ab090> Once we have a bokeh Application we can manually create a Server instance to deploy it. To start a Server instance we simply define a mapping between the URL paths and apps that we want to deploy. Additionally we define a port (defining port=0 will use any open port), and the IOLoop . To get an IOLoop we simply use IOLoop.current() , which will already be running if working from within a notebook but will give you a new IOLoop outside a notebook context. from tornado.ioloop import IOLoop from bokeh.server.server import Server loop = IOLoop.current() server = Server({'/': app}, port=0, loop=loop) Next we can define a callback on the IOLoop that will open the server app in a new browser window and actually start the app (and if outside the notebook the IOLoop): def show_callback(): server.show('/') loop.add_callback(show_callback) server.start() # Outside the notebook ioloop needs to be started # loop.start() After running the cell above you should have noticed a new browser window popping up displaying our plot. Once you are done playing with it you can stop it with: server.stop() The BokehRenderer.app method allows us to the same thing automatically (but less flexibly) using the show=True and new_window=True arguments: server = renderer.app(dmap, show=True, new_window=True) We will once again stop this Server before continuing: server.stop() Instead of displaying our app in a new browser window and manually creating a Server instance we can also display an app inline in the notebook simply by supplying the show=True argument to the BokehRenderer.app method.: renderer.app(dmap, show=True, websocket_origin='localhost:8888') Periodic callbacks ¶)))(plot=dict(width=800)) dmap = hv.DynamicMap(sine, streams=[hv.streams.Counter()]) app = renderer.app(dmap, show=True, websocket_origin='localhost:8888') Once we have created the app we can start a periodic callback with the periodic method on the DynamicMap . The first argument to the method is the period and the second argument the number of executions to trigger (we can set this value to None to set up an indefinite callback). As soon as we start this callback you should see the Curve above become animated. dmap.periodic(0.1, 100) While HoloViews provides very convenient ways of creating an app it is not as fully featured as bokeh itself is. Therefore we often want to extend a HoloViews based app with bokeh plots and widgets created directly using the bokeh API. Using the BokehRenderer we can easily convert a HoloViews object into a bokeh model, which we can combine with other bokeh models as desired. To see what this looks like we will use the sine example again but this time connect a Stream to a manually created bokeh slider widget and play button. To display this in the notebook we will reuse what we learned about creating a Server instance using a FunctionHandler , you can of course run this in a script by calling the modify_doc function with with the Document returned by the bokeh curdoc() function. 
import numpy as np import holoviews as hv from bokeh.application.handlers import FunctionHandler from bokeh.application import Application from bokeh.io import show from bokeh.layouts import layout from bokeh.models import Slider, Button renderer = hv.renderer('bokeh').instance(mode='server') # Create the holoviews app again def sine(phase): xs = np.linspace(0, np.pi*4) return hv.Curve((xs, np.sin(xs+phase)))(plot=dict(width=800)) stream = hv.streams.Stream.define('Phase', phase=0.)() dmap = hv.DynamicMap(sine, streams=[stream]) #) def animate(): if button.label == '► Play': button.label = '❚❚ Pause' doc.add_periodic_callback(animate_update, 50) else: button.label = '► Play' doc.remove_periodic_callback(animate_update) button = Button(label='► Play', width=60) button.on_click(animate) # Combine the holoviews plot and widgets in a layout plot = layout([ [hvplot.state], [slider, button]], sizing_mode='fixed') doc.add_root(plot) return doc # To display in the notebook handler = FunctionHandler(modify_doc) app = Application(handler) show(app, notebook_url='localhost:8888') # To display in a script # doc = modify_doc(curdoc()) If you had trouble following the last example, you will noticed how much verbose things can get when we drop down to the bokeh API. The ability to customize the plot comes at the cost of additional complexity. However when we need it the additional flexibility of composing plots manually is there.
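If you want to deploy the combined HoloViews-plus-widgets layout above outside the notebook, the same modify_doc function can simply be attached to curdoc in a script. A minimal sketch, assuming the modify_doc definition from the example above lives in the same file (the app.py filename is just an example):

# app.py -- run with:  bokeh serve --show app.py
from bokeh.io import curdoc

doc = modify_doc(curdoc())
doc.title = 'HoloViews Bokeh App'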
http://holoviews.org/user_guide/Deploying_Bokeh_Apps.html
CC-MAIN-2017-30
en
refinedweb
AWS Partner Network (APN) Blog, I walk through best practices for architecting your AWS Marketplace SaaS Subscription across multiple AWS accounts. Let’s begin! Overview Calls to the SaaS Subscriptions APIs, ResolveCustomer and BatchMeterUsage, must be signed by credentials from your AWS Marketplace Seller account. This does not mean that your SaaS code needs to run in the AWS MP Seller account. The best practice is to host your production code in a separate AWS account, and use cross-account roles and sts:AssumeRole to obtain temporary credentials which can then be used to call the AWS MP Metering APIs. This post walks you through how this can be implemented. Accounts In our example, there are two AWS accounts: - AWS Marketplace Seller Account – this is the account your organization has registered as a seller in AWS Marketplace. API calls must be authenticated from credentials in this account. - AWS Accounts for Production Code – this is the AWS account where your SaaS service is hosted. Why Use Separate Accounts? Sellers should only use a single AWS Account as the AWS Marketplace account. This simplifies management and avoids any confusion for customers viewing an ISV’s products and services. Separating the Seller account from the product accounts means each SaaS service can have its own AWS account, which provides a good security and management boundary. When a seller has multiple products, multiple AWS accounts can be used to further separate environments across teams. Using different AWS Marketplace seller and production accounts In this scenario, there are 2 AWS accounts in play. The AWS account registered as an AWS Marketplace Seller (222222222222) and the AWS account where the production code resides (111111111111). The Seller Account is registered with AWS Marketplace and does have permissions to call the Metering APIs. The seller account contains an IAM Role, with the appropriate IAM Policy to allow access to the Metering API as well as the permission for the role to be assumed from the Production Account. The IAM Role in the Seller Account in our example is called productx-saas-role. This has the AWSMarketplaceMeteringFullAccess managed policy attached. The IAM Role has a trust relationship as shown below: { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::111111111111:root" }, "Action": "sts:AssumeRole", "Condition": { "StringEquals": { "sts:ExternalId": "someid" } } } ] } The SaaS application is hosted in the Production Account. This account is not authorized to call the Metering APIs. This account contains an IAM Role and Policy which is attached to the EC2 instances running the hosting application via an EC2 Instance Profile. This provides the instance with temporary credentials which can be used to sign requests to AWS API calls. These temporary credentials are used to call the sts:AssumeRole method, which returns temporary credentials from the seller account. These are used to call the Metering API. The permissions required to perform the sts:AssumeRole command are: { "Version": "2012-10-17", "Statement": { "Effect": "Allow", "Action": "sts:AssumeRole", "Resource": "arn:aws:iam::222222222222:role/productx-saas-role" } } In order for the application to make a call to the Metering API, it must first assume the role in the seller account. This is done by calling the sts:AssumeRole method. If successful, this call returns temporary credentials (secret/access keys). These credentials can then be used to call the Metering API. 
The following code snippet shows how you can call the assume_role function in python to obtain the temporary credentials from the seller account. import boto3 sts_client = boto3.client('sts') assumedRoleObject = sts_client.assume_role( RoleArn="arn:aws:iam::222222222222:role/productx-saas-role", RoleSessionName="AssumeRoleSession1", ExternalId="someid") credentials = assumedRoleObject['Credentials'] client = boto3.client('marketplace-metering','us-east-1', aws_access_key_id = credentials['AccessKeyId'], aws_secret_access_key=credentials['SecretAccessKey'], aws_session_token = credentials['SessionToken']) Summary Using a single AWS Account for AWS Marketplace avoids confusion and mistakes. Using cross-account roles allows you to avoid hosting production code in the AWS Account registered as a seller. For more information on SaaS Subscriptions, please visit the AWS Marketplace SaaS Subscriptions page.
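To complete the picture, the temporary-credential client created above would then be used to call the SaaS metering operations the post mentions (ResolveCustomer and BatchMeterUsage). The sketch below is illustrative only: the registration token, dimension name, and quantity are placeholders, and you should verify the exact boto3 service and method names against the current SDK documentation.

import datetime

# Resolve the registration token that AWS Marketplace sends to your SaaS landing page
resolved = client.resolve_customer(RegistrationToken='token-from-marketplace-redirect')
customer_id = resolved['CustomerIdentifier']
product_code = resolved['ProductCode']

# Report usage for that customer
response = client.batch_meter_usage(
    ProductCode=product_code,
    UsageRecords=[{
        'Timestamp': datetime.datetime.utcnow(),
        'CustomerIdentifier': customer_id,
        'Dimension': 'users',   # placeholder dimension name
        'Quantity': 42,
    }])
print(response['Results'])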
https://aws.amazon.com/it/blogs/apn/how-to-best-architect-your-aws-marketplace-saas-subscription-across-multiple-aws-accounts/
CC-MAIN-2017-51
en
refinedweb
New home! I am happy to announce that this project is becoming a part of something bigger and has a new home at.

Intro
This is a project to provide an Eclipse plugin for CoffeeScript, using Xtext. Development uses Xtext 2.1. It works as a regular Eclipse plugin (see Installation for details). Highlights include
- syntax highlighting
- variable autocompletion in the current namespace
- autoindent

Status
It's aimed at being mostly compatible with the original CoffeeScript project. There are some extra features and some missing, but you probably (and hopefully) won't notice the difference in everyday use.

Extra features
CoffeeScript is a dynamic language: the parser doesn't check the types or even the existence of variables, so such mismatches are detected at runtime. This plugin is based on Xtext, which is geared to static languages and provides some useful tools for dealing with these issues. Doing proper type inference for such a flexible language would be difficult, but there are some cases where more checking can be done than by the original CoffeeScript parser. Consider this snippet

foo = 1
bar = foo + increment

It is perfectly valid code, though the increment variable is undefined. This plugin will issue a warning about a reference to an unknown variable. It works within string interpolation too, so the next snippet will also give a warning

console.log "Incremented by #{ increment }"

Note that the console variable won't cause any warning; it's handled as a built-in variable.

Incompatibilities
The plugin handles most language constructs properly, including all examples in the coffeescript documentation folder. There are two cases it cannot handle, though: post-increment and assignment to a deeply nested property.

# Not working
drinkBeer(glasses++)
# Workaround
drinkBeer(glasses)
glasses += 1

# Not working
my.secret.money = 1000
# Workaround
tmp = my.secret
tmp.money = 1000

I guess they don't hurt that much... Getting the value of a deeply nested property is OK.

# This is working
borrow(my.secret.money)
borrowed = my.secret.money

Changelog
0.2.2 - Embed coffeescript in a DSL (see the example directory)
Planned next release - Integrated build: convert coffee code to javascript, and run it

Installation
You will need an Eclipse instance with Xtext plugins. You can either install a complete Indigo distribution with Xtext, or install the required plugins into your existing workspace. See download Xtext 2.1 for details. The update site is:

So in Eclipse, perform these steps
Help -> Install New Software...
Add..., then use this url as Location
Work with..., and choose the location you just added
Coffeescript editor
Next and Finish
You may be given a warning, but that won't affect the plugin.
https://bitbucket.org/adamschmideg/coffeescript-eclipse/src
CC-MAIN-2017-51
en
refinedweb
Patch against SWIG v2.0.2 This patch (against SWIG v2.0.2) is intended to address some small deficiencies in the Octave language module: 1) It's currently not possible to tell SWIG whether or not symbols should be loaded into the global namespace by default. In fact, it's not even possible to make the generated .oct module *not* load all symbols globally, due to a bug in Lib/octave/octruntime.swg (line 37: "noglobal" should be "global"). 2) It's not possible to change the name of the symbol used to access global variables/constants from the default "cvar". This patch adds 3 Octave-specific command-line options: * -global/-noglobal tell SWIG whether the generated .oct module should load symbols into the global namespace by default. The default option is -global to preserve existing behaviour. * -globals <name> sets the name of the symbol used to access global variables/constants. It is set to "cvar" by default. These options are parsed by OCTAVE::main() in Source/Modules/octave.cxx, which sets global variables global_load (bool) and global_name (String*). In OCTAVE::top() these variables are output to the generated wrapping code as #defines SWIG_global_load and SWIG_global_name. In Lib/octave/octruntime.swg, the %insert(initbeforefunc) block containing the Octave entry point function DEFUN_DLD now contains a expanded input argument parser, which uses the same command-line arguments (-global/-noglobal, -globals) as can be passed to SWIG itself. In this way, the user loading the .oct module can change any of the global options the module was originally compiled with. The parser checks for non-string and unrecognised arguments, and also checks that global_name is a valid Octave identifier. A -help option is also included, which generated a short usage message. Example usage using an example module: ------------------------------ octave:1> example -help usage: example [-global|-noglobal] [-globals <name>] octave:2> example(4) error: example: unrecognised non-string argument. usage: example [-global|-noglobal] [-globals <name>] octave:3> example 4 error: example: unrecognised argument '4'. usage: example [-global|-noglobal] [-globals <name>] octave:4> example octave:5> example example = { cvar somefunc (global method) } octave:6> somefunc Not implemented :) ans = 0 octave:7> cvar.somevar ans = 12345 ------------------------------ and ------------------------------ octave:1> example -noglobal -globals mycvar octave:2> somefunc() error: `somefunc' undefined near line 2 column 1 octave:2> example.somefunc() Not implemented :) ans = 0 octave:3> example.mycvar.somevar ans = 12345 ------------------------------ This patch does break the current Octave module behaviour as described in section 29.3.1 of the SWIG 2.0 documentation. However, since the "noglobal" option (which does not seem to be mentioned in the documentation) is currently broken anyway, hopefully this is not a major issue :) I would be very happy to discuss any issues with / improvements to this patch. Cheers, Karl Patch applied by Xavier for swig-2.0.4, thanks. Log in to post a comment.
https://sourceforge.net/p/swig/patches/262/
CC-MAIN-2017-51
en
refinedweb
Here is the code for extracting comments from a PDF file using iText. It extracts the comments of each page one by one and prints them on the console.

/******************************************************************************/
package pdftest;

import com.itextpdf.text.pdf.PdfArray;
import com.itextpdf.text.pdf.PdfDictionary;
import com.itextpdf.text.pdf.PdfName;
import com.itextpdf.text.pdf.PdfObject;
import com.itextpdf.text.pdf.PdfReader;
import com.itextpdf.text.pdf.PdfString;
import java.util.ListIterator;

public class CommentsTest {

    public static void main(String args[]) {
        try {
            System.out.println("Hi in the Comments Test");
            PdfReader reader = new PdfReader("c:\\temp\\testattachments.pdf");
            for (int i = 1; i <= reader.getNumberOfPages(); i++) {
                PdfDictionary page = reader.getPageN(i);
                // Skip pages that have no annotations
                if (page.getAsArray(PdfName.ANNOTS) == null)
                    continue;
                PdfArray annotsArray = page.getAsArray(PdfName.ANNOTS);
                for (ListIterator<PdfObject> iter = annotsArray.listIterator(); iter.hasNext();) {
                    PdfDictionary annot = (PdfDictionary) PdfReader.getPdfObject(iter.next());
                    PdfString content = (PdfString) PdfReader.getPdfObject(annot.get(PdfName.CONTENTS));
                    if (content != null) {
                        System.out.println(content);
                    }
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
/******************************************************************************/

Regards,
Vajahat Ali

Comments:
You are wonderful. Thank you.
http://vajahatalee.blogspot.com/2010/04/how-to-extract-comments-from-pdf-using.html
CC-MAIN-2017-51
en
refinedweb
xcb_ungrab_pointer man page xcb_ungrab_pointer — release the pointer Synopsis #include <xcb/xproto.h> Request function xcb_void_cookie_t xcb_ungrab_pointer(xcb_connection_t *conn, xcb_timestamp_t time); Request Arguments - conn The XCB connection to X11. - time Timestamp to avoid race conditions when running X over the network. The pointer will not be released if time is earlier than the last-pointer-grab time or later than the current X server time. Description Releases the pointer and any queued events if you actively grabbed the pointer before using xcb_grab_pointer, xcb_grab_button or within a normal button press. EnterNotify and LeaveNotify events are generated. Return Value Returns an xcb_void_cookie_t. Errors (if any) have to be handled in the event loop. If you want to handle errors directly with xcb_request_check instead, use xcb_ungrab_pointer_checked. See xcb-requests(3) for details. Errors This request does never generate any errors. See Also xcb-requests(3), xcb_enter_notify_event_t(3), xcb_grab_button(3), xcb_grab_pointer(3), xcb_leave_notify_event_t(3) Author Generated from xproto.xml. Contact xcb@lists.freedesktop.org for corrections and improvements. Referenced By The man page xcb_ungrab_pointer_checked(3) is an alias of xcb_ungrab_pointer(3).
https://www.mankier.com/3/xcb_ungrab_pointer
CC-MAIN-2017-51
en
refinedweb
If you have a map from something to large vectors, to insert into that map you would first construct a value_type. This would cause the large vector to be copied. When you call insert(), the large vector would be copied a second time. Here is a technique to get rid of one of the large vector copies:

#include <iostream>
#include <map>
#include <vector>

using namespace std;

namespace {
    typedef vector<unsigned> MyVectorType;
    typedef map<unsigned, MyVectorType, less<unsigned> > U2VU;
    U2VU g_myU2VU;  // map from unsigned keys to (potentially large) vectors
} // end unnamed namespace

int main()
{
    MyVectorType myVector;
    myVector.push_back(1);
    myVector.push_back(2);
    myVector.push_back(3);

    U2VU::value_type vt1(1, MyVectorType());
    U2VU::value_type vt2(2, MyVectorType());

    g_myU2VU.insert(vt1);

    pair<U2VU::iterator, bool> ret;
    ret = g_myU2VU.insert(vt2);
    if (ret.second == true)
    {
        // This means the key was not already there,
        // so the item was inserted.
        std::cout << "Inserting vector into map via iterator" << std::endl;
        ret.first->second = myVector;
    }

    std::cout << "U2VU has " << g_myU2VU.size()
              << " items in it." << std::endl;

    return 0;
}

The trick was to insert a value_type holding an empty vector, and then use the returned iterator to set the mapped value to the large vector, thus saving one copy of the large vector.
http://cpptrivia.blogspot.com/2010/11/trick-to-reducing-copying-of-large.html
CC-MAIN-2017-51
en
refinedweb
Create a Moles TestLet's see some code for testing. Create a new C# Class project. Paste the following code in the Class1.cs file. This is the class and method we want to unit test. The method copies all files from SourceDirectory, to both DestinationDirectory and BackupDirectory. The method checks that each directory exists, and then creates missing directories. Class1 should require no explanation. using System.IO; namespace InstanceMoleDemo { public class Class1 { public void CopyDirectoryWithBackup(DirectoryInfo source, DirectoryInfo destination, DirectoryInfo backup) { if (!source.Exists) source.Create(); if (!destination.Exists) destination.Create(); if (!backup.Exists) backup.Create(); // TODO - Copy directory files from source to destination and // backup directories... } } } Now, we need to create a test project. Follow these steps: - Add a new test project to the solution - Build the solution - In the test project, add a reference to the C# Class project (InstanceMoleDemo, in the code above) - Right-click the test project's reference node -- the Context menu appears - In the context menu, select Add Moles assembly for mscorlib - Build the solution -- you'll see more references added for Moles support using System.IO; using System.IO.Moles; using InstanceMoleDemo; using Microsoft.Moles.Framework; using Microsoft.VisualStudio.TestTools.UnitTesting; [assembly: MoledType(typeof(System.IO.DirectoryInfo))] namespace TestProject1 { [TestClass] public class UnitTest1 { [TestMethod] [HostType("Moles")] public void CopyFileWithBackupTestsBackupDirectory() { // Arrange. var result = false; var c = new Class1(); MDirectoryInfo.AllInstances.ExistsGet = dirInfo => true; MDirectoryInfo.AllInstances.Create = dirInfo => { }; var source = new DirectoryInfo("C:\\Temp\\SourceDirectory"); var destination = new DirectoryInfo("C:\\Temp\\DectinationDirectory"); var backup = new DirectoryInfo("C:\\Temp\\BackupDirectory"); var moledBackup = new MDirectoryInfo(backup); moledBackup.ExistsGet = () => { result = true; return false; }; // Act. c.CopyDirectoryWithBackup(source, destination, backup); // Assert. Assert.IsTrue(result); } } } This test checks to be sure the method correctly detects the backup directory is missing. To do so, the test must: - Must not access the file system -- the file system is a dependency that must be isolated - Indicate the source and destination directories exist - Indicate the backup directory does not exist - Prevent any creation of directories on the file system Examining the TestThe unit test should be ready to run, and fully functional. Please be aware that the test must be executed using the Visual Studio testing tools, as it is not geared for other test harnesses. (See my Moles requires tests to be IN instrumented process post.) Let's step through one piece at a time. using System.IO;The test is using the System.IO.DirectoryInfo type. This is here for accessibility. using System.IO.Moles;We want to mole the System.IO.DirectoryInfo type, to detour calls to the Exists and Create methods. The Moles Framework creates the Moles sub-namespace, which we must use. using InstanceMoleDemo;Of course, we must reference the InstanceMoleDemo assembly, to test it. using Microsoft.Moles.Framework;This assembly contains the Moles framework, and is required whenever using Moles. using Microsoft.VisualStudio.TestTools.UnitTesting;Required for unit testing. 
[assembly: MoledType(typeof(System.IO.DirectoryInfo))] This assembly attribute streamlines Moles by identifying specific moled types to use, rather than attempting to load all moled types. The mscorlib assembly is quite extensive, and Moles requires substantial overhead. This attribute indicates we are interested only in moling the System.IO.DirectoryInfo type. [HostType("Moles")]The HostType attribute indicates the test method is dependent on an external host. In this case, the Moles framework is the host, as it will inject detours into the mscorlib assembly, upon compilation. You may have also seen the HostType attribute used with ASP.NET testing. MDirectoryInfo.AllInstances.ExistsGet = dirInfo => true;There are several things happening on this line. The cumulative effect is that all calls to DirectoryInfo.Exists will always return a true value. This line of code literally tells moles to inject a call to a Func that points to an anonymous method, created through a lambda expression. (This should be nothing new to users of lambda expressions.) - MDirectoryInfo is the moled DirectoryInfo type - AllInstances indicates we want to alter all instances of the type - ExistsGet is the accessor for the Exists property's Get method - = means we are delegating the call to Exists(get) to the specified method -- in this case, an anonymous method via lambda expressions - dirInfo is the lambda expression input object - => is the lambda "goes to" expression - true is the value to be returned by the anonymous method MDirectoryInfo.AllInstances.Create = dirInfo => { };This line is very muck like the previous. We are detouring the call to DirectoryInfo.Create to an anonymous method. This method happens to be empty -- we don't want anything to happen when Create is called. var moledBackup = new MDirectoryInfo(backup);This line of code is the key to this post. We have successfully moled all instances of DirectoryInfo to return always return a true value when Exists is called. However, in the case of the backup instance, we want to return false. Here, we are instantiating a mole object, MDirectoryInfo, from the existing backup DirectoryInfo object. This is possible, because the mole types inherit the original type. Therefore, passing the backup object as an input value to the MDirectoryInfo constructor returns a mole instance of the backup object. Once we have a mole instance of the backup object, we can alter the single, mole instance. moledBackup.ExistsGet = () => { result = true; return false; };We are once again creating a detour for the DirectoryInfo.Exists property. However, there are a couple of differences. - Because we are dealing with a specific instance, the Exists property does not accept any input parameter. To pass no input parameters into a lambda expression anonymous method, use (). - We want to execute multiple lines of code in the anonymous method. Therefore, curly braces are required. - We set result = true, to indicate the test is successful. The result value is the test sentinel value. SummaryUpon executing the test method, we have asked the Moles framework to do the following tasks: - Always return a true value whenever DirectoryInfo.Exists is called, in any instance of the object - Never do anything when the DirectoryInfo.Create() method is called. - When the backup instance of DirectoryInfo.Exists is called, always return false. If your intention is to test interaction with a dependency (file system, database, network, etc.), you're actually writing an integration test. 
Integration tests should be separate from unit tests, and executed only after all unit tests are successful. Also, you don't simply write an integration test to see whether a database server returns correct information -- you have to first test everything in between (network connectivity, appropriate permission and access to connect to the database server, connection to the database server, accessing the database, etc.). Why can't we mole FileSystemWatcher or WebClient? I have also had the need to detour FileSystemWatcher myself, but have not yet encountered WebClient. I suspect the reason these types are not moled is that they interact with operating system handles that are considered unsafe. Detouring portions of the type could negatively impact or compromise the operating system, not to mention present a possible security issue. While poking around FileSystemWatcher.StartRaisingEvents, I noted that it calls ThreadPool.BindHandle, which contains the following line, which looks a little ominous: return ThreadPool.BindIOCompletionCallbackNative(osHandle.DangerousGetHandle()); SOLUTION: Use constructor dependency injection to circumvent this problem, as outlined in my post, How to Stub Dependency Event Handlers in Integration Tests
http://thecurlybrace.blogspot.com/2011/05/how-to-mole-specific-type-instance.html
CC-MAIN-2017-51
en
refinedweb
allways get this warning, when i try to generate OTF of a font i am working on: [WARNING] [internal] Feature block seen before any language system statement. You should place languagesystem statements before any feature definition [/Users/andrejw/Library/Application Support/FontLab/Studio 5/Features/fontlab.fea 20] does anybody know what i am supposed to do? Scroll down on this thread at the FontLab forum for instructions. sorry, am i missing something here? i don´t think there is a link for download below this posting by adam twardoch:... does someone have this file? Ah, I think you need to register on the Fontlab Forum (it's free) in order to see attachments. yes, i figured it out. thanks! … still waiting for the sys-admin aproval to my registration on the forum. i am seroiusly dissapointed by fontlab-support. still waiting for the aproval of fontlab-forum. does anybody check their email-account @fontlab? sorry guys (adam), but you shouldnt post new updates in your forum when you dont allow people to read or register there. i am very dissapointed now. You probably have a feature in the lower-right box of the OpenType panel. Don’t do that. In fact, don’t use Fontlab’s OpenType panel. Set up all of your code in a text editor and use an include statement to import the code from a file into Fontlab. More likely he has no languagesystem statements in his code at all. I'm a bit reluctant to spell this out further since this issue has now come up in several threads despite the fact that the crucial difference between the old and new AFDKO is described in detail at the FLS Beta download page in the section labelled "Please read the posts below carefully for important information about new features, fixed bugs and known issues", and I don't think Adam or anyone else should be required to continually answer the same question, especially when they had already answered it before it was asked. The script Adam provides is just a convenience -- it's not needed to fix the problem. André sorry, but if i buy a piece of software i want a proper support. i am not going to read or include/exclude scripts in my library sub-folders. i just want this piece of software to run woithout problems. i have spent 400€ to get this tool for my own needs and i want something back. if not — no probs, next time i choose another app. there seem to be some new and more intuitive and most of all, "functionable" tools on the market. have a nice eve. oh and by the way, i still havent got "aproval" for the registered account on the fontlab-forum. In terms of 'just working without problems' you are likely to run into the same problem in any font tool that made the transition from AFDKO code 1.0 to the more recent version, because Adobe made necessary changes in that code to make new functionality work. This means that older feature code needs to be fixed up to work with the new AFDKO, and this will be true in other tools, not just FontLab. This doesn't excuse the FontLab support folk for apparently failing to process your forum approval, or otherwise providing you with some kind of response. But making fonts is a fairly complicated technical process, and being willing to read release notes and understand changes in software is a pretty standard expectation in professional manufacturing tools. This doesn't excuse the FontLab support folk for apparently failing to process your forum approval. 
Perhaps it's the asking on a Friday afternoon and then the complaining on saturday and sunday there was noone in the office to answer that does it. @ theunis: any excuses for the monday? :( Registration on the FL Forum is indeed a problem. We usually use support at fontlab dot com to get quick support. Or you can write directly to apetrov at fontlab dot com to get the file and solve the issue. FontLab's staff is, AFAIK, scattered across several continents, but it is a statutory holiday in both Canada and the US which might have some effect on them. sorry, but if i buy a piece of software i want a proper support. i am not going to read or include/exclude scripts in my library sub-folders. Bear in mind that you're talking about a beta version here, not a final product. Even if you're someone who doesn't normally read release notes, I'd think it wise to read them when dealing with a beta. André @johnych: thanks a lot! BTW, They fixed this with the new release. If you have not already, download it from FontLab. On a different note: With "A little Help from My Friends" Erik, Adam, and Frederik, I have finally succeeded in getting Python, RoboFab, and FontLab 5.1 running merrily hand-in-hand with OSX Lion 107.2 and executing Python scripts! Did I say a LITTLE help? That should read "Great Gobs of Gargantuan Help!"...... Instructions from Adam Twardoch: "1. Quit FontLab Studio. 2. Download: 2. In Finder, navigate to the location where you downloaded the .zip file, double-click to unzip it (if it already hadn't happened). You should see a folder named "RoboFab226M_plusDependencies". 3. Open the Terminal app 4. In Terminal, type "cd " (with a trailing space) but don't hit Enter. Drag the RoboFab226M_plusDependencies folder to the Terminal window. You should see something like this in the Terminal window: cd /users/[your username]/Downloads/RoboFab226M_plusDependencies 5. In Terminal, hit Enter. 6. In Terminal, do the following: (Hit Enter after each line. After the first "sudo" you'll need to enter your administrator password.) cd FontTools sudo /usr/bin/python setup.py install cd .. cd RoboFab sudo /usr/bin/python setup.py install cd .. cd DialogKit sudo mv Lib/dialogKit/ /Library/Python/2.7/site-packages/ 7. In Finder, do File / New Finder Window 8. If you have enabled "Use custom locations for user data folders" in FontLab Studio / Preferences / General Options / Folders and paths, navigate Finder to folder that you've set as the "FontLab Studio user data files" folder, and skip to step 11. 9. If you have not customized the location, then in Finder's menu, choose Go / Go to Folder... 10. In the text box that appears, enter: ~/Library/Application Support/FontLab/Studio 5/Macros and click on Go. 11. Your Finder should now should show a Macro folder with at least three subfolders in it (Effects, Export and System). 12. Switch to the Finder window that has the RoboFab226M_plusDependencies folder, and open the subfolder RoboFab/Scripts folder that is inside. 13. You should see a folder with several subfolders: "Contributed", "RoboFabIntro", "RoboFabUFO", "RoboFabUFO2", "RoboFabUtils". 14. Select those folders and drag them to the Macro folder in the other Finder window. 15. Open FontLab Studio and in the Edit Macro panel type in: import robofab import dialogKit If you don't see anything in the Output panel, you have installed RoboFab and its FontLab macros and DialogKit successfully. Best, Adam" The download link doesn't work! Anyone have a replacement link? 
This is the error it spits out: Not Found The requested URL /download/current/RoboFab226M_plusDependencies.zip was not found on this server. Additionally, a 404 Not Found error was encountered while trying to use an ErrorDocument to handle the request. @characters: the 226 download has moved to: Probably best to use the most current version of Robofab, and dependencies from here: If you want to store something in your logs, make it the address that always has the links to the latest: Merry Christmas!
http://www.typophile.com/node/86114
CC-MAIN-2017-51
en
refinedweb
Given the integers 1 <= a <= b, print all the Fibonacci numbers between a and b inclusive:

def fibo_print(a, b):
    i, j = 0, 1
    while j < a:
        i, j = j, i+j
    while j <= b:
        print j
        i, j = j, i+j
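A quick usage check (Python 2, matching the snippet above):

fibo_print(1, 100)
# prints 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89 (one per line)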
https://discuss.leetcode.com/topic/42558/given-the-integers-1-a-b-print-all-the-fibonacci-numbers-between-a-and-b-inclusive
CC-MAIN-2017-51
en
refinedweb
hey!I'm new to GUI programming and I'm using PySide (as 3ds Max and Maya use it). I'm having a little difficulty trying to get my head around connecting slots and signals. I can connect up single signals no problem but with more complicated signals, I start to struggle a little bit and the documentation (Signals and Slots in PySide) is a little confusing. I have separated the GUI from the core code and I'm trying to keep it that way. Here is the GUI code in question (I have made it as simple as possible): import sys from PySide import QtGui import otherlocalModule class GUIWindow(QtGui.QWidget): def __init__(self): super(GUIWindow, self).__init__() self.setWindowTitle("Example Window") self.setGeometry(300, 250, 270, 200) self.example_button = QtGui.QPushButton("Export Selection") self.freeze_checkbox = QtGui.QCheckBox("Freeze Transform") self.del_history_checkbox = QtGui.QCheckBox("Delete History") self.example_button.clicked.connect(self.export_selection) self.freeze_checkbox.isChecked.connect(self.export_selection) self.del_history_checkbox.isChecked.connect(self.export_selection) layout = QtGui.QVBoxLayout() layout.addWidget(self.example_button) layout.addWidget(self.freeze_checkbox) layout.addWidget(self.del_history_checkbox) self.setLayout(layout) self.show() def export_selection(self): otherlocalModule.export_selection(self.freeze_checkbox.isChecked(), self.del_history_checkbox.isChecked()) if __name__ == '__main__': try: app = QtGui.QApplication(sys.argv) app_window = GUIWindow() app.exec_() sys.exit(0) except SystemExit: pass # otherLocalModule # half-pseudo code def export_selection(freeze_transform, delete_history): for mesh in get_current_selection(): if freeze_transform: FreezeTransforms(mesh) if delete_history: DeleteHistory(mesh) exportSelection(mesh) I'm trying to allow the user to customise their export process like Freezing Transforms or Deleting Histories and this connecting of signals/slots is currently what has stumped me so far. Thanks!Andrew Hi, I believe your code will work once you remove the following two lines... self.freeze_checkbox.isChecked.connect(self.export_selection) self.del_history_checkbox.isChecked.connect(self.export_selection) This is because you dont want to export when those values are changed, you only want to export when the button is pushed. Equally, you have already (correctly) hooked up the button, and within the export_selection method you're querying the relevent ui options. Hi Mike Thanks for the reponse! It does connect,but I now have a strange TypeError issue where if I add the directory argument to otherLocalModule and its corresponding function call in my GUI. Not sure if this is related as I have reloaded both modules, made sure .pyc files have been cleaned so i'm not sure what is going on here. If you want to pass arguments to a function you are connecting with .connect() you need to use functools.partial or lambda. You want to bind the function call, not get the functions return values. do this:button.clicked.connect(functools.partial(my_func, argument)) not this:button.clicked.connect(my_func(argument)) Ah this is handy to know - thanks I've adjusted my code to include this
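As a follow-up to the functools.partial / lambda point at the end of the thread, here is a minimal sketch meant to sit inside the __init__ of the GUIWindow class above. The export_with_path handler and the path argument are made up for illustration; the checked=False default simply absorbs the boolean that QPushButton.clicked can emit.

from functools import partial

def export_with_path(path, checked=False):
    # hypothetical handler that needs an extra argument
    print("exporting to", path)

# connect() needs a callable, so bind the argument instead of calling the function:
self.example_button.clicked.connect(partial(export_with_path, "C:/exports"))
# a lambda does the same job:
self.example_button.clicked.connect(lambda: export_with_path("C:/exports"))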
http://tech-artists.org/t/pyside-connecting-multiple-signals-slots-help/9433
CC-MAIN-2017-51
en
refinedweb
I want to integrate flowjs or ng-flow into my Angular 2 application. I've installed the flowjs typings using npm install --save-dev @types/flowjs, but when I try import { Flow } from 'flowjs'; I get the error: '/node_modules/@types/flowjs/index.d.ts' is not a module. I got this error before when creating my own modules; it was caused by there being no exported class or module inside the index.d.ts file. I looked at the repository and my guess is that the error is caused because there's nothing exported inside the file. But, as with other definition files, you don't need to import the definitions: they are global definitions, so you just need to add the file to your compiler life cycle using the triple-slash "///ref" reference directive. I'm not used to this new TypeScript definition files approach (@types), but there's a post explaining very well how to add it to your compiler life cycle:
https://codedump.io/share/K8fvzE28k54j/1/using-flowjs-in-angular-2-typescript
CC-MAIN-2017-51
en
refinedweb
using VS 2005 and I don't know if the following method is valid for VS 2003. Try calling the Invalidate method of the control: dataGrid.Invalidate(); This method forces the control to redraw itself; But your suggestions don't work. I still have to click the grid to see the colored regions. So is there a way to simulate a click event on a grid? Thanks San Want to avoid the missteps to gaining all the benefits of the cloud? Learn more about the different assessment options from our Cloud Advisory team. so it should be Namespace: System.Windows.Forms protected virtual void OnClick ( EventArgs e ) ex.1 public class SingleClickTextBox: TextBox { protected override void OnClick(EventArgs e) { this.SelectAll(); base.OnClick(e); } } ex.2; } Thanks San Is there a way to programmatically click the grid without physically using the mouse to do it. That is, I want the click event to occur from my program. Help as I don't understand your solutions San using System.Runtime.InteropServ Then you can use the following code: [DllImport("user32.dll")] public static extern int SendMessage(IntPtr hWnd, int wMsg, int wParam, int lParam); public const int WM_LBUTTONDOWN = 513; public const int WM_LBUTTONUP = 514; public void EmulateLMClick(int x, int y) { SendMessage(this.Handle, WM_LBUTTONDOWN, 1, ((y << 16) | x)); SendMessage(this.Handle, WM_LBUTTONUP, 0, ((y << 16) | x)); } Do you have the same code in VB.NET? I will try it and get back to you Public Const WM_LBUTTONDOWN As Integer = 513 Public Const WM_LBUTTONUP As Integer = 514 Public Sub EmulateLMClick(ByVal x As Integer, ByVal y As Integer) SendMessage(Me.Handle, WM_LBUTTONDOWN, 1, ((y << 16) Or x)) SendMessage(Me.Handle, WM_LBUTTONUP, 0, ((y << 16) Or x)) End Sub I still have to physically click on the grid for the row to be painted in red color. I debugged using the same x and y co-ordinates that I would click using the mouse and it doesn't work. Somehow the paint event of the DataGridTextBoxColumn only gets fired when I physically click on the grid or tab into it. Otherwise, it doesn't work. Thanks San No, it still doesn't work. Even when I use the mouse to resize, the paint method doesn't get fired. Now, when I click, it works. San The rows do get painted red, just not the first one. I have to click into the grid for the first row to get painted. Strange, really strange. Thanks San
https://www.experts-exchange.com/questions/22753584/How-to-simulate-click-event-on-datagrid.html
CC-MAIN-2017-51
en
refinedweb
gdbbridge.py in native gdb fails I'm trying to use the qtcreator extensions in native gdb. According to this page I should be able to just add the appropriate commands to my .gdbinit and I should have the "pp" command available. I do get that command but it always fails. For example, which a break at line 128 in the below it claims that "app" is undefined but it clearly is. I get the same error message for any variable I try to print. Does anyone know what's wrong? (gdb) list 123 QApplication::setAttribute(Qt::AA_X11InitThreads); 124 #ifdef Q_OS_MAC 125 QApplication::setAttribute(Qt::AA_UseHighDpiPixmaps); 126 #endif 127 QApplication app(argc, argv); 128 setGlobalFont(app); // BREAK IS HERE 129 130 QFile styleSheetFile(":/theme/rdsTheme.qs"); 131 if (styleSheetFile.open(QFile::ReadOnly)) 132 { (gdb) pp app Python Exception <class 'NameError'> name 'app' is not defined: Error occurred in Python command: name 'app' is not defined For reference, here's my .gdbinit: python import sys sys.path.insert(0, '/home/me/Software/gdb_printers/python') sys.path.insert(1, '/usr/share/qtcreator/debugger/') from libstdcxx.v6.printers import register_libstdcxx_printers register_libstdcxx_printers (None) from gdbbridge import * end
https://forum.qt.io/topic/81057/gdbbridge-py-in-native-gdb-fails
CC-MAIN-2017-51
en
refinedweb
Tell me what you think code-wise (not much of a designer, as you might tell!)

Feedback on Drum Machine

ThamiMemel #2
Very nice job, I really loved the design, it's so smooth and lovely. I'm not that far yet in the curriculum so I can't judge the code, but come on, it's working.

You really don't need to put the button letters, audio names, or audio urls in state. I was able to create a btns array with all the info needed and eliminate a lot of repetitive code and text (the urls) with the following:

class App extends React.Component {
  state = {
    display: ''
  }

  stealTextName = text => this.setState({display: text})

  render() {
    const btns = [
      ['Q', 'Heater-1'], ['W', 'Heater-2'], ['E', 'Heater-3'],
      ['A', 'Heater-4_1'], ['S', 'Heater-6'], ['D', 'Kick_n_Hat'],
      ['Z', 'punchy_kick_1'], ['X', 'side_stick_1'], ['C', 'Brk_Snr']
    ];

    const convertName = name => name
      .replace(/_/g, '-')
      .replace(/\w+/g, ([first, ...rest]) => first.toUpperCase() + rest.join(''));

    const {display} = this.state,
          {stealTextName} = this

    return (
      <div className='container'>
        <div id='drum-machine'>
          <div id='display'>
            <span>{display}</span>
          </div>
          <div id='controls'>
            {btns.map(([letter, audioName]) => (
              <Drumpad
                key={letter}
                id={convertName(audioName)}
                text={letter}
                audio={`${audioName}.mp3`}
                stealTextName={stealTextName}
              />)
            )}
          </div>
        </div>
      </div>
    )
  }
}

renmanimel #4
I like how simple and clean the app looks, few lines of code, great work
https://www.freecodecamp.org/forum/t/feedback-on-drum-machine/216647
CC-MAIN-2018-47
en
refinedweb
public class S3AccessControlList. An ACL is a list of grants. A grant consists of one grantee and one permission. ACLs only grant permissions; they do not deny them. For convenience, some commonly used Access Control Lists are defined in S3CannedACL. Note: Bucket and object ACLs are completely independent; an object does not inherit the ACL from its bucket. We highly recommend that you do not grant the anonymous group write access to your buckets, as you will have no control over the objects others can store and their associated charges. For more information, see Grantees and Permissions
https://docs.aws.amazon.com/sdkfornet1/latest/apidocs/html/T_Amazon_S3_Model_S3AccessControlList.htm
CC-MAIN-2018-47
en
refinedweb
GIS Library - Comma string functions.

#include <string.h>
#include <grass/gis.h>

(C) 2001-2008 by the GRASS Development Team. This program is free software under the GNU General Public License (>=v2). Read the file COPYING that comes with GRASS for details. Definition in file commas.c.

Inserts commas into a number string. Examples: Note: Does not work with negative numbers. Definition at line 38 of file commas.c. Referenced by E_edit_cellhd().
https://grass.osgeo.org/programming6/commas_8c.html
CC-MAIN-2018-47
en
refinedweb
Python bindings for GNU Lightning JIT Project description Lyn provides Python bindings. The source code is on GitHub: Releases are uploaded to PyPI: “Lyn” is the Norwegian word for “lightning”. Warning This project is in early alpha! Many instructions have not been implemented yet, and tests are lacking for those that have This means that you shouldn’t be surprised to segfault the entire Python process (you will have to get used to that anyway, unless you happen to always write bug-free Lightning code). But, you can use it right now to JIT-compile native machine code, straight from Python. To get a taste of Lyn and GNU Lightning, scroll down to the examples below. Installation >From PyPi: $ pip install lyn >From the bleeding edge: $ git clone $ cd lyn $ python setup.py test $ python setup.py install Non-Python Dependencies You must install the following libraries using your favourite package manager: - The GNU Lightning shared library v2.1.0 (later versions may also work), - Optional: The Capstone Disassembler, The last time I compiled GNU Lightning on Linux, I had to disable the disassembly options because of linker problems with libopcodes.so. This worked for me: $ ./configure --enable-shared --disable-static --disable-disassembler To use Capstone as a disassembler with Lyn, you have to install the Python modules and the C library. The module can be installed with pip install capstone. Example: Multiply two numbers In this example, we use with-blocks so that the GNU Lightning environment (along with the mul function) is reclaimed: from lyn import Lightning, word_t, Register with Lightning() as lib: with lib.state() as jit: jit.prolog() jit.getarg(Register.r0, jit.arg()) jit.getarg(Register.r1, jit.arg()) jit.mulr(Register.r0, Register.r0, Register.r1) jit.retr(Register.r0) jit.epilog() mul = jit.emit_function(word_t, [word_t, word_t]) for a in xrange(-100, 100): for b in xrange(-100, 100): assert(mul(a,b) == a*b) To use the mul function elsewhere in your program, you need to keep a reference to the state jit and the GNU Lightning environment lib. Both objects have release() methods for doing it manually: lib = Lightning() jit = lib.state() # ... jit.release() lib.release() The last two parts are order dependant, in that lib.release() must run after its associated states. If you don’t release them, it’s not a big deal, but you’ll waste memory. In such a case, OS will free up the memory at exit. Example: Calling a C function This example shows how to call C functions from GNU Lightning. In the example below, we create a function that takes a string argument and returns the result of passing it to strlen: import lyn from lyn import Register, Lightning lightning = Lightning() libc = lightning.load("c") jit = lightning.state() jit.prolog() # Get the Python argument jit.getarg(Register.r0, jit.arg()) # Call strlen with it jit.pushargr(Register.r0) jit.finishi(libc.strlen) # Return strlen's return value jit.retval(Register.r0) jit.retr(Register.r0) jit.epilog() strlen = jit.emit_function(lyn.word_t, [lyn.char_p]) self.assertEqual(strlen(""), 0) self.assertEqual(strlen("h"), 1) self.assertEqual(strlen("he"), 2) self.assertEqual(strlen("hello"), 5) lightning.release() Notice that we tell emit_function to create a function that returns a lyn.word_t. This is a datatype whose size equals the computer’s pointer width, or sizeof(void*). lyn.word_t will then be either ctypes.c_int64 or ctypes.c_int32. 
The parameter type lyn.char_p is a subclass of ctypes.c_char_p that automatically converts strings to bytes objects. This is provided as a compatibility convenience for Python 2 and 3 users. Use this type instead of ctypes.c_char_p. Example: Disassembling native code with Capstone If you install Capstone, you can use it as a disassembler for the generated functions. At some point, I’ll integrate Capstone into Lyn: from lyn import Lightning, Register, word_t import capstone import ctypes lib = Lightning() jit = lib.state() # A function that returns one more than its integer input start = jit.note() jit.prolog() arg = jit.arg() jit.getarg(Register.r0, arg) jit.addi(Register.r0, Register.r0, 1) jit.retr(Register.r0) jit.epilog() end = jit.note() # Bind function to Python: returns a word (native integer), takes a word. incr = jit.emit_function(word_t, [word_t]) # Sanity check assert(incr(1234) == 1235) # This part should be obvious to C programmers: We need to read data from raw # memory in to a Python iterable. length = (jit.address(end) - jit.address(start)).value codebuf = ctypes.create_string_buffer(length) ctypes.memmove(codebuf, ctypes.c_char_p(incr.address.value), length) print("Compiled %d bytes starting at 0x%x" % (length, incr.address)) def hexbytes(b): return "".join(map(lambda x: hex(x)[2:] + " ", b)) # Capstone is smart enough to stop at the first RET-like instruction. md = capstone.Cs(capstone.CS_ARCH_X86, capstone.CS_MODE_64) md.syntax = capstone.CS_OPT_SYNTAX_ATT # Change to Intel syntax if you want for i in md.disasm(codebuf, incr.address.value): print("0x%x %-15s%s %s" % (i.address, hexbytes(i.bytes), i.mnemonic, i.op_str)) raw = "".join(map(lambda x: "\\x%02x" % x, map(ord, codebuf))) print("\nRaw bytes: %s" % raw) jit.release() lib.release() On my computer, this outputs: Compiled 34 bytes starting at 0x105ed3000 0x105ed3000 48 83 ec 30 subq $0x30, %rsp 0x105ed3004 48 89 2c 24 movq %rbp, (%rsp) 0x105ed3008 48 89 e5 movq %rsp, %rbp 0x105ed300b 48 83 ec 18 subq $0x18, %rsp 0x105ed300f 48 89 f8 movq %rdi, %rax 0x105ed3012 48 83 c0 1 addq $1, %rax 0x105ed3016 48 89 ec movq %rbp, %rsp 0x105ed3019 48 8b 2c 24 movq (%rsp), %rbp 0x105ed301d 48 83 c4 30 addq $0x30, %rsp 0x105ed3021 c3 retq Raw bytes: \x48\x83\xec\x30\x48\x89\x2c\x24 \x48\x89\xe5\x48\x83\xec\x18\x48 \x89\xf8\x48\x83\xc0\x01\x48\x89 \xec\x48\x8b\x2c\x24\x48\x83\xc4 \x30\xc3 Capstone has a lot of neat features. I happen to favour AT&T assembly syntax, but you can easily change that in the above code. But if you set md.detail = True, you’ll be able to see implicit registers and a lot of other cool stuff. Project details Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/lyn/
CC-MAIN-2018-47
en
refinedweb
I’ve been attempting to add a simple rjs ‘delete’ method to my project. However, no matter what I try, it simply doesn’t work. Even the easy, easy stuff seems broken. (Yes, <%= javascript_include_tag :defaults %> is being included in the template.) For example, from: VIEW: RJS Template Test - Dog - Cat - Mouse <%= link_to_remote(“Add a fox”, :url =>{ :action => :add }) %> CONTROLLER: def add end RJS: page.insert_html :bottom, ‘list’, content_tag(“li”, “Fox”) page.visual_effect :highlight, ‘list’, :duration => 3 page.replace_html ‘header’, ‘RJS Template Test Complete!’ results in nothing. Any insights? - josh
https://www.ruby-forum.com/t/rjs-not-working/51080
CC-MAIN-2018-47
en
refinedweb
I've got a problem with the following script: import arcpy, os infile = r"C:\temp4\ALK_Merge.shp" infileL = "infile_Layer" count = int(arcpy.GetCount_management(infile).getOutput(0)) x = 99 xC = count/x xC1 = xC + 2 for i in range(1, xC1): print i sPath = infile.split(".shp") print sPath xRange = i * 99 xRange1 = (i*99)-99 print xRange print xRange1 outFeature = sPath + "_" + i + ".shp" arcpy.MakeFeatureLayer_management (infile, infileL) arcpy.SelectLayerByAttribute_management (infileL, "NEW_SELECTION", "\"FID\" <= " + xRange + "AND" + "\"FID\" >= "+ xRange1 + "") arcpy.CopyFeatures_management(infileL, outFeature) The problem is in line 25: outFeature = sPath + "_" + i + ".shp" TypeError: can only concatenate list (not "str") to list Any ideas how to solve this? The method you are using to create the chunks to get exactly 99 features will only work in case there were no deletes in the featureclass. Take the following example: this will yield: The OID are not continuous, so the chunk function provides a more stable way of creating the appropriate chucks of 99 elements.
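For the TypeError itself, one way past it is to take the first element of the list that split() returns and convert the integers to strings before concatenating — a rough sketch, reusing the variable names from the script above:

sPath = infile.split(".shp")[0]   # split() returns a list; keep the base path as a string
outFeature = "{0}_{1}.shp".format(sPath, i)

where_clause = '"FID" <= {0} AND "FID" >= {1}'.format(xRange, xRange1)
arcpy.SelectLayerByAttribute_management(infileL, "NEW_SELECTION", where_clause)
arcpy.CopyFeatures_management(infileL, outFeature)

(os.path.splitext(infile)[0] would do the same job as the split and is a little more robust.)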
https://community.esri.com/thread/204826-problem-with-python-script-can-only-concatenate-list-not-str-to-list
CC-MAIN-2018-47
en
refinedweb
This page was generated from examples/ale_classification.ipynb. Accumulated Local Effects for classifying flowers¶ In this example we will explain the behaviour of classification models on the Iris dataset. It is recommended to first read the ALE regression example to familiarize yourself with how to interpret ALE plots in a simpler setting. Interpreting ALE plots for classification problems become more complex due to a few reasons: Instead of one ALE line for each feature we now have one for each class to explain the feature effects towards predicting each class. There are two ways to choose the prediction function to explain: Class probability predictions (e.g. clf.predict_probain sklearn) Margin or logit predictions (e.g. clf.decision_functionin sklearn) We will see the implications of explaining each of these prediction functions. [1]: %matplotlib inline import numpy as np import matplotlib.pyplot as plt from sklearn.datasets import load_iris from sklearn.linear_model import LogisticRegression from sklearn.metrics import accuracy_score from sklearn.model_selection import train_test_split from alibi.explainers.ale import ALE, plot_ale Load and prepare the dataset¶ [2]: data = load_iris() feature_names = data.feature_names target_names = data.target_names X = data.data y = data.target print(feature_names) print(target_names) ['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)', 'petal width (cm)'] ['setosa' 'versicolor' 'virginica'] Shuffle the data and define the train and test set: [3]: X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42) Fit and evaluate a logistic regression model¶ [4]: lr = LogisticRegression(max_iter=200) [5]: lr.fit(X_train, y_train) [5]:) [6]: accuracy_score(y_test, lr.predict(X_test)) [6]: 1.0 Calculate Accumulated Local Effects¶ There are several options for explaining the classifier predictions using ALE. We define two prediction functions, one in the unnormalized logit space and the other in probability space, and look at how the resulting ALE plot interpretation changes. [7]: logit_fun_lr = lr.decision_function proba_fun_lr = lr.predict_proba [8]: logit_ale_lr = ALE(logit_fun_lr, feature_names=feature_names, target_names=target_names) proba_ale_lr = ALE(proba_fun_lr, feature_names=feature_names, target_names=target_names) [9]: logit_exp_lr = logit_ale_lr.explain(X_train) proba_exp_lr = proba_ale_lr.explain(X_train) ALE in logit space¶ We first look at the ALE plots for explaining the feature effects towards the unnormalized logit scores: [10]: plot_ale(logit_exp_lr, n_cols=2, fig_kw={'figwidth': 8, 'figheight': 5}, sharey=None); We see that the feature effects are linear for each class and each feature. This is exactly what we expect because the logistic regression is a linear model in the logit space. Furthermore, the units of the ALE plots here are in logits, which means that the feature effect at some feature value will be a positive or negative contribution to the logit of each class with respect to the mean feature effect. Let’s look at the interpretation of the feature effects for “petal length” in more detail: [11]: plot_ale(logit_exp_lr, features=[2]); The ALE lines cross the 0 mark at ~3.8cm which means that for instances of petal length around ~3.8cm the feature effect on the prediction is the same as the average feature effect. 
On the other hand, going towards the extreme values of the feature, the model assigns a large positive/negative penalty towards classifying instances as “setosa” and vice versa for “virginica”. We can go into a bit more detail about the “mean response” at the petal length around ~3.8cm. First, we calculate the mean response (in logit space) of the model on the training set: [12]: mean_logits = logit_fun_lr(X_train).mean(axis=0) mean_logits [12]: array([-0.64214307, 2.26719121, -1.62504814]) Next, we find instances for which the feature “petal length” is close to ~3.8cm and look at the predictions for these: [13]: lower_index = np.where(logit_exp_lr.feature_values[2] < 3.8)[0][-1] upper_index = np.where(logit_exp_lr.feature_values[2] > 3.8)[0][0] subset = X_train[(X_train[:, 2] > logit_exp_lr.feature_values[2][lower_index]) & (X_train[:, 2] < logit_exp_lr.feature_values[2][upper_index])] print(subset.shape) (8, 4) [14]: subset_logits = logit_fun_lr(subset).mean(axis=0) subset_logits [14]: array([-1.33625605, 2.32669999, -0.99044394]) Now if we subtract the logits of the instances for which petal length is ~3.8 from the mean logits, because \(\text{ALE}(3.8)\approx 0\) for petal length, any difference from zero must be due to the combined effect of all other features (except petal length): [15]: mean_logits - subset_logits [15]: array([ 0.69411298, -0.05950878, -0.6346042 ]) For example, the remaining 3 features combined must be responsible for a positive effect of around ~0.69 on predicting instances with petal length ~3.8cm to be of class setosa. This is true only because the model is linear, so, by calculating the ALE of a feature, we account for all of the effects of that feature. For non-linear models, however, there might be higher order interaction effects of the feature in question with other features responsible for the difference from the mean effects. We can gain even more insight into the ALE plot by looking at the class histograms for the feature petal length: [16]: fig, ax = plt.subplots() for target in range(3): ax.hist(X_train[y_train==target][:,2], label=target_names[target]); ax.set_xlabel(feature_names[2]) ax.legend(); Here we see that the three classes are very well separated by this feature. This confirms that the ALE plot is behaving as expected—the feature effects of small value of “petal length” are that of increasing the the logit values for the class “setosa” and decreasing for the other two classes. Also note that the range of the ALE values for this feature is particularly high compared to other features which can be interpreted as the model attributing more importance to this feature as it separates the classes well on its own. ALE in probability space¶ We now turn to interprting the ALE plots for explaining the feature effects on the probabilities of each class. [17]: plot_ale(proba_exp_lr, n_cols=2, fig_kw={'figwidth': 8, 'figheight': 5}); As expected, the ALE plots are no longer linear which reflects the non-linear nature due to the softmax transformation applied to the logits. Note that, in this case, the ALE are in the units of relative probability mass, i.e. given a feature value how much more (less) probability does the model assign to each class relative to the mean prediction. This also means that any increase in relative probability of one class must result into a decrease in probability of another class. 
In fact, the ALE curves summed across classes result in 0 as a direct consequence of conservation of probability: [18]: for feature in range(4): print(proba_exp_lr.ale_values[feature].sum()) -5.551115123125783e-17 1.734723475976807e-17 -6.661338147750939e-16 4.440892098500626e-16 By transforming the ALE plots into probability space we can gain additional insight into the model behaviour. For example, the ALE curve for the feature petal width and class setosa is virtually flat. This means that the model does not use this feature to assign higher or lower probability to class setosa with respect to the average prediction. This is not readily seen in logit space as the ALE curve has negative slope which would lead us to the opposite conclusion. The interpretation here is that even though the ALE curve in the logit space shows a negative effect with feature value, the effect in the logit space is not significant enought to translate into a tangible effect in the probability space. Finally, the feature sepal width does not offer significant information to the model to prefer any class over the other (with respect to the mean effect of sepal_width that is). If we plot the marginal distribution of sepal_width it explains why that is—the overlap in the class conditional histograms of this feature show that it does not increase the model discriminative power: [19]: fig, ax = plt.subplots() for target in range(3): ax.hist(X_train[y_train==target][:,1], label=target_names[target]); ax.set_xlabel(feature_names[1]) ax.legend(); ALE for gradient boosting¶ Finally, we look at the resulting ALE plots for a highly non-linear model—a gradient boosted classifier. [20]: from sklearn.ensemble import GradientBoostingClassifier [21]: gb = GradientBoostingClassifier() gb.fit(X_train, y_train) [21]: GradientBoostingClassifier(ccp_alpha=0.0,, n_iter_no_change=None, presort='deprecated', random_state=None, subsample=1.0, tol=0.0001, validation_fraction=0.1, verbose=0, warm_start=False) [22]: accuracy_score(y_test, gb.predict(X_test)) [22]: 1.0 As before, we explain the feature contributions in both logit and probability space. [23]: logit_fun_gb = gb.decision_function proba_fun_gb = gb.predict_proba [24]: logit_ale_gb = ALE(logit_fun_gb, feature_names=feature_names, target_names=target_names) proba_ale_gb = ALE(proba_fun_gb, feature_names=feature_names, target_names=target_names) [25]: logit_exp_gb = logit_ale_gb.explain(X_train) proba_exp_gb = proba_ale_gb.explain(X_train) ALE in logit space¶ [26]: plot_ale(logit_exp_gb, n_cols=2, fig_kw={'figwidth': 8, 'figheight': 5}); The ALE curves are no longer linear as the model used is non-linear. Furthermore, we’ve plotted the ALE curves of different features on the same scale on the \(y\)-axis which suggests that the features petaln length and petal width are more discriminative for the task. Checking the feature importances of the classifier confirms this: [27]: gb.feature_importances_ [27]: array([0.00220678, 0.01643393, 0.53119079, 0.4501685 ]) ALE in probability space¶ [28]: plot_ale(proba_exp_gb, n_cols=2, fig_kw={'figwidth': 8, 'figheight': 5}); Because of the non-linearity of the gradient boosted model the ALE curves in probability space are very similar to the curves in the logit space just on a different scale. Comparing ALE between models¶ We have seen that for both logistic regression and gradient boosting models the features petal length and petal width have a high feature effect on the classifier predictions. 
We can explore this in more detail by comparing the ALE curves for both models. In the following we plot the ALE curves of the two features for predicting the class setosa in probability space: [29]: fig, ax = plt.subplots(1, 2, figsize=(8, 4), sharey='row'); plot_ale(proba_exp_lr, features=[2, 3], targets=['setosa'], ax=ax, line_kw={'label': 'LR'}); plot_ale(proba_exp_gb, features=[2, 3], targets=['setosa'], ax=ax, line_kw={'label': 'GB'}); From this plot we can draw a couple of conclusions: Both models have similar feature effects of petal length—a high positive effect for predicting setosafor small feature values and a high negative effect for large values (over >3cm). While the logistic regression model does not benefit from the petal widthfeature to discriminate the setosaclass, the gradient boosted model does exploit this feature to discern between different classes.
https://docs.seldon.io/projects/alibi/en/latest/examples/ale_classification.html
CC-MAIN-2020-40
en
refinedweb
NLP Learning Series: Part 1 - Text Preprocessing Methods for Deep Learning Recently, - Find Insincere questions on Quora. A current ongoing competition on kaggle - Find fake reviews on websites - Will. $$\frac{\boldsymbol{x}^\top \boldsymbol{y}}{|\boldsymbol{x}| |\boldsymbol{y}|} \in [-1, 1].$$. # Some preprocesssing that will be common to all the text classification methods you will see. puncts = [',', '.', '"', ':', ')', '(', '-', '!', '?', '|', ';', "'", '$', '&', '/', '[', ']', '>', '%', '=', '#', '*', '+', '\\', '•', '~', '@', '£', '·', '_', '{', '}', '©', '^', '®', '`', '<', '→', '°', '€', '™', '›', '♥', '←', '×', '§', '″', '′', 'Â', '█', '½', 'à', '…', '“', '★', '”', '–', '●', 'â', '►', '−', '¢', '²', '¬', '░', '¶', '↑', '±', '¿', '▾', '═', '¦', '║', '―', '¥', '▓', '—', '‹', '─', '▒', ':', '¼', '⊕', '▼', '▪', '†', '■', '’', '▀', '¨', '▄', '♫', '☆', 'é', '¯', '♦', '¤', '▲', 'è', '¸', '¾', 'Ã', '⋅', '‘', '∞', '∙', ')', '↓', '、', '│', '(', '»', ',', '♪', '╩', '╚', '³', '・', '╦', '╣', '╔', '╗', '▬', '❤', 'ï', 'Ø', '¹', '≤', '‡', '√', ] def clean_text(x): x = str(x) for punct in puncts: if punct in x: x = x.replace(punct, f' {punct} ') return x This could also have been done with the help of a simple regex. But I normally like the above way of doing things as it helps to understand the sort of characters we are removing from our data. def clean_text(x): pattern = r'[^a-zA-z0-9\s]' text = re.sub(pattern, '', x) return x. # This comes from CPMP script in the Quora questions similarity challenge. import re from collections import Counter import gensim import heapq from operator import itemgetter from multiprocessing import Pool model = gensim.models.KeyedVectors.load_word2vec_format('../input/embeddings/GoogleNews-vectors-negative300/GoogleNews-vectors-negative300.bin', binary=True) words = model.index2word w_rank = {} for i,word in enumerate(words): w_rank[word] = i WORDS = w_rank def words(text): return re.findall(r'\w+', text.lower()) def P(word): "Probability of `word`." # use inverse of rank as proxy # returns 0 if the word isn't in the dictionary return - WORDS.get(word, 0)1(word): "All edits that are one edit away from `word`." letters = 'abcdefghijklmnopqrstuvwxyz' splits = [(word[:i], word[i:]) for i in range(len(word) + 1)] deletes = [L + R[1:] for L, R in splits if R] transposes = [L + R[1] + R[0] + R[2:] for L, R in splits if len(R)>1] replaces = [L + c + R[1:] for L, R in splits if R for c in letters] inserts = [L + c + R for L, R in splits for c in letters] return set(deletes + transposes + replaces + inserts) def edits2(word): "All edits that are two edits away from `word`." return (e2 for e1 in edits1(word) for e2 in edits1(e1)) def build_vocab(texts): sentences = texts.apply(lambda x: x.split()).values vocab = {} for sentence in sentences: for word in sentence: try: vocab[word] += 1 except KeyError: vocab[word] = 1 return vocab vocab = build_vocab(train.question_text) top_90k_words = dict(heapq.nlargest(90000, vocab.items(), key=itemgetter(1))) pool = Pool(4) corrected_words = pool.map(correction,list(top_90k_words.keys())) for word,corrected_word in zip(top_90k_words,corrected_words): if word!=corrected_word: print(word,":",corrected_word) Once we are through with finding misspelled data, the next thing remains to replace them using a misspell mapping and regex functions. 
mispell_dict = {'colour': 'color', 'centre': 'center', 'favourite': 'favorite', 'travelling': 'traveling', 'counselling': 'counseling', 'theatre': 'theater', 'cancelled': 'canceled', 'labour': 'labor', 'organisation': 'organization', 'wwii': 'world war 2', 'citicise': 'criticize', 'youtu ': 'youtube ', 'Qoura': 'Quora', 'sallary': 'salary', 'Whta': 'What', 'narcisist': 'narcissist', 'howdo': 'how do', 'whatare': 'what are', 'howcan': 'how can', 'howmuch': 'how much', 'howmany': 'how many', 'whydo': 'why do', 'doI': 'do I', 'theBest': 'the best', 'howdoes': 'how does', 'mastrubation': 'masturbation', 'mastrubate': 'masturbate', "mastrubating": 'masturbating', 'pennis': 'penis', 'Etherium': 'Ethereum', 'narcissit': 'narcissist', 'bigdata': 'big data', '2k17': '2017', '2k18': '2018', 'qouta': 'quota', 'exboyfriend': 'ex boyfriend', 'airhostess': 'air hostess', "whst": 'what', 'watsapp': 'whatsapp', 'demonitisation': 'demonetization', 'demonitization': 'demonetization', 'demonetisation': 'demonetization'} def _get_mispell(mispell_dict): mispell_re = re.compile('(%s)' % '|'.join(mispell_dict.keys())) return mispell_dict, mispell_re mispellings, mispellings_re = _get_mispell(mispell_dict) def replace_typical_misspell(text): def replace(match): return mispellings[match.group(0)] return mispellings_re.sub(replace, text) # Usage replace_typical_misspell("Whta is demonit. contraction_dict = {, there are other preprocessing techniques of text like Stemming, Lemmatization and Stopword Removal. Since these techniques are not used along with Deep Learning NLP models, we won’t talk about them. Representation: Sequence Creation One. #Signature: Tokenizer(num_words=None, filters='!"#$%&()*+,-./:;<=>?@[\\]^_`{|}~\t\n', lower=True, split=' ', char_level=False, oov_token=None, document_count=0, **kwargs) sequence(each training example) will be of the same length(same number of words/tokens). We can control this using the maxlen parameter. For example: train_X = pad_sequences(train_X, maxlen=maxlen) test_X = pad_sequences(test_X, maxlen=maxlen). glove_embedding_index = load_glove_index() Be sure to put the path of the folder where you download these GLoVE vectors. What does this glove_embedding_index contain? It is just which just contains required embeddings. >>IMAGE: count_found-=1 print("Got embedding for ",count_found," words.") return embedding_matrix.: if word.islower(): # try to get the embedding of word in titlecase if lowercase is not present embedding_vector = embeddings_index.get(word.capitalize()) if embedding_vector is not None: embedding_matrix[i] = embedding_vector else: count_found-=1 else: count_found-=1 print("Got embedding for ",count_found," words.") return embedding_matrix. from textblob import TextBlob word_sent = TextBlob("good").sentiment print(word_sent.polarity,word_sent.subjectivity) # 0.7 0.6 We can get the polarity and subjectivity of any word using TextBlob. Pretty neat. 
So let us try to add this extra information to our embedd+4)) count_found = nb_words for word, i in tqdm(word_index.items()): if i >= max_features: continue embedding_vector = embeddings_index.get(word) word_sent = TextBlob(word).sentiment # Extra information we are passing to our embeddings extra_embed = [word_sent.polarity,word_sent.subjectivity] if embedding_vector is not None: embedding_matrix[i] = np.append(embedding_vector,extra_embed) else: if word.islower(): embedding_vector = embeddings_index.get(word.capitalize()) if embedding_vector is not None: embedding_matrix[i] = np.append(embedding_vector,extra_embed) else: embedding_matrix[i,300:] = extra_embed count_found-=1 else: embedding_matrix[i,300:] = extra_embed count_found-=1 print("Got embedding for ",count_found," words.") return embedding_matrix def add_features(df): df['question_text'] = df['question_text'].progress_apply(lambda x:str(x)) df["lower_question_text"] = df["question_text"].apply(lambda x: x.lower()) df['total_length'] = df['question_text'].progress_apply(len) df['capitals'] = df['question_text'].progress_apply(lambda comment: sum(1 for c in comment if c.isupper())) df['caps_vs_length'] = df.progress_apply(lambda row: float(row['capitals'])/float(row['total_length']), axis=1) df['num_words'] = df.question_text.str.count('\S+') df['num_unique_words'] = df['question_text'].progress_apply(lambda comment: len(set(w for w in comment.split()))) df['words_vs_unique'] = df['num_unique_words'] / df['num_words'] return df. You can start for free with the 7-day Free Trial. If you think we.
https://mlwhiz.com/blog/2019/01/17/deeplearning_nlp_preprocess/
CC-MAIN-2020-40
en
refinedweb
DevDisasters What's proper etiquette for handling code snafus when working for -- literally -- a mom-and-pop company? Facing the fact that the money from his graduate research grant -- and only source of income -- was dangerously close to running out, Rumi found himself in a difficult position: How the heck would he ever be able to finish his degree and not starve in the process? There were many possibilities. He could sell blood plasma, post an ad offering a redundant vital organ on Craigslist, or perhaps get an old RV and use it to produce and sell illicit substances. After great consideration, Rumi went with the riskiest endeavor of all: jumping headfirst into a greenfield ASP.NET project at the main headquarters of Baker's Family Restaurants Inc. as a part-time Web developer. As a bonus, he was to be paired with another developer who was quite qualified in his own right – more than 10 years of C# experience and extensive knowledge of the business processes that they'd be developing code to support. Just kidding! He was actually a junior developer named Fabian -- a recent college grad with a coincidental genetic connection to one of the restaurant chain's corporate managers. On the bright side, Fabian did have his own copy of "A Newbie's Guide to Developing Applications in C#," so he wasn't completely unprepared. Mercifully, the deadline for the first project was generous and Rumi, keeping positive (and perhaps a little bit desperate), was determined to make the best of the situation. Starting from Scratch Initially, Rumi mentored Fabian in the basics of using Visual Studio. He explained the importance of proper source control, good patterns and practices to follow, and later escalating to how to write LINQ queries and beyond. Under expert tutelage, Fabian's skills grew, but that didn't change the fact that the deadline was still closing in. As the crunch time loomed, the two worked increasingly independent from each other, postponing development lessons to save on time. As such, instead of regularly reviewing Fabian's check-ins, Rumi went about his work -- using the code and trusting it to work, relying on unit tests to catch any unwanted behavior, and so on. This system actually worked very well. When Fabian decorated the Edit method in Rumi's controller with the attribute [HandleError(ExceptionType = typeof(EmptyObjectReceivedException)] he was pleased to see Fabian spent time thinking about proper error handling instead of just writing code in the "nothing wrong can ever happen" style common in so many junior devs. But something about the name of the exception felt wrong. Upon asking Fabian why he wrote a custom exception class instead of using something obvious, such as System.ArgumentNullException, he answered, "I guess we could use the existing class, but then the mail would not be sent, so I think we should keep my class." Mail?! OK, something is definitely strange here, Rumi thought. He thanked Fabian for his explanation, but knew more digging was in order. A Bad Ingredient Rumi opened up Team Foundation Server and quickly found EmptyObjectReceivedException.cs -- thank goodness for good source control practices! 
Rumi opened the file and was greeted with this code (minus the namespace and import for anonymization purposes): public class EmptyObjectReceivedException : Exception { public string BusinessObjectType { get; private set; } public string BusinessObjectMethod { get; private set; } /// <summary> /// This constructor creates an exception with a generic message /// </summary> /// <param name="businessObjectType">A user-understandable name for the type of business object which could not be retrieved</param> /// <param name="businessObjectId">The method which received the empty parameter</param> public EmptyObjectReceivedException(string businessObjectType, string businessObjectMethod) : base(message: String.Format("An error occured when trying to retrieve a {0} in the method {1}", businessObjectType, businessObjectMethod)) { this.BusinessObjectType = businessObjectType; this.BusinessObjectMethod = businessObjectMethod; SendMail(); } /// <summary> /// This constructor lets the developer use a specific error message /// </summary> /// <param name="businessObjectType">A user-understandable name for the type of business object which could not be retrieved</param> /// <param name="businessObjectId">The ID of the record which could not be retrieved</param> /// <param name="specificMessage"> Used to initialize the Message property of the base Exception class </param> public EmptyObjectReceivedException(string businessObjectType, string businessObjectMethod, string specificMessage) : base(message: specificMessage) { this.BusinessObjectType = businessObjectType; this.BusinessObjectMethod = businessObjectMethod; SendMail(); } /// <summary> /// Send a eMail with the exception to all administrators /// </summary> public void SendMail() { using (MailMessage EMail = new MailMessage()) { OurUserRoleManagement OurUserRoleManagement = new OurUserRoleManagement(new DataContext(ConfigurationManager.ConnectionStrings["OurDB"].ConnectionString)); foreach (OurUser admin in OurUserRoleManagement.GetAllAdmins()) { } EMail.Body = "The program " + WebConfigurationManager.AppSettings["Programname"] + " throws an EmptyObjectReceivedException.\r\n The exception occured in the method " + BusinessObjectMethod + " with the object type " + BusinessObjectType + " .\r\n Here the exact exception text: " + GetBaseException().Message + " \r\n and here the stack trace: " + GetBaseException().StackTrace + ""; using (SmtpClient mailhost = new SmtpClient()) { mailhost.Host = "mailhost"; mailhost.Send(EMail); } } } After reading the final line, Rumi calmly minimized the window and pulled up his calendar -- it was time for another "how-to" session with Fabian. Check, Please! "Hey Fabian, you've been doing some great code as of late -- but I checked into that method I mentioned to you earlier and, well, I still have a few questions." Rumi shuffled through the highlighted printout of the source file, briefly considering where to begin. Should he bring up the way the exception handles itself? How about the way the code throws an exception, opens a database connection (an SMTP connection), and reads the web.config file? There's also the fact that the method, which threw the exception, is saved as a string in the exception object; code that throws the exception has to supply this information. But these were all very technical, and most would likely call for a separate training session to fully explain why each point was so wrong -- time that they didn't have. 
In the end, Rumi picked what he felt to be his biggest, and most obvious, "beef" with the code. "So, every time an exception is thrown ... an e-mail is sent out and there's no way to turn this behavior off. Are you sure this is the best approach?" Rumi felt a little guilty and worried about hurting Fabian's feelings with his honest assessment, but oddly enough, Fabian beamed. "Yeah! Just like you told me in our second training session: reduce duplicated code wherever and whenever possible. All exceptions flow through that code right there. Pretty cool, huh?" Rumi frowned and puffed air out his nostrils. "All right, I'll give you credit there, but who will be receiving these e-mails? You can't assume the other admins and devs in the company want to be flooded with e-mail every time an exception is hit." "Really, it's no big deal, it's just going to one person, the project sponsor." "Wha? Come again? Have you cleared that yet? I mean, that'll potentially result in a lot of e-mails " "Oh, I know, but like I said, it's cool," Fabian interjected, "I set up a message filter in her Outlook and, besides, if something goes really bad, it's not like she'll come down hard, I mean she's my mom, after all." OOHH! Now the pieces had fallen into place. Rumi had met the project owner -- a department head who had never written a line of code in her life -- on only a few occasions, but never put two and two together. Now, more than ever, called for professional tact. "Well, let's not give her that chance to get upset. Let's take a minute to talk about the benefits of writing to a log file "
https://visualstudiomagazine.com/articles/2014/05/01/adventures-in-nepotism.aspx
CC-MAIN-2020-40
en
refinedweb
First time here? Check out the FAQ!). One thing that is really strange to me, while performing the wget/iperf tests I am also doing tcpdump on the qr-* (qrouter) namespace. In the qrouter namespace of the network node I see the "seq" packets and on the qrouter of the compute I see the "ack" packets. mtu is already at 1450 (lowered it to 1400 but no effect). iperf however does much better than wget, against the same server: VM - 120 Mbits/sec - 150 Mbits/sec; Host - 250+ Mbits/sec. It's a confirmed bug:... Yes, I have the same behavior when I use tcpdump. But the problem is that TCP requests don't work. its a single node deployment. l3 agent is in dvr_snat mode Hello, I have a liberty with dvr deployment. I have two machines connected in the same VXLAN network. VM1 is 10.0.0.2 and VM2 10.0.0.3. VM1 has a floating ip assigned (let's say 2.2.2.2). The problem is: if VM2 has no floating ip assigned (so traffic goes through SNAT namespace), I cannot access 2.2.2.2 (but any other requests i.e curl google.com works). If VM2 has floating IP assigned, I can access 2.2.2.2 Scenarios: The MTU of my machines is 1450. Run sudo swift-init restart main sudo swift-init restart main For development I also recommend to use this (...) The local_ip are set ok ? Can you ping controller on local_ip from compute node ? Did you try to write something to /images as a normal user ? Maybe glance doesn't have permissions to /images. is the glance service running ? can't you see anything in the glance logs ? what exactly did you modify in the config files after adding the eth2 ? It's already 1454 on the guest OS. However, the Host OS (controller node running snat router) has MTU 1500 on the router gateway nic. The VM is able to open the socket on port 53 and send packets, but the returning packets are not coming, i think. I have 8.8.8.8 in VM /etc/resolv.conf and I can ping 8.8.8.8. I have google .com in the dns cache and I can ping google .com, but if I try to ping google .de for example, it doesn't work because my machine can't access 8.8.8.8 on port 53 do query the dns server. I can ping anything. The problem comes when I try to do TCP traffic through different ports such as 443 or 80. Hello, I am running a setup with Neutron DVR, having 1 controller node (with l3 agent in dvr_snat mode) and other compute nodes with l3 agent in dvr mode. The external traffic (SNAT) made by VMs without a floating IP is routed through the controller node (dvr_snat router). The external traffic (DNAT/SNAT) made by VMs with a floating IP is routed through the compute node (dvr router). So let's say I create a VM with a private only IP. - wget - doesn't work; the request stays on hold- wget - works, but I can see a delay - apt-get update also doesn't work for all the repositories After I associate a floating IP all the external requests works smoothly. Before associating the floating IP, I went to the SNAT namespace in the controller node and tried these wget commands. All worked smoothly, so my IP is not banned. There might be a connection problem between the compute nodes and the controller node. Can you help me with some instructions how to debug this? Thanks. Reagarding 4.: I am using neutron DVR, meaning that I don't have a network node. My Floating IPs and inter-vm traffic is distributed across the compute nodes, therefore I want to run the lbaas-agent on each compute node. Is neutron able to distribute the load balancers uniform across the nodes? I have a Liberty deployment using DVR scenario. 
So I have 1 controller (which includes l3 agent in dvr_snat mode) and multiple compute nodes (which includes l3 agent in dvr mode). I want to add LBaaS service to this deployment, but I am a bit confused about how it will integrate. Would it work if I run the LBaaS agent on each compute node ? (compute nodes handles DNAT and floating ips) What about Octavia ? Is it stable enough ? Can you recommend some installation instructions for octavia ? Thank you. I have the following setup:. OpenStack is a trademark of OpenStack Foundation. This site is powered by Askbot. (GPLv3 or later; source). Content on this site is licensed under a CC-BY 3.0 license.
https://ask.openstack.org/en/users/15421/mariusleu/?sort=recent
CC-MAIN-2020-40
en
refinedweb
Spring Boot TransactionOperations.withoutTransaction() I am trying to setup my own spring starter for my own security configuration which seems to work fine when running the application, but when i try to create mock mvc tests it won't load up my WebSecurityConfigurerAdapter when the following conditional is specified: @ConditionalOnMissingBean(WebSecurityConfigurerAdapter.class). I tried this: @SpringBootTest @AutoConfigureMockMvc public class WebSecurityTests { @Autowired private MockMvc mockMvc; @MockBean JwtDecoder jwtDecoder; @SpringBootApplication(exclude = {SecurityAutoConfiguration.class, UserDetailsServiceAutoConfiguration.class}) public static class TestConfig { @RestController public class MockController { @RequestMapping(path = "/", method = {RequestMethod.HEAD, RequestMethod.GET, RequestMethod.POST, RequestMethod.PUT, RequestMethod.PATCH}) public String access(HttpServletRequest request) { return request.getMethod(); } } } ... With this in an auto configuration definition: @Configuration(proxyBeanMethods = false) @ConditionalOnClass(WebSecurityConfigurerAdapter.class) @ConditionalOnWebApplication(type = Type.SERVLET) public class WebSecurityConfiguration { .... @EnableWebSecurity @Configuration(proxyBeanMethods = false) @ConditionalOnMissingBean(WebSecurityConfigurerAdapter.class) @Order(SecurityProperties.BASIC_AUTH_ORDER) static class DefaultConfigurerAdapter extends WebSecurityConfigurerAdapter { ... With @ConditionalOnMissingBean(WebSecurityConfigurerAdapter.class) it seems like no access control takes place, when I remove @ConditionalOnMissingBean(WebSecurityConfigurerAdapter.class) all my tests pass, but this will prevent me from being able to override the WebSecurityConfigurerAdapter if needed in my other projects. Any thoughts on how to resolve this? Hey, It seems like the WebSocketMessageBrokerConfigurer doesn't play nice with spring.main.lazy-initialization=true in Spring Boot 2.2. The clients get stuck on the CONNECT-phase, and does not retrieve a CONNECTED message. I have added @Lazy(false) in our WebSocketMessageBrokerConfigurer bean, but it didn't do any difference. Does anyone have any pointers? AbstractUserclass that is extended by User1and User2. Inside the AbstractUseri have a boolean canCommentthat is set to true by default and overrided by User2to false. Then in my endpoint i retrieve from JPA the AbstractUserand check if he canComment, then proceed. The AbstractUser is declared as an entity with @Inheritanceon Single Table and @DiscriminatorColumn. It doesn't feel to be the right way in my eyes so i'm here asking you for this. I have a service, wich already has a server and so on. But I would like to have continuous build environment, so that changes hot reloaded and I can debug and test out some things. (that java app is quite big and has no tests, some things I don't even understand yet, and I have to fix some bugs) I wonder if I can just start the java app with spring boot and the rest stays as it is. I don't know if this is a bug. I have two configuration files defined in the resources directory. 
application.properties: role.id=456 application.yml user: id: 123 @Bean public CommandLineRunner runner(Environment environment) { return args -> { Object userId = environment.getProperty("user.id", Object.class); Integer user_id = (Integer) userId; // ClassCastException: java.lang.String cannot be cast to java.lang.Integer Object roleId = environment.getProperty("role.id", Object.class); Integer role_id = (Integer) roleId; }; } Does this mean that properties and yaml are inconsistent? environment.getProperty("role.id", Integer.class);then Spring will convert the Stringto and Integer boolean @GetMapping("/admin-login") public void adminLogin(HttpServletResponse response) { try { String url = AUTH_URL + "?" + "response_type=code" + "&" + "client_id=" + CLIENT_ID + "&" + "redirect_uri=" + REDIRECT_URI + "&" + "scope=" + SCOPE + "&" + "state=" + UUID_; response.sendRedirect(url); } catch (Exception e) { e.printStackTrace(); } } DataSource
https://gitter.im/spring-projects/spring-boot?at=5e1b171ca8aa5166ce22617f
CC-MAIN-2020-40
en
refinedweb
public class MainClass : NSObject { AppDelegate.GetInstance ().InvokeOnMainThread (delegate { AttachmentWebViewController AVC = new AttachmentWebViewController (); AppDelegate.GetInstance().NavController.PushViewController(AVC, true); AVC.showAttachment (attachment); }); } public partial class AttachmentWebViewController : UIViewController { public override void ViewDidLoad () { base.ViewDidLoad (); } public void showAttachment(Attachment baseAttachment) { string fileName = baseAttachment.FileName; string localDocUrl = Path.Combine (NSBundle.MainBundle.BundlePath, fileName); if (new NSUrlRequest (new NSUrl (localDocUrl)) != null) { Console.WriteLine ("new NSUrlRequest(new NSUrl(localDocUrl)) is NULL"); } webView.LoadRequest(new NSUrlRequest(new NSUrl(localDocUrl))); webView.ScalesPageToFit = true; } } I am trying to call showAttachment() from MainClass, but I am getting error as "System.NullReferenceException-object reference not set to an instance of an object" on line "webView.LoadRequest(new NSUrlRequest(new NSUrl(localDocUrl)));" because my webView is Null. I have added UIWebView in my .XIB manually. Thanks Does ViewDidLoad get called before you call showAttachment? If not, that's your problem. webView will remain null until the view loads. @rschmidt "ViewDidLoad" isn't getting called before showAttachment(). But my question, is that the correct way to call a ViewController from NSObject class ? Or is there any other way to call ? Thanks @SatishBirajdar this isn't really a problem with your MainClass class, it's a problem with AttachmentWebViewController which should be able to tolerate the case where clients call showAttachment before its view loads. Here is how you might handle that: That way, if the client calls showAttachment before the view loads, then the class remembers the request and defers loading it until the view does load. @rschmidt Thanks will try this.
https://forums.xamarin.com/discussion/comment/138123
CC-MAIN-2020-40
en
refinedweb
When you start builing serverless applications like Azure functions or Azure web jobs, one of the first things you will need to contend with is logging. Traditionally logging was simply achieved by appending rows to a text file that got stored on the same server your application was running on. Tools like log4net made this simpler by bringing some structure to the proces and providing functionality like automatic time stamps, log levels and the ability to configure what logs should actually get written out. With a serverless application though, writing to the hard disk is a big no no. You have no guarantee how long that server will exist for and when your application moves, that data will be lost. In a world where you might want to scale up and down, having logs split between servers is also hard to retrieve when an error does happen. .net core The first bit of good news is that .net core supports a logging API. Here I am configuring it in a web job to output logs to the console and to Application insights. This is part of the host builder config in the program.cs file. //3. LOGGING function execution : //3 a) for LOCAL - Set up console logging for high-throughput production scenarios. hostBuilder.ConfigureLogging((context, b) => { b.AddConsole(); // If the key exists in appsettings.json, use it to enable Application Insights rather than console logging. //3 b) PROD - When the project runs in Azure, you can't monitor function execution by viewing console output (3 a). // -The recommended monitoring solution is Application Insights. For more information, see Monitor Azure Functions. // -Make sure you have an App Service app and an Application Insights instance to work with. //-Configure the App Service app to use the Application Insights instance and the storage account that you created earlier. //-Set up the project for logging to Application Insights. string instrumentationKey = context.Configuration["APPINSIGHTS_INSTRUMENTATIONKEY"]; if (!string.IsNullOrEmpty(instrumentationKey)) { b.AddApplicationInsights(o => o.InstrumentationKey = instrumentationKey); } }); Microsofts documentation on logging in .NET Core and ASP.NET can be found here. Creating a log in your code is then as simple as using dependency injection on your classes to inject an instance of ILogger and then using it’s functions to create a log. public class MyClass { private readonly ILogger logger; public MyClass(ILogger logger) { this.logger = logger; } public void Foo() { try { // Logging information logger.LogInformation("Foo called"); } catch (Exception ex) { // Logging an error logger.LogError(ex, "Something went wrong"); } } } Application Insights When your application is running in Azure, Application Insights is where all your logs will live. What’s great about App Insights is it will give you the ability to write queries against all your logs. So for instance if I wanted to find all the logs for an import function starting, I can write a filter for messages containing “Import function started”. One of my favourite where commands is ago(30m). It will output the time for a given timespan in the past. This is great when your running the same query frequently and are only interested in the last x amount of time, as you can simply write where timestamp > ago(30m) for the last 30 minutes of logs rather than trying to remember for date time format your string should be in Queries can also be saved or pinned to a dashboard if they are a query you need to run frequently. 
For all regular logs your application makes you need to query the traces. What can be confusing with this though is the errors. With the code above I had a try catch block and in the catch block I called logger.LogError(ex, “Something went wrong”); so in my logs I expect to see the message and as I passed an exception I also expect to see an exception. But if we look at this example from application insights you will see an error in the traces log but no strack trace or anything else from the exception. LogError will write to both the traces and exceptions log. If you want to view the exceptions you have to look at the exceptions log, not the traces. This is just the start of the functionality that Application Insights provides, but if your just starting out, hopefully this is a good indication not only of how easy it is to add logging to your application, but also how much added value App Insights can offer over just having text files.
https://himynameistim.com/blog/page/2/
CC-MAIN-2020-40
en
refinedweb
shivang goyal2,003 Points Bummer: '/' with a name gave a non-200 response. Did you change the route and the function? why it's produce an error, i am stuck on it. from flask import Flask app = Flask(__name__) @app.route('/') @app.route('/<name>') def hello(name="jack"): return 'Hello {}'.formate(name) 1 Answer Jennifer NordellTreehouse Staff Hi there, shivang goyal ! You're doing terrific, and you're off by literally one character. Check your very last line for a minor typo. You typed formate, but you meant to type format. Note the omission of the final "e" in the latter version. Hope this helps! shivang goyal2,003 Points shivang goyal2,003 Points thanx jennifer
https://teamtreehouse.com/community/bummer-with-a-name-gave-a-non200-response-did-you-change-the-route-and-the-function
CC-MAIN-2020-40
en
refinedweb
Once you add redux simple router (or redux router) to your project you have two places in your state that need to be kept in sync. import { combineReducers } from 'redux' import { routeReducer } from 'redux-simple-router' import tree from './tree' const rootReducer = combineReducers({ tree: tree, routing: routeReducer }) export default rootReducer Now every time you change tree you need to update the routing part. That’s easy because your action creators may push a new state into the browser history. The other way around, -changing the URL and reduce correctly- is a bit more complex because the router dispatches only one action (UPDATE_LOCATION if using redux-simple-router) and based on this action you need to reduce your state correctly. Matching -and sometimes parsing- the url in your reducers is something that feels wrong, it is a logic that doesn’t belong to the reducer. Redux Sagas Sagas in redux are a way to listen for redux actions and spawn side effects accordingly and because url changes and async requests can be treated as side effects I created a saga to handle both. import { routeActions, UPDATE_LOCATION } from 'redux-simple-router' import { take, put, fork, call } from 'redux-saga' import { show, retrieve } from './actionCreators' /* * Small tree structure in a plain hash that works as my backend server. */ const fakeDB = { '/': [ { id: '/1', title: '1' }, { id: '/2', title: '2' } ], '/1': [ { id: '/1/a', title: 'a' }, { id: '/1/b', title: 'b' } ], '/1/a': [ { id: '/1/a/I', title: 'I' } ] } /* * This saga `take`s the RETRIEVE_NODE action, then executes in order: * 1. Changing the URL with the redux simple router `routeActions.push` method * 2. Request a list of nodes from the server * 3. Once the nodes are back call `show` a custom action that will reduce the state with new nodes */ export function* retrieveNode() { let action = null while(action = yield take('RETRIEVE_NODE')) { yield put(routeActions.push(action.payload.nodeId)) let nodes = yield call(fetchNodes, action.payload.nodeId) yield put(show(nodes)) } } /* * This saga `take`s the UPDATE_LOCATION action, then checks if the update is a push or a pop. * A pop means hiting the back/forward button, in this case -and only in this case- I want to * retrieve new nodes, this will put a RETRIEVE_NODE andthat will be handled by the previous saga. */ export function* urlChanged() { let action = null while(action = yield take(UPDATE_LOCATION)) { if(action.location.action === 'POP') { yield put(retrieve(action.location.pathname)) } } } function fetchNodes (nodeId) { return new Promise(resolve => setTimeout( () => resolve(fakeDB[nodeId]), 1000 ) ) } export default function* root() { yield fork(retrieveNode) yield fork(urlChanged) } The first saga retriveNode watches for the RETRIEVE_NODE action that I dispatch on the connected components, once the action is fired it executes in order some side effects, one of them being the url change provided by the push function. Now we need to tackle another problem, what happens when someone changes the url like as a result of clicking the back or forward button on the browser? In that case, the routing part of our state changes but the tree part remains the same. The solution is in the last saga called urlChanged. I listen to the UPDATE_LOCATION action and check if this action has been a POP, a POP action is our way to say: Did this action occur has a result of a back / forward navigation? This is because I don't want to react to url pushes made from the previous saga. 
Once the url changed as a result of the back / forward button we send the action to retrieve the node, this will wake up the retriveNode saga and rebuild our state correctly.
https://cherta.website/redux-sagas-and-urls/
CC-MAIN-2020-40
en
refinedweb
Numerical Methods and Statistics Suggested Reading x = 2 print(x) 2 print(x * 4) 8 x = x + 2 print(x) 4 import math x = math.cos(43) print(x) 0.5551133015206257 a = 2 a = a * b b = 4 --------------------------------------------------------------------------- NameError Traceback (most recent call last) <ipython-input-5-de5ed0aa3cf0> in <module> 1 a = 2 ----> 2 a = a * b 3 b = 4 NameError: name 'b' is not defined x = 2 x * 2 print(x) 2 x = 4 print('This is the simple way of printing x', x) This is the simple way of printing x 4 y = 2.3 print('This is how we print x', x, 'and y', y, '. Notice spaces are inserted for us') This is how we print x 4 and y 2.3 . Notice spaces are inserted for us print('What if I want to print y ({}) without spaces?'.format(y)) What if I want to print y (2.3) without spaces? z = 0.489349842432423829328933 * 10**32 print('{}'.format(z)) 4.893498424324239e+31 print('What if I want to print x twice, like this {0:} {0:}, and y once, {1:}?'.format(x,y)) What if I want to print x twice, like this 4 4, and y once, 2.3? print('With a fixed precision (number of digits): {:.4}'.format(z)) With a fixed precision (number of digits): 4.893e+31 print('Make it take up exactly 10 spaces: {:10.4}'.format(z)) print('Make it take up exactly 10 spaces: {:10.4}'.format(math.pi)) print('Make it take up exactly 10 spaces: {:10.4}'.format(z*math.pi)) Make it take up exactly 10 spaces: 4.893e+31 Make it take up exactly 10 spaces: 3.142 Make it take up exactly 10 spaces: 1.537e+32 print('You can demand to show without scientific notation too: {:f}'.format(z)) You can demand to show without scientific notation too: 48934984243242387076283208040448.000000 print('or demand to use scientific notation: {:e}'.format(math.pi)) or demand to use scientific notation: 3.141593e+00 print('There are many more formatting tags, such as binary: {:b} and percentage: {:%}'.format(43, 0.23)) There are many more formatting tags, such as binary: 101011 and percentage: 23.000000% %%bash pip install fixedint Requirement already satisfied: fixedint in /home/whitead/miniconda3/lib/python3.7/site-packages (0.1.5) from fixedint import * def bprint(x): print('{0:08b} ({0:})'.format(x)) i = UInt8(8) bprint(i) 00001000 (8) i = UInt8(24) bprint(i) 00011000 (24) i = UInt8(2**8 - 1) bprint(i) 11111111 (255) i = UInt8(2**8) bprint(i) 00000000 (0) This effect, where if we add 1 too many to a number and "roll-over" to the beginnig is called integer overflow. This is a common bug in programming, where a number becomes too large and then rolls-back. Ineger overflow can result in negative numbers, if we have signed numbers. i = Int8(2**7 - 1) bprint(i) bprint(i + 1) 01111111 (127) -10000000 (-128) Notice here that our max is half previously, because one of our 8 bits is for sign, so only 7 bits are available for integers. The key takeaway from integer overflow is that if we have integers with not enough bytes, we can suddenly have negative numbers or zero when adding large numbers. Real world examples of integer overflow include view counts becoming 0, having data usage become negative on a cellphone, the Y2K bug, a step counter going to 0 steps.
https://nbviewer.jupyter.org/github/whitead/numerical_stats/blob/master/unit_3/lectures/lecture_1.ipynb
CC-MAIN-2020-40
en
refinedweb
Adding KML to the MapboxTileServer and Vue.js Client Google knows what I search for. I have all my mails stored on Google Servers. And when I was in China my phone was cut off of all Google services and... it couldn't resolve any of my contacts stored in Google Mail. That made me realize: I probably need a little bit of a backup plan. Why does it matter here? Because I also track myself with Google Maps! I want my Pendlerpauschale! What we are going to build In the last post I have shown how to integrate the Mapbox GL JS client in a Vue.js application. So far we are able to display the tiles, markers and lines. Now that we have a solid base, let's add some features. You are probably also tracking yourself using a Smartwatch, Google Maps or an app of your choice. And often enough you want to display where you have been. After all it should also be your data, because you have collected it. Google Maps makes it possible to export a day as KML, which ... .... (Source) So let's add a way to plot KML data in our Vue.js application. At the end of the tutorial we are able to overlay KML Polygons, Points and Lines in our Vue.js app, like this: Starting with the Vue Components Adding a MapboxLayer component to Vue.js The Mapbox GL JS library doesn't understand KML out-of-the box, but it has a great GeoJSON support. So the plan is to upload a KML file from the application, convert it to GeoJSON and display it using our Vue.js component. The properties for the component are: map - The Mapbox GL JS reference we are layering the GeoJSON on. id - A unique identifier for sources and layers in the component. geojson - The GeoJSON object we are going to display. The documentation has a lot of GeoJSON examples we can learn from, like ... - - And in the Style Specification we can learn the properties for all layers available: So I came up with the following Vue.js component: <script> export default { props: { map: { type: Object, required: true }, id: { type: String, required: true }, geojson: { type: Object, required: true } }, mounted() { this.map.addSource(`geojson_${this.id}`, { type: 'geojson', data: this.geojson }); this.map.addLayer({ id: `geojson_polygons_${this.id}`, type: 'fill', source: `geojson_${this.id}`, paint: { 'fill-outline-color': ['case', ['has', 'stroke'], ['get', 'stroke'], '#088'], 'fill-color': ['case', ['has', 'fill'], ['get', 'fill'], '#088'], 'fill-opacity': ['case', ['has', 'fill-opacity'], ['get', 'fill-opacity'], 0.8] }, filter: ['==', '$type', 'Polygon'] }); this.map.addLayer({ id: `geojson_polygons_lines_${this.id}`, type: 'line', source: `geojson_${this.id}`, paint: { 'line-width': 1, 'line-color': ['case', ['has', 'stroke'], ['get', 'stroke'], '#088'] }, filter: ['==', '$type', 'Polygon'] }); this.map.addLayer({ id: `geojson_points_${this.id}`, type: 'circle', source: `geojson_${this.id}`, paint: { 'circle-radius': ['case', ['has', 'circle-radius'], ['get', 'circle-radius'], 6], 'circle-color': ['case', ['has', 'circle-color'], ['get', 'circle-color'], '#088'] }, filter: ['==', '$type', 'Point'] }); this.map.addLayer({ id: `geojson_lines_${this.id}`, type: 'line', source: `geojson_${this.id}`, paint: { 'line-width': 3, 'line-color': ['case', ['has', 'color'], ['get', 'color'], '#088'] }, filter: ['==', '$type', 'LineString'] }); }, render() {} }; </script> In the index.js we are exporting it: // [...] export { default as MapboxLayer } from '@/components/MapboxLayer.vue'; // [...] And that's it! A Button for the File Upload We now need a way to upload files. 
I thought a Floating Action Button (FAB) would be a nice addition, and there is a great tutorial how to create a Material Design Button here: This can be easily translated into a Vue.js component: <template> <div class="fab" @+</div> </template> <script> export default { name: 'FloatingActionButton', data() { return {}; } }; </script> <style> .fab { width: 48px; height: 48px; background-color: red; border-radius: 50%; box-shadow: 0 6px 10px 0 #666; transition: all 0.1s ease-in-out; font-size: 24px; color: white; text-align: center; line-height: 48px; position: fixed; right: 25px; bottom: 25px; cursor: pointer; } .fab:hover { box-shadow: 0 6px 14px 0 #666; transform: scale(1.05); } </style> Again in the index.js we are exporting the Button: // [...] export { default as FloatingActionButton } from '@/components/FloatingActionButton.vue'; // [...] Converting from KML to GeoJSON in the Backend The /kml/toGeoJson Endpoint It starts by creating a new endpoint for /kml/toGeoJson for converting from KML to GeoJSON. Maybe a different resource would be more RESTful, but I don't overthink it. In the Controller, we are injecting the IKmlConverterService, which handles the conversion. // Copyright (c) Philipp Wagner. All rights reserved. // Licensed under the MIT license. See LICENSE file in the project root for full license information. using MapboxTileServer.Extensions; using MapboxTileServer.Services; using Microsoft.AspNetCore.Http; using Microsoft.AspNetCore.Mvc; using Microsoft.Extensions.Logging; using System.Threading; using System.Threading.Tasks; namespace MapboxTileServer.Controllers { [ApiController] public class KmlController : ControllerBase { private readonly ILogger<KmlController> logger; private readonly IKmlConverterService kmlConverterService; public KmlController(ILogger<KmlController> logger, IKmlConverterService kmlConverterService) { this.logger = logger; this.kmlConverterService = kmlConverterService; } [HttpPost] [Route("/kml/toGeoJson")] public async Task<IActionResult> KmlToGeoJson([FromForm(Name = "file")] IFormFile file, CancellationToken cancellationToken) { logger.LogDebug($"Uploaded a KML File ..."); if (file == null) { return BadRequest(); } var xml = await file.ReadAsStringAsync(); ; if (!kmlConverterService.ToGeoJson(xml, out string json)) { return BadRequest(); } return Content(json, "application/json"); } } } The Actual Conversion The IKmlConverterService has a single method IKmlConverterService#ToGeoJson for now. This method calls the GeoJsonConverter, which we have to implemented for getting the JSON data from the XML file. // Copyright (c) Philipp Wagner. All rights reserved. // Licensed under the MIT license. See LICENSE file in the project root for full license information. using MapboxTileServer.GeoJson; using Microsoft.Extensions.Logging; using System; namespace MapboxTileServer.Services { public interface IKmlConverterService { bool ToGeoJson(string xml, out string json); } public class KmlConverterService : IKmlConverterService { private readonly ILogger<KmlConverterService> logger; public KmlConverterService(ILogger<KmlConverterService> logger) { this.logger = logger; } public bool ToGeoJson(string xml, out string json) { json = null; try { json = GeoJsonConverter.FromKml(xml); return true; } catch(Exception e) { logger.LogError(e, "Failed to convert Kml to GeoJson"); return false; } } } } For the KML <-> GeoJSON conversion I started by looking at existing .NET projects. 
You probably think a lot of projects exist for such a task, but most of the projects I found have been abandoned or aren't compatible to .NET Core. Some took additional dependencies on Newtonsoft.JSON, which would be OK in a real project... but for this I don't want to add more and more dependencies. After all it's also a good learning experience. So I downloaded the KML Schema and had a look, if I can generate the Contracts and have a simple way to iterate through the document and build the GeoJSON from it. Long story short: Frustrated I gave up after some hours... Then I found a JavaScript library called toGeoJson, which has originally been developed by Mapbox Employees and is now maintained by Tom MacWright (@tmcw): I have then translated the JavaScript implementation to C# using the XDocument and XElement abstractions. It would be a lie to say this was straightforward... Anyway! The C# version of toGeoJson can be found in the GeoJsonConverter class: // Copyright (c) Philipp Wagner. All rights reserved. // Licensed under the MIT license. See LICENSE file in the project root for full license information. using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Text.Json; using System.Xml.Linq; using System.Xml.XPath; using MapboxTileServer.GeoJson.Model; namespace MapboxTileServer.GeoJson { /// <summary> /// This is a shameless copy from:, so there is /// no need for taking additional dependencies on third-party libraries. Updates to kml.js should be /// reflected here. /// /// The Object Model is intentionally ugly (no abstract classes, no interfaces), because this would make /// Deserialization with .NET Core System.Text.Json complicated or impossible, see the Open Issue at: ///. /// </summary> public partial class GeoJsonConverter { private static readonly XNamespace Kml = XNamespace.Get(""); private static readonly XNamespace Ext = XNamespace.Get(""); private static readonly XName[] Geotypes = new[] { XName.Get("Polygon", Kml.NamespaceName), XName.Get("LineString", Kml.NamespaceName), XName.Get("Point", Kml.NamespaceName), XName.Get("Track", Kml.NamespaceName), XName.Get("Track", Ext.NamespaceName) }; public static string FromKml(string xml) { var root = XDocument.Parse(xml); return FromKml(root); } } } At the end we hook the Service in the Startup: private void RegisterApplicationServices(IServiceCollection services) { // ... services.AddSingleton<IKmlConverterService, KmlConverterService>(); } And that's it for the Backend! Extending the Home.vue So we start by adding the FloatingActionButton and MapboxLayer to the Home.vue view: <template> <div id="home"> <Search id="search" : <FloatingActionButton id="fab" @ <MapboxLoader id="map" v- <template slot- <MapboxMarker v- <MapboxLine v- <MapboxLayer v- </template> </MapboxLoader> <input ref="file" type="file" style="display: none;" @ </div> </template> For the KML file upload a hidden <input> has been added, which calls a handleFileUpload on change. In the script, we are first importing the new components, implement the method handling the Floating Action Button click and the callback handleFileUpload. export default { name: 'Home', components: { // ... FloatingActionButton, MapboxLayer }, data: function () { return { // ... 
layers: [], }; }, mounted: function () { this.addLine(LINE_WALK_THROUGH_MUENSTER); this.addMarker(MARKER_MUENSTER_CITY_CENTER); }, methods: { onFloatingActionButtonClicked() { this.$refs.file.click(); }, async handleFileUpload(event) { var result = await kmlToGeoJsonAsync('', event.target.files[0]); this.addLayer({ geojson: result }); }, }, /... } The kmlToGeoJsonAsync method just posts the first file selected by the <input> to the Backend: export async function kmlToGeoJsonAsync(url, file) { var formData = new FormData(); formData.append('file', file); var response = await fetch(url, { method: 'POST', body: formData }); return await response.json(); } Every id needs to be unique for both Vue.js and Mapbox, so layers and markers don't share the same id's for example. That's why we are also assigning a unique identifier for each layer, when adding it to the array of layers to be displayed by our component: export default { name: 'Home', // ... methods: { addLayer(layer) { var layer_id = this.getLastLayerId() + 1; this.layers.push({ layer_id: layer_id, id: `Layer_${layer_id}`, ...layer }); }, getLastLayerId: function () { if (isNullOrUndefined(this.layers)) { return 0; } if (this.layers.length == 0) { return 0; } return this.layers .map((x) => x.layer_id) .reduce((a, b) => { return Math.max(a, b); }); } } } And that's it for the Frontend! Testing it! I start by opening the Google Timeline page. There you can export a day to KML by clicking the Gear Icon here: It's my way to work, I zoom into a part, so you cannot track me. I now click the floating action button, upload the KML data and the KML data gets overlayed in the Vue.js application: It works! Keep in mind, this is what the KML to GeoJSON converter is tested with right now. Conclusion And that's it! In the end I have to admit... I could have saved a lot of time implementing all this, if I had simply used the great toGeoJson library on client-side or if I had spun up a process for calling rock-solid libraries like GDAL. 😅
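As a concrete reference for the styling hooks used in the MapboxLayer paint expressions above (the ['case', ['has', ...], ['get', ...]] fallbacks), here is a small hand-written GeoJSON sample; the coordinates and colors are made up for illustration and are not produced by the converter:

{
  "type": "FeatureCollection",
  "features": [
    {
      "type": "Feature",
      "geometry": { "type": "LineString", "coordinates": [[7.6261, 51.9607], [7.6300, 51.9700]] },
      "properties": { "color": "#ff0000" }
    },
    {
      "type": "Feature",
      "geometry": {
        "type": "Polygon",
        "coordinates": [[[7.60, 51.95], [7.64, 51.95], [7.64, 51.98], [7.60, 51.98], [7.60, 51.95]]]
      },
      "properties": { "stroke": "#005b96", "fill": "#03396c", "fill-opacity": 0.5 }
    }
  ]
}

A feature that omits these properties simply falls back to the defaults ('#088', opacity 0.8) defined in the layer paint rules.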
https://www.bytefish.de/blog/mapboxtileserver_kml.html
CC-MAIN-2020-40
en
refinedweb
Note: Starcounter 3.0.0 is currently in preview stage. This API might be changed in the future releases without backwards compatibility. The database schema in Starcounter is defined by C# classes with the [Database] attribute: using Starcounter.Nova;[Database]public abstract class Person{public abstract string FirstName { get; set; }public abstract string LastName { get; set; }public string FullName => $"{FirstName} {LastName}";} All instances of database classes created with the Db.Insert method are stored persistently. Database classes support default constructors. Non-private default constructors are called when a new instance is created with Db.Insert. For example, this is a valid database class with a constructor: using Starcounter.Nova;[Database]public abstract class Order{public Order(){this.Created = DateTime.Now;}public abstract DateTime Created { get; set; }} It's possible to have constructors with parameters, although, they are never called when using Db.Insert. Constructors with parameters in database classes can be useful for unit testing purposes when you want to inject dependencies or other arguments into a class. If you add a constructor with parameters to a database class, you also have to add a default constructor. Warning: All C# access modifiers are accepted for constructors, except for internal. Using it will throw ScErrSchemaCodeMismatch (SCERR4177) exception. Database classes should only use properties - either auto-implemented or with an explicitly declared body. Properties should also be public and either abstract or virtual. It is recommended to use abstract properties to reduce application memory footprint. It's possible to have collections in the database class if the collection has an explicitly declared body. For example, the following properties are allowed: public List<string> Branches => new List<string>() { "develop", "master" };public IEnumerable<Person> Friends => Db.SQL<Person>("SELECT p FROM Person p"); These properties and fields are not allowed: public string[] Names { get; set; }public List<Person> People { get; }public IEnumerable Animals; To access collections from database objects, first retrieve the object and then access the property that has the collection: var person = Db.SQL<Person>("SELECT p FROM Person p").First();IEnumerable<Person> friends = person.Friends; Database indexes can be defined with CREATE INDEX SQL query. Unique and not unique indexes are supported. Db.SQL("CREATE INDEX IX_Person_FirstName ON Person (FirstName)"); A single property index can be created with the [Index] attribute: using Starcounter.Nova;[Database]public class Person{[Index]public virtual string FirstName { get; set; }public virtual string LastName { get; set; }} Database classes can have a maximum of 112 properties for performance reasons. The limit applies to the total number of persistent properties (including all inherited) per class. Thus, this is not allowed: [Database]public class LargeClass{public virtual string Property1 { get; set; }public virtual string Property2 { get; set; }// ...public virtual string Property113 { get; set; }} If a database class has more than 113 properties, Starcounter throws ScErrToManyAttributes (SCERR4013). Nested database classes are not supported. The limitation is that inner database classes cannot be queried with SQL. We recommend modeling one-to-many relationships by having references both ways - the child has a reference to the parent and the parent has a reference to all the children. 
In this example there is a one-to-many relationship between Department and Employee: using Starcounter.Nova;[Database]public class Department{public IEnumerable Employees{get => Db.SQL<Employee>("SELECT e FROM Employee e WHERE e.Department = ?", this);}}[Database]public class Employee{public virtual Department Department { get; set; }} We recommend modeling many-to-many relationships with an associative class. In this example there is a many-to-many relation between Person and Company - to represent this many-to-many relationship we use the associative class Shares: using Starcounter.Nova;[Database]public class Person{public IEnumerable EquityPortfolio{get => Db.SQL<Shares>("SELECT s.Equity FROM Shares s WHERE s.Owner = ?", this);}}[Database]public class Company{public IEnumerable ShareHolders{get => Db.SQL<Shares>("SELECT s.Owner FROM Shares s WHERE s.Equity = ?", this);}}[Database]public class Shares{public virtual Person Owner { get; set; }public virtual Company Equity { get; set; }public virtual int Quantity { get; set; }} Any database class can inherit from any other database class. using Starcounter.Nova;[Database]public class Customer{public virtual string Name { get; set; }}public class PrivateCustomer : Customer{public virtual string Gender { get; set; }}public class CorporateCustomer : Customer{public virtual string VatNumber { get; set; }} The [Database] attribute is inherited from base - to subclasses. Any class that directly or indirectly inherits a class with the [Database] attribute becomes a database class. In the example above, both PrivateCustomer and CorporateCustomer become database classes due to them inheriting Customer. The table Customer will contain all PrivateCustomers and all CorporateCustomers. So if there is a private customer called "Goldman, Carl" and a corporate customer called "Goldman Sachs", the result of SELECT C FROM Customer c will contain both of them. A database class cannot inherit from a class that's not a database class. This will throw, during compile-time, System.NotSupportedException or ScErrSchemasDoNotMatch (SCERR15009) depending on how the base class is defined. It's also not possible to cast a non-database class to a database class. Database objects can be checked for equality with the Equals method. Comparing database objects with object.ReferenceEquals or the == operator always returns false if any of the objects are retrieved from the database: var firstProduct = Db.Insert<Product>();var secondProduct = Db.Insert<Product>();var anotherFirstProduct = Db.Get<Product>(Db.GetOid(firstProduct));// Checks if two database objects are equalConsole.WriteLine(firstProduct.Equals(secondProduct)); // => falseConsole.WriteLine(firstProduct.Equals(anotherFirstProduct)); // => true// Returns false for different object or objects retrieved from the databaseConsole.WriteLine(firstProduct == secondProduct); // => falseConsole.WriteLine(firstProduct == anotherFirstProduct); // => falseConsole.WriteLine(firstProduct == firstProduct); // => trueConsole.WriteLine(object.ReferenceEquals(firstProduct, secondProduct)); // => falseConsole.WriteLine(object.ReferenceEquals(firstProduct, anotherFirstProduct)); // => falseConsole.WriteLine(object.ReferenceEquals(firstProduct, firstProduct)); // => true Starcounter automatically assigns an UInt64 unique key for each database object. The key is unique across entire database not across one table. 
var p = Db.Insert<Product>(); ulong oid = Db.GetOid(p); var p = Db.Get<Product>(oid); var product = Db.SQL<Product>("SELECT p FROM Product p WHERE p.ObjectNo = ?", oid).FirstOrDefault(); Zero (0) is not a valid key. Currently it is not possible to insert a database object with a predefined unique key. It is possible to compare database objects by their unique keys.
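For example, a sketch of such a comparison using the API shown above (Product is assumed to be a [Database] class):

var a = Db.SQL<Product>("SELECT p FROM Product p").First();
var b = Db.Get<Product>(Db.GetOid(a));

// Two references point at the same stored object exactly when their keys match.
bool sameObject = Db.GetOid(a) == Db.GetOid(b); // true here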
https://docs.starcounter.io/v/3.0.0-alpha-20190701/database-classes
CC-MAIN-2019-39
en
refinedweb
I've been cruising the reddit listings recently and without much searching I"! Now, I think it's fair to say: poppycock. But poppycock aside, the question remains: is jQuery still in 2017 (and as we join 2018) relevant and more importantly, is it worth a newcomer learning the library today? MY WORKSHOPMaster Next.js Everything you need to master universal React with Next.js in a single intense master class. Includes full pass to ffconf, the web developer conf. £449+VAT - only from this link The short answer: yes Yes, of course it's worth learning jQuery. The internet is littered with tutorials and knowledge across your peers is vast - that's to say: help is in abundance. Moreover: jQuery is prolific in today's web and there's an extremely high chance that you'll use it in your career. Sure, you might work for bleeding edge start up from Sillygone Valley that only uses Ember version pre-release 20 (not a thing…yet), but that's not the only job you'll have in your career. Proof in the wild How about we look at some real world web sites, and potential employees and clients and see if they use jQuery, and importantly, how many of them. I've been playing with BigQuery and querying HTTP Archive's dataset (you'll need to sign up to access the tool). The HTTP Archive crawls the top 10,000 web sites from the Alexa top 1,000,000 web sites and exposes all that data in a BigQuery table (or as a downloadable mysql database). This year, the HTTP Archive started testing and collecting JavaScript libraries under httparchive:scratchspace (though it's not an exact science for all libraries, it's reliable for jQuery based queries). I've queried the HTTP Archive and includes the top 20 results in the charts below. JavaScript library distribution The chart below is a query from July 2017 aggregating the count of the library name over a total of 474,058 unique URLs collected. The chart shows that jQuery accounts for a massive 83% of libraries found on the web sites. Reading through the next libraries, jQuery UI requires jQuery, as does Bootstrap and FlexSlider. Modernizr isn't a "do thing" library, nor are many of the libraries until we hit Angular.JS. BigQuery saved SQL statement: Libraries by usage jQuery versions Are web sites staying up to date with their version of jQuery? The version of the library is included in the the httparchive:scratchspace, so I've just looked at the aggregate totals of particular versions: BigQuery saved SQL statement: jQuery by version from latest dataset jQuery@3 accounts for about 6.4% of all the data points. I don't think this is because "jQuery is dying out", but more than jQuery solved a serious problem with browser compatibility and make effects and ajax easy. In today's modern browsers there's CSS transitions and a consistent implementation of XMLHttpRequest or even the fetch API. More specifically, I'd conclude that those web sites with much higher traffic from "older" browsers (i.e. questionable interoperability) are using jQuery to simplify their JavaScript. jQuery over time Finally (and this query appears often): how year on year usage changes. I've included Angular.JS for contrast. The table looks like the delta between 2016 and 2017 is showing the first drop in usage (by 2%): BigQuery saved SQL statement: jQuery year on growth So, although this is a small change, I strongly suspect this will stabilise in the next 5 years, rather than rapidly shrink. Wrapping up I've seen this kind of dominance in the web before with IE4 and then IE6. 
And that browser was (and is) hated on pretty hard - and to be fair, IE6 was fairly riddled with issues. jQuery isn't the same. jQuery is actively maintained and shipped as the default JavaScript library for a lot of large projects (WordPress being one of them). There's also the low startup cost that jQuery affords. It's extremely cheap for a new developer to copy and paste some code that uses jQuery, and to add the jQuery library to immediately see a result. The same isn't entirely true with "modern" code using import or export default or destructuring, etc. It won't always work in the browser the individual is using, and there's potential for a rabbit hole of tooling (though of course, I'd recommend they start with something like Create React App first, but this post is about jQuery, not CRA). You certainly don't need jQuery today. Nor do you need to learn jQuery. However, jQuery is far from dead, dying, outdated or irrelevant. It serves many developers from many different walks of life. So, is jQuery still relevant? At the end of 2017: yes.
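For readers who want to reproduce the counts, the saved statements linked above are the canonical queries; the aggregation has roughly this shape (the table and column names below are placeholders, not the real HTTP Archive schema):

-- placeholder names, adjust to the actual httparchive scratchspace table
SELECT
  library,
  COUNT(DISTINCT url) AS sites
FROM `httparchive.scratchspace.javascript_libraries`
GROUP BY library
ORDER BY sites DESC
LIMIT 20;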
https://remysharp.com/2017/12/15/is-jquery-still-relevant
CC-MAIN-2019-39
en
refinedweb
Three different ways of testing your Rails App. So here, I will try to compare three different ways to test the same feature, so you can decide which one is better for you. What are we going to test? Imagine that you are building a catalog that: - Shows all products sorted by name. - Show for each product the name, description and price. - If a product has no price it shows a label with “Call for price”. - If there are featured products, it shows them first. So, lets start with the first way… Testing everything via “system tests” Here is one way to write the tests for the specification… feature "Users sees the catalog" do scenario "with all products sorted by name" do create :product, name: "product A" create :product, name: "product B" visit catalog_path expect_ordered_products("product A", "product B") end scenario "with the name, description and price of each product" do create :product, name: "product 1", description: "p1 - desc", price: 1000 visit catalog_path expect(page).to have_content "product 1" expect(page).to have_content "p1-desc" expect(page).to have_content "$1,000.00" end scenario "with a product without price" do create :product, price: nil visit catalog_path expect(page).to have_content "Call for price" end scenario "with featured products" do create :product, name: "product A" create :product, name: "product B" create :product, name: "product Featured", feature: true visit catalog_path expect_ordered_products( "product Feature", "product A", "product B") end end Look how in each scenario I try to describe the requirements of the specification and also that I try to avoid as much as I can the implementation details, for example I am using a create method that will create products for me, and also I am expecting to have a method expect_ordered_products instead of trying to jump in to the implementation details while I am still thinking in the behavior. Pros: - I think is the must “real” way of testing the behavior. - Tests will be working if we refactor the internals of our application. Cons: - If we do this for every feature, our test suite are gonna get pretty slow. - At this level we have not enough feedback about the design of our code. - Tests are hard to set up. - Tests can fail if we change html code. Testing controllers and views This is a little harder because we are going to test the expected behavior in different places. In the controller we can test… - That all stored products are assigned to the @products instance variable. - That those products are sorted by name. - That the featured products are at the top of the list. RSpec.describe ProductsController do describe "GET index" do describe "assigns @products" do it "with the products sorted by name" do create :product, name: "B" create :product, name: "A" get :index expect(assigns(:products).map(&:name)).to eq(["A", "B"]) end it "with the featured products first" do create :product, name: "B" create :product, name: "A" create :product, name: "Featured" get :index expect(assigns(:products).map(&:name)) .to eq(["Featured", "A", "B"]) end end end end In the view we can test… - That it shows the name, description and price of the products. - That if a product has no price it should show “Call for price”. 
RSpec.describe "products/index" do it "shows the name, description and price of the products" do assign(:products, [ create :product, name: "Product-A", description: "A-desc", price: 1000 ]) render expect(rendered).to match /Product-A/ expect(rendered).to match /A-desc/ expect(rendered).to match /$1,000.00/ end it "shows 'call for price' for products without price" do assign(:products, [create :product, name: "A"]) render expect(rendered).to match /Call for price/ end end Here look also how in each test description I am trying to express the requirements of the specification. That is why in this case I think that we don’t need model tests, because we have are already tested what our specification says. Pros: - Test are faster. - You will have more feedback about the design of you code Cons: - Tests are by way harder to imagine. - Tests are still hard to setup - If we refactor the internals of our app, we can break our tests. - If we change the html we can break our tests. Testing a non Rails function Imagine that you will call a function in the controller and the thing that it returns is all that you need to render the content… something like this: def index @products = Catalog.get_products(Product) end # products/index.haml - products.each do |product| %td= product.name %td= product.description %td - if product.has_price? = number_to_currency product.price - else Call for price I think that if we test the Catalog.get_products method, we can test… - That returns all the stored products. - That returns the feature products first. - That each product has a name, description and price. - That each products know if a product has price. So translating this in to rspec, we have… module Catalog RSpec.describe "Get products" do it "returns all the stored products sorted by name" it "returns the featured products first" describe "each product" do it "with a name, description and price" it "knows if has price or not" end end end Now lets continue with the implementation of our tests… module Catalog RSpec.describe "Get products" do def get_products(store) Catalog.get_products(store) end it "returns all the stored products sorted by name" do products = get_products(store_with([ product_with(name: "B"), product_with(name: "A") ])) expect(products.map(&:name)).to eq ["A", "B"] end it "returns the featured products first" do products = get_products(store_with([ product_with(name: "B"), product_with(name: "A"), product_with(name: "Featured") ])) expect(products.map(&:name)).to eq ["Featured", "A", "B"] end describe "each product" do it "with a name, description and price" do product = get_products(store_with([ product_with(name: "A", descrition: "A-desc", price: 1000) ])).first expect(product.name).to eq "A" expect(product.description).to eq "A-desc" expect(product.price).to eq 1000 end it "knows if has price or not" do with_price, without_price = get_products(store_with([ product_with(name: "A", price: 1000), product_with(name: "B", price: nil) ])) expect(with_price).to have_price expect(without_price).not_to have_price end end end end Look how we are not testing exactly for what our specification says, but we are testing for something that will help us implement the specification in a very easy way. I really prefer this approach over the other two, but you may like different things. Some pros and cons… Pros: - Test are simpler because we are testing just a function. 
- As you are passing the Product class, in your test you can replace the concrete implementation with an object that implements just the behavior that you would use. This will make your tests fast. - You are decoupling your application logic from Rails. - In my humble opinion, you now don't need to write tests for your controller, model and views. Cons: - You are creating a new abstraction. - Maybe at the beginning it will not be very clear how to test that behavior. Conclusion Now it's up to you… you have a little more knowledge to decide your way… There is no "true way", or at least I don't know one. I just have one method that works best for me. Originally published at bhserna.com.
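As an appendix, here is one sketch of a Catalog.get_products implementation that would satisfy the specs above. It is not the author's code: it assumes the store responds to all (as both ActiveRecord classes and simple fakes can) and that products expose a boolean featured flag.

module Catalog
  def self.get_products(store)
    store.all
         .sort_by { |p| [p.featured ? 0 : 1, p.name] } # featured first, then by name
         .map { |p| Product.new(p) }
  end

  class Product
    def initialize(record)
      @record = record
    end

    def name; @record.name; end
    def description; @record.description; end
    def price; @record.price; end

    def has_price?
      !price.nil?
    end
  end
end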
https://medium.com/@bhserna/three-different-ways-of-testing-your-rails-app-d574890f900f?source=---------4------------------
CC-MAIN-2019-39
en
refinedweb
Now that we have a provider called contacts-service.ts and our app is freshly updated, let's use ForceJS to query for the contacts to display on our Contacts page. Create a Query Like when we added our user query to the Home page to verify things were set up, we will need to import ForceJS into the ContactsServiceProvider class. So let's open up src/providers/contacts-service/contacts-service.ts and get started. Original contacts-service.ts import { Injectable } from '@angular/core'; import { Http } from '@angular/http'; import 'rxjs/add/operator/map'; /* Generated class for the ContactsServiceProvider provider. See for more info on providers and Angular DI. */ @Injectable() export class ContactsServiceProvider { constructor(public http: Http) { console.log('Hello ContactsServiceProvider Provider'); } } Above is the complete contacts-service.ts just so you know what we start with. I am going to be removing the import of Http since we will be using ForceJS and the Salesforce Mobile SDK to handle communication with Salesforce. So feel free to delete it at line 2 and where it is injected into the constructor at line 15. Now let's add an import of OAuth and DataService from 'forcejs'. I will do it at line 5, but feel free to add it at line 2 or 4 or wherever. Then let's create a method named loadContacts. In case you are wondering, we will use this method to get an instance of the OAuth to login with, then create an instance of the DataService whose query method we can use to get data from Salesforce. It will look something like this: loadContacts Method loadContacts(){ let oauth = OAuth.createInstance(); return oauth.login() .then(oauthResult => { let service = DataService.createInstance(oauthResult); return service.query('SELECT Id, Name FROM Contact LIMIT 50'); }); } Conclusion With this method in place, let's update our Contacts page component and html template
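For reference, the finished provider after those edits might look roughly like this; the import statement is assumed from the description above (OAuth and DataService as named exports of 'forcejs'):

import { Injectable } from '@angular/core';
import { OAuth, DataService } from 'forcejs';
import 'rxjs/add/operator/map';

@Injectable()
export class ContactsServiceProvider {

  constructor() {
    console.log('Hello ContactsServiceProvider Provider');
  }

  // Logs in, then runs a SOQL query against Salesforce.
  loadContacts() {
    let oauth = OAuth.createInstance();
    return oauth.login()
      .then(oauthResult => {
        let service = DataService.createInstance(oauthResult);
        return service.query('SELECT Id, Name FROM Contact LIMIT 50');
      });
  }
}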
https://wipdeveloper.com/saleforce-mobile-sdk-ionic-query-contacts/
CC-MAIN-2019-39
en
refinedweb
). Available,.contrib.corestats.CoreStats': None, } Writing your own extension¶ Writing your own extension is easy. Each extension is a single Python class which doesn’t need to implement any particular method. The main entry point for a Scrapy extension (this also includes middlewares and pipelines) is the from_crawler class method which receives a Crawler instance which is the main object controlling the Scrapy crawler. Through that object you can access settings, signals, stats, and also control the crawler behaviour, if your extension needs to such thing. Typically, extensions connect to signals and perform tasks triggered by them. Finally, if the from_crawler method raises the NotConfigured exception, the extension will be disabled. Otherwise, the extension will be enabled.: from scrapy import signals from scrapy.exceptions import NotConfig): spider.log("opened spider %s" % spider.name) def spider_closed(self, spider): spider.log("closed spider %s" % spider.name) def item_scraped(self, item, spider): self.items_scraped += 1 if self.items_scraped == self.item_count: spider.log("scraped %d items, resetting counter" % self.items_scraped) self.item_count = 0 Built-in extensions reference¶ General purpose extensions¶ Log Stats extension¶ Log basic stats like crawled pages and scraped items. Core Stats extension¶ Enable the collection of core statistics, provided the stats collection is enabled (see Stats Collection). Telnet console extension¶. Memory usage extension¶ Note This extension does not work in Windows. Monitors the memory used by the Scrapy process that runs the spider and: 1, sends a notification e-mail when it exceeds a certain value 2. - libxml2 memory leaks -).
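To make the extension pattern concrete, here is a minimal sketch of a spider open/close logging extension wired up through from_crawler and signals. It follows the API described above but is simplified, so it is not necessarily identical to the docs' own sample:

from scrapy import signals
from scrapy.exceptions import NotConfigured

class SpiderOpenCloseLogging(object):

    @classmethod
    def from_crawler(cls, crawler):
        # disable the extension unless it is switched on in the settings
        if not crawler.settings.getbool('MYEXT_ENABLED'):
            raise NotConfigured
        ext = cls()
        crawler.signals.connect(ext.spider_opened, signal=signals.spider_opened)
        crawler.signals.connect(ext.spider_closed, signal=signals.spider_closed)
        return ext

    def spider_opened(self, spider):
        spider.log("opened spider %s" % spider.name)

    def spider_closed(self, spider):
        spider.log("closed spider %s" % spider.name)

It would then be enabled from settings.py through the EXTENSIONS setting, e.g. EXTENSIONS = {'myproject.extensions.SpiderOpenCloseLogging': 500}, together with MYEXT_ENABLED = True.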
http://doc.scrapy.org/en/0.18/topics/extensions.html
CC-MAIN-2019-39
en
refinedweb
Member since 10-22-2015 241 86 Kudos Received 20 Solutions 09-06-2016 06:05 PM 09-06-2016 06:05 PM For HDP 2.3 (Apache 1.1.2), ./hbase-common/src/main/java/org/apache/hadoop/hbase/HBaseConfiguration.java calls HeapMemorySizeUtil.checkForClusterFreeMemoryLimit(conf); There is no HBaseConfiguration.checkForClusterFreeMemoryLimit Can you double check your classpath to see which hbase related jars are present. Please pastebin those jars Thanks ... View more 09-06-2016 04:41 PM 09-06-2016 04:41 PM 09-06-2016 04:39 PM 09-06-2016 04:39 PM Which user did you use to run the code ? What's the output of the following command ? klist -kt /etc/security/keytabs/hbase.service.keytab Normally hbase.service.keytab should be used by user 'hbase'. Please illustrate your use case in more detail. Please take a look at hbase-common/src/main/java/org/apache/hadoop/hbase/AuthUtil.java ... View more 09-01-2016 05:55 PM 09-01-2016 05:55 PM 09-01-2016 04:03 PM 09-01-2016 04:03 PM I checked zookeeper.znode.parent which shows /hbase-secure Are you able to browse master UI ? If so, try using hbase shell next. You may want to check the following line to make sure that path to hbase-site.xml is correct: String hbaseSitepath ="hbase-site.xml"; ... View more 09-01-2016 03:47 PM 09-01-2016 03:47 PM Can you attach the master log so that we can better help you ? You can add hbase.master.namespace.init.timeout to hbase-site.xml by using Ambari. Still, finding the root cause is desirable. ... View more @dengke li Can you attach master log ? You should find it under /grid/0/log/hbase/ Consider increasing hbase.master.namespace.init.timeout Default is 300000 ms ... View more Which version of hbase do you deploy ? thrift version used by hbase is likely 0.9.3 Does Hue use compatible thrift version ? Can you post the whole stack trace ? Thanks ... View more 08-30-2016 03:52 PM 08-30-2016 03:52 PM NoRouteToHostException occurred in both region server and node manager logs. Please check network connectivity. ... View more 08-30-2016 02:53 PM 08-30-2016 02:53 PM w.r.t. AWS instance check failure, have you looked at ... View more Zack: Can you check other regions which failed to open (such as a97029c18889b3b3168d11f910ef04ae ) ? ... View more Zack: You can use hfile tool to inspect: MY_BROKEN_TABLE/8a444fa1979524e97eb002ce8aa2d7aa/0/4f9a5c26ddb0413aa4eb64a869ab4a2c ... View more 08-29-2016 01:53 AM 08-29-2016 01:53 AM 'hbase.master.logcleaner.plugins' is not overridden in hbase-site.xml Need to investigate further. ... View more 08-26-2016 10:12 PM 08-26-2016 10:12 PM Snippet from master log: Here is related code: String plugins = conf.get(HBASE_MASTER_LOGCLEANER_PLUGINS); Looks like the value for "hbase.master.logcleaner.plugins" was null - default should be org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner Can you attach hbase-site.xml ? Thanks ... View more 08-26-2016 12:16 PM 08-26-2016 12:16 PM @Christophe: Are you able to access hbase through hbase shell ? As Ankit said, check master log. ... View more 08-22-2016 03:48 PM 08-22-2016 03:48 PM 08-22-2016 02:59 PM 08-22-2016 02:59 PM Here is the code in HBaseSink.java: if (r instanceof Put) { ((Put) r).setWriteToWAL(enableWal); Here is code in Put.java of HDP 2.3: public Put setWriteToWAL(boolean write) { return (Put) super.setWriteToWAL(write); In pom.xml of Flume in HDP 2.3: <hbaseversion>0.94.2</hbaseversion> Planning to raise an internal JIRA for this incompatibility. ... 
View more 08-22-2016 02:54 PM 08-22-2016 02:54 PM 08-17-2016 09:45 PM 08-17-2016 09:45 PM Can you provide more information (attaching region server log) ? Load balancer wouldn't create new region. ... View more Please take a look at: There has not been much activity lately. ... View more Please take a look at: You can give write permission to this one user: ... View more 08-11-2016 08:04 PM 08-11-2016 08:04 PM bq. Have column families for timestamp Each KeyValue would have its own timestamp. Can you clarify the schema some more ? ... View more 08-11-2016 06:23 PM 08-11-2016 06:23 PM 08-11-2016 06:04 PM 08-11-2016 06:04 PM Did you mean making certain table read only ? How would initial data be loaded to the underlying table ? Can you describe your use case in more detail ? Potentially you can write custom coprocessor which rejects writes to the underlying table after data is loaded. ... View more 08-11-2016 02:00 PM 08-11-2016 02:00 PM You bring the region online, hbase> assign 'REGIONNAME' or hbase> assign 'ENCODED_REGIONNAME' then issue: hbase(main):017:0> scan 'test', {RAW=>true, VERSIONS=>1000} ... View more 08-07-2016 11:45 PM 08-07-2016 11:45 PM Can you list what you saw under /hbase-unsecure ? Please double check hbase-site.xml to see that you're using the correct values in your Java program. Can you inspect master / region server logs to see if there is anything interesting. Pastebin log snippet if you need help. ... View more I assume VM running hbase is on the same machine as the client. Have you checked that port 2181 is not blocked by firewall ? ... View more
https://community.cloudera.com/t5/user/v2/viewprofilepage/user-id/38211/page/3
CC-MAIN-2019-39
en
refinedweb
These are chat archives for FreeCodeCamp/DataScience discussion on how we can use statistical methods to measure and improve the efficacy of import pandas as pd import numpy as np import statsmodels.api as sm data= np.loadtxt('bostontrain.csv',delimiter=',') X=data[:,0:13] Y=data[:,13] X = sm.add_constant(X) est = sm.OLS(Y, X).fit() test_data=np.loadtxt('bostontest.csv',delimiter=',') test_predict=est.predict(test_data) np.savetxt('anuraglahon.csv',test_predict,fmt='%1.5f') Trying to improve the model I was working for analysing text. The previous method, counting words, worked fine for text of about 300 words or less, but it was not that easy when working with more words. For your info, the method is not mine: it is based on concepts by Luhn. Now I am combining the Luhn method with a entity-centric one. I am also displaying in browser and terminal. Still a work in progress and messy as always. But hope better than just reading through the full text. sklearn.model_selection.train_test_split. All training and test sets are just sampling and splitting. It's been a while since I've done Python, but just choose how you wanna split all your data and select the rows based on the indices you've chosen. erictleung sends brownie points to @evaristoc :sparkles: :thumbsup: :sparkles: @erictleung thanks man! @GoldbergData probably adding something like this: I would introduce a word selection option (probably a tick option) to mark a fixed number of characters (before and after the selected keyword present in text) while making the rest of the text less visible (opacity). evaristoc sends brownie points to @erictleung and @goldbergdata :sparkles: :thumbsup: :sparkles: If I get fancy, I could include a choice to fill in a Google spreadsheet with sentences I am interested in. Anyway... My main goal is to progress on this so I won't work on any procedure if it takes time. Not really the core of the project right now. def error(x, y, initial_thetta): error_sum = 0 m, n = x.shape for i in range(1, m): error_sum = (initial_thetta.T[i] * x[i] - y[i] ** 2) return (error_sum) / (2 * m) def error(x, y, initial_thetta): error_sum = 0 m, n = x.shape for i in range(1, m): error_sum += (initial_thetta.T[i] * x[i] - y[i]) ** 2 return (error_sum) / (2 * m) Initial erorr is [1.73828125e-01 1.15221354e+01 7.75501562e+03 2.54704036e+03 3.29631510e+02 9.78108268e+03 5.30021348e+02 1.48438496e-01 6.07216146e+02] @Sprinting interesting... I haven't competed that much to be honest. I would probably check the data though. It is a topic I used to analyse a lot before. @bigyankarki numpy? I think there are ways to skip the loop. model value[i] - average value? (eg.,) You can also check the following: y) is to its estimated point ( f(x)) as a way to analyse the model fitness). It is expected that the more data points you add to the sum the larger the error. However, the shape of your dataset and your procedure are not clear. I guess m are your "examples"? Hope this helps.
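Following up on the "skip the loop" suggestion above, a vectorized version of that cost function could look like this; it assumes x already includes the constant column and that theta and y are 1-D numpy arrays of matching shape:

import numpy as np

def cost(x, y, theta):
    # J(theta) = 1 / (2m) * sum((x . theta - y) ** 2)
    m = x.shape[0]
    residuals = x.dot(theta) - y
    return residuals.dot(residuals) / (2 * m)

With the boston data loaded as in the snippet at the top of this page, cost(X, Y, np.zeros(X.shape[1])) gives the initial error as a single scalar instead of an array.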
https://gitter.im/FreeCodeCamp/DataScience/archives/2018/03/07?at=5a9ff859e4d1c63604d1ae3a
CC-MAIN-2019-39
en
refinedweb
On some particularly heavy sites, the user needs to see a visual cue temporarily to indicate that resources and assets are still loading before they taking in a finished site. There are different kinds of approaches to solving for this kind of UX, from spinners to skeleton screens. If we are using an out-of-the-box solution that provides us the current progress, like preloader package by Jam3 does, building a loading indicator becomes easier. For this, we will make a ring/circle, style it, animate given a progress, and then wrap it in a component for development use. Step 1: Let's make an SVG ring From the many ways available to draw a circle using just HTML and CSS, I'm choosing SVG since it's possible to configure and style through attributes while preserving its resolution in all screens. <svg class="progress-ring" height="120" width="120" > <circle class="progress-ring__circle" stroke- </svg> Inside an <svg> element we place a <circle> tag, where we declare the radius of the ring with the r attribute, its position from the center in the SVG viewBox with cx and cy and the width of the circle stroke. You might have noticed the radius is 58 and not 60 which would seem correct. We need to subtract the stroke or the circle will overflow the SVG wrapper. radius = (width / 2) - (strokeWidth * 2) These means that if we increase the stroke to 4, then the radius should be 52. 52 = (120 / 2) - (4 * 2) So it looks like a ring we need to set its fill to transparent and choose a stroke color for the circle. See the Pen SVG ring by Jeremias Menichelli (@jeremenichelli) on CodePen. Step 2: Adding the stroke The next step is to animate the length of the outer line of our ring to simulate visual progress. We are going to use two CSS properties that you might not have heard of before since they are exclusive to SVG elements, stroke-dasharray and stroke-dashoffset. stroke-dasharray This property is like border-style: dashed but it lets you define the width of the dashes and the gap between them. .progress-ring__circle { stroke-dasharray: 10 20; } With those values, our ring will have 10px dashes separated by 20px. See the Pen Dashed SVG ring by Jeremias Menichelli (@jeremenichelli) on CodePen. stroke-dashoffset The second one allows you to move the starting point of this dash-gap sequence along the path of the SVG element. Now, imagine if we passed the circle's circumference to both stroke-dasharray values. Our shape would have one long dash occupying the whole length and a gap of the same length which wouldn't be visible. This will cause no change initially, but if we also set to the stroke-dashoffset the same length, then the long dash will move all the way and reveal the gap. Decreasing stroke-dasharray would start to reveal our shape. A few years ago, Jake Archibald explained this technique in this article, which also has a live example that will help you understand it better. You should go read his tutorial. The circumference What we need now is that length which can be calculated with the radius and this simple trigonometric formula. circumference = radius * 2 * PI Since we know 52 is the radius of our ring: 326.7256 ~= 52 * 2 * PI We could also get this value by JavaScript if we want: const circle = document.querySelector('.progress-ring__circle'); const radius = circle.r.baseVal.value; const circumference = radius * 2 * Math.PI; This way we can later assign styles to our circle element. 
circle.style.strokeDasharray = `${circumference} ${circumference}`; circle.style.strokeDashoffset = circumference; Step 3: Progress to offset With this little trick, we know that assigning the circumference value to stroke-dashoffset will reflect the status of zero progress and the 0 value will indicate progress is complete. Therefore, as the progress grows we need to reduce the offset like this: function setProgress(percent) { const offset = circumference - percent / 100 * circumference; circle.style.strokeDashoffset = offset; } By transitioning the property, we will get the animation feel: .progress-ring__circle { transition: stroke-dashoffset 0.35s; } One particular thing about stroke-dashoffset: its starting point is vertically centered and horizontally titled to the right. It's necessary to negatively rotate the circle to get the desired effect. .progress-ring__circle { transition: stroke-dashoffset 0.35s; transform: rotate(-90deg); transform-origin: 50% 50%, } Putting all of this together will give us something like this. See the Pen vegymB by Jeremias Menichelli (@jeremenichelli) on CodePen. A numeric input was added in this example to help you test the animation. For this to be easily coupled inside your application it would be best to encapsulate the solution in a component. As a web component Now that we have the logic, the styles, and the HTML for our loading ring we can port it easily to any technology or framework. First, let's use web components. class ProgressRing extends HTMLElement {...} window.customElements.define('progress-ring', ProgressRing); This is the standard declaration of a custom element, extending the native HTMLElement class, which can be configured by attributes. <progress-ring</progress-ring> Inside the constructor of the element, we will create a shadow root to encapsulate the styles and its template. constructor() { super(); // get config from attributes const stroke = this.getAttribute('stroke'); const radius = this.getAttribute('radius'); const normalizedRadius = radius - stroke * 2; this._circumference = normalizedRadius * 2 * Math.PI; // create shadow dom root this._root = this.attachShadow({mode: 'open'}); this._root.innerHTML = ` <svg height="${radius * 2}" width="${radius * 2}" > <circle stroke="white" stroke- </svg> <style> circle { transition: stroke-dashoffset 0.35s; transform: rotate(-90deg); transform-origin: 50% 50%; } </style> `; } You may have noticed that we have not hardcoded the values into our SVG, instead we are getting them from the attributes passed to the element. Also, we are calculating the circumference of the ring and setting stroke-dasharray and stroke-dashoffset ahead of time. The next thing is to observe the progress attribute and modify the circle styles. setProgress(percent) { const offset = this._circumference - (percent / 100 * this._circumference); const circle = this._root.querySelector('circle'); circle.style.strokeDashoffset = offset; } static get observedAttributes() { return [ 'progress' ]; } attributeChangedCallback(name, oldValue, newValue) { if (name === 'progress') { this.setProgress(newValue); } } Here setProgress becomes a class method that will be called when the progress attribute is changed. The observedAttributes are defined by a static getter which will trigger attributeChangeCallback when, in this case, progress is modified. See the Pen ProgressRing web component by Jeremias Menichelli (@jeremenichelli) on CodePen. This Pen only works in Chrome at the time of this writing. 
An interval was added to simulate the progress change. As a Vue component Web components are great. That said, some of the available libraries and frameworks, like Vue.js, can do quite a bit of the heavy-lifting. To start, we need to define the view component. const ProgressRing = Vue.component('progress-ring', {}); Writing a single file component is also possible and probably cleaner but we are adopting the factory syntax to match the final code demo. We will define the attributes as props and the calculations as data. const ProgressRing = Vue.component('progress-ring', { props: { radius: Number, progress: Number, stroke: Number }, data() { const normalizedRadius = this.radius - this.stroke * 2; const circumference = normalizedRadius * 2 * Math.PI; return { normalizedRadius, circumference }; } }); Since computed properties are supported out-of-the-box in Vue we can use it to calculate the value of stroke-dashoffset. computed: { strokeDashoffset() { return this._circumference - percent / 100 * this._circumference; } } Next, we add our SVG as a template. Notice that the easy part here is that Vue provides us with bindings, bringing JavaScript expressions inside attributes and styles. template: ` <svg : <circle stroke="white" fill="transparent" : </svg> ` When we update the progress prop of the element in our app, Vue takes care of computing the changes and update the element styles. See the Pen Vue ProgressRing component by Jeremias Menichelli (@jeremenichelli) on CodePen. Note: An interval was added to simulate the progress change. We do that in the next example as well. As a React component In a similar way to Vue.js, React helps us handle all the configuration and computed values thanks to props and JSX notation. First, we obtain some data from props passed down. class ProgressRing extends React.Component { constructor(props) { super(props); const { radius, stroke } = this.props; this.normalizedRadius = radius - stroke * 2; this.circumference = this.normalizedRadius * 2 * Math.PI; } } Our template is the return value of the component's render function where we use the progress prop to calculate the stroke-dashoffset value. render() { const { radius, stroke, progress } = this.props; const strokeDashoffset = this.circumference - progress / 100 * this.circumference; return ( <svg height={radius * 2} width={radius * 2} > <circle stroke="white" fill="transparent" strokeWidth={ stroke } strokeDasharray={ this.circumference + ' ' + this.circumference } style={ { strokeDashoffset } } stroke-width={ stroke } r={ this.normalizedRadius } cx={ radius } cy={ radius } /> </svg> ); } A change in the progress prop will trigger a new render cycle recalculating the strokeDashoffset variable. See the Pen React ProgressRing component by Jeremias Menichelli (@jeremenichelli) on CodePen. Wrap up The recipe for this solution is based on SVG shapes and styles, CSS transitions and a little of JavaScript to compute special attributes to simulate the drawing circumference. Once we separate this little piece, we can port it to any modern library or framework and include it in our app, in this article we explored web components, Vue, and React..
https://css-tricks.com/building-progress-ring-quickly/
CC-MAIN-2019-39
en
refinedweb
Or being able to An update — September 2017 A week or so ago, some students applied this concept to the idea of typosqatting (registering malicious packages with names similar to popular libraries). By getting a university to issue a security notice, they generated some interest, and finally resulted in some changes to pypi/warehouse to address these issues. I decided to take another look at the download figures for my packages, and see what damage my malicious alter-ego could have wreaked. Across the 12 system module packages I’m hosting, I’m getting on average 1.5 thousand downloads per day, via pip. This adds up to 491,292 downloads so far this year. I’m hoping to hit 500k downloads before my packages are deleted! By package, the download ratios pretty much match the numbers from May: There’s a plan to delete my fake packages now that restrictions have been added to prevent this sort of attack, but it was fun while it lasted! Intro At a London python dojo in October last year, we discovered that PyPi allows packages to be registered with builtin module names. So what? you might ask. Who would pip install a system package? Well the story goes something like this: - An inexperienced Python developer/deployer realises they need X functionality - Googling/asking around, they find out that to install packages, people use pip - Developer happily types in e.g. pip install sys - Baddie has registered the syspip module, and included a malicious payload - Developer is now pwned by malitious package, but import sysin python works, and imports a functional sys module, so nobody notices. When we discovered this, I was pretty interested in how plausible this was as an attack vector, so did a few things: - Proactively registered all the common system module names that I could think of, as packages - Uploaded an empty package to each of them that does nothing other than immediately traceback: raise RuntimeError("Package 'json' must not be downloaded from pypi") Why upload anything? It’s perfectly possible to squat on a pypi package and not upload any files. But by adding an empty package, I could track the downloads from the pypi download stats. Pypi upload their access logs (sans identifying information) to google big query, which is pretty awesome, and allows us to get a good idea of how many systems each package ends up on. How effective is this attack vector? Big query says that so far this year (19th May 2017), my dummy packages have been download ~244k times, lucky they’re benign huh, otherwise that’s 1/4 million infected machines! Some of the downloads will be people using custom scrapers, others may be automated build jobs, running over and over, but I used some tactics to gauge the quality of this data: - pypi download logs include a column installer.namethis seems equivalent to an HTTP user agent string, by only selecting rows where the installer.name is pip, we’re more likely to be counting actual installs, rather than scrapers, or other bots - Another column: system.releasetracks very high-level system version information (for example 4.1.13–18.26.amzn1.x86_64) By including this in the counts, we can see that lots of different types of setups are downloading these packages, suggesting it’s not just a few bots scraping the site. 3.1k different system versions have downloaded my packages this year, compared with 33k total unique versions across the whole of pypi The query I used is here: What now? 
I never actually received a reply to my email, so a while later, I raised an issue on the official pypi github issue tracker in January. This also got no reply. I’m currently squatting all the system package names that seem most at risk, and doing so with benign packages, so I don’t see much of a risk of disclosing this now.
https://hackernoon.com/building-a-botnet-on-pypi-be1ad280b8d6
CC-MAIN-2019-47
en
refinedweb
table of contents - buster 4.16-2 - buster-backports 5.02-1~bpo10+1 - testing 5.03-1 - unstable 5.03-1 NAME¶posix_openpt - open a pseudoterminal device SYNOPSIS¶ #include <stdlib.h> #include <fcntl.h> int posix_openpt(int flags); Feature Test Macro Requirements for glibc (see feature_test_macros(7)): posix_openpt(): _XOPEN_SOURCE >= 600 DESCRIPTION¶The¶On success, posix_openpt() returns a nonnegative file descriptor which is the lowest numbered unused file descriptor. On failure, -1 is returned, and errno is set to indicate the error. ERRORS¶See open(2). VERSIONS¶Glibc support for posix_openpt() has been provided since version 2.2.1. ATTRIBUTES¶For an explanation of the terms used in this section, see attributes(7). CONFORMING TO¶POSIX.1-2001, POSIX.1-2008. posix_openpt() is part of the UNIX 98 pseudoterminal support (see pts(4)). NOTES¶Some older.
https://manpages.debian.org/buster-backports/manpages-dev/posix_openpt.3.en.html
CC-MAIN-2019-47
en
refinedweb
Hi there. I have a working search input bar and it filters correctly based on value. My problem is when i go to other page and come back to the page where the search input bar is there, the last (history) search value is still there. Example: I search word "Work", then it search. When i leave the page and comeback the word "Work" is still there. It should be blank again. How to clear it once i leave the page automatically? Thank you Hi, Instead of clearing it once you leave, you can clear the search text box when the page is ready: Good Luck, Mustafa Hi, Can you specify the page name where it happens? Thank you for the reply. Appreciated! I think its not a good option to clear the search bar once its load because my code is the reverse of that. My home page and my tour page has both search bar. I pushed my home page search bar to link to tour page search bar using wix location. So what ever i typed on my home search bar the value will be the same on my tour page search bar. Now i used onReady function to run the filter (search) once the page load. It works actually. The only problem is it leaves the last value unless if i used home search bar again (empty). Here are my current codes HOME SEARCH BAR CODE export function searchBar_keyPress(event, $w) { if (event.key === "Enter") { let word = $w("#searchBar").value; local.setItem("searchWord", word); wixLocation.to('/JapanTours'); } } TOUR PAGE SEARCH BAR CODE $w.onReady(function () { var sameWord = local.getItem("searchWord"); $w("#iTitle").value = sameWord; $w("#dataset1").onReady(function () { filter($w('#iTitle').value, lastFilterCategory); }) }); *this both page using import {local} from 'wix-storage'; import wixData from 'wix-data'; Am i missing something? However, it's really not a big deal. Its not just good look when someone visit the tour page directly with a value already. Thank you very much for the big help. Did you get a solution you can share? Clear search bar input when you leave the page.
https://www.wix.com/corvid/forum/community-discussion/search-input-bar-delete-the-value-history
CC-MAIN-2019-47
en
refinedweb
Provided by: alliance_5.0-20110203-4_amd64

NAME
savelofig - save a logical figure on disk

SYNOPSIS
#include "mlu.h"
void savelofig(ptfig)
lofig_list ∗ptfig;

PARAMETER
ptfig Pointer to the lofig to be written on disk

DESCRIPTION
savelofig writes on disk the contents of the figure pointed to by ptfig. All the figure lists are run through, and the appropriate objects written, independently of the figure mode. The savelofig function in fact performs a call to a driver, chosen by the MBK_OUT_LO(1) environment variable. The directory in which the file is to be written is the one set by MBK_WORK_LIB(1). See MBK_OUT_LO(1), MBK_WORK_LIB(1) and mbkenv(3) for details.

ERRORS
"∗∗∗ mbk error ∗∗∗ not supported logical output format 'xxx'"
The environment variable MBK_OUT_LO is not set to a legal logical format.
"∗∗∗ mbk error ∗∗∗ savelofig : could not open file figname.ext"
Either the directory or the file is write protected, so it is not possible to open figname.ext, where ext is the file format extension, for writing.

EXAMPLE
#include "mlu.h"
void save_na2_y()
{
    savelofig(getlofig("na2_y"));
}

SEE ALSO
mbk(1), mbkenv(3), lofig(3), addlofig(3), getlofig(3), dellofig(3), loadlofig(3), flattenlofig(3), rflattenlofig(3), MBK_OUT_LO(1), MBK_WORK_LIB(1).
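For example (this is not part of the original manual page, and the chosen format value is only illustrative), the output driver and the work directory are usually selected through the environment before running a program that calls savelofig:

export MBK_OUT_LO=vst    # write logical figures in structural VHDL format
export MBK_WORK_LIB=.    # write them into the current directory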
http://manpages.ubuntu.com/manpages/precise/man3/savelofig.3.html
CC-MAIN-2019-47
en
refinedweb
RegisterAttribute Class

Definition

Used to register a class to the Objective-C runtime.

C#:
[System.AttributeUsage(System.AttributeTargets.Class)]
public sealed class RegisterAttribute : Attribute

F#:
type RegisterAttribute = class
    inherit Attribute

Inheritance: Attribute → RegisterAttribute
Attributes: AttributeUsageAttribute

Remarks

While all classes derived from the NSObject class are exposed to the Objective-C world, in some cases you might want to expose the class using a different name to the runtime. In addition, if you want your classes to be available on the iOS designer, you need to annotate those classes with the Register attribute.

// This class is never surfaced to Objective-C's runtime
class NotSurfaced {}

// This class is automatically surfaced to Objective-C's
// runtime, since it is a subclass of NSObject, and it
// is visible by the Objective-C runtime as "AutomaticallySurfaced"
class AutomaticallySurfaced : NSObject {}

// This class is surfaced to the Objective-C runtime with
// the name 'MyName'
[Register ("MyName")]
class InternalName : NSObject {}

// This UIView is surfaced to the iOS designer.
[Register]
public class MyView : UIView
{
    public MyView (IntPtr handle) : base (handle) {}
    public MyView (CGRect rect) : base (rect) {}
}
https://docs.microsoft.com/en-us/dotnet/api/foundation.registerattribute?view=xamarin-mac-sdk-14
CC-MAIN-2019-47
en
refinedweb
using System;
using System.Collections.Generic;

class C
{
    string ss = "aa";

    public IEnumerable<int> GetIter ()
    {
        yield return 1;
        yield return 2; // Set a breakpoint here
        yield return 3;
    }

    public static void Main()
    {
        foreach (var item in new C ().GetIter ()) {
        }
    }
}

Then go to the watch window and add `ss'.

Error: Unknown identifier: ss

If you add Console.WriteLine (ss); to the top of GetIter(), it works. This suggests to me that it's a runtime or compiler issue. I would guess that mcs isn't emitting a "this" reference on the closure class. It should always do that for debug code, even if it isn't used.

Good point Michael. Although I don't think it's worth always adding the `this' reference, because it can significantly alter generated code and you will still have to handle cases which don't follow this assumption. VS prints "An object reference is required for the non-static field, method, or property 'C.ss'" when the `this' proxy is not available. We could probably show a better error message, but I'd go with an error message instead of adding `this' everywhere (it's quite tricky for anonymous methods).

the error message is fixed in git master

IIRC csc always generates the "this" reference for debug code, but I could be wrong.

Yes, but only for iterators, not for other kinds of lifted blocks like anonymous methods or async blocks.

MD now prints the message even if the name does not exist at all

fixed
https://xamarin.github.io/bugzilla-archives/45/4527/bug.html
CC-MAIN-2019-47
en
refinedweb
A proxy view model for JSON data sources. More...

JsonListModel allows you to transform your JSON data into a QML ListModel for usage with e.g. an AppListView.

With Felgo, you develop apps using QML and JavaScript. This makes it easy to work with JSON data and REST APIs, as you can see with the examples of the Access a REST Service topic. But JSON objects are a variant data type. This means that there's no defined data structure that default UI components can expect. When you use this variant type as your data source for e.g. an AppListView, the list view is not able to fully utilize all its features. For example, if individual data properties or list entries in your JSON data are modified, the list view cannot track these changes. Thus, each modification of the data requires recreating and redrawing the list from scratch. Of course this is not ideal in terms of performance and user experience. In addition, advanced features like list sections or transition animations are not available.

The JsonListModel solves all these issues. It integrates QSyncable, a ListModel implementation by Ben Lau. You can also find the full project on GitHub.

The JsonListModel holds a local copy of the specified JSON data. Whenever the source JSON data changes, it is compared to the local copy of the model. After diffing the two sets of data, the JsonListModel applies all detected changes individually. It thus synchronizes the local copy with the JSON data step-by-step.

The JsonListModel type implements the full QML ListModel API and fires individual events for all changes in the data. The list view can thus only update relevant entries or apply transition animations. This is super useful, as you can e.g. fetch new data and simply replace the old JSON. The JsonListModel will detect all changes, and the ListView updates its items accordingly - without a full redraw.

The JsonListModel is backed by a performant QSyncable model in Qt C++ and exposed to QML for easy usage. With the JsonListModel you do not need to implement a custom model in C++ anymore. The JsonListModel itself is your C++ model, which is fully usable from QML and can work with JSON list items of any format. Apart from list views, the model also supports the GridView and Repeater types to display model data.

For a simple example project that uses JsonListModel to show data from a REST API, you can see the Basic Demo App. The advanced Todo List Demo App supports fetching of todos, creation and storing of drafts, offline caching, paging, sorting and filtering. It uses list transition animations and shows how to integrate PullToRefreshHandler, VisibilityRefreshHandler or SortFilterProxyModel.

The following example shows how to use JsonListModel together with AppListView. When adding a new item to the JSON, the JsonListModel detects the change. The AppListView can thus use a transition animation when adding the entry. The list does not have to be fully redrawn, and existing items in the view are not affected.
import Felgo 3.0
import QtQuick 2.0

App {
  Page {
    id: page

    // property with json data
    property var jsonData: [
      { "id": 1, "title": "Entry 1" },
      { "id": 2, "title": "Entry 2" },
      { "id": 3, "title": "Entry 3" }
    ]

    // list model for json data
    JsonListModel {
      id: jsonModel
      source: page.jsonData
      keyField: "id"
    }

    // list view
    AppListView {
      anchors.fill: parent
      model: jsonModel
      delegate: SimpleRow {
        text: model.title
      }

      // transition animation for adding items
      add: Transition {
        NumberAnimation {
          property: "opacity"; from: 0; to: 1; duration: 1000
          easing.type: Easing.OutQuad;
        }
      }
    }

    // Button to add a new entry
    AppButton {
      anchors.horizontalCenter: parent.horizontalCenter
      anchors.bottom: parent.bottom
      text: "Add Entry"
      onClicked: {
        var newItem = {
          "id": jsonModel.count + 1,
          "title": "Entry "+(jsonModel.count + 1)
        }
        page.jsonData.push(newItem)

        // manually emit signal that jsonData property changed
        // JsonListModel thus synchronizes the list with the new jsonData
        page.jsonDataChanged()
      }
    }
  } // Page
}

To learn about ListView features in general, you can find more examples here: Use ScrollViews and ListViews in Your App.

As the JsonListModel is a regular QML ListModel, it is compatible with SortFilterProxyModel and also supports list sections:

import Felgo 3.0
import QtQuick 2.0

App {
  Page {
    id: page

    // property with json data
    property var jsonData: [
      { "id": 1, "title": "Apple", "type": "Fruit" },
      { "id": 2, "title": "Ham", "type": "Meat" },
      { "id": 3, "title": "Bacon", "type": "Meat" },
      { "id": 4, "title": "Banana", "type": "Fruit" }
    ]

    // list model for json data
    JsonListModel {
      id: jsonModel
      source: page.jsonData
      keyField: "id"
      fields: ["id", "title", "type"]
    }

    // SortFilterProxyModel for sorting or filtering lists
    SortFilterProxyModel {
      id: sortedModel
      // Note: when using JsonListModel, the sorters or filter might not be applied correctly when directly assigning sourceModel
      // use the Component.onCompleted handler instead to initialize SortFilterProxyModel
      Component.onCompleted: sourceModel = jsonModel
      sorters: StringSorter { id: typeSorter; roleName: "type"; ascendingOrder: true }
    }

    // list view
    AppListView {
      anchors.fill: parent
      model: sortedModel
      delegate: SimpleRow {
        text: model.title
      }
      section.property: "type"
      section.delegate: SimpleSection { }
    }

    // Button to change the sorting order
    AppButton {
      anchors.horizontalCenter: parent.horizontalCenter
      anchors.bottom: parent.bottom
      text: "Change Order"
      onClicked: typeSorter.ascendingOrder = !typeSorter.ascendingOrder
    }
  } // Page
}

To correctly have relevant model roles available for SortFilterProxyModel, make sure to specify all required JsonListModel::fields.

Property Documentation

count
Returns the number of list items.

fields
Defines the available fields of the model. If not set, the JsonListModel will use the first appended record as reference. You are not able to change the available fields once the model holds data using the given fields.

Example:

JsonListModel {
  keyField: "id"
  fields: [ "id", "value" ]
}

keyField
Sets the key field of the data source. The value in the key field should be unique for each entry of the list. If the key field is not set, JsonListModel won't be able to identify insertion, removal and moving of items.

source
JsonListModel is a wrapper for JavaScript arrays. Updates to the source property will trigger synchronization of JsonListModel. The list model then emits changes according to the difference between the current and new data set.
Example:

JsonListModel {
  keyField: "id"
  source: [
    { "id": "a", "value": 1 },
    { "id": "b", "value": 2 }
  ]
  fields: [ "id", "value" ]
}

Method Documentation

append(item)
Appends an item at the end of the list.

get(index)
Returns the item at the specified index in the model.

indexOf(field, value)
Gets the index of a record that matches the input value for a given field. If no matching record is found, the function returns -1.

insert(index, item)
Inserts an item at the given index position.

move(from, to, n)
Moves n items from one position in the model to another.

remove(index, count)
Deletes the content at index from the model. You can specify the number of items to be removed with the count argument.

setProperty(index, property, value)
Assigns the passed value to a given property of the list item positioned at index. Only the specified property will be set for the item.
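As a rough usage sketch (not taken from the original page; it assumes a JsonListModel with the id jsonModel and the "id"/"value" fields from the example above), the methods can be called from JavaScript like those of any other ListModel:

// illustrative only
jsonModel.append({ "id": "c", "value": 3 })   // add a record at the end
var item = jsonModel.get(0)                   // read the first record
var idx = jsonModel.indexOf("id", "c")        // find a record by field value
jsonModel.setProperty(idx, "value", 42)       // change a single field of that record
jsonModel.remove(idx, 1)                      // delete one record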
https://felgo.com/doc/felgo-jsonlistmodel/
CC-MAIN-2019-47
en
refinedweb
Chapter 23
Windows Services

Windows Services are programs that can be started automatically at boot time without the need for anyone to log on to the machine. In this chapter, you learn:

- The architecture of Windows Services, including the functionality of a service program, a service control program, and a service configuration program
- How to implement a Windows Service with the classes found in the System.ServiceProcess namespace
- Installation programs to configure the Windows Service in the registry
- How to write a program to control the Windows Service using the ServiceController class
- How to troubleshoot Windows Service programs
- How to react to power events from the operating system

The first section explains the architecture of Windows Services. You can download the code for this chapter from the Wrox Web site at.

What Is a Windows Service?

Windows Services are applications that can be automatically started when the operating system boots. They can run without having an interactive user logged on to the system and do some processing in the background. For example, on a Windows Server, system networking services should be accessible from the client without a user logging on to the server. On the client system, services are useful as well; for example, to get a new software version from the Internet or to do some file cleanup on the local disk. You can configure a Windows Service to be run from a specially configured user account or from the system user account — ...
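To give a rough idea of what the implementation side looks like (a minimal sketch, not one of the chapter's own examples; the class and service names are made up), a service class derives from ServiceBase in the System.ServiceProcess namespace and overrides the start and stop callbacks:

using System.ServiceProcess;

// Minimal Windows Service skeleton (illustrative).
public class SampleService : ServiceBase
{
    public SampleService()
    {
        ServiceName = "SampleService";
    }

    protected override void OnStart(string[] args)
    {
        // start the background work of the service here
    }

    protected override void OnStop()
    {
        // stop the background work and release resources here
    }

    public static void Main()
    {
        // hand control to the Service Control Manager
        ServiceBase.Run(new SampleService());
    }
}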
https://www.oreilly.com/library/view/professional-c-2008/9780470191378/xhtml/Chapter23.html
CC-MAIN-2019-47
en
refinedweb
Understanding Kubernetes API - Part 3 : Deployments

This is the third post in the series, which discusses the Deployments API. You can access all the posts in the series here.

Deployments API

Deployment is a Kubernetes abstraction that is responsible for running one or more replicas of a pod. Most of the time deployments are preferred over plain pods, as they provide more control over failures of a pod. The Deployments API is the part of the Kubernetes API which allows the user to run CRUD operations on deployments.

The curl examples below assume that the API server is reachable locally, for example through kubectl proxy on localhost:8001; adjust the host to your own setup.

List Deployments

We can list the deployments using a GET API call to /apis/apps/v1/namespaces/{namespace}/deployments. The below curl request lists all the deployments in the kube-system namespace.

curl --request GET \
  --url http://localhost:8001/apis/apps/v1/namespaces/kube-system/deployments

The output contains the same fields as pods. But the status field of a deployment has information about the number of replicas, as you can observe in the below output.

"status": {
  "observedGeneration": 1,
  "replicas": 1,
  "updatedReplicas": 1,
  "readyReplicas": 1,
  "availableReplicas": 1
}

Get Details about a Single Deployment

We can access the details of an individual deployment using /apis/apps/v1/namespaces/{namespace}/deployments/{deployment-name}. An example for the kube-dns deployment looks as below.

curl --request GET \
  --url http://localhost:8001/apis/apps/v1/namespaces/kube-system/deployments/kube-dns

Create Deployment

Creating a deployment requires defining the spec and metadata for the same. Usually a user writes a YAML file to define the spec. The below is a YAML definition for creating nginx.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80

The Kubernetes API accepts JSON rather than YAML. The respective JSON looks as below.

{
  "apiVersion": "apps/v1",
  "kind": "Deployment",
  "metadata": {
    "name": "nginx-deployment",
    "labels": { "app": "nginx" }
  },
  "spec": {
    "replicas": 1,
    "selector": { "matchLabels": { "app": "nginx" } },
    "template": {
      "metadata": { "labels": { "app": "nginx" } },
      "spec": {
        "containers": [
          {
            "name": "nginx",
            "image": "nginx",
            "ports": [ { "containerPort": 80 } ]
          }
        ]
      }
    }
  }
}

The user needs to send a POST API call to /apis/apps/v1/namespaces/default/deployments.

curl --request POST \
  --url http://localhost:8001/apis/apps/v1/namespaces/default/deployments \
  --header 'content-type: application/json' \
  --data '{"apiVersion":"apps/v1","kind":"Deployment","metadata":{"name":"nginx-deployment","labels":{"app":"nginx"}},"spec":{"replicas":1,"selector":{"matchLabels":{"app":"nginx"}},"template":{"metadata":{"labels":{"app":"nginx"}},"spec":{"containers":[{"name":"nginx","image":"nginx","ports":[{"containerPort":80}]}]}}}}'

Delete Deployment

We can delete an individual deployment using a DELETE API call to /apis/apps/v1/namespaces/default/deployments/nginx-deployment.

curl --request DELETE \
  --url http://localhost:8001/apis/apps/v1/namespaces/default/deployments/nginx-deployment
http://blog.madhukaraphatak.com/understanding-k8s-api-part-3/
CC-MAIN-2019-47
en
refinedweb
Boruvka’s algorithm for Minimum Spanning Tree in Python Hello coders, In this tutorial, we are going to study about Boruvka’s algorithm in Python. It is used to find the minimum spanning tree. First of all, let’s understand what is spanning tree, it means that all the vertices of the graph should be connected. It is known as a minimum spanning tree if these vertices are connected with the least weighted edges. For the connected graph, the minimum number of edges required is E-1 where E stands for the number of edges. This algorithm works similar to the prims and Kruskal algorithms. Borůvka’s algorithm in Python Otakar Boruvka developed this algorithm in 1926 to find MSTs. Algorithm - Take a connected, weighted, and undirected graph as an input. - Initialize the vertices as individual components. - Initialize an empty graph i.e MST. - Do the following for each of them, while the number of vertices is greater than one. a) Find the least weighted edge which connects this vertex to any other vertex. b) Add the least weighted edge to the MST if not exists already. - Return the minimum spanning tree. Source Code from collections import defaultdict class Graph: # These are the four small functions used in main Boruvkas function # It does union of two sets of x and y with the help of rank def union(self, parent, rank, x, y): xroot = self.find(parent, x) yroot = self.find(parent, y) if rank[xroot] < rank[yroot]: parent[xroot] = yroot elif rank[xroot] > rank[yroot]: parent[yroot] = xroot else : parent[yroot] = xroot #Make one as root and increment. rank[xroot] += 1 def __init__(self,vertices): self.V= vertices self.graph = [] # default dictionary # add an edge to the graph def addEdge(self,u,v,w): self.graph.append([u,v,w]) # find set of an element i def find(self, parent, i): if parent[i] == i: return i return self.find(parent, parent[i]) #*********************************************************************** #constructing MST def boruvkaMST(self): parent = []; rank = []; cheapest =[] numTrees = self.V MSTweight = 0 for node in range(self.V): parent.append(node) rank.append(0) cheapest =[-1] * self.V # Keep combining components (or sets) until all # compnentes are not combined into single MST while numTrees > 1: for i in range(len(self.graph)): u,v,w = self.graph[i] set1 = self.find(parent, u) set2 = self.find(parent ,v)): if cheapest[node] != -1: u,v,w = cheapest[node] set1 = self.find(parent, u) set2 = self.find(parent ,v) if set1 != set2 : MSTweight += w self.union(parent, rank, set1, set2) print ("Edge %d-%d has weight %d is included in MST" % (u,v,w)) numTrees = numTrees - 1 cheapest =[-1] * self.V print ("Weight of MST is %d" % MSTweight) g = Graph(4) g.addEdge(0, 1, 11) g.addEdge(0, 2, 5) g.addEdge(0, 3, 6) g.addEdge(1, 3, 10) g.boruvkaMST() Output: Edge 0-2 has weight 5 is included in MST Edge 1-3 has weight 10 is included in MST Edge 0-3 has weight 6 is included in MST Weight of MST is 21 Now, lets us understand using an example: Find the least weighted edge for each vertex that connects it to another vertex, for instance. Vertex Cheapest Edge thatconnects it to some other vertex {0} 0-1 {1} 0-1 {2} 2-8 {3} 2-3 {4} 3-4 {5} 5-6 {6} 6-7 {7} 6-7 {8} 2-8 The edges with green markings are least weighted. Component Cheapest Edge that connects it to some other component {0,1} 1-2 (or 0-7) {2,3,4,8} 2-5 {5,6,7} 2-5 Now, repeat the above steps several times, as a result, we will get the least weighted edges. 
After completing all the iterations we will get the final graph, i.e., minimum spanning tree MST. In conclusion, we have understood how to create an MST of a connected, weighted graph, and also it is not very hard to create a minimum spanning tree.
https://www.codespeedy.com/boruvkas-algorithm-for-minimum-spanning-tree-in-python/
CC-MAIN-2020-50
en
refinedweb
/*
 * XYBarChart.java
 *
 * Created on 8 luglio 2005, 17.59
 */

package it.businesslogic.ireport.chart;

/**
 *
 * @author Administrator
 */
public class XYBarChart extends Chart {

    /** Creates a new instance of XYBarChart */
    public XYBarChart() {

        setName("XY Bar");
        setChartImage(new javax.swing.ImageIcon(getClass().getResource("/it/businesslogic/ireport/icons/charts/xybar_big.png")).getImage());
        setDataset(new TimePeriodDataset());
        setPlot(new BarPlot());
    }

    public Chart cloneBaseChart()
    {
        return new XYBarChart();
    }

}
http://kickjava.com/src/it/businesslogic/ireport/chart/XYBarChart.java.htm
CC-MAIN-2020-50
en
refinedweb
std::ranges::cend

Returns a sentinel indicating the end of a const-qualified range.

Let CT be
- const std::remove_reference_t<T>& if the argument is an lvalue (i.e. T is an lvalue reference type),
- const T otherwise.

A call to ranges::cend is expression-equivalent to ranges::end(static_cast<CT&&>(t)).

If ranges::cend(e) is valid for an expression e, where decltype((e)) is T, then CT models std::ranges::range, and std::sentinel_for<S, I> is true in all cases, where S is decltype(ranges::cend(e)), and I is decltype(ranges::cbegin(e)).

The name ranges::cend denotes a customization point object, which is a const function object of a literal semiregular class type (denoted, for exposition purposes, as cend_ftor). All instances of cend_ftor are equal. Thus, ranges::cend can be copied freely and its copies can be used interchangeably.

Given a set of types Args..., if std::declval<Args>()... meet the requirements for arguments to ranges::cend above, cend_ftor will satisfy std::invocable<const cend_ftor&, Args...>. Otherwise, no function call operator of cend_ftor participates in overload resolution.

Example

#include <algorithm>
#include <iostream>
#include <ranges>
#include <vector>

int main()
{
    std::vector<int> v = { 3, 1, 4 };
    namespace ranges = std::ranges;
    if (ranges::find(v, 5) != ranges::cend(v)) {
        std::cout << "found a 5 in vector v!\n";
    }

    int a[] = { 5, 10, 15 };
    if (ranges::find(a, 5) != ranges::cend(a)) {
        std::cout << "found a 5 in array a!\n";
    }
}

Output:

found a 5 in array a!
https://en.cppreference.com/w/cpp/ranges/cend
CC-MAIN-2020-50
en
refinedweb