text
stringlengths
20
1.01M
url
stringlengths
14
1.25k
dump
stringlengths
9
15
lang
stringclasses
4 values
source
stringclasses
4 values
On 22 July 2014 15:30, Brian Wylie <briford.wylie at gmail.com> wrote: > Okay, the transformer approach worked amazingly well. It's a bit of a hack > the transformer simply adds a ',' to the beginning of lines where I'm > calling commands that need to be 'auto-quoted'.. but certainly speaks well > of the IPython design that my hack worked so quickly. :) Well, I'm glad it worked. How are you deciding which lines need that treatment? There are two bits of machinery transforming input in IPython. Input transformers handle things where you can tell just from looking at the line, like %magic and !shell commands. Then the prefilter machinery changes things that depend on what's in the current namespace, like autocall. Thomas -------------- next part -------------- An HTML attachment was scrubbed... URL: <>
https://mail.python.org/pipermail/ipython-dev/2014-July/013358.html
CC-MAIN-2022-33
en
refinedweb
Upgrading PostgreSQL 9.6 before EOL You need to plan for the end-of-life (EOL) of PostgresSQL 9.6, announced by Amazon. As an AWS environment user, you need to check the version of PostgreSQL you use for Cloudera Data Warehouse environments. If your database version is still PostgreSQL 9.6, you need to upgrade to PostgreSQL 11.12. You can initiate an upgrade of your database instance — either immediately or during your next maintenance window — to the Cloudera-recommended version of PostgreSQL 11.12 using the AWS Management Console or the AWS Command Line Interface. Follow the procedure below to perform the upgrade. The upgrade process shuts down the database instance, performs the upgrade, and restarts the database instance. The database instance may be restarted multiple times during the upgrade process. While major version upgrades typically complete within the standard maintenance window, the duration of the upgrade depends on the number of objects within the database. To estimate the time required, take a snapshot of your database and test the upgrade. - Go to the . - Check the version of PostgreSQL used in your environment, and if it is 9.6.6, go to the next step to start the upgrade process. - In CDP, go to your environment, and in a Database Catalog for that environment, create a Virtual Warehouse or use an existing one. - Run some basic queries in Hive or Impala to see if your PostgreSQL 9.6.6 is alive and well. show tables; use default; show tables; create table tbl1(col1 string, col2 string); show tables; describe tbl1; create table tbl2 as (select * from tbl1); describe tbl2; insert into table tbl1 values ("Hello", "World"); select * from tbl1; - Go to the . Select DB Engine version 10.16 to upgrade Amazon RDS to 10.16.Amazon prevents a direct upgrade to 11.x unless you are on PostgreSQL 9.6.20 or higher. - In CDP, rerun the basic queries in your Virtual Warehouse, and if all goes well, proceed to the next step. - Look for errors, such as those shown below, in the metastore log.Errors in the metastore might look something like this: <14>1 2021-07-31T00:52:38.223Z metastore-0.metastore-service.warehouse-1627669911-vl6x.svc.cluster.local metastore 1 0b245ec4-8419-4968-94bc-ee122960b1aa [mdc@18060 class="txn.TxnHandler" level="INFO" thread="pool-9-thread-200"] Non-retryable error in enqueueLockWithRetry(LockRequest(component:[LockComponent(type:SHARED_READ, level: DB, dbname:default, operationType:NO_TXN, isDynamicPartitionWrite:false)], txnid:79, user:hive, hostname:hiveserver2-0.hiveserver2-service.compute-1627670377-mbqx.svc. cluster.local, agentInfo:hive_20210731005238_4aa14bf0-4e46-448d-b6c7-cdc3ca4ec863, zeroWaitReadEnabled:false)) : Batch entry 0 INSERT INTO "HIVE_LOCKS" ( "HL_LOCK_EXT_ID", "HL_LOCK_INT_ID", "HL_TXNID", "HL_DB", "HL_TABLE", "HL_PARTITION", "HL_LOCK_STATE", "HL_LOCK_TYPE", "HL_LAST_HEARTBEAT", "HL_USER", "HL_HOST", "HL_AGENT_INFO") VALUES (4561258935320160815, 1, 79, 'default', NULL, NULL, 'w', 'r', 0, 'hive', 'hiveserver2-0.hiveserver2-service.compute-1627670377-mbqx.svc.cluster.local', 'hive_20210731005238_4aa14bf0-4e46-448d-b6c7-cdc3ca4ec863') was aborted: ERROR: index "hl_txnid_index" has wrong hash version^M Hint: Please REINDEX it. Call getNextException to see other errors in the batch. (SQLState=XX002, ErrorCode=0) - Connect to the HiveServer pod or metastore pod, and using psql, connect to the RDS instance.For example, psql -h env-kg.us-west-2.rds.amazonaws.com --u hive --d postgresLogin using the hostname from the AWS console. The postgres database password is stored in JCEKS file which is mounted using a secret volume inside the HiveServer pod or metastore pod. Note the namespace of pod and obtain the password. Validate the upgrade to PostgreSQL 10.16 You need to connect to the postgres 10.16 database, and validate the upgrade. - Connect to the PostgreSQL 10.16 database using the namespace of the pod and the password you obtained in the last procedure. - On the Postgres command line, look at all the databases.For example, type \l. - Fix any problematic indexes. Go to a database in your metastore named something likeFor example, go to a database in your metastore named warehouse-1629221321-pf2h-metastore. \c warehouse-1629221321-pf2h-metastore - Run the REINDEX command. REINDEX INDEX hl_txnid_index; - In CDP, rerun the basic queries in your Virtual Warehouse, and if all goes well, proceed to the next step. - Upgrade the Amazon RDS version of PostgreSQL to 11.12, and rerun the basic queries. Upgrade to PostgreSQL 11.12 - Go to the , and upgrade the Amazon RDS version to PostgreSQL 11.12. - Rerun the basic queries in your Virtual Warehouse to validate the upgrade.
https://docs.cloudera.com/data-warehouse/cloud/aws-environments/topics/dw-aws-upgrade-postgres.html
CC-MAIN-2022-33
en
refinedweb
Screensaver in JavaScript All of us know very well the screensavers in our operating systems. In this post, I'd like to show how to implement such functionality in our web application using Javascript. The animation I present is not very sophisticated and complicated, but it's a place to start implementing your own, more complex solution here. The code I present here is a part of my first npm package and it may be reused in your website. Class properties First, I defined a few class properties: interface BaseConfig { text?: string background?: string baseElement?: HTMLElement | Element backgroundImg?: string animationSpeed?: 'slow' | 'regular' | 'fast' customElement?: HTMLElement | Element | string, triggerTime?: number, } class JsScreensaver { private config: BaseConfig = baseConfig; private windowDimensions: IDimensions = {width : 0, height : 0}; private playAnimation: boolean = true; private screensaverElement: HTMLElement = document.body; private eventsList: string[] = ['keydown', 'mousemove']; private defaultScreensaver: string = ` <div class="screensaver__element-wrapper"> <div class="screensaver__element-content"> <p class="screensaver__element-text"></p> </div> </div> ` In the BaseConfig interface, I listed all options that may be passed into the screensaver configuration. Screensaver is initialized with the start() method. If there are no options passed as an argument, the baseConfig is loaded. start(config?: BaseConfig): void { this.config = {...baseConfig, ...config}; this.setActionsListeners(); } In the next step, listeners for the events are added. Screensaver will be turned on after the time defined (in milliseconds)in the triggerTime property. The default value is set to 2 seconds. For each of the events in the array (keyup and mousemove) the addEventListener is set, with a callback function that creates the screensaver container after a certain time. If the event is triggered, the timeout is cleared and the screensaver element is removed. private stopScreensaverListener() { this.eventsList.forEach(event => window.addEventListener(event, (e) => { e.preventDefault(); this.playAnimation = false; this.screensaverElement.remove(); })); } private setActionsListeners() { let mouseMoveTimer: ReturnType<typeof setTimeout>; this.eventsList.forEach(event => window.addEventListener(event, () => { clearTimeout(mouseMoveTimer); mouseMoveTimer = setTimeout(() => { this.createContainer(); }, this.config.triggerTime) })) } The stopScreensaverListener method is triggered from the createContainer. The latter creates a DOM element with appropriate classes and styling. The screensaver container and element (a rectangle in this case) are appended to the body as a default, but we can define any other container, passing it into a configuration in a baseElement property. Here, the animation is triggered. For now, I have only one animation available in this package. It's a simple one, just a rectangle bouncing around the screen with text inside. I want to extend this package by adding more predefined animations to it. In addition, the user should be able to define its own animation as well. But that's something that needs to be developed in the nearest future. Not, let's focus on the existing animation. I use the requestAnimationFrame API which I described in my previous post. In that post I showed the same animation. In this package, it's a little bit enhanced. private runAnimation(element: HTMLElement): void { this.playAnimation = true; element.style.position = 'absolute'; let positionX = this.windowDimensions.width / 2; let positionY = this.windowDimensions.height / 2; let movementX = this.config.animationSpeed ? speedOptions[this.config.animationSpeed] : speedOptions.regular; let movementY = this.config.animationSpeed ? speedOptions[this.config.animationSpeed] : speedOptions.regular; const animateElements = () => { positionY += movementY positionX += movementX if (positionY < 0 || positionY >= this.windowDimensions.height - element.offsetHeight) { movementY = -movementY; } if (positionX <= 0 || positionX >= this.windowDimensions.width - element.clientWidth) { movementX = -movementX; } element.style.top = positionY + 'px'; element.style.left = positionX + 'px'; if (this.playAnimation) { requestAnimationFrame(animateElements); } } requestAnimationFrame(animateElements) } The rectangle's start position is set to the center. That's calculated in the positionX and positionY variables. The movement represents the number of pixels that the object will move in every frame. Here I used the values from the configuration, letting the user set the speed of movement. In every frame, the position of the rectangle is checked, whether it's inside the container or if it hits the border of the container. If the breakpoint values are reached, the movement values are set to the opposite, which generates the motion in the opposite direction. Usage Usage of the screensaver is very simple. The whole class is exported: const classInstance = new JsScreensaver(); export { classInstance as JsScreensaver }; So you only have to import the class somewhere in your code with import { JsScreensaver } from "../js-screensaver"; And use the start() method with the configuration (or leave the config blank). JsScreensaver.start({ text: "Hello Screensaver", customElement: document.querySelector('.screen-saver'), triggerTime: 4000, animationSpeed: 'slow' }); The customElement property lets you create the screensaver from the HTML or component in your own project. So you can inject any customized element with styling that sits in your project. Conclusion That's the final result, the screensaver with a custom HTML, styling, text inside: I did not show every line of code in this post. The whole project is available here, so you can check every method and configuration. This package is very simple and not much customizable so far, but - it has potential ;-).
https://michalmuszynski.com/blog/screensaver-in-javascript/
CC-MAIN-2022-33
en
refinedweb
Here is a something cool you can add to your web site. You have a registration form that asks for a username. In order to save the user time and a trip to the server, you want to see if the username exists as they type it in. Let's look at how Spry could handle this. First, let me build a simple form: <form id="userform" action="null.html" method="post"> <h2>Register</h2> <table width="600" border="0"> <tr valign="top"> <td align="right" width="200">username (min 4 characters)</td> <td width="400"><input type="text" id="username" name="username" onKeyUp=> The only thing you want to pay attention to here are these two lines: <input type="text" id="username" name="username" onKeyUp="checkUsername()"> <span id="resultblock" class="error"></span> What I've done here is used a bit of JavaScript to execute when the form field is changed. (And I know onKeyUp has a few issues. Can folks recommend a more well rounded approach?) As the user types, it will call my function, checkUsername(): function checkUsername() { var uvalue = document.getElementById("username").value; if(uvalue.length < 4) { status(''); return; } Spry.Utils.loadURL("GET","userchecker.cfm?username="+encodeURIComponent(uvalue), false, usercheck); } In this code I grab the value of the form field. If the size is less than 4, I clear my result message and leave the function (the status function will be described in a bit). If we have enough characters, I then use loadURL, from the Spry.Utils package, to call a server side file. (Here is my earlier entry on loadURL.) I fire off the event and wait for the result. (At the end I'll talk about how to modify it to not wait.) Lastly, a function named usercheck will be called with the result. Let's take a look at that function: function usercheck(request) { var result = request.xhRequest.responseText; if(result == 0) status("Username not available!"); else status(''); } When the result returns from the request, I have an object that contains information returned in the result. In this case, my server side script will return either a 1 or a 0. 0 is the flag for the username not being available, so I use my status function to write that result. Here is the status function in case you are curious: function status(str) { var resultblock = document.getElementById("resultblock"); resultblock.innerHTML = str; } As you can see, it is just using a bit of DHTML to update the span block next to my form field. Last but not least, here is the ColdFusion code running behind the scenes. Obviously it is not hooked up to anything real: > You can test this out here, and be sure to view source on the HTML page for a more complete view. So I mentioned earlier that if I made my request not wait (asynch), then I'd have to modify things a bit. Because the user could keep on typing, I would need to return both the result and the username in my server side code. I'd then need to check and see if the username in the form field was still the same. I'll post a followup showing an asynch version later on. Edited on August 26: Rob Brooks-Bilson raises some very good points about the security of this. Please be sure you read the comments below. Archived Comments Nice. Works good. Can you explain this call: request.xhRequest.responseText Ray, you might be able to use Prototype's formObserver instead of onkeyup. If I wasn't on vacation typing this from my laptop lying on the hotel bed, I might give a rip at a sample for you, but at the very least I'll bookmark this for a look when I get back! Yeah - You keep raving about it and I can't wait to see it at the UG meeting. You know - something we should consider - maybe doing a joint preso on Breeze sometime from my office. If our meeting goes well "live", lets talk about it more. I bet my readers would love to see me do Spry again, _and_ see an alternative. BL: That is just the API I found in other examples. I don't believe it is properly documented it yet though. I see examples like this just-in-time lookup in AJAX libraries, but I'm not convinced. Even at my modest typing speed, the HTTP calls are so far behind me that I'm done and on to the next field before the first of 5 to 7 calls returns. For that reason, I'm still a big fan of the onBlur event. Nonetheless, thanks for the Spry example! I think the type ahead idea is cool, but I don't think I'd ever use it to show usernames for an application. That seems to me to be inviting trouble as you're essentially giving people (potentially the wrong people) one component of the login. It's a security issue. A person with less than the best of intentions could essentially hit the letter "a" and get a list of all your "a" users, etc. For "sensitive" sites such as customer portals, you may be giving away more information than that to competing customers, or competitors. You're blog reaches a lot of people, so I'd hate to see any of them adopt the type ahead technique for this exact purpose, knowing that it creates a security concern. Edward: As I mentioned, I was using a synch approach. This slows down the lookup. This is NOT optimal and should be switched to an asynch approach. There was another reason why I didn't do that, but that reason no longer applies so I'm going to post a follow up (because it shows another technique as well.) Rob: Rob, was that to me? I don't use the type ahead to reveal usernames, I just use it to reveal dupes. So you can't type A and see Andy, Andrew, etc. The code only fires when usernames are more than 3 characters and to find a dupe you would need to know the name of the dupe. You _could_ use it to get a list of usernames, but only a bit faster than you could w/o it. If I wanted to sniff usernames w/o AJAX, I'd fill the form out, click submit, and go Back and tweak the username. I _do_ see your point, but I don't believe this would help hackers very much. Now - one thing I should point - and I just realized this, the CFM file does NOT check the length, so it would be a security hole. It should also check the referer (which can be faked, but would stop most script kiddies). See what I get for skimming the post! My appologies Ray, you're right. I missed the part about pulling back just the dupes. I still don't think it's a good idea to show them if a username exists, though. Same for a bad login. I know some sites will tell you "username not found", but in my book, sites shouldn't give back any more information than "invalid login". Rob, this isn't a logon though, it is a registration (and if that isn't clear, it is my fault). However - all in all - I feel strongly enough about your comments that I'm going to ban you - no wait, I mean I'm going to edit the entry to make sure folks read your comments below. You know - if you have any doubt when it comes to security crap, it is probably almost always a good idea to lean towards the more secure option. (Entry will be edited in 5 mins.) Ok, I obviously shouldn't post this early in the morning on a Sat, and definitely not before a cup of coffee. Ray, just ignore my comments. Right after I posted the last one I realized you weren't checking on a login form, but rather an account sign-up form. Rob, absolutely not! If there is ANYTHING to be anal or 'sensitive' about, this is exactly it. I'm very happy you made these comments. I'm still going to ban you, but thanks. ;) Seriously though - folks may be so turned on by AJAX that they may forget security issues. This is something folks should definitely keep in mind. Is this supposed to be "Check if user exists"? it/if thanks for the examples. Yes. Was it not clear? just referring to the potential typo in the heading. Oops. Thanks. Back to security issues, doing this does open the door a bit for a hacker. Right now as this script stands, it's very easy for a hacker to start posting data to the userchecker.cfm template to harvest a list of valid usernames. It's always important to be things in place to monitor for suspicious looking traffic. I definitely agree that the CFM file, as stands, is too insecure. It should have a length check, like the front end. It should check REFERER, which can be faked, but it would be nice to check. I'm planning a followup to this article to show using XML results in Spry (again, non-dataset ones). In my followup, I'm going to change it to a simple string lookup type function. For example, you can imagine a CMS that demands you use unique page names. JUST TESTING YOUR COMMENTS VIA BLOGCFC FLEX I am not able to get the demo included here to work. I also tried to copy the code and test. Nothing happens - I even tried with a onBlur event - same results. Could some one tell me what I am doing wrong and point me to a working version? Thanks for all the great work. The code I tested with: REGISTER.CFM <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "..."> <html xmlns="" xmlns: <head> <meta http-<title>Spry Test</title> <script language="JavaScript" type="text/javascript" src="../SpryAssets/xpath.js"></script> <script language="JavaScript" type="text/javascript" src="../SpryAssets/SpryData.js"></script> <script language="JavaScript" type="text/javascript" src="../SpryAssets/SpryUtils.js"></script> <script language="JavaScript" type="text/javascript"> <!-- function checkUsername() { var uvalue = document.getElementById("username").value; if(uvalue.length < 4) { status(''); return; } Spry.Utils.loadURL("GET","registerChecker.cfm?username="+encodeURIComponent(uvalue), false, usercheck); } function usercheck(request) { var result = request.xhRequest.responseText; if(result == 0) status("Username not available!"); else status(''); } function status(str) { var resultblock = document.getElementById("resultblock"); resultblock.innerHTML = str; } --> </script> </head> <body> <form id="userform" action="null.html" method="post"> <h2>Register</h2> <table width="600" border="0"> <tr valign="top"> <td align="right" width="200">username (min 4 characters)</td> <td width="400"><input type="text" id="username" name="username" onBlur=> </body> </html> REGISTERCHECKER.CFM > For the demo, try here:... As for your own code - do you get a JS error? No JavaScript error. Now that I checked, it is actually sending back the value of available - the issue is it is not displaying this either in the resultblock div or in an alert box (i just get a blank alert box). When i check the response on my Firebug console, i do see the value as 1. I need to capture the values returned and update the form fields as well as the dataset in the master detail set up that i am planning to use this for. Thanks, -Xavier I am testing this on a form I am developing. I have put the REGISTERCHECKER.CFM in the form of a component. I am having trouble with the loadURL return (var result = request.xhRequest.responseText;). I got tons of html. The cfc returns the proper result if I enter the url directly ( ). I have an alert that pops up with the var result @ I am testing it on the "Invitation Code" textfield. I have been struggling with this all day. Any ideas how to fix? Actually, you aren't. View source on the URL and you see a lot of stuff on top. Both comments and the fact that the CFC wddx-encoded the response. In order to use a CFC and not have stuff around it, you need to use returnFormat, which will work in CF8. See the docs for more info on that. Looked through livedocs for CF8. Saw where invoking a cfc by url "returns a result using the cfreturn tag, ColdFusion converts the text to HTML edit format (with special characters replaced by their HTML escape sequences), puts the result in a WDDX packet, and includes the packet in the HTML that it returns to the client." I couldn't find specific instructions to get rid of said HTML! I came up with returnformat="plain". This removed the wddx packet stuff but all the HTML info is still showing up before the 0 or 1. How do you get rid of the rest of the HTML? If you view source, you will see it's a lot of html comments. That's being generated by something else on your server, most likely an Application.cfm or Application.cfc file. It was in the application.cfm file. I commented out all the cfoutput html. Hopefully I won't need any of it. I meant to conclude with "It works now!" I am now trying to get "check if user exists" to work with spry textfield validation for required, minchar, and maxchar. I have written a javascript function that runs through each form field to check validation status before performing the next process. I am having trouble adding checkUsername process to the validation function. I tried making 'result' a global variable but it caused both the usercheck and validation function to fail. What is a better method? Here is the key elements of what I have now. **************My function***************** function userValidate(){ userNameObj.validate(); firstNameObj.validate(); lastNameObj.validate(); genderObj.validate(); zipCodeObj.validate(); if ((userNameObj.validate() && firstNameObj.validate() && lastNameObj.validate() && genderObj.validate() && zipCodeObj.validate() === true) && (result != 0)) { sp1.showPanel('findCause'); }else{ return false; } ***********What I changed in your example*********** var result = ""; function usercheck(request) { result = request.xhRequest.responseText; if(result == 0) status("Username not available!"); else status(''); } The issue is that my code uses an asynch process to check the value. Ie, fire and call some func when done. You would need to change it to a synch process in your validation. Check the docs for Spry.Utils.loadURL to see if that is possible. It should be. I am trying to use your script to check for existing product title in the DB, which works well except when i click on another tab in firefox, then it gives me a following spry error (pops up in the lowest right corner of the browser): Exception caught while loading agentSites/userchecker.cfm?productname=hiking%20in%20cuba: Function expected I tried googling it and changing all the possible things in the script, but the error still exists. Maybe you can shine the light on the problem? i am using onBlur() to trigger the check, can that be an issue? Well in theory, if onBlur was the issue, then you could click anywhere on the page to force it. To be honest, I've _never_ seen a JS bug thrown by clicking on another tab. Do you also get it if you click in the chrome some place? Thanks a lot for answer! It seems like clicking on another tab (or anywhere else for that matter, since onblur triggers anyways) causes that error most of the time, though it happens when i just click off the field too, just more rarely i guess. What really throws me off is that it is such a random error... It pops up both when it finds the same product name and when there is no same name in db. Here is the code: <script> Spry.Data.Region.debug = true; function checkProductname() { var uvalue = document.getElementById("productTitle").value; Spry.Utils.loadURL("GET","agentSites/userchecker.cfm?productname="+encodeURIComponent(uvalue), false, productcheck); } function productcheck(request) { var result = request.xhRequest.responseText; if(result == 0){ status("<br />This title already exists! Please choose another one."); document.getElementById("productTitle").focus(); document.getElementById("productTitle").select(); }else{ status(''); } } function status(str) { var resultblock = document.getElementById("resultblock"); resultblock.innerHTML = str; } </script> And here is the usechecker.cfm: <cfsetting showdebugoutput=false> <cfparam name="url.productname" default=""> <cfquery name="checkName" datasource="agent_sites"> SELECT productTitle FROM products </cfquery> <!--- Create a list of existing products ---> <cfset productList = valueList(checkName.productTitle, ",")> <cfif listFindNoCase(productList, url.productname)> <cfset available = 0> <cfelse> <cfset available = 1> </cfif> <cfoutput>#available#</cfoutput> I mean if your example works fine for me, than its gotta be something with backend... Also, that page uses spry for form validation as well, maybe its two spry scripts colliding??? Thanks again for help, i am so desperate already, i am trying to punish this error for like ever now ;) Well it says that productcheck doesn't exist. productcheck is the call back function you wrote to handle the result. I don't see any possible way for your code to accidentally overwrite it. Honestly I have no idea unfortunately. Is it online where I can see? Its on the company's internal website, so i can't really give access to that :( Thank you very much for trying to help out, i really appreciate it. I will let you know if i find the solution (the weirdest thing is that when i google something like "spry function expected" it doesnt even give me any good results, so it must be something wrong with my setup...) This script is within another massive website, which i havent built, so its possible that something there is causing the problems. So i guess i should really start with isolating your script and seeing how it works standalone... As i promised, i would post the result of my fight with the script. Well it seems like it's fixed now. What i did is get rid of status(str) function (since that is what was giving me a hard time) and in the end have that: if(result == 0){ document.getElementById("resultblock").innerHTML = "<br />This title already exists! Please choose another one."; }else{ document.getElementById("resultblock").innerHTML = ""; } Not sure if anyone else will have a problem like that, but i thought i would just share it. Thank you again for trying to help out and mostly for the script itself! Hello Raymond, not the first time I write to you and always with great attention from you. Raymond, I wasn't able to make your demo (not the code, but the demo) to work. The link on your post doesn't work, and the link you gave to Xavier on June 25, 2007 doesn't respond to the validation. I'm creating a form where designers add collections to the database. Basically I need to avoid the same collection is entered again and I would love to do it without submitting the form (real time username validation). Do you have a sample for this? Thanks so much, as usual. I just tested and it worked for me in FF3.5. I entered "victor" and it told me the user was taken. Did you enter that? My mistake. I apologize. Let me ask you something else: my hosting provider needs to have what installed? Just CF8 or something else? Thanks again. Not even CF8 - this should work with ... shoot, CF1 even. It's mainly the Spry code. CF is just returning a 1 or a 0. Hello again Ray, I have a question about this: I'm doing some tests with two different names. It works with one of them but not with the other one. I've tried putting a text message in case the name is available and tried entering the one taken and it shows it as available. Do you know why this could be happening? Thanks, as usual. Something new just discovered: It will respect the latest value. Let's say I enter a new username, so if I try to re-enter that one, it will display the warning. Now, if I enter another one, the the previous one will show as available....checking the code to see where I'm telling my query to check the latest record... I think it's solved with Daniii comment from October 27 2008. Almost a year ago. Thanks Daniii, thanks Ray. Catching up now. Glad you got it. :)
https://www.raymondcamden.com/2006/08/25/Spry-Example-Check-it-user-exists
CC-MAIN-2022-33
en
refinedweb
SoShadowSpotLight.3coin3 man page SoShadowSpotLight — The SoShadowSpotLight class is a node for setting up a spot light which casts shadows. Synopsis #include <Inventor/annex/FXViz/nodes/SoShadowSpotLight.h> Inherits SoSpotLight. Public Member Functions virtual SoType getTypeId (void) const SoShadowSpotLight (void) virtual void GLRender (SoGLRenderAction *action) Static Public Member Functions static SoType getClassTypeId (void) static void initClass (void) Public Attributes SoSFNode shadowMapScene SoSFFloat nearDistance SoSFFloat farDistance Protected Member Functions virtual const SoFieldData * getFieldData (void) const virtual ~SoShadowSpotLight () Static Protected Member Functions static const SoFieldData ** getFieldDataPtr (void) Additional Inherited Members Detailed Description The SoShadowSpotLight class is a node for setting up a spot light which casts shadows. This node can be used instead of a normal SpotLight if you need to improve the performance by supplying a simplified scene graph to be used when rendering the shadow map(s). For instance, the shadow map scene graph doesn't need any textures or materials, and any non-casters can also be excluded from this scene graph. It's more optimal to use this node than to use the SoShadowStyle node to control this, at the cost of some extra application complexity. It's especially useful if you have a scene with few shadow caster nodes and lots of shadow receiver nodes. Currently, this node must be placed somewhere in the SoShadowGroup subgraph to cast shadows. FILE FORMAT/DEFAULTS: ShadowSpotLight { shadowMapScene NULL nearDistance -1 farDistance -1 } Here is the the example from SoShadowGroup, modified to use SoShadowSpotLight instead of a normal SoSpotLight. Notice that only the sphere casts shadows. DirectionalLight { direction 0 0 -1 intensity 0.2 } ShadowGroup { quality 1 # to get per pixel lighting ShadowSpotLight { location -8 -8 8.0 direction 1 1 -1 cutOffAngle 0.35 dropOffRate 0.7 shadowMapScene DEF sphere Separator { Complexity { value 1.0 } Material { diffuseColor 1 1 0 specularColor 1 1 1 shininess 0.9 } Shuttle { translation0 -3 1 0 translation1 3 -5 0 speed 0.25 on TRUE } Translation { translation -5 0 2 } Sphere { radius 2.0 } } } # need to insert the sphere in the regular scene graph as well USE sphere Separator { Material { diffuseColor 1 0 0 specularColor 1 1 1 shininess 0.9 } Shuttle { translation0 0 -5 0 translation1 0 5 0 speed 0.15 on TRUE } Translation { translation 0 0 -3 } Cube { depth 1.8 } } Separator { Material { diffuseColor 0 1 0 specularColor 1 1 1 shininess 0.9 } Shuttle { translation0 -5 0 0 translation1 5 0 0 speed 0.3 on TRUE } Translation { translation 0 0 -3 } Cube { } } Coordinate3 { point [ -10 -10 -3, 10 -10 -3, 10 10 -3, -10 10 -3 ] } Material { specularColor 1 1 1 shininess 0.9 } Complexity { textureQuality 0.1 } Texture2 { image 2 2 3 0xffffff 0x225588 0x225588 0xffffff } Texture2Transform { scaleFactor 4 4 } FaceSet { numVertices 4 } } - Since: Coin 3.0 Constructor & Destructor Documentation SoShadowSpotLight::SoShadowSpotLight (void) Constructor. SoShadowSpotLight::~SoShadowSpotLight () [protected], [virtual] Destructor. Member Function Documentation SoType SoShadowSpotoSpotLight. const SoFieldData * SoShadowSpotLight::getFieldData (void) const [protected], [virtual] Returns a pointer to the class-wide field data storage object for this instance. If no fields are present, returns NULL. Reimplemented from SoSpotLight. void SoShadowSpotoSpotLight. Author Generated automatically by Doxygen for Coin from the source code.
https://www.mankier.com/3/SoShadowSpotLight.3coin3
CC-MAIN-2017-22
en
refinedweb
Workflow-based composition of web-services: a business model or a programming paradigm? - Britton Newman - 1 years ago - Views: Transcription 1 Workflow-based composition of web-services: a business model or a programming paradigm? Dinesh Ganesarajah 1 Orbis, NDS, London, UK Emil Lupu Department of Computing Imperial College, London, UK Abstract providers and provides fast service creation, customisation and deployment. The system caters for multiple workflow paradigms, provides an extensible language for workflow specification and emphasises encapsulation and tight constraints on workflow execution. To expose a workflow of web-services as a web-service, several design steps have been required including the deployment as a webservice of the generic workflow engine and a generalisation of the Visitor Pattern to concurrent visitors. 1. Introduction In the early days the Web was mainly used to publish information. Then, application servers were used to offer services to human customers. We now witness development of the Services Web where the services can be accessed programmatically and application servers collaborate with each other, typically using the Simple Object Access Protocol (SOAP). The Services Web evolves out of the desire to perform transactions in an open and automated environment with ubiquitous services, rather than using other mechanisms such as EDI or manual processing. The Services Web environment typically exhibits the following characteristics: Web Services are black-box components that encapsulate behaviour. The underlying object model and implementation technology are hidden though the functionality is not. Web Services interact using SOAP over HTTP thus providing, through the use of wrappers, interoperation between technology specific components such as DCOM, CORBA components or Enterprise Java Beans (EJBs). Web Services can be discovered at run-time for dynamic binding. Their APIs are published in a standard format (WSDL) that can be inspected and invoked dynamically; in essence a liberal form of reflection. Several Web Services can be orchestrated to perform a series of functions in a workflow. Web Services are units of extremely low coupling; they communicate with each other with low dependency on the other party. In many respects they are akin to deployed components; a similarity that has created both confusion and controversy. Can web services be composed like components? Can the result be exposed as a web service, thus providing hierarchical composition? What are the restrictions imposed by such strict encapsulation? One possibility is to manually program in a Web Service its interactions with other Web Services, or to implement in a Web Service the functionality needed to find other services and bind to them. However, Web Services are expected to become ubiquitous. In Hewlett- Packard s Cool Town project [1] every entity has a URI and can be represented by a Web Service. This ranges from businesses to handheld devices and even people. In such a world, the ability to compose Web Services through workflows rapidly and with less effort than through general programming will be of substantial value. The potential for this evolution is considerable. Multiple layers of value-added service providers could easily be formed where providers aggregate existing services into new web-services. Services customised to each client s specific needs could be easily created and deployed. And even customers themselves could aggregate existing services into new more convenient applications. 1 Work undertaken while at Imperial College, London, UK 2 Workflows emphasize the separation of control and information flows between components from the actual execution of the code in the underlying components. This separation provides the ability to easily rearrange and change the components. By and large, current workflow languages consider workflow as a graph problem, with control flow and data flow described in terms of lines in a graph; this is exemplified by the Web Service Flow Language (WSFL) [2]. However, such an approach produces systems that are difficult to maintain and modify because control lines are similar to programming with goto statements. The approach to workflow specification we consider here is that of encapsulating boxes of control. Each box, defines a Non-Terminal Expression of the workflow language, which determines control within that box. For example, a Sequence Expression entails that all constituent expressions are executed in order, while a Concurrent Expression entails that all constituent expressions are executed in parallel. Boxes (nonterminals) can then be recursively encapsulated avoiding the need for lines that define the control path between the expressions. A substantial advantage is that modifications to any part of the workflow are contained within the expression concerned, and Non-Terminals act as a structuring mechanism for decomposing large complex problems, and providing scalability for large workflows; a form of encapsulation. Additionally, an entire workflow or any subsection can be deployed as a new Web Service, for use in another workflow or in the workflow itself, and thus providing composition. This paper presents a complete web-service environment having the characteristics mentioned above. The design and implementation cover the workflow service implementation as well as the specification language, workflow execution, monitoring tools and visualisation. The paper is organised as follows: Section 2 will present the relevant related work from both the Web- Service environments and workflow management systems; Section 3 will present the overall system architecture; Section 4 will focus on the workflow language while Sections 5 and 6 will focus on the workflow engine and the user interface respectively. 2. The Scene The success of Web Services will be determined by the availability of tools and product support for building, enacting and interoperating between web services. All major software vendors have tried to position themselves into the field by developing their own solutions. As a first step, software development environments have been extended to facilitate the development and deployment of web-services. In Microsoft s.net framework, programming code can be written in almost any language, including Microsoft s C#, and targeted for deployment on a variety of mediums including Web Forms and Web Services [4]. The cornerstone of the architecture is that all the deployment models produce self-describing components that are not directly dependent on other components. IBM, who has played an active role in the development of standards like UDDI, SOAP and WSDL, has integrated support for creating web applications in its VisualAge and WebSphere products and now provides a suite of freely available test tools for web-service development through these products and from AlphaWorks. The Sun ONE architecture [3] provides a method of web service development based on Java that permits the deployment as Web Services and Applications of macro services composed from developer written pre-built components (micro services). Hewlett Packard has developed a Web Services Platform that includes tools for the graphical specification, creation and management of Web Services. Specifically, the HP Services Composer can be used to automatically create WSDL files and deploy Java Beans as Web Services. The Web Services work has evolved from HP s previous work on E-Speak [5]. Existing Business Workflow frameworks are not always suitable for orchestrating Web Services. Firstly, because Web Services do not immediately fit into the current workflow components. Secondly, because Web Services have properties like reflection that are not readily considered in business workflow systems. Finally, because business process workflows are geared for dealing with human users whereas Web Service workflows need only interact with other automated services human users are implicitly represented through the Web Services they use. However, the techniques used in business workflow systems and Enterprise Application Integration teach valuable lessons for the orchestration of Web Services. By and large Workflow Management Systems (WFMS) have similar structures, irrespective of the application domain or type. The general pattern used is characterized by the Workflow Management Coalition s Workflow Reference Model [6]. OTSArjuna [7], a WFMS for CORBA-based environments uses a graph-like notation to represent workflows. The graph is made up of nodes denoting tasks, which represent units of computation. Each task has a group of input and output sets. At runtime, a Workflow Repository service holds schemas of different workflows. A Workflow Execution service co-ordinates the workflow and delegates responsibility for executing and managing tasks to Task Controllers associated with each task, thus decentralizing the management of the workflow. 3 METEOR 2 [8] uses a more declarative language approach towards workflow specification than in OTSArjuna. Each task is associated with a directed graph representing the states into which the task can change to. Changes from one state to another arise through interdependencies to other tasks. These inter-dependencies are described using the Workflow Intermediate Language (WIL), which is specific to METEOR 2. Both control and data inter-dependencies can be specified and the specification can be generated from a GUI application. The creators of RainMan [9] argue against centralized Workflow Management Systems and consider some novel use-cases that are typical in an Internet environment. These include the ability to download and run workflow schemas, to reconfigure the workflow at runtime and to cater for devices that can go offline but that can still be assigned to tasks. RainMan workflows have Performers and Sources. Sources request Performers to complete tasks and can hold activities, which are essentially workflow schemas and comprise several tasks. Each Performer has a list of the different tasks it has been asked to perform. The RainMan system defines interfaces for Performers and Sources, which can be implemented in different ways. RainMan has been implemented as a RainMan Builder, which is essentially an applet for creating workflow specifications, which can also act as a Source and can monitor the completion of the activities. WSFL [2] has two models of flow, Flow Model and Global Model. The former is concerned with describing workflow between several parties. The latter describes interfaces for Web Services and patterns of interaction between them. Conceptually, a Flow Model is made of Control Links between activities which represent business tasks within a process. Activities can be interpreted as a method call within a conversation of methods calls in the business process. Control Links can have transition conditions and data links are superimposed over control links. The Workflow Management Facility specification for CORBA accepted by the OMG is jointflow [10], which describes a set of interfaces that can be implemented to create WFMS systems built on parts that can interoperate with other WFMS systems. The interfaces include WfRequesters that have WfProcesses, representing workflow schemas. WfProcesses hold a set of WfActivities, which are the tasks to be completed. Each WfActivity is assigned to a WfResource. Microsoft s BizTalk [11] Orchestration framework provides the means to coordinate in a workflow the applications and components it supports including SOAP accessible components. At first sight BizTalk orchestration has similar objectives to those presented in this paper. However, BizTalk Orchestration lacks flexibility and some of the advanced aspects presented here. The language supports concurrent tasks, synchronization and dynamic task assignment to components. However, functionality such as choice is not provided in the workflow. Furthermore, BizTalk Orchestration focuses on enterprise level workflow between components, and does not consider some of the more general use-cases of Web Service workflow. During workflow execution, monitoring is limited to querying the state of each component. The system offers a simple approach to the Web Service workflow but it is very constrained and does not solve other problems considered here, such as recursive workflow encapsulation. 3. Architecture The ability to provide arbitrary web-service composition through workflow requires a WFMS architecture that caters for the creation of workflows, which can be used by other WFMS systems. Thus, other WFMS systems should be able to invoke this WFMS to enact workflows created with this system. To satisfy these requirements the fundamental design approach is to treat a workflow over web-services as a composite component, thus placing strict restrictions on the workflow encapsulation. The Workflow Engine, which enacts the workflow, is also implemented as a Web Service and accessible via SOAP. Thus, SOAP clients can be other workflow engines that access the engine enacting each workflow as a web-service. In turn, this requires an execution environment that caters for the concurrent execution of different workflow schema or several instances of the same schema over the same web services. The execution environment encapsulates the interpreters for the workflow schemas and provides persistency for schemas that have been created and deployed (Figure 1). A User Application interacts with the WorkflowEngine Web Service using standard SOAP messages in order to: deploy a workflow schema, invoke the relevant workflow, obtain information on the progress of the workflow enactment and perform other operations such as retracting the workflow. The User Application provides both a graphical and a textual interface. Different user-interface applications can be used, so long as the same protocol is used to interact with the Workflow Service (Figure 1). The workflow engine, which enacts the workflow, is itself a Web Service, provides the mechanism to recursively encapsulate workflows within other workflows and can itself can be called by the workflow. Much depends on the workflow specification language which expresses the workflow structure, dependencies and concurrency. Its design and constituent elements are described in the next section. 4 User Application create workflow monitor workflow Workflow Engine (as Web Service) Workflow Schema Interpreter Monitoring Service Web Service (in Workflow) Web Service (in Workflow) Web Service (in Workflow) CircusFlow which is a dialect concerned with information flow rather than execution control. The WorkflowService class acts as an interface or marker class for both, rather than implementing specific functionality. Although the model suggests that all nonterminals are shared between the two dialects the user application will restrict each dialect to the appropriate subset of what is available. * * Figure 1 Overall System Architecture AbstractExpression Workflow Language NonTerminal Expression Terminal Expression Workflow Service The workflow language is designed to be used on both the workflow engine and in the user application with minimal modifications. The language caters for sequencing of web service invocations, parallel execution, choice and synchronisation mechanisms. The design of the language emphasises its flexibility and reusability. The main motivation is to be able to extend the workflow language with new primitives with minimal impact on the existing ones. The language is based on the principles of Java Beans, self-contained reusable software components. Each expression in the language is a Bean definition with properties that can be set to appropriate values. For example, an instance of a composition expression will be an instance of a Bean with its properties set to the constituent expressions that are composed. 4.1 Language design and dialects The starting point of the language is the Interpreter Pattern [12]. However, the traditional implementation of the pattern defines an Abstract Expression at the highest level of the hierarchy, descending into Terminal and Non- Terminal Expressions. The top-level expression of the language (e.g. class or module ) will usually then descend from the Non-Terminal expression. However, in our implementation the top-level node, a WorkflowService, extends directly from the Abstract Expression (Figure 2). This design permits the implementation, of more than one language-type, or dialect, by providing different descendents to the abstract WorkflowService class. These dialects can then share some of the expressions within the Terminal and Non- Terminal descendants (Figure 2). In the current implementation there are two different descendents of the WorkflowService: SafeFlow which provides a structured approach to workflow design, with tight execution control. Concurrent Bayesian Sequence Failure Sequence Sync WSStatic has a WebService WSDynamic SafeFlow Service Figure 2 Language Design CircusFlow Service In addition to Concurrent and Sequence composition the non-terminals include FailureSequence and Bayesian. The former is a mechanism for dealing with failures, while the latter is a complex choice mechanism based on probabilistic inference. If an invoked Web Service fails, FailureSequence defines the actions necessary to overcome the failure. The terminals are either Sync, for synchronisation, or Web Service (see Section 4.2). The enactment of SafeFlow and CircusFlow schemas, by their respective interpreters on the Workflow Engine, is different, particularly in terms of concurrency. The SafeFlow dialect provides explicit control of concurrency through the Concurrent and Sequence non-terminals. All non-terminals are mapped into components and thus, a Concurrent expression is implemented as a component which contains only sub-expressions that will be executed simultaneously. All sub-expressions in concurrent expressions must terminate before computation can proceed. Similarly, the Sequence expression component will contain only sub-expressions executed in sequence. Note, that the above is more restrictive that general workflow expressions based on arbitrary graphs. In particular, an expression such as the one shown in Figure 3 cannot be represented. However, in SafeFlow, expressions, particularly Non- Terminals, can hide all their internal workings from all other external expressions. The AbstractExpression, from which all expressions descend, provides the common means for one expression to interact with any other expression, in a black-box way, without knowing what the 5 expression is. Thus, Workflow Services themselves can be encapsulated and reused in other workflow schemas. A C Figure 3 Graph prevented by strict encapsulation Note that control in SafeFlow departs from traditional workflows such as OTSArjuna or WSFL. While in OTSArjuna or WSFL control is implicit and described by lines in SafeFlow all control is explicit and determined by component types and their encapsulation. Although OTSArjuna allows encapsulation of workflow as intermediate composite tasks, these do not represent control but denote perimeter sub-graphs of control and data-lines. To our knowledge none of the current available ORB workflow languages use the structured Interpreter Pattern style approach to their design, which is largely exploited in SafeFlow, and prefer a more scripting oriented approach. Even when some means of workflow composition are integrated in to these languages, control, by and large, remains unstructured. The CircusFlow dialect adopts a more liberal approach disregarding control flow over information flow. Thus, in CircusFlow, all constituent expressions of a workflow are executed as soon as the data representing their input parameters is available. The sync terminal expression provides explicit synchronisation when needed. Note, that neither the Concurrent nor the Sequence non-terminals are used in CircusFlow since these provide control specifications. The idea behind CircusFlow can also be encountered in data-flow based computing, primarily used in multithreaded execution, signal processing and reconfigurable computing. OTSArjuna can also be seen as similar to a certain extent. Although, in OTSArjuna control and data flow are mixed, with control implemented as notification requests. Control and data are then implicitly synchronised at each task or composite task. CircusFlow provides a high degree of concurrency and overcomes the limitation identified for SafeFlow. However, this is at the expense of control which is entirely omitted. Additionally, recursive encapsulation is not possible. Note however that SafeFlow and CircusFlow can be used in conjunction to overcome the problems of both. In particular, web services deployed using CircusFlow can be included in a SafeFlow and vice-versa. B D 4.2 Terminals WebService expressions (see Figure 2) are Beans which contain the fields necessary to invoke a Web Service namely: the URL of the Web Service, the particular Service Name, the Method Name to be called, the HTTP SOAP Action (header information on call intent) and the XML Namespace defining the encoding style of the call. With WebService expressions, import parameters are cast into Apache SOAP parameters and sent to the Web Service. After the invocation, the returned parameters are converted to export parameters (see Section 4.3). The WebService expression is an abstract class deferring instantiation to a more specialised expression which can be one of the following: WSStatic, used when the details of the Web Service are defined at design time, as fields. WSDynamic, used when the details of the Web Service are passed as special ImportParameters at runtime using the names #soapend, #nameuri, #method, #encode and #action. Sync Beans are markers for synchronisation in CircusFlow interpreters, and do not have specific data. 4.3 Guards and Parameters Interactions between expressions in the workflow language are achieved through import and export parameters. Both types of parameters are implemented as a specialisation of the Parameter bean defined in the Apache SOAP implementation. Each parameter is characterised by its name, value, type and XML namespace to which it belongs. In addition, parameters have a Reference field that specifies from which neighbouring or other expression the value for this parameter can be derived from. Only one of the Reference or Value fields will usually have an assignment. Abstract Expression * Guard * Export * Parameter Parameter * * Import Parameter Figure 4 Expressions, parameters and guards The ability to express choice in workflow languages is important in order to provide multiple alternatives in different sets of circumstances. In our language choice is expressed through the more general mechanism of guards. In addition to the parameters each AbstractExpression also maintains a list of guards and the expression is evaluated only if all the guards are satisfied. Each guard is 6 <SafeFlowService id="somewsid"> <ImportParam name="sourcedata" value="" type="string" encodingstyleuri="" ref="#import/data" /> <Sequence id="someschema"> <ImportParam name="sourcedata" value="" type="string" encodingstyleuri="" ref="#import/sourcedata" /> <WSStatic id="someproxy" soapend="" nameuri="urn:xmethodsbabelfish" method="babelfish" encode=""> <ImportParam name="translationmode" value="en_de" type="string" ref="" /> <ImportParam name="sourcedata" value="" type="string" ref="#import/sourcedata" /> <ExportParam name="return" value="" type="string" encodingstyleuri="" ref="#details/return" /> </WSStatic> <WSStatic id="someproxy2" soapend="" nameuri="urn:xmethodsbabelfish" method="babelfish" encode=""> <ImportParam name="translationmode" value="de_fr" type="string" ref="" /> <ImportParam name="sourcedata" value="" type="string" ref="#someproxy/return" /> <ExportParam name="return" value="" type="string" encodingstyleuri="" ref="#details/return" /> </WSStatic> <ExportParam name="return" val="" type="string" encodingstyleuri="" ref="someproxy/return" /> <ExportParam name="return2" val="" type="string" encodingstyleuri="" ref="someproxy2/return" /> </ Sequence > <ExportParam name="return" val="" type="string" encodingstyleuri="" ref="someschema/return" /> <ExportParam name="return2" val="" type="string" encodingstyleuri="" ref="someschema/return2" /> </SafeFlowService> Example 1 Representing a SafeFlow schema in XML associated with an import parameter, which can have several guards (Figure 4). graphical representation for the workflow described in more detail in the Example 1 below. 4.4 The Role of Data Binding In terms of internal representation, the language is based on Java Beans that encapsulate data only. Decorators and visitors are then used to add functionality to the Beans as required by the program using them. The transport used between the workflow engine and the user application (Figure 1) is an XML representation that has a direct one-to-one mapping with the Bean representation. This tight mapping enables the conversion process between Beans and XML to be independent of the Beans and XML themselves; it describes only how to map any of the language s Bean to an equivalent XML form and vice-versa. The language can therefore be extended by simple addition of Beans, without modifying the mapping to and from the transport. In essence, this is a custom and specialised implementation of Sun s Java- XML Data Binding [13], which was not yet available at the time of the implementation. 4.5 XML and Graphical Specification The graphical representation of the language (also inspired from the graphical representation of componentbased systems) has a direct one-to-one mapping to the XML representation. Each workflow expression is represented by a box where the import parameters are represented on the left hand side and the output parameters on the right hand side. Figure 5 gives the Figure 5 Visual Representation for SafeFlow Example 1 below gives an outline of an XML schema for a SafeFlow Service with a Sequence expression that has two Static Web Services. The second bolded text import parameter refers to the parameter imported from the outer expression, using the reserved #import directive, and the name of the parameter, sourcedata. The first highlighted import parameter follows a similar convention except that data refers to the name of the parameter passed in at runtime when the method for the workflow is called on the Web Service Engine. Export parameters use a different referencing convention. The export parameter for someproxy refers to the parameter called return returned from the Web Service. This return value is accessed using the reserved #details. The import parameter for someproxy2 refers to the export parameter from someproxy. The export parameters for the Sequence and Workflow Service refer by convention to export parameters from their constituent expressions. The 7 export parameters for the SafeFlow Service, return and return2 near the bottom, refer to the workflow results. 5. Workflow Engine The workflow engine encapsulates the functionality of the workflow enactment and permits the encapsulated system to be easily accessible while respecting low coupling with clients. It also permits the deployment of several workflow schemas which are enacted through delegation to the appropriate workflow. The WorkflowEngine is deployed as a Web Service, and allows for several WorkflowServices to be deployed on it. Each WorkflowService is an instance of a workflow schema and is associated with an Interpreter (which may create further child interpreters) to enact it. Clients (including the user interface) can create several WorkflowServices corresponding to possibly different schemas and deploy them on the WorkflowEngine for enactment. The WorkflowEngine is installed as a Web Service on a server supporting a SOAP implementation that allows for the deployment of SOAP services in our case a servlet that allows for the invocation of the Java Beans that it holds. As shown in Figure 6, the WorkflowEngine is implemented as a single Java Bean with static methods and a persistent Hashtable that maintains the services. Each deployed schema behaves like a new method of the WorkflowEngine which can be invoked with parameters that become ImportParamters during enactment. The WorkflowEngine bean provides the methods necessary to: Add a new Workflow schema by providing an XML encoding of the schema. Remove a Workflow schema. Run a Workflow schema. An XML String version of an EMethod object is passed to the enact method. After parsing the string to an internal representation, the enact method retrieves the service name and passes the EMethod object as a set of import parameters to it. The set of export parameters returned from the WorkflowService is converted to an XML version of a new EMethod object and returned to the client. List the WorkflowServices deployed, together with their required parameters. Enact a workflow service. This method is similar to the Run method, but is used only by clients capable of using the monitoring protocol. Update returns monitoring information on the current state of a WorkflowService that is being enacted. When the WorkflowService has finished executing, this method returns the final result. SOAP over HTTP does not allow for call-backs, so relaying of calls to Update is necessary. Note that more complex management of the Workflow Schema is possible by combining these operations in a Web Service that acts as a wrapper for the WorkflowEngine Web Service. Although alternatives to the use of the EMethod object were investigated, for example by dynamically adding a new method to the workflow engine for each schema, this would have required stopping and restarting the server, which would have been unacceptable. Interpreter (f rom arctic) registers/unregister with MonitorPost getmonitorpost() MonitorP ost() register() update() unregist er() -$thisinstance uses WorkflowEngine schemalist : java.util.hashtable add() remove() retrive() enact() run() update() recieves Figure 6 Workflow Engine Bean EMethod _id : String EMethod() getid() setid() seteparameter() geteparameter() Update calls are made on a shared MonitorPost component implemented as a singleton. When entering an AbstractExpression, interpreter instances enacting the workflow register with the MonitorPost the identity of the AbstractExpression they have entered, together with the workflow schema. When the enactment is finished the AbstractExpression is unregistered. When an update call comes from the WorkflowEngine, with a specified workflow schema, the MonitorPost returns an EMethod object identifying a list of active AbstractExpressions for the schema. This system for monitoring the workflow enactment is simple, but provides all the necessary functionality. However, the singleton instance means that the system is constrained in terms of scalability and a more complicated implementation using Publisher- Subscribers is desirable in future developments. 6. Interpreter A Concurrent Visitor Once the schema is converted from XML to the internal representation, the enactment of the workflow is performed by an Interpreter, implemented as a Visitor that traverses the hierarchy of Beans which forms the internal representation of the workflow, as described in the Visitor Pattern [12]. This allows the encapsulation of interpretation control and co-ordination in one logical object and avoids placing interpretation code across the different expression beans, thus making future modifications difficult because of interdependencies. However, the implementation of the Visitor Pattern is modified, in that each time a new bean is visited, a new 8 instance of the interpreter evaluates it rather than the one evaluating the current bean. This design decision was made in order to: (i) provide support for concurrent expressions, which require several threads of execution, (ii) provide natural concurrency within the design and (iii) cater for several instantiations of the workflow by different clients, all represented by their own sets of Interpreters. In this way, the information held in the Bean, during interpretation is not changed and the interpreter holds temporary information derived from processing the Bean. If instead a single instance of an Interpreter were to visit all the different Beans, it would be necessary to use a stack to hold temporary information that must be saved before traversal returns to the Expression where that temporary information was needed. When the workflow is invoked in a WorkflowService, such as SafeFlowService or CircusFlowService, a new instance of the Interpreter is created for the first time, and the accept function of the top-level WorkflowService schema Bean is passed the instance. Within the Bean, the Interpreter has its visit function invoked with the Bean itself, passed as a parameter (Figure 7). Interpreter visit(..){...; start();} starts a new execution thread visited.accept(new Interpreter()) i.visit(this) usually, 'this' is passed Visited Expression Figure 7 Interpreter and Visited Exrpression While interpreting a parent expression that contains several child expressions, an interpreter will be created for each child expression. The reference to the child interpreter will be passed to the child expression through its accept method which in turn will call the child s interpreter visit method with the child expression bean as parameter. The visit method will call the start method, thus forking execution and then an interpret method which is overloaded for the different types of expression. Once execution has forked, the parent s execution thread will be put to sleep. In case of a sequence expression the parent interpreter will sleep until the child interpreter finishes and then start visitation of the following child. In case of a concurrent expression the visitation of the other child expressions will start immediately. All instances of child interpreters are grouped in a Thread Group, and when the Thread Group indicates that all its threads are dead and all the data from their enactment accumulated, the original execution of the hosting Interpreter continues, effectively synchronising all constituent Expressions. In the case of CircusFlow, if any of the child expressions has sufficient data to execute the visitation process starts for it. All child expressions are checked each time a child interpreter finishes and produces additional data. According to this enactment pattern several concurrently acting interpreters can be spawned. However the Interpreters also need to communicate to each other the import parameters and export parameters as they become available. This was achieved by using a double dispatched Publisher-Subscriber version of the Observer pattern between the visitors. Parent Interpreter publish List of child subscribers Child Interpreter Child Interpreter publish Child Interpreter Parent subscriber Interpreter Figure 8 Publish-Subscribe coordination The Parent Interpreter is registered with the Child Interpreters, and the Child is also registered with the Parent (Figure 8). The Parent publishes to its Children the ImportParameter data necessary for a Child to execute. The method the Child listens on (i.e. the method invoked by the Parent) is protected as a synchronised method, for concurrency purposes. Once a Child has finished interpreting, it publishes the ExportParameter data to the Parent. The actual co-ordination of information is dependent on the nature of the expression. In concurrent expressions (SafeFlow) the parent publishes its own ExportParameters only once all the Children have published their ExportParameters. In sequence expressions the ExportParameters of a child are made available to the next child through the parent. In CircusFlow export parameters are published to the next children as soon as they are available. Finally, the Parent can publish any data it has to any of its Parents. 7. The User Interface Although XML can be used for workflow schema specification, XML is far from being a user-friendly language. A user interface was therefore developed which permits workflow specification, workflow enactment and monitoring. The user interface follows a similar approach to that used in component environments such as Darwin [14] and in many respects it was designed to resemble an Integrated Development Environment. The hierarchical language data structure allows to maintain a tight integration between the graphical specification and its XML textual description, thus allowing knowledgeable users to manipulate text directly. 9 The user interface comprises: a top window defining the workspace area, one or several graphical composition windows, a property window and a monitor browser (Figure 9). The graphical composition windows permit a top-down workflow specification by allowing various language constructs to be selected from a toolbar, drawn on the canvas and then linked to the components already present. The properties window displays the properties of the current selected element. The Monitor Browser permits workflow deployment (and retraction) on a workflow engine, and workflow invocation. When the Monitor Browser is in use the graphical composition window is used to display workflow execution state by highlighting the elements currently being enacted. Figure 9 Graphical specification tool The top-down approach to workflow specification was inspired by the B Method which describes a process of progressive refinement of a formal specification into more concrete descriptions. At the lowest level, implementations such as pre-built machines can be substituted to the components. This is not unlike our environment where existing Web Services behave like pre-built machines that provide the leaf nodes in the hierarchical design. Thus, the workflow can be designed at a higher level and decomposed into Non-Terminals and predefined workflows that can be Web Services. This can be a recursive process. Several types of objects need to be drawn on the canvas, ranging from AbsractExpressions like Web Services, which have no constituent expressions, to ImportParameters, which do not descend from AbsractExpression, but are distinct fields of AbstractExpressions, and need to be drawn as well. The existing data structure of beans used for the language is therefore decorated with the necessary graphical information. However, this is not straightforward. Although, a single Decorator Class, for AbstractExpressions or EParameter, can be defined, it is also necessary to traverse the hierarchy of language Beans during graphical operations, for example to add constituent expressions to the right Non-Terminals. While a Decorator can refer to a language Bean, and the language Bean to its constituents, it is also necessary to derive the Decorator for the constituents as well, for example when resizing an inner expression. To achieve this the Visitor Pattern style double dispatching was applied between each object that can be drawn on the canvas and the Decorator Class that contains that object s graphic information. AbstractExpressions and EParameters implement the Drawable interface which requires that each implementing Bean provide methods for getting and setting a VisualExpr decorator associated with it. The VisualExpr decorator manages the graphical and language data associated with each Drawable and becomes the single point of contact for manipulating language or graphical data associated with it. The VisualExpr delegates all functionality such as expression formatting to other classes which are therefore isolated from the data elements and can be changed independently. Amongst the different delegated decorator classes VisualExprLogic is used to manipulate the hierarchy of expressions while enforcing some of the language restrictions e.g., FailureSeqs can only be added to Web Services, Terminals can only be added to Non-Terminals, etc. Another, the VisualExprFormatter, caters for the visual manipulation of expressions on the canvas enforcing graphical constraints e.g., parameters remain on the expression boundary and expressions cannot move out of parent expressions. By taking advantage of the graphical representation already used, it is possible to provide a simple way to observe the enactment of the workflow, and its results. In the same way as a train monitoring system shows the progress of the trains between departure and destination, the enactment of a workflow can be shown on the canvas on which the workflow is drawn by highlighting the elements currently active. Note that while in Monitoring mode the canvas cannot be simultaneously used for editing. The output of the workflow itself is displayed in a form similar to a web-browser integrated in the monitor browser component. The monitor browser also integrates the functionality of a MonitorManager component which caters for the deployment, revocation and invocation of the workflow. While the workflow is executing, the 10 monitor manager receives continuous updates on its execution state, which can be passed onto the graphical display window. 8. Discussion When applied to Web Service orchestration the fundamental limitation of current workflow languages is that they largely approach workflow definition as a graph problem; nodes that perform work and lines of control between them that describe how execution flows. This means that in large workflows, managing all the different nodes can become difficult as the workflow definition graphically starts to look like a complex spider diagram. Within the context of distributed systems, OTSArjuna, METEOR2, RainMan and WSFL, which are probably some of the most significant references for comparison, are to an extent graph-based. SafeFlow s distinguishing characteristic is that it takes a component-style programmatic approach to workflow design. Whereas all the above-mentioned languages approach workflow as a graph problem, SafeFlow is motivated by encapsulation and tight control over the enactment of the workflow. This means that control is considered visually as a box with nodes in it, rather than as a line between two nodes. The exact nature of the control, such as concurrency or sequence is dependent on the type of box used. Boxes can be nested and provide encapsulation by hiding the inner workings of a particular box, which will represent a sub-workflow. This structured approach allows the workflow definition to be made in a similar way to component composition where a higher level component hides the lower level details of the components it contains. At a lower level, within a box, changes will impact only the other components within that box. With a graph-based approach, manipulating whole sub-workflows at a high level is difficult because there may be many tangled inter-dependencies. Changing the workflow at a low level is also difficult, as the workflow designer needs to be aware of dependencies of a particular node across the entire workflow, rather than just in the local vicinity. The encapsulating component-style composition approach to workflow design is largely due to the language s origins from the Interpreter Pattern - an approach seemingly not considered in the other workflow languages. WSFL and OTSArjuna are the strongest related to graph problems. The former lays control out as a graph and overlays a data flow graph above it that respects the control graph. OTSArjuna, METEOR 2 and RainMan mix both control and data flow in the same graph. Although Composite Tasks (OTSArjuna), which define sub-graphs that can be represented as nodes, can be used as a structuring mechanism, the underlying approach remains graph-based. Graph-based approaches cause difficulties in the context of maintainability and problem structuring a graph of control lines is analogous to goto statements. In contrast, the argument for representing control as a graph is that of increased flexibility and expressiveness. SafeFlow uses recursively encapsulated control boxes, which dictate the control within the box. The significant advantage of such an approach is that any encapsulating box can be easily removed and replaced without effect to the rest of the workflow. Furthermore, SafeFlow allows for high scalability, due to the natural presence of recursive encapsulation. Thousands of nodes of Web Services, or other distributed component, could be organised effectively. Subsections within a workflow can also be effectively reused, or the entire workflow itself. SafeFlow has some synchronisation limitations owing to the strict encapsulation. We have shown how these limitations can be overcome by using an unstructured workflow (CircusFlow) encapsulated in a web service and used within the context of the SafeFlow structure. While in essence, this may seem a way of circumventing the problem, it contains the uncontrolled part of the workflow in a strictly encapsulated manner. The entire system described in this paper was implemented and tested for varying case scenarios. However, there are few freely available web-services, which provide meaningful services that can be composed in realistic experiments of large scale. If the momentum gathered towards the Services Web environment is sustained, we will be in a better position to test the framework in larger scale environments. 9. Conclusions and Future Work Workflow Management Systems have been traditionally difficult to manage and evolve according to changing business requirements. This problem will acquire an entire new dimension in a Services Web environment if significant parts of the business model are based on the aggregation of existing services. Changes in business requirements, availability of suppliers, personalisation of services, mergers with other businesses and acquisitions are only few amongst the factors that will require changes in the workflows used. In the absence of a structured environment providing strict encapsulation, the impact of change will often be unforeseen and sometimes unforeseeable. Workflows originated in an organisational management background and typically lack the structures and models that programming languages have evolved to. This work has investigated adding structure and approaching workflow as a programming problem; the SafeFlow language described here can be reasoned for workflow properties. 11 Workflow encapsulation is a powerful structuring mechanism, which provides a way of managing complexity that would otherwise be difficult with standard approaches. Because each constituent part in a workflow can be closed as a black box, either using a Non-Terminal expression or by outsourcing to another Web Service standard refinement and decomposition techniques can be applied. When the implementation of the basic framework was completed, our programming landscape had somewhat changed. Far from fuelling the component/web-service debate [15],[16], the framework emphasised some of the analogies. By using a structured (component-based) approach to workflow specification and workflow as a constrained form of programming it was possible to build new components and deploy them in a uniform way. The framework s usability, greatly helped by the graphical specification tool, constituted one of the main temptations. In essence, it became easy to aggregate, deploy and re-use web-services at will to build potentially large-scale systems. Is this approach suitable for application development in general, thus providing a new programming paradigm? Workflows however, are not a general-purpose programming tool, and in any realistic setting it is unlikely that all of the required functionality of a new service or application can be found by simply composing existing web-services. The environment provides a high degree of concurrency as multiple workflows and web-services can be executed concurrently. However, intensive network use and remote communications increase the delay experienced substantially. These performance considerations impact the granularity at which services can be composed and restrict the settings in which this approach can be adopted. However, it is relatively easy to redistribute coordination work by redeploying the workflows closer to the underlying services they use, thus minimising the impact of those communications with large delays. The framework described in this paper constitutes a first step, and many improvements remain to be made both in the workflow language and in the implementation. In particular, the extensibility of the workflow language has not been exploited and we intend to develop additional language elements and dialects. For example, a new dialect could combine aspects of both SafeFlow and CircusFlow. In this model, Non-Terminal control expressions do not synchronise data as it enters and leaves the expression boundaries. In Concurrent Expressions, constituent Web Service expressions that have sufficient data to enact will do so, and for any other inner Non- Terminal Expressions data will be passed straight through the boundaries of that expression to any Web Services that need it. Concurrent Expressions are similar to CircusFlow, but the boundaries for intermediate Non-Terminal composites like Concurrent / Sequence are not synchronising. This creates a workflow model that is like a graph of data, but where sub-graphs can be perimetered into sections. The flexibility of the sub-graph does not change, though this allows the sub-graph to be encapsulated, and replaced with sub-graphs with the same endpoints. Sequence expressions force ordering on the constituent expressions. The specification toolset also needs additional improvements particularly for the manipulation of visual expressions and to provide better support for debugging. Business in an open environment raises issues relating to the reliability, availability, and quality of service provided by the services. These issues give rise to contracts, trust models and preferences, which not only complicate the model but also require greater adaptability, tolerance to failures and performance deteriorations. Further work is needed in order to provide support for such scenarios within our framework. Historically, workflow originates from business and management as a way of modelling business processes that could wholly or partially be automated. However, the graph based models used have largely not evolved. Programming is similar to the methods of describing workflow, but has evolved incredibly to encapsulate complexity and allow for greater manageability and maintainability. This paper describes how adopting some of the lessons learnt from programming can improve business modelling using workflow. 10. References [1] T. Kindberg and J. Barton. A Web-based Nomadic Computing System, Computer Networks, 35(4): , March [2] F. Leymann, Web Services Flow Language. IBM Software Group specification, May ibm.com/software/solutions/webservices/pdf/WSFL.pdf) [3] Open Net Environment (ONE) Software Architecture, Sun Microsystems, [4] M. Kirtland, The Programmable Web: Web Services Provides Building Blocks for the Microsoft.NET Framework MSDC magazine, Sept available at: tform/webplatform.asp [5] Hewlett-Packard. E-Speak Architecture Specification, version 2.2, see also [6] D. Hollingsworth, Workflow Management Coalition The Workflow Reference Model (1995), [7] Frédéric Ranno, Santosh K. Shrivastava, Stuart M. Wheater, A Language for Specifying the Composition of Reliable Distributed Applications, Proc. 18th Int. Conf. on 12 Distributed Computing Systems (ICDCS '98), Amsterdam, The Netherlands, May 26-29, [8] A. Sheth and K. J. Kochut. Workflow Application to Research Agenda: Scalable and Dynamic Work Coordination and Collaboration Systems. In Workflow Management and Interoperability. A. Dogac et al. (eds.). Springer Verlag, 1999, pp also see: [9] S. Paul, E. Park, and J. Chaar, RainMan: A Workflow System for the Internet, Proc. USENIX Symp. on Internet Technologies and Systems, December 8-11, 1997, Monterey, California. [10] Object Management Group. Workflow Management Specification v1.2, available from: [11] BizTalk Orchestration White Paper (July 1999), Microsoft, orchestration.htm [12] E. Gamma, R. Helm, R. Johnson and J. Vlissides. Design Patterns, Addison Wesley Longman Publishing, 1994 [13] M. Reinhold, An XML Data-Binding Facility for the Java Platform (30 July 1999), Core Java Platform Group Java Software, Sun Microsystems, Inc., [14] Ng, K., Kramer, J. and Magee, J., Automated Support for the Design of Distributed Software Architectures, Journal of Automated Software Engineering (JASE), 3 (3/4), Special Issue on CASE-95, (1996), pp [15] C. Szyperski. Components and Web Services, Software Development Magazine, August available from: [16] B. Meyer. Product or Service. Software Development Magazine, October available from: Web Service Workflow Individual Project Report, June 2001 Web Service Workflow Individual Project Report, June 2001 Dinesh Ganesarajah, MEng Computing Imperial College Dept of Computing Acknowledgements Emil Lupu for helping Service Oriented Architectures (SOA) Introduction to Service Oriented Architectures (SOA) Responsible Institutions: ETHZ (Concept) ETHZ (Overall) ETHZ (Revision) - Version from: 26.10.2007 1 Content 1. Introduction A Review of Distributed Workflow Management Systems A Review of Distributed Workflow Management Systems F. Ranno and S. K. Shrivastava Department of Computing Science, Newcastle University, Newcastle upon Tyne, NE1 7RU, UK. Abstract: An increasing number zen Platform technical white paper zen Platform technical white paper The zen Platform as Strategic Business Platform The increasing use of application servers as standard paradigm for the development of business critical applications meant Research on the Model of Enterprise Application Integration with Web Services Research on the Model of Enterprise Integration with Web Services XIN JIN School of Information, Central University of Finance& Economics, Beijing, 100081 China Abstract: - In order to improve business SOFT 437. Software Performance Analysis. Ch 5:Web Applications and Other Distributed Systems SOFT 437 Software Performance Analysis Ch 5:Web Applications and Other Distributed Systems Outline Overview of Web applications, distributed object technologies, and the important considerations for SPE Service Oriented Architectures 8 Service Oriented Architectures Gustavo Alonso Computer Science Department Swiss Federal Institute of Technology (ETHZ) alonso@inf.ethz.ch The context for SOA A bit of history Service Oriented Architecture Service Oriented Architecture Charlie Abela Department of Artificial Intelligence charlie.abela@um.edu.mt Last Lecture Web Ontology Language Problems? CSA 3210 Service Oriented Architecture 2 Lecture Outline Introduction to Testing Webservices Introduction to Testing Webservices Author: Vinod R Patil Abstract Internet revolutionized the way information/data is made available to general public or business partners. Web services complement this Web Application Development for the SOA Age Thinking in XML Web Application Development for the SOA Age Thinking in XML Enterprise Web 2.0 >>> FAST White Paper August 2007 Abstract Whether you are building a complete SOA architecture or seeking to use SOA services A Meeting Room Scheduling Problem A Scheduling Problem Objective Engineering, Inc. 699 Windsong Trail Austin, Texas 78746 512-328-9658 FAX: 512-328-9661 ooinfo@oeng.com Objective Engineering, Inc., 1999-2007. Photocopying, Service-Oriented Architectures Architectures Computing & 2009-11-06 Architectures Computing & SERVICE-ORIENTED COMPUTING (SOC) A new computing paradigm revolving around the concept of software as a service Assumes that entire systems Business Process Management with @enterprise Business Process Management with @enterprise March 2014 Groiss Informatics GmbH 1 Introduction Process orientation enables modern organizations to focus on the valueadding core processes and increase Distributed Objects and Components Distributed Objects and Components Introduction This essay will identify the differences between objects and components and what it means for a component to be distributed. It will also examine the Java Challenges and Opportunities for formal specifications in Service Oriented Architectures ACSD ATPN Xi an China June 2008 Challenges and Opportunities for formal specifications in Service Oriented Architectures Gustavo Alonso Systems Group Department of Computer Science Swiss Federal Institute Software Life-Cycle Management Ingo Arnold Department Computer Science University of Basel Theory Software Life-Cycle Management Architecture Styles Overview An Architecture Style expresses a fundamental structural organization schema Service Computing: Basics Monica Scannapieco Service Computing: Basics Monica Scannapieco Generalities: Defining a Service Services are self-describing, open components that support rapid, low-cost composition of distributed applications. Since services Chapter 7 Application Protocol Reference Architecture Application Protocol Reference Architecture Chapter 7 Application Protocol Reference Architecture This chapter proposes an alternative reference architecture for application protocols. The proposed reference The Service Revolution software engineering without programming languages The Service Revolution software engineering without programming languages Gustavo Alonso Institute for Pervasive Computing Department of Computer Science Swiss Federal Institute of Technology (ETH Zur in e-commerce applications Service-oriented architecture in e-commerce applications What is a Service Oriented Architecture? Depends on who you ask Web Services A technical architecture An evolution of distributed computing and Scientific versus Business Workflows 2 Scientific versus Business Workflows Roger Barga and Dennis Gannon The formal concept of a workflow has existed in the business world for a long time. An entire industry of tools and technology devoted RUP Design Workflow. Michael Fourman Cs2 Software Engineering RUP Design Workflow Michael Fourman Introduction Design architecture that can meet all requirements Understand non-functional requirements and constraints related to technologies Identify subsystems (overall Monitoring Infrastructure (MIS) Software Architecture Document. Version 1.1 Monitoring Infrastructure (MIS) Software Architecture Document Version 1.1 Revision History Date Version Description Author 28-9-2004 1.0 Created Peter Fennema 8-10-2004 1.1 Processed review comments Peter Web Presentation Layer Architecture Chapter 4 Web Presentation Layer Architecture In this chapter we provide a discussion of important current approaches to web interface programming based on the Model 2 architecture [59]. From the results Tools to Support Secure Enterprise Computing Tools to Support Secure Enterprise Computing Myong H. Kang, Brian J. Eppinger, and Judith N. Froscher Information Technology Division Naval Research Laboratory Abstract Secure enterprise programming is Introduction to Web Services Department of Computer Science Imperial College London CERN School of Computing (icsc), 2005 Geneva, Switzerland 1 Fundamental Concepts Architectures & escience example 2 Distributed Computing Technologies 1 Introduction. 2 The need for Engineering Workflow. 3 Example engineering workflow -3- NLR-TP-2000-313 -3- Engineering Workflow The Process in Product Data Technology D.J.A. Bijwaard, J.B.R.M. Spee, P.T. de Boer National Aerospace Laboratory NLR, P.O.Box 90502, 1006 BM AMSTERDAM, The Netherlands Fax:+31 COMPUTER AUTOMATION OF BUSINESS PROCESSES T. Stoilov, K. Stoilova COMPUTER AUTOMATION OF BUSINESS PROCESSES T. Stoilov, K. Stoilova Computer automation of business processes: The paper presents the Workflow management system as an established technology for automation Cross Organizational Workflow Management Systems Cross Organizational Management Systems Venkatesh Patil & Avinash Chaudhari Tata Consultancy Services, India Paper presented at Product Data Technology Europe 2002 At Centro Ricerche Fiat, Turin, Italy Lesson 4 Web Service Interface Definition (Part I) Lesson 4 Web Service Interface Definition (Part I) Service Oriented Architectures Module 1 - Basic technologies Unit 3 WSDL Ernesto Damiani Università di Milano Interface Definition Languages (1) IDLs Core J2EE Patterns, Frameworks and Micro Architectures Core J2EE Patterns, Frameworks and Micro Architectures Deepak.Alur@sun.com Patterns & Design Expertise Center Sun Software Services January 2004 Agenda Patterns Core J2EE Pattern Catalog Background J2EE 1 What Are Web Services? Oracle Fusion Middleware Introducing Web Services 11g Release 1 (11.1.1) E14294-04 January 2011 This document provides an overview of Web services in Oracle Fusion Middleware 11g. Sections include: What: Distributed systems. Distributed Systems Architectures Distributed systems Distributed Systems Architectures Virtually all large computer-based systems are now distributed systems. Information processing is distributed over several computers rather than confined New Methods for Performance Monitoring of J2EE Application Servers New Methods for Performance Monitoring of J2EE Application Servers Adrian Mos (Researcher) & John Murphy (Lecturer) Performance Engineering Laboratory, School of Electronic Engineering, Dublin City University, Virtual Credit Card Processing System The ITB Journal Volume 3 Issue 2 Article 2 2002 Virtual Credit Card Processing System Geraldine Gray Karen Church Tony Ayres Follow this and additional works at: Part of the E-Commerce Agents and Web Services Agents and Web Services ------SENG609.22 Tutorial 1 Dong Liu Abstract: The basics of web services are reviewed in this tutorial. Agents are compared to web services in many aspects, and the impacts of An Object Model for Business Applications An Object Model for Business Applications By Fred A. Cummins Electronic Data Systems Troy, Michigan cummins@ae.eds.com ## ## This presentation will focus on defining a model for objects--a generalized Literature Review Service Frameworks and Architectural Design Patterns in Web Development Literature Review Service Frameworks and Architectural Design Patterns in Web Development Connor Patrick ptrcon001@myuct.ac.za Computer Science Honours University of Cape Town 15 May 2014 Abstract Organizing IBM Rational Rapid Developer Components & Web Services A Technical How-to Guide for Creating Components and Web Services in Rational Rapid Developer June, 2003 Rev. 1.00 IBM Rational Rapid Developer Glenn A. Webster Staff Technical Writer Executive Summary WEB SERVICES. Revised 9/29/2015 WEB SERVICES Revised 9/29/2015 This Page Intentionally Left Blank Table of Contents Web Services using WebLogic... 1 Developing Web Services on WebSphere... 2 Developing RESTful Services in Java v1 Java Application Developer Certificate Program Competencies Java Application Developer Certificate Program Competencies After completing the following units, you will be able to: Basic Programming Logic Explain the steps involved in the program development cycle What Is the Java TM 2 Platform, Enterprise Edition? Page 1 de 9 What Is the Java TM 2 Platform, Enterprise Edition? This document provides an introduction to the features and benefits of the Java 2 platform, Enterprise Edition. Overview Enterprises today Developers Integration Lab (DIL) System Architecture, Version 1.0 Developers Integration Lab (DIL) System Architecture, Version 1.0 11/13/2012 Document Change History Version Date Items Changed Since Previous Version Changed By 0.1 10/01/2011 Outline Laura Edens 0.2 A Comparison of Service-oriented, Resource-oriented, and Object-oriented Architecture Styles A Comparison of Service-oriented, Resource-oriented, and Object-oriented Architecture Styles Jørgen Thelin Chief Scientist Cape Clear Software Inc. Abstract The three common software architecture styles Issues in Implementing Service Oriented Architectures Issues in Implementing Service Oriented Architectures J. Taylor 1, A. D. Phippen 1, R. Allen 2 1 Network Research Group, University of Plymouth, United Kingdom 2 Orange PCS, Bristol, United Kingdom email: D. SERVICE ORIENTED ARCHITECTURE PRINCIPLES D. SERVICE ORIENTED ARCHITECTURE PRINCIPLES 1. Principles of serviceorientation 2. Service exchange lifecycle 3. Service composition 4. Evolution of SOA 212 D.1 Principles of service-orientation 213 HISTORICAL Oracle Service Bus Examples and Tutorials March 2011 Contents 1 Oracle Service Bus Examples... 2 2 Introduction to the Oracle Service Bus Tutorials... 5 3 Getting Started with the Oracle Service Bus Tutorials... 12 4 Tutorial 1. Routing a Loan Glossary of Object Oriented Terms Appendix E Glossary of Object Oriented Terms abstract class: A class primarily intended to define an instance, but can not be instantiated without additional methods. abstract data type: An abstraction ActiveVOS Server Architecture. March 2009 ActiveVOS Server Architecture March 2009 Topics ActiveVOS Server Architecture Core Engine, Managers, Expression Languages BPEL4People People Activity WS HT Human Tasks Other Services JMS, REST, POJO,... PIE. Internal Structure PIE Internal Structure PIE Composition PIE (Processware Integration Environment) is a set of programs for integration of heterogeneous applications. The final set depends on the purposes of a solution, CHAPTER 2 MODELLING FOR DISTRIBUTED NETWORK SYSTEMS: THE CLIENT- SERVER MODEL CHAPTER 2 MODELLING FOR DISTRIBUTED NETWORK SYSTEMS: THE CLIENT- SERVER MODEL This chapter is to introduce the client-server model and its role in the development of distributed network systems. The chapter A Survey Study on Monitoring Service for Grid A Survey Study on Monitoring Service for Grid Erkang You erkyou@indiana.edu ABSTRACT Grid is a distributed system that integrates heterogeneous systems into a single transparent computer, aiming to provide Architectural Design Software Engineering Architectural Design 1 Software architecture The design process for identifying the sub-systems making up a system and the framework for sub-system control and communication is architectural Introduction into Web Services (WS) (WS) Adomas Svirskas Agenda Background and the need for WS SOAP the first Internet-ready RPC Basic Web Services Advanced Web Services Case Studies The ebxml framework How do I use/develop Web Services? The Data Grid: Towards an Architecture for Distributed Management and Analysis of Large Scientific Datasets The Data Grid: Towards an Architecture for Distributed Management and Analysis of Large Scientific Datasets!! Large data collections appear in many scientific domains like climate studies.!! Users and Service Oriented Architecture 1 COMPILED BY BJ Service Oriented Architecture 1 COMPILED BY BJ CHAPTER 9 Service Oriented architecture(soa) Defining SOA. Business value of SOA SOA characteristics. Concept of a service, Enterprise Service Bus (ESB) SOA Event-based middleware services 3 Event-based middleware services The term event service has different definitions. In general, an event service connects producers of information and interested consumers. The service acquires events JOURNAL OF OBJECT TECHNOLOGY JOURNAL OF OBJECT TECHNOLOGY Online at. Published by ETH Zurich, Chair of Software Engineering JOT, 2008 Vol. 7, No. 8, November-December 2008 What s Your Information Agenda? Mahesh H. Dodani, 2 (18) - SOFTWARE ARCHITECTURE Service Oriented Architecture - Sven Arne Andreasson - Computer Science and Engineering. Service Oriented Architecture Definition (1) Definitions Services Organizational Impact SOA principles Web services A service-oriented architecture is essentially a collection of services. These services A Guide to Creating C++ Web Services A Guide to Creating C++ Web Services WHITE PAPER Abstract This whitepaper provides an introduction to creating C++ Web services and focuses on:» Challenges involved in integrating C++ applications with For Version 1.0 Oklahoma Department of Human Services Data Services Division Service-Oriented Architecture (SOA) For Version 1.0 Table of Contents 1. Service Oriented Architecture (SOA) Scope... Cloud Computing & Service Oriented Architecture An Overview Cloud Computing & Service Oriented Architecture An Overview Sumantra Sarkar Georgia State University Robinson College of Business November 29 & 30, 2010 MBA 8125 Fall 2010 Agenda Cloud Computing Definition Middleware and the Internet. Example: Shopping Service. What could be possible? Service Oriented Architecture Middleware and the Internet Example: Shopping Middleware today Designed for special purposes (e.g. DCOM) or with overloaded specification (e.g. CORBA) Specifying own protocols integration in real world Integration of Hotel Property Management Systems (HPMS) with Global Internet Reservation Systems Integration of Hotel Property Management Systems (HPMS) with Global Internet Reservation Systems If company want to be competitive on global market nowadays, it have to be persistent on Internet. If we Middleware Lou Somers Middleware Lou Somers April 18, 2002 1 Contents Overview Definition, goals, requirements Four categories of middleware Transactional, message oriented, procedural, object Middleware examples XML-RPC, SOAP,: 1 What Are Web Services? Oracle Fusion Middleware Introducing Web Services 11g Release 1 (11.1.1.6) E14294-06 November 2011 This document provides an overview of Web services in Oracle Fusion Middleware 11g. Sections include: Workflow Management Standards & Interoperability Management Standards & Interoperability Management Coalition and Keith D Swenson Fujitsu OSSI kswenson@ossi.com Introduction Management (WfM) is evolving quickly and expolited increasingly by businesses
http://docplayer.net/945446-Workflow-based-composition-of-web-services-a-business-model-or-a-programming-paradigm.html
CC-MAIN-2017-22
en
refinedweb
Feedback Getting Started Discussions Site operation discussions Recent Posts (new topic) Departments Courses Research Papers Design Docs Quotations Genealogical Diagrams Archives Luke once pointed to Peter Norvig's embedding of Prolog in Lisp as an example of the power of the language, and asked whether (or if) it could be done in Haskell. I said I would think about it, but never got around to working out the details. Well, it turns out it's been done, and the essential bits are quite simple and intuitive, though macros would help to hide some of the abstract syntax-looking stuff (not apparent from the paper's append example). append J. M. Spivey and S. Seres. Embedding Prolog in Haskell. In Proceedings of Haskell '99 (E. Meijer, ed.), Technical Report UU-CS-1999-28, Department of Computer Science, University of Utrecht. The Fun of Programming includes a chapter on the same topic (using a slightly different technique, IIRC). Also mentioned before: Prolog in Javascript. Apparently Prolog is pretty easy to implement, it's only 15k, parser included. Look up the unify function in the source! Update: That was an old version, new version is here. Dear All, Just found a nice implementation of Prolog in Scala. Unfortunately I didn't have time to try it, so my impression is only based on looking at the source code that can be found here: The above points to a couple of test programs. The Prolog interpreter is written in Scala in such a way that Prolog clauses can be embedded as Scala objects written in Scala. I don't fully understand the magic behind it, here is a sample how the tak function is written: tak('X, 'Y, 'Z, 'A) :- ( 'X =< 'Y, CUT, 'Z === 'A ) tak('X, 'Y, 'Z, 'A) :- ( 'X1 is 'X - 1, 'Y1 is 'Y - 1, 'Z1 is 'Z - 1, tak('X1, 'Y, 'Z, 'A1), tak('Y1, 'Z, 'X, 'A2), tak('Z1, 'X, 'Y, 'A3), tak('A1, 'A2, 'A3, 'A) ) Why do I mention it here? Well a git branch (click on Network) by akimichi mentions the paper by J. M. Spivey and S. Seres. But the Scala implementation looks rather continuation based and less stream based. So it would be difficult to change the search strategy, by simply changing the implementation of a connective, right? Bye qi prolog, for whatever it is worth. Again Mini Karen: ; For more detail, please see `The Reasoned Schemer' Already covered on LtU in a couple of places. Note 1: It has a simpler unification than Norvigs, no occurs check. And the code is cleaner since deref is not convoluted into unify. Note 2: One might find much older Prolog in Lisp implementations from around 1983 by Ken Kahn and Mats Carlsson (LM-Prolog). Cited from Paul Taraus post on comp.lang.prolog: "Styla is a fairly complete Prolog interpreter written in Scala, derived from Kernel Prolog (see Fluents: A Refactoring of Prolog for Uniform Reflection and Interoperation with External Objects in CL'2000). The genuinely open sourced (Apache license) code is hosted at: and it is designed with simplicity and extensibility in mind - hoping it would be useful to people experimenting with new Prolog extensions or alternative logic programming languages." This is an old thread where the focus has been on the features of the language a more or less regular Prolog interpreter is to embedded in, and how they enable the implementation. I periodically find myself thinking of a different question: "What features would be most useful in an embedded Prolog-like library/module?" I mean the answer to include things like: I think that's an interesting question. I'm not sure that a newly-resurrected thread from 2004 about a tangentially-related paper is the best place to ask it. I would simply suggest you favor Datalog or Dedalus instead of Prolog. Thanks for the comment. I'm familiar with Datalog as a more declarative Prolog influenced language. Dedalus is new to me, though I have a long standing belief that many reasoning and AI related problems are best tackled by including an explicit time-interval argument in the every representation. I'll also add that what is most useful for a special purpose embedded module might be different than what works best for something trying to be the main language. If I had thought of it earlier, I might have included in my list 9. Should a choice of breadth first vs. depth first and optional depth-limited goal search be available as explicit query options? I see Prolog as a cool, but also somewhat arbitrarily bundled collection of mechanisms. I mean to be talking about which particular bundle of related mechanisms would be most beneficial as an add-in to another general purpose language. I've used Datalog as a basis for data types - i.e. values as databases (sets of propositions) - and I find it interacts very nicely with functional abstraction and composition mechanisms. Basically, each function says how to take one or more databases and extract and combine facts from them to generate new databases. This offers much more modularity and compositionality than Datalog has on its own, and `lazy evaluation` works very well since we want to direct the final search (with both forwards and backwards chaining) based on the full composition rather than wastefully generating every intermediate database. The composition functions essentially embed the logic, constructing and refining searches. If you haven't heard about it, then you could look up the functional-relational programming model (the other meaning of `FRP`). Similarly, Bloom extends the work done on Dedalus with a more modular composition structure. Pursuant to that, I developed a technique for abstracting identity away from its arbitrary representation (e.g. surrogate keys). I've also examined a few different inspirations for representing these sets - including extended set theory and generative grammars. However, I've yet to be entirely satisfied with any of these approaches (in particular, with respect to my goal of predictable performance) and I've tabled further research on them to pursue the reactive elements of my programming model. Anyhow, when I suggest using Dedalus or Datalog instead of Prolog, I didn't mean that at a shallow "these are interesting, go see them" level. I mean that "these are the answer, they fill the gaps in expressiveness quite effectively, and all the additional features Prolog provides are unnecessary and undesirable." When you say "these are the answer" it's not clear to me exactly which problems you are claiming were the important ones that got answered. I think we are both fairly clear about problems related to the non-declarative aspects of Prolog, but my the scope of my topic includes a lot more than that. You also mention more flexible approaches to search, but it isn't clear to me where F-R-P or Bloom stands in relation to the type of flexibility demanded by many CLP algorithms. Mozart/Oz was somewhat Prolog influenced and seemed to focus on hosting generic (roll your own or use a library) CLP. I'm not mainly trying to ask "What's the best descendant of Prolog?" I'm mainly trying to ask "If you've already settled on general purpose language L which doesn't provide the things Prolog does easily as a builtin, what would be the most practically useful way of adding those capabilities to L?" To the extent that Datalog, Dedalus, or Bloom fix problems with Prolog and provide other interesting functionality then they are relevant to the answer, but don't say anything about how best to embed that in L.. Perhaps you wish to argue "Forget about L, go with Bloom" or something along those lines? That's kind of a different topic. "What features would be most useful in an embedded Prolog-like library/module?" Datalog and Dedalus are the answer to the primary question you asked. I'm assuming the embedding is in some other language, which already handles the orthogonal issues like concurrency.. My own interests pertain to maintaining and interacting with such views in heterogeneous data models. In any case, your desire here pertains to L, not to the embedded logical model.. Some translation/notification is, of course, required. But let's separately consider the runtime overhead and programmer overhead of that translation. By saying I wanted to avoid "serialization" I meant avoiding the runtime overhead of unpacking and repacking many large data structures. Writing the analog of SWIG interfaces might still be required of the programmer. If I'm allowed to modify L - either as language designer, with sufficiently powerful macros, or via runtime algorithms using reflection, then I can, as you say, possibly get rid of a lot of programmer overhead. Even if L was C++ or Java, a lot of programmer overhead could be saved by writing generic code for the generic containers.. I'm not sure what that means or relates to the specifics of the discussion. I was saying that. By saying I wanted to avoid "serialization" I meant avoiding the runtime overhead of unpacking and repacking many large data structures. Sure. But, I repeat, whether this is feasible will depend more on language L than on the embedded logic language. Related issues include mutation of observed data structures and maintaining indexes for logical searches.. Embedded languages are not standalone languages. They don't come "out of the box". The relationship and interaction between an embedded language and its host is of prominent importance to understanding how the embedded language is used and what features are offered by the embedding. Hi, 6) Should execution with instances of the module be asynchronous with an input queue for new queries? 7) Should instances of the module be objects that can be named and passed around in the parent program? Well one typically ends up with this kind of questions if the architecture is a Prolog consumer thread that is loosly coupled with the application as a producer of queries. But many Prolog systems are multi-threaded nowadays and can be directly coupled with the application. A typicall application could be a web server, where multiple interpreter threads serve requests and share the same knowledgebase. In the web server scenario there are two kinds of instances: Knowledgebases and Interpreters. They don't need names, they are just objects on the heap programmatically managed by the application. The pooling of the threads can be delegated to the web server framework, when the Interpreter object is separated from the thread object. Namespaces are important in (even) compiled languages because programmers re-use the same names for different stuff. The nature and applications of Prolog make it especially prone to name re-use. So I understand what you are saying, but that wasn't really my concern with regard to namespaces. I didn't refer to namespaces in my post. Namespaces are usually called modules in Prolog systems. Many modern Prolog systems support modules. And also many modern Prolog systems support multi-threading. These are two independent dimensions of a Prolog system. There is a little nameclash since your questions starts asking for a Prolog library/module. And then you start talking about namespaces. But by Prolog library/module I understand a Prolog library for a Prolog system that can hold multiple Prolog modules, which are names inside a namespace. (*) There is ISO document for Prolog modules, but I guess most Prolog systems divert from it, and implement something along what SICStus Prolog does. There are also further extensions around for object orientation. But all this happens inside the Prolog language and not in the interface between the host language and the Prolog system. For the multi-threading dimension, the instances that appear are outside of the language, respectively seldom represented inside the language. We hardly find a Prolog system that will represent its own knowledge base as a reference object, since this would eventually only be useful when communicating with other instances. But it would be also conceivable. (*) Although most Prolog systems only allow a flat namespace backed-up by a hierarchical file system instead of a namespace that knows about packages and modules. The naming issues may be clearer if we list them all at once:. If you don't want to go one process with a shared heap, then embedding of the Prolog system is probably not the right name in the first place. Then the architecture will be a loosely coupled one, and for such an architecture we would need to start the discussion all over again.. 3) Why do you have different names for databases and then overlapping facts? What do you mean by that? In the Prolog module systems the idea is of course to have inside a module private and/or public facts that are different from facts in other modules. If one wants facts across multiple files, then one can use the multifile directive, which is already defined in the ISO core standard. But the multifile directive does hardly carry over to modules. Since a fact is identified by its predicate name and its arguments, and predicates in different modules are considered different predicate names. Eventually something can be done with import and export. So eventually you can import a predicate name in your module, and when this predicate is dynamic, you can eventually add more facts to the predicate. But I doubt that this can be done for a static predicate. Not sure. On the other hand the name of a Prolog variable has a very small scope. Its scope is only the clause or query it occurs in. So not really a big namespace. But the extend of a Prolog variable can be greater, will be found in its continuation as long as the continuation runs. But when you assert a clause with a variable a new internal object for the variable is generated. To represent more state one can extend the notion of a variable. For example this is done in variables with attributes used in constraint solving. Or one can reserve special compounds to denote special data. For example the ISO core standard allows that there is an implementation dependent compound that will denote a stream. Some Prolog systems even allow a generic way to embed objects from the host language in the Prolog term model by means of a reference data type. The later, the reference data structure (*), is only needed when you want to manipulate some objects both from within the host language and the Prolog system. An example can be the interaction with a database management system (**). (*) Prolog Reference Data Type: (**) SQL Statements as Prolog References:. I'm not making any assumptions about the implementation. For the moment, assume that the people writing a compiler for the host language L are also designing and implementing some embedded "Prolog influenced" language K. I'm going to refer to dynamic instances of K as "interpreter objects" below even though they may well be compiled as well. The exact nature of the mapping between functions and data structures in L and K is yet to be defined. For the sake or argument, let's say that some statements of the syntactic form maps_as_func(Lfunc1,"kname1") and maps_as_data(Lobject2,"kname2") can be used by application programmers to define a mapping.. In the particular implementation you describe, the pointers to the interpreter objects are using in naming a particular interpreter. In C/C++ actions on a given interpreter are qualified as ptrToInterpreter->funcCall(...). We are in agreement that this form of naming doesn't require namespace mechanisms. We might however have a separate naming issue for brand new terms that dynamically defined within different interpreter instances. It's reasonable to suppose that the new terms could appear in queries directed from L. In the absence of programmer mistakes, the queries would be directed at a particular interpreter object and only reference terms defined by that object, but if code in L can potentially query multiple interpreter objects in close proximity, it might be good software engineering to have something like a namespace associated with the terms defined within a given interpreter object for K. 3) Why do you have different names for databases and then overlapping facts? What do you mean by that? I'm thinking of functors as relations and relations like tables in a relational database. I want to be able to re-use the same base functor name for more than one "table" by qualifying it in some other way. Of course it could be done with an extra argument, but that is not always ideal for either convenience or implementation.. To retrieve data from the K Prolog processor in the L host language there are usually the following means: a) A constructor of the K Prolog processor API is invoked, this might involve a name (for atoms) or not (for numbers, for variables). Or it might involve already available data (for compounds). (Not all APIs deal with variable construction in an annonymous way, but assume now). b) A parser of the K Prolog processor API is invoked, names are involved similar to for constructors, except that variables now typically have names. c) A text consulting via the K Prolog processor API is invoked and thus clause data is added to the knowledge base. What names are involved now depends on the Prolog text. d) A query execution via the K Prolog processor API is invoked and the result is retrieved. What names are involved now depends what kind of query is run against what Prolog text.. I don't understand how this should happen. All 4 approaches constructor, parser, consulting and execution basically generate new Prolog terms. And an eventual reference by the host language L is in each time a new reference. Unused reference by the host language can either be explicitly returned via a dispose function or implicitly via garbage collection. but if code in L can potentially query multiple interpreter objects in close proximity, it might be good software engineering to have something like a namespace associated with the terms defined within a given interpreter object for K. See above, since the host language gets new pointers anyway by the methods constructor, parse, consult and execute the host language should not be confused about terms comming from one interpreter object I1 versus terms coming from another interpreter object I2. Usually they will be different pointers. But YES, integrity of the interpreters might be destroyed if a pointer from interpreter I1 is used inside an interpreter I2. But there are more integrity constraints for a Prolog interpeter, for example there are sequencing constaints in the query API etc.. But I don't see that they are related primarily to names. I'm thinking of functors as relations and relations like tables in a relational database. I want to be able to re-use the same base functor name for more than one "table" by qualifying it in some other way. You can use the same name accross knowledgebases bases, with different meaning. At least in the model for a Prolog processor API that I am aware of. In this model with have the following relationship: +---------------+ 1 m+-------------+ | Knowledgebase |-------| Interpreter | +---------------+ +-------------+ That is you can have multiple knowledgebases. And an interpreter is uniquely associate with one knowledgebase, whereby there can be one or many interpreters for a knowledgebase (multi-threading). Of course a name has a different meaning in a knowledgebase W1 compared to its meaning in a knowledgebase W2. The meaning does not influence the API operations constructor, parse they will just return new pointers, and consulting is also not much influenced, except that consulting via an interpreter I1 associated with W1, will of course add clauses in W1 whereby consulting via an interpreter I2 associated with W2, will add clauses in W2. The meaning difference is most seen when executing a query. A query executed against a knowledgebase W1 might differ in side effect and result compared to a query executed against a knowledgebase W2, when diese knowledgebases contain different clauses. So any name contained in the query might be considered as having different meaning in the two different knowledgebases. The meaning difference happens not only for relation names but also for function names. The logical consequence relation that is behind execution, if we don't look so much at side effects, is a two place relation, that relates a knowledge base with a query: W |- Q S So a different knowledge base W leads to a different answer substitution S for the same query Q. Interestingly in the Scala solution the Prolog text of a knowledge base is not consulted, but constructed. Well such an approach might also be followed, but I guess it doesn't change some of the above observations. But there could be a subtle difference, that the constructors are not free in the Scala approach. So instead of constructing just an atom later used as a functor: new TermAtom("name") One eventually does a construction inside a context, which also sees to it to return the same pointer for same names. Namely: lookupFunctor("name", parentKnowledgebase) YES such variations can happen and are found in practical systems. Terms that have variables in it usually are bound to an interpreter context. For example because variables get internal serial numbers dependent on the interpreter. Uninstatiated clauses are usually not seen from the API and are bound to a knowledgebase context. As result to run the same query against multiple knowledgebases W1 and W2: W1 |- Q, W2 |- Q The query Q has to be constructed twice. P.S.: I use the name interpreter here, but it can be replaced by compiler. I am not assuming that the Prolog processer only "interprets" the knowledge base and the query, it might also compile it to some machine representation etc.. We've gotten really focused on issues related to naming. It seems a small part of the topic to me, but I say that with the intent of asking you whether you see it that way also, and not to be dismissive or curtail discussion.. I'm explicitly not focused on what existing Prolog systems offer. To retrieve data from the K Prolog processor in the L host language there are usually the following means:. I think the text above describes something different than what your message assumes, but it wasn't clear to me where we started talking past each other.) When I first described the idea you are responding to above, I mentioned explicitly that it is an idea apart from Prolog that Prolog would treat by using an extra argument, and that I thought there were reasons to choose a different approach. One of the issues with the form above is that the programmer needs to be aware of the existence of multiple knowledge bases at the time they write code lookupFunctor, whereas namespace constructions are often considered for situations where this overlap is only apparent in retrospect - e.g. some other code can call KB1::lookupFunctor("name") or KB2::lookupFunctor("name") if both are needed together, and it can indicate a default choice when only one or the other is to be mainly used in a given context. What you describe here, except for the FIFO queue:. Is usually archived by a foreign function interface. I exclude the FIFO since query answers could also directly be populated into a GUI table etc.. And also a FFI does not preclude that it is used in this continuation style. It can also often be used inverted, like an iterator with open(), next() and close(). But lets turn to the naming issue you mention with a FFI. Well there are at least two name spaces involved. The name space of the foreign language, i.e. something like modules or classes, and inside these modules or classes, something like procedures or methods. And there is the name space for the Prolog predicates. If you only have a couple of unorganized FFI predicates, than you can explicitly map these two name spaces. The mapping can either be done inside the foreign language, when it registers a FFI procedure or method, or it can be done inside the Prolog system, when it registers a FFI procedure or method. For the later approach it is useful when the foreign language has a reflection capability, so that the Prolog system can find the procedure or method at runtime. Otherwise compile time approaches have to be pursued. Here is an example of such a mapping, from within a Prolog system via Java reflection: :- foreign(write_result/1, 'myPackage.myClass', myMethod('Term')). What do we have in the above: write_result/1: The predicate indicator from the Prolog name space. myPackage.myClass: The class from the foreign name space, i.e. Java. myMethod('Term'): The method and formal parameters from the foreign name space, i.e. Java. Now if you have a bulk of organized FFI predicates, there are a couple of methods to do this. Or lets better say how to avoid doing this. I guess it will never happen that you have a method for each table, it will rather be the case that there is a small number of foreign procedure responsible for all tables. Something along: For the above we end up with the references I already gave (*). So I guess the problem is more on the Prolog side. Probably we have to put aside the idea that a term P(x,..,y) will access a table P'. Why not use a term of the following form: ft(P',[x,..,y]). ft is an acronym for Foreign Table. You need to implement this binary predicate once, with the help of the above FFI (Open Cursor, Issue Select Statement, Close Cursor, etc..) and you can define what ever mapping you like between P and P'. You might then implement a goal expansion rule (**), that converts P(x,...,y) into ft(P',[x,...,y]) for execution, so that you can go along and write P(x,...,y) in your Prolog code. Clause expansion and goal expansion is widely available in Prolog systems. You can define a goal expansion rule by hooking into the multi file predicate goal_expansion/2. Something along: :- multifile(goal_expansion/2). goal_expansion(C,ft(Q,L)) :- callable(C), functor(C,P,A), /* is P/A in Prolog table name space */, /* determine Foreign table name Q */, C=..[_|L]. You have to provide the check and the mapping by yourself. callable/1, functor/3 and =../2 (univ) are standard Prolog predicates. (*) SQL Statements as Prolog References: (**) Clause Expansion Interesting. I would probably have gone in a different direction with the syntax and the design trade offs than what you are working on. A lot depends, of course, on the use cases that one has in mind. My use cases would be more about adding constructive relational inference to L (Java in your implementation), keeping the type checking of L, rather than trying to embed a full traditional (fully dynamic) Prolog within L. Type checking of L is still done. But the type system of K is not very fine grained. I only showed how to register a method which has as argument an object that belongs but is not necessarely an instance of the Java class Term. Actually the Java class Term is abstract, so there can be no direct instances of this class. Usually through a FFI you can register a couple of formal parameters types from L that are derived from the data types in K. When K is the Prolog languages these usually contain atoms, variables, compounds, floats and integers. Lets add references as well to it. Type checking is now done at two time instances. The first instance is during registering, Since Java has reflection it is really checked whether the a corresponding method exists. Also it is checked whether the given arity matches the number of formal parameter. This match can involved counting return types as a special last argument of the predicate and skipping some control parameters, such as a formal parameter of type interpreter. The second time instance when the type is check is during the invokation of the predicate that has been associated with the method during the registration. Depending on the given formal type in language L, there are some rules for the language K what is allowed and what mapping does happen, whether some narrowing or widening is implied etc.. This has nothing to do with the Java reflection mechanism but solely how the datatypes between the language K and the language L map. For example an atom from the language K might easily map to a string in the language L. But some Prolog systems that table atoms might also offer other mappings. Then typically the floats and integers of the language K map to the floats and integers of the language L. For variables and compounds of K the API provides datatypes in L so that these datatypes of K can be mapped to L. The mapping can lead to exceptions. For example when an atom/string is expected and the predicate is invoked with an uninstantiated variable. Then the ISO instantiation error is throw in the language K by, since it is a errorneous call out. No exception is thrown in the language L. But the exception might ripple back to the language L where a call in happened. Other errors than instantiation error might be thrown such as type error, domain error or representation error. The ISO standard requires that an ISO compliant Prolog features a full exception mechanism. So a decent implementation of the language K will have exceptions. The mechanism is also used in the Prolog language itself to indicate errors of arguments of built-ins, since it does not have type declarations. There are Prolog like languages that have type declarations which would allow compile time checks, but when I hear Prolog I am identifying it with the language defined in the ISO standard. A FFI for languages with declarations should also be possible. I guess one then would not only register methods but also types, i.e. the mapping between types of K and L. A very tight integration where practically K=L and introducing a type in L automatically introduces it in K is not evident. If you study for example the code: You will find that the data structures of K must implement the protocoll of the class Term. But then in some places it is even a little bit more complex, for example the method unify has two Term arguments. So it is usually not part of the protocol of Term but rather part of the Interpreter protocoll, since it changes the state of the Interpreter. So if you introduce a new type in L and want to mirror this in K, there are many places that need to be changed. And there are no general rules how a type L should propagate to the protocoll of K. It depends on how you want to see your type L behave in K. In the end you invest a lot of time mapping your types and you will recognize that they don't behave much different than atom, compound or reference. What would be handy would be a record type in the language K. There are some Prolog systems that have special support for record types. But it is not part of the ISO Prolog core standard. But the support is thin, a record is basically modelled as a compound plus a declaration. You could try adopt this if you are interested in automatic the mapping between L and K. For more information how this is practically done see for example this Prolog system: The ECLiPSe Constraint Programming System 5.1 Structure Notation What kind of types are you interested in having available? Contrary to the strategy of the Scala example, I remember once writing an interpreter by postulating the following protocoll. So making it as Term centric as possible: public abstract class Term { public abstract boolean eq(Term other); public abstract int cmp(Term other, Interpreter inter); public abstract Term deref(); public abstract boolean unify(Term other, Interpreter inter); [...] } The above is also contrary to the Lisp example and to the Haskell example. In the Haskell example we have for example unify : Subst -> (Term, Term) -> List Subst, but nothing via some class mechanism or so. Dito in the JavaScript example, it has also: function unify(x, y, env). The Scheme example is also subject to the same problem. Means people are not very innovative and simply copy each other... (Pun) But it was not very performant for various reasons, so I abandoned it. But if the objects of your language L implement this protocoll, then they can be used by the language K. The problem that usually renders this approach not feasible is the fact that one might deal with objects from foreign libraries that don't implement this protocoll. One would then need to wrap these objects. Some better ideas around? Sketching an example of what I mean to clarify. The SOCI C++ library is an attempt to make a pleasant embedded interface to generic relational databases, with the capabilities of various actual relational db's exposed through a factory pattern. The Mercury language is a Prology-influenced language that is compiled, typed with respect to the read/write status of variables, and more declarative than Prolog. What I am imagining is an embedded compiled language with a query syntax similar Mercury (which is similar to Prolog), special types of functors that are know by declaration to be evaluated as regular function calls in the host runtime for the host language L, and variables which are typed not only by their read/write status but also by inheriting types from L in a way that's similar to the way iterators in C++ STL (of which C++ pointers are a special case) inherit a relationship to the type that they reference. L would "input" queries to K that are typed in the same way (possibly to an asynchronous FIFO input queue) and get back corresponding answer objects (possibly from an asynchronous FIFO output queue) that contain iterators of the the type described. Both how dynamic the embedded K system is and how strong its typing is would depend on the nature of L. SOCI looks to me like mixing approaches already mentioned here: Note most Prolog system ALREADY OFFER similar programming interfaces to their system. In principle the toy systems mentioned in this thread, i.e. the Scala, Lisp, Haskell, JavaScript, you name it examples offer already a programming integration with their host language. And also more mature systems that already exist for years, typically offer programming interfaces. Of course you could make your hands dirty, and start implementing your language K. But it will take you a couple of months to arrive at a non toy level.: There is a certain composting of logic languages. I encountered "logic engines" two times in my career. In one case it was about filling action tables within an Excel sheet that was compiled via VB into a "fact base" of some test tool. Those facts were then used by the tool to verify response data of a system-under-test. Half of the team used it, the other half tried to work around it. The other time it was a simple "reasoning engine" which was fed with facts written in an "in-house" XML language designed by some guy who left the company. It was very semantic web or so - I don't remember exactly the rhetoric packaging used to sell it to the engineers. There was one fan in the team who kept it alive. In neither case it was used for anything sophisticated like solving NP complete problems declaratively, but even the little it could do, was done badly or cumbersome. So the experience curve goes like this: very sophisticated rule based engines usable by only a minority of trained expert programmers are replaced by dumbed down versions of such engines which are meant to be used by average programmers for occasional use which gets replaced by nothing like that at all. The final backtracking step was to dump the backtracking engines into the dustbins and track back to a mixture of procedural / OOP approaches. This could be seen as an entirely negative result but there was at least a desire for declarative programming methodologies and moving from step 1 to step 2 and becoming "mainstream" if only within an organization, something which is quite a lot. I'll just note that it is the simplest mechanism I know of that extends resolution to model the closed world assumption. Since CWA is pretty frequently wanted, what is there in the way of attractive alternatives? I'm not sure if that's what you're looking for, but there are datalog with negation, e.g. via stratification. Prolog cut isn't the simplest mechanism at all -- it is a combination of committed choice with negation. At the very least we should separate the two. The main problem with cut is that it is unsound. See for a thorough explanation. The links after the article describe alternatives and variations, and the ways to ensure soundness. it is a combination of committed choice with negation. At the very least we should separate the two. Possibly. But then you would be able to synthesise cut again, so the language in toto is not going to become easier to reason about. See I had not read this piece of yours and am pleased to have it pointed out to me: it is clever and well-argued, and would probably make good front-page material, especially given it has been 3 weeks since we have had a new story. That said it is not altogether to the point: I am talking about the case where the CWA is intended and so negation-as-failure is almost certainly the intended semantics. I'm familiar with a fair bit of the literature around LF, so I did know that there are simple, principled mechanisms for doing logic programming efficiently that ditch negation-as-failure. Furthermore, predicates like var/1 may make the semantics of Prolog hard to understand, but they are essential to how reflection and metaprogramming work in Prolog. It is like criticising Common Lisp because macros can make code hard-to-read: it is not so much criticising one of Lisp's language features, but condemning a whole ethos of program construction. So to rephrase my question: if we want a Prolog-like language and we want to model a problem using CWA, is there a mechanism that is not much more complex than cut which is cleaner? Which might be the first part of a larger question, can we clean up Prolog without destroying it? Lee Naish has argued that without cut or a similar committed choice operator one cannot write any large practical logic program. Oleg, I am familiar with George Boolos's Don't Eliminate Cut paper but am unfamiliar with Lee Naish's argument. (I assumed Charles Stewart's "In defense of cut" was a play on Boolos's paper title.) Thanks for the reference. The CWA is not only supported by cut. It already starts with how (=)/2 is implemented via unification. In normal logic we have: ?- a=b & b=c -> a=c true Try this in prolog. So when investigates CWA one has not only to look at negation but also equality. Domain independence and Clark's Equational Theory (1978) come to mind. Unfortunately I am not able to write down in a few lines what these are all about. I also read somewhere that we are about to experience the first semantic web war, to CWA or to open world assumption (OWA). Given the many dimensions that influence CWA I guess there are many shades between CWA and OWA. A language that covers all shades would be useful. I guess some of the proof assistants, verification or automation systems around can do it. But retrofiting Prolog is probably impossible, doesn't it? It's not clear to me that the failure to model Leibniz's rule is part of the CWA and not a not-quite-orthogonal design choice made for efficiency reasons. We do know how to extend resolution to encompass Leibniz's rule, and it seems to me that the CWA makes perfect sense together with Leibniz's rule. It seems that the term equational logic is a rather a new term, and not exactly the same as some logic with equality. Actually I wanted to stay first order and point out that Prolog is not based on FOL_= but rather on FOL_CET where CET is the Clark Equational Theory. Leibniz's rule, if stated as s=t -> (P(...s...) -> P(...t...)), neither fails in FOL_CET nor in FOL_=. In pure Prolog if s and t unify and the call P(...s...) succeeds, then the call P(...t...) also succeeds. Whether Leibniz's rule is then stated as an inference rule or an axiom is not essential, some quantifier shuffling makes them the same. What makes FOL_CET partly CWA is that =/2 is fixed, compared to FOL_=. We cannot have =/2 on the left hand side of the consequence relation in Prolog, since FOL_CET is categoric on (=)/2 and having it there therefore woudn't make sense. But you are right, resolution for FOL_= is also around, i.e. para-modulation etc... But then in Equational Logic more happens. It is essentially higher order, for example the Equanimity rule, we have equality between propostions. This is not anymore FOL. But higher order logics can be subject to the same equational problems as FOL, and thus being rather OWA than CWA. But if we mingle to much with HOL, and fix equality similarly as is done by CET for FOL, we end up with something that is close to CWA concerning equations. For example in set theory, via extensionality, equality is bootstrapped from the membership relation. But I havent much looked into HOL concerning domain independence etc.. There is a notion of absoluteness in set theory, which could eventually cover some of the concerns. P.S.: Equational OWA is useful for solving murder riddles. Bob saw Alice carring a dagger. Carlo saw Eva running away from Debby, the victim. It later turns out that Alice, resp. Eva was the murderer, but they were known by different names to Bob and Carlo. P.S.S.: And this is how equational OWA makes the simple negation as failure wrong. Take the train table example. There is no train from Ragusa on the timetable, therefore today no train from Ragusa will arrive? Wrong, what if Ragusa is a different name for Dubrovnik, and a train from Dubrovnik is on the timetable? Schroeder-Heister has written quite extensively on the logical principles of definitional reflection/closure, which IMO serves as a good logical foundation for the closed-world assumption. The basic idea is that if you assume that a set of clauses define an atomic proposition, then that justifies (in the Martin-Lof/Prawitz/Dummett sense) an elimination principle for them. Basically it starts to give atomic propositions some actual logical content. (I've been meaning to look at definitional reflection again to see if it can also be used to give a better account of datatype declarations in ML and Haskell than the usual method of desugaring them to sum/product/recursive types. I've been implementing a language, and have come to dislike giving semantics by elaboration.) On an unrelated note, Girard discusses negation-as-failure in The Blind Spot, though I forget what he said about it (it was not wholly uncomplimentary, which I guess means he likes it). Dale Miller has thought about this in the context of uniform provability, with applications to logic programming. I don't know where this line of work went. I dare say it is worth digging out those notes of Girard. I never did get around to reading them. Girard seems to be a real basher on Prolog: (corsang1.pdf, 4.D.4) "[...] The amateur logicians recruited by hundreds around the fifth generation wrote whatever came to their mind on this. For instance that a query that does not succeed is false. Since non-success in PROLOG est recessive, one is in front of a non axiomatisable notion, the closed world assumption or CWA. [...]" "[...] Too much joggling with negations, and one slipped up the hands : provability was mistaken with truth in a professed smallest Herbrand model. What is factually true for formulas ∃x1 ...∃xp(R1 ∧...∧Rq), but which is a conceptual nonsense. Such an error of perspective mechanically entailed a conceptual and technical catastrophe : the truth of a universal formula in a bizarre model, what is it, exactly ? It is the CWA, a pretext for ruining a lot of paper and for discrediting an approach after all interesting and original. [...]" Instead of disputing the charges, I will go about and buy the following book, to generally cheer me up: Formalizing Common Sense: Papers by John McCarthy (Ablex Series in Artificial Intelligence) But I guess one can draw an analogue between uniform proofs and polarized proofs, later found in the tome. (Didn't verify) At least part of Girard's complaint about Prolog is that its theory makes use of structural proof theory in a very coarse and unrefined manner - he's criticising the Prolog crowd for being vulgar logicians, not bad programmers. And I guess he's particularly sharp about the kind of logician who passes off work on the theory of Prolog as being a contribution to proof theory. He's been much friendlier about work on logic programming coming from the contribution of people like Dale Miller and the uniform provability crowd, and has said flattering (by his standards) things about Jean-Marc Andreoli's work on focussing proofs. Girard's point connecting the CWA to least Herbrand models is well put: thank you for posting that. I'll defend it so: it is perfectly legitimate to model a problem in this way if you know what you are doing. Herbrand models, least or otherwise, aren't the private property of logicians. I guess one can draw an analogue between uniform proofs and polarized proofs If polarised proofs are the same kind of thing as focusing proofs, then there is a connection. I saw this term in John McCarthy's writing. He deals with Circumscription and not with CWA. There are subtle differences and there are also overlaps. Idea is that although some of these concepts (CWA, Circumscription, etc..) are inherently not only non- monotonic but also higher order and/or hyper arithmetic. There are collapsible cases, i.e. where matters become first order and/or semi-decidable. So if Girards focus is on linear logic and its perfection, ignoring many application areas of logic, then it is clear for me that Girards blind spot are all these results. If these results are formulated proof theoretical than they contribute to proof theory, despite what Girard values. if these results are done model theoretic, then they contribute to model theory. Here is an example of a nice proof theoretic result: Take the Schroeder-Heister formulation of definitions, most probably inspired by negation as failure, I guess we get an easy cut elimination of definitions. And thus consistency of theories extended by definitions. There are a couple of other ways to show this, but this looks nice. (Didn't verify) Structural proof theory is also possible to do without the introduction of exponentials etc.. there have been for example results about contraction etc.. before Girard. Eventually one more blind spot of Girard. But I must admit Girard had a positive impact on logic, here and then one sees nice expositions successively building on subsets of linear logic. But I guess one has to find the right balance between the means and the end. (*) (*) You probably don't take the airplane to fetch some bread around the corner. There is some inconsistency in either Girards original view or your interpretation of it. I consider Andreoli also part of the Prolog crowd, he developed his Linear Logic stuff while at ECRC, Munich, right? But I guess that Girard or you forget that there is a notion of academic freedom, or lets better call it there should be room for unconditioned explorative research if one desires (which ECRC, Munich was able to provide I guess). Labeling some proof theory as vulgar versus non-vulgar automatically gives a special status to the non-vulgar proof theory. So that there is the danger that we arrive at a point of mental inertia, because we believe that we have found the holy grail. If we spinn this further like some islam has found mohamad or some christians have found jesus, we might start a relgious war. Although linear logic is appealing, the two polarities are like matter and anti matter and suggest a first principle that cannot further be reduced (kind of a more radical bivalence than already found in Aristoteles), but one never knows what else might come in proof theory. Remember Πάντα ῥεῖ (panta rhei) "everything flows". But my main argument against dogmatism runs as follows. Although powerful first principles might be very useful, there is a danger in a fixation on a particular set of first principles. Namely an emergent phänomena might not only have one reduction. Theoretical computer science was very successful in showing that: Take the phänomena of computation and the reductions to turing machine, lambda calculus, etc.. That's also why I read the title "LtU" of this forum rather ironical than verbatim, won't you? Some publications from 1991 - 1994 by ECRC: ECRC viewed as an internet pionier I have expressed myself badly if I there is a suggestion in what I wrote that the criticism of some work in logic as vulgar is something like defending a citadel. It is rather about people who pick up theories and insights and use them in a vulgar way, failing to really appreciate them - I could lift Girard's words: "an error of perspective mechanically entailed a conceptual and technical catastrophe". He is criticising a kind of error of taste that can undermine the judgement of whole cliques of scientists. This part I agree with, even as I agree that vulgarity sometimes gets results. I certainly don't regard myself as being a Girardian, if that means revering whatever comes from Marseille. The picture that Girard draws is just a carricature. And has maybe the same value like when today somebody draws a carricature of Merkel and Sarkozy. It raises some emotions, but then one can proceed as usual. But I agree that there can be an error of perspective, respectively a too narrow perspective. Take for example this thread, and my pondering that CWA involves an ingredient that has to do with equality and for example your response here. So why is there still such a lack of a widely available full understanding of the programming language Prolog. Probably you are also right here, that the vulgar reception of Prolog, especially the many light handed introductions to Prolog that are offered to miriad of students by professors all over the world do probably some harm. But historically Prolog is often explained based on resolution theorem proving. Contributions from for example the Dale Miller camp take an other route and identify the minimal logic resp. linear logic in Prolog. These new explanations and partly new discoveries did not yet find the way into the mainstream. I guess it will take a couple of years and we will have a different view point here. There is a little danger that the homework is not correctly done, I have the feeling that an alternative perspective has not yet been fully worked out and is not yet available in one piece of work, but I would appreciate pointers. Non-dogmatism helps here. The danger is in a drifting away and a possible abandonment of Prolog altogether, for example in favor of the many fancy dependent type systems. Very exciting scientific thriller to watch over the next years. You name it Schroeder-Heister, Robert Stärk (1989), etc.. it all amounts to Clark Completion (how to deal with implication) and Clark Equational Theory (how to deal with equation). Lets first show that Robert Stärks axioms are essentially Schroeder-Heisters inferences rules. Robert Stärk says (*) (now only looking at the propositional case): Section 7.1: D_1 -> P .. D_n -> P Section 7.2: P -> D_1 v ... v D_n The former axiom is already seen in Schroeder-Heister (**), the implication is just written the other way around. The later can be turned into the following inference rule and vice versa: G, D_1 |- C ... G, D_n |- C --------------------------------- G, P |- C So much for the implication. But what happens with the equations? We only looked at the propositional case. Robert Stärks approach is extrem since it confronts us with an infinite axiom system, just instantiating the axioms based on the term model. But CET is present in 5) and 6) of Robert Stärk. On the other hand Schroeder-Heisters rule with variables is directly defined via some mgu and does not have equation in the first place. On page 19 he then discussses what a definition X=X would do. He basically shows that it implies Robert Stärks 5) and 6). Not sure whether this tie could be easily broken for Schroeder-Heister, except if we mix the definition part with arbitrary logical reasoning. For example would like to be able to reason as follows, which is rather unlikely to be done directly in the discussed definition framework: P(a), a=b |- P(b) P(a), a=b |/- ~P(c) Bye (*) (**) I wasn't aware of Stark's work before, so now I have something more to read. :) The SOCI approach is apparently different than what you are thinking of in a few different ways. Some of those ways are relevant to our discussion and some are not. There is no parsing involved, but that's not too relevant. What is relevant is that "into()" and "use()" are fully typed by L (C++ in this case) and work with the actual C++ variables designated. SOCI used typed wrappers which know how to data conversion between C++ types and SQL types for some set of standard choices, and it provides an interface to adding custom object mappings. SOCI is designed for talking to relational databases, so it is natural to assume that data needs to be serialized in order to be passed between a program and a database, and so it is general to access collections serially via iterators.). That idea again leads to the desirability of read/write typing. Being able to access facts based on data in a relational database - jit as needed by a query - would also be an interesting feature.: The use cases I am most interested in might well be different than the ones that Jekejeke is designed or optimized for. But I'm independently interested in your presentation of what you think the most interesting use cases for embedded Jekejeke or any other existing Prolog system are. You could describe them here or make a new thread. That survey paper would also have benefited from examples illustrating the concrete details of what they are abstractly (and subjectively) classifying.). This is indeed possible if the language K exhibits a data protocoll which can be satisfied by the language L. See this response here. If no such protocol is available we must shift data around as in SOCI. There are then basically 4 technical use cases. Technical use cases are the means for the technical interactions in your business use cases/workflow of your application which is to satisfy the business case of your customer. Lets abstract from FIFO/iterator issues for the moment: See the concepts described here ff.: What technical use cases do you have in mind? Please note that all the Prolog Forreign Function Interfaces (FFI) have typed "into"/"use" in some way or the other. "into"/"use" are part of both the Call in technical use case and the Call out technical use case. In both use cases data might flow from L to K and vice versa. But "into"/"use" are sub steps in the technical use cases and are not considered technical use cases here in itself. Here is how a typed "use" can be done for a float and for an atom when L=Java: /* query construction */ ... new TermFloat(<language L float>) ... ... new TermAtom(<language L string>) ... And here is how a "into" can be done for a float and for an atom when L=Java: /* result destruction */ <language L float> = ((TermFloat)var.deref()).getValue() <language L string> = ((TermAtom)var.deref()).getValue() It has a similar safety like the SQL interface SOCI. In SOCI you can specify a flag holder for an into. And inspect the flag whether a null value was the result, or whether a coversion error occured in the result, or whether the value was successfully submitted. A Prolog system is usually stricter. It does not support the many conversions that an SQL API usually supports, i.e. string to number and vice versa. Although you can retrieve from the SQL API the native column type. In a Prolog system you get an object which belongs to some class along the Term hierarchy. In the simple case as above you simply cast to what type you expect. But I guess some trickery with Java generics might eliminate some keyboard strokes needed to put down the code. Eventually with C type templates the most keystrokes can be saved. But you said yourself the language L should be a determinant of the interface. So I guess our whole discussion is not about Prolog the language K, but about the language L. Since the FFI interface is usually written in L itself, changing L also changes the possibilities for building the FFI. So maybe you found an interesting case for the application of C templates. Or for type inference in general. But I think it is not very specific to a relation language or Prolog. Any interfacing problem with a variety of types would do. Imaging interfaceing with MatLab. The exact same questions pop up. When I wrote "use case" I meant it with the common usage of "Here is an application programming task that this software module would be particularly useful for." In the case of your jekejeke, it would be something that a Java programmer wants to do that is cumbersome to do in native Java. Most of data conversions in SOCI are designed and coded by a library programmer who is writing the factory implementations for a particular database backend. The appropriateness of the type conversion, and whether or not a round trip store and retrieve from the database is always bijective, is something that, from the POV of the SOCI user, is determined by the correctness of the library programmer's implementation. For example, if the programmer is storing a signed 64 bit integer and a given database didn't support such a type natively, then a correct backend implementation should convert to and from strings in order to maintain a bijective mapping between the application and the database. I have no association with SOCI and was not proposing it an ideal design - there are many things I would have done differently. It was just brought up as an example of relatively clean "query" interface that works with the native types of the library client and is optimized for the convenience of and efficient use by the library client. Coming back to jekejeke, it would help me to understand your POV better if you'd provide some examples of both "use case" in the general sense and what you mean by "technical use case" to show where you think jekejeke would be particularly appealing. In the case of your jekejeke, it would be something that a Java programmer wants to do that is cumbersome to do in native Java. I am more thinking of "make or buy" scenarios that would make a business case for the integration of various components. And of course I am happy if I don't have to care in which language a component is implement. This is slightly different meaning than "cumbersom", and more in the spirit of what CORBA tried to archive in the early days. The free posibility to mix and match arbirary components. But I am heading more for a tight integration in the first place and not an RPC like integration, i.e. I am interested in the embedding/library idea. But I am not advocating that Jekejeke Prolog implements something different from the main-stream. I have already mentioned the main stream in this post here. If you look at the paper cited there, you will find a more fine grained discussion of FFIs. The 64-bit example where a conversion to a string happens is a good point in favour of a broader mapping utility inside the FFI. Which is not so much a problem for Prolog systems since they anyway feature arbitrary long integers. But a type mismatch happens for example in the area of decimals, many databases feature large decimals, for example oracle with up to a mantissa of 36 digits in the old times. But Prolog systems have not adopted this type. Here Jekejeke stands out since it has a arbitrary long decimal datatype. It is internally derived from java.math.BigDecimal. But this feature, the data mapping, is on another level than the technical use cases. It is a property inside the technical use cases I have already mentioned. The paper does not distingusih between these level of aspects, it does not discuss the different criteria in a hierarchical way, like for example first checking what technical uses cases the various Prolog system and then looking into the details how the technical use cases are supported. Here is a little account how the technical use casse are covered by the paper. This corresponds probably also to the average awareness that the author had at the time of writing. Most likely influcended by the systems he considered an their state of the art at the time (2000). Here is the little list already mentioned here: So we see some weakness in the first two technical use cases. So we would need to consult other sources to see what is around. The first technical use case is tied a little bit to the aspect whether your language K is supposed to be multi-threaded or not. A little blind spot I have already mention early on in this thread. But which we have more or less resolved during the discussion. Here is a FFI example for multi-threading: You find all kind of "Interpreter allocation". Their is a data type "Prolog" and a data type "SICStus". I can not tell you so much about these. It is not exactly the same design as in Jekejeke Prolog. Most likely the SICStus object corresponds to the Interpreter object in Jekejeke Prolog. And the Prolog object corresponds to the CallOut object in Jekejeke Prolog. But these would be the details of the technical use cases, important is that there is this concept and that we do not build upon some notion of a globally available Prolog system or Prolog application. Next comes "Function registering". This was to some extend part of your bullet list, namely the issue of namespace related to predicates. Again the aspect is not extremly good covered in the paper. This is a result of the globally available Prolog system or Prolog application view. If the Prolog system or Prolog application is globally evailable, then of course the predicates inside it are also globallay available. This amounts to the "external" keyword of C and the linking of C against your application with the Prolog system or Prolog application. What I have tried to convey is a more dynamic "function registering" model. It assumes in the first place a dynamic interpreter allocation, and underlying a dynamic knowledge base allocation. From this "function registering" is derived as dynamically establishing a relationship between two objects. The function of the language L and the predicate of the function K. The relationship is usually only used in the "Call out" scenario when the Prolog system or Prolog application calls the foreign language. In the "Call in" scenario, when the foreign language calls the Prolog system or Prolog application, the foreign language simply picks the desired predicate name when building the query against the interpreter. But we find systems that skip the technical use case of "function registering" and use the same approach for "Call out" as for "Call in". Pick what you need and do what you need. This is seen in the last part of the SICStus example. We have there: :- use_module(library(jasper)). main:- jasper_initialize(JVM), jasper_new_object(JVM, 'MultiSimple2', init, init, Obj), jasper_call(JVM, method('', 'CallBack', [instance]), 'CallBack'(+object('')), 'CallBack'(Obj)). So the "Call out" happens in that the Prolog system dynamically looks up an object instance and then dynamically invokes a method of this object instance. No registering of a function happens. There is one little drawback of this approach. When we have a separate "function registering" technical use case, we can already do some validation during this step. In the case of Jekejeke Prolog, and in the case of some other Prolog systems, the "function registering" is done in a directive fashion. So when these directives succeed we know that we have established a valid mapping, so that we don't get some runtime errors concerning the mapping during our "Call out". Here is how the same would look like with function registering. (*) :- foreign(callback/0, "MultiSimple2", 'CallBack'). main :- callback. In the SICStus example, where this separation is not seen, we might get exceptions at runtime that either stem from invalid FFI mappings or from errors inside the "Called out" objects, when these execute. For example we might get an error at runtime when the "MultiSimple2" class is not available at runtime, since it was not put into the class path or whatever. In the "function registering" technical use case we would get the error already at consult time of the little Prolog text that does the callback. Bye (*) Currently one would need to redesign the code a little and make 'CallBack' static and no separate process would automatically be spawn. This could be done inside the 'CallBack' implementation if really needed. Why is Prolog such a polarizing subject? We really shouldn't blame Prolog for this. The problem has to do with many unresolved issues in logic itself. The subject of "logic programming" has put these issues into sharp contrast. This is what I mean. Consider two old papers from the early days of Logic programming. The first is Kowalski and Emden, The Semantics of Predicate Logic as a Programming Language , and the second is Non-Resolution Theorem Proving. Here we have two entirely correct but contrasting visions of what logic is and what it can do. One has a practical foundation and the other is more theoretical. On the one hand we have model theory and algebra and on the other we have the school of Frege and Russell. It is becoming clear already in the early years of the twenty first century that this is nothing but bad metaphysics. Perhaps it is time to clean up the shop, and throw out the trash? I guess we must be aware that over the last 50 years probably some progress has been made in understanding non-classical logics and putting them to use. But there has been even further progress in the form of linear logic, since it deals with a much deeper idea than only refuting the excluded middle. If I look at the two papers, and the two contrasts they represent. Namely resolution theorem proving and non-resolution theorem proving, then I find that there is no awareness for classical vs. non-classical logic in them. I guess in both cases logic is identified with classical logic. The non-resolution theorem paper also lacks a little bit in its promis, since it uses much of terminology and apparatus from resolution theorem proving, i.e unification, skolem etc.. But how can we convey the feeling for the two approaches? Without looking at the idea whether something is classical or not for the moment? The view of resolution theorem proving I like best is the one of the thinking soup. We have a soup of "clauses", which might collide and reshape into new "clauses". There is basically only one collision/reshape rule, namely the resolution step. On the other hand the picture I like most for non-resolution theorem proving is the one of a Gentzen proof. Namely we start with a set of premisses and conclusions, and then built a proof tree. There is not anymore a single rule that says when a node in the proof tree is valid, but a whole set of rules. The rules are sometimes grouped along the connective they deal with. Now when Prolog enters the stage, for the thinking soup, the viewpoint is typically that some "clause" shapes are excluded, i.e. everything non-horn and as a consequence some collision patterns can be preferred i.e. input resolution, where at least one of the "clauses" has to be from the initial soup, can be pursued. But conceptually also something else happens. The original means of classical logic is not anymore needed in full for the new end. But Prolog can also enter the stage in the Gentzen proof world. And if worked out correctly it becomes much more clearer there that the full terrain of classical logic is not anymore needed. And one can then also clearly work out the logical limitations of Prolog. Unfortunately in the case of Prolog, applying linear logic doesn't give any further insight about the logical limitations in my opinion. I have rather the feeling when applying linear logic we get some insight from below, if we compare Prolog with more primitive languages. So it shows us more reductions, but it wouldn't for example explain better some of the para-consistency or para-completness properties of Prolog, which we can already explain by applying for example minimal logic instead of classical logic. But the reductions are also of interest, since they can be transfered back to minimal logic, and give us a deeper insight why certain Gentzen proof tree methods are possible. These are then not anymore meta-logical properties about admissibility of certain conclusions but meta-logical properties about proof objects. Which puts us again closer to programming languages and lambda calculus. The later is the reason why I expect that the resolution theorem proving explanation of Prolog will be abandoned in the near future by the mainstream in favor of another explanation. But I am not sure whether this coincidences with your "clean up the shop". P.S.: It is also not that easy to design a thinking soup for intuitionistic logic from start. There is an attempt in: Basic Proof Theory, 2nd Edition A.S.Troelstra & H. Schwichtenberg Cambridge University Press, 2000 Section 7.5: Resolution for Ip But not sure whether somebody is using it. As you point out the two approaches seem to have a lot in common; so why is it such a big deal. I think it has to do with the intent of the practitioner not with logical theory. Non-resolution is an inductive agenda, while resolution is deductive. This is why metaphysics is important. Logic is not really a self contained answer to everything. You can't solve logical problems simply by being meta-logical. Logic is not the foundation of everything, not even of mathematics. But the inductive and deductive categories have also failed to solve our metaphysical dilemma. Personally, I like Pragmatism and its close cousin Cybernetics. And since this is about logic we should probably mention Modal logic, the brain child of the last great pragmatist C. I. Lewis, introduced as a counter argument to Principia Mathematica. Apparently Lewis thinks that possible and necessary are better categories than T or F. It makes sense if you think pragmatically. If Non-Resolution would be on the inductive agenda, then there would be something wrong. By changing the explanation of Prolog we don't want to loose some fundamental properties of the ascribed underlying logical reasoning. Resolution or Non-Resolution, both should model the same skeptical reasoning for example in terms of answer computation. But of course when somebody sees this thread, which contains a couple of inquiries into the explanation of Prolog und the ascribed underlying logical resoning, one might loose track and propel one self into spheres with a different frame and questions. And exhilarate on revolutionizing these spheres. But back to the Prolog sphere, I guess the revolution already happened. But it did not yet reach the main stream. And maybe some of the revolutionaries (Girard, etc..) are simply frustrated that they were not allowed to start from scratch, and that Prolog was already tainted by their precursors.
http://lambda-the-ultimate.org/node/112
CC-MAIN-2017-22
en
refinedweb
Today I came across a LINQ method, DefaultIfEmpty(), which is quite similar to the Left Join of SQL.DefaultIfEmpty works like a left join and gives all the records from the left table including the matching records from the right table. Use DefaultIfEmpty<TSource>(IEnumerable<TSource>) to provide a default value in case the source sequence is empty.For more information about DefaultIfEmpty(), please have a look into this link: Kindly look into the code given below: using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace LINQ_IfByDefault { //Class Employee having two properties class Employee { public string Name { get; set; } public string EmpID { get; set; } } //Class Worker having two properties class Worker public string WId { get; set; } public string City { get; set; } class Program static void Main(string[] args) { //Object Initialization for Employee class List<Employee> objEmployee = new List<Employee>{ new Employee{ Name="Sachin",EmpID="I001"}, new Employee{ Name="Vijay",EmpID="I002"}, new Employee{ Name="Ashish",EmpID="I003"}, new Employee{ Name="Syed",EmpID="I004"}, new Employee{ Name="Ravish",EmpID="I005"}, }; //Object Initialization for Worker class List<Worker> objWorker = new List<Worker>{ new Worker{ WId="I001",City="Delhi"}, new Worker{ WId="I002",City="Haridwar"}, new Worker{ WId="I007",City="Roorkee"}, new Worker{ WId="I008",City="Amritsar"}, new Worker{ WId="I009",City=""}, }; //using of DefaultIfEmpty method provided by LINQ var resultDefaultIfEmpty = from emp in objEmployee join worker in objWorker on emp.EmpID equals worker.WId into ResultEmpWorker from output in ResultEmpWorker.DefaultIfEmpty() select new { EmployeeName = emp.Name, City = output!= null ? output.City : null }; Console.WriteLine(string.Join("n", resultDefaultIfEmpty.Select(emp => " Employee Name = " + emp.EmployeeName + ", City Name = " + emp.City).ToArray<string>())); Console.ReadLine(); } }I have also observed a few things which may be a bit beneficial for you; whilst working with defaultifempty, the line in the above code segment: The output variable contains only records from the right side list of the Join Clause; in our case it is Worker List:Please have a look into how DefaultIfEmpty() works.Your thoughts are highly appreciated.Keep Coding. View All
http://www.c-sharpcorner.com/UploadFile/97fc7a/linq-method-defaultifempty/
CC-MAIN-2017-22
en
refinedweb
hi, i want to emulate VIM way of entering a number followed by command, so if 'dd' is used to delete 1 line, 'd3d' will remove 3 lines etc. e.g. i would like the following to work: > but it doesn't. am i doing it wrong, or it just not supported. but it doesn't. am i doing it wrong, or it just not supported. That won't work, but you should be able to do it via the not too much more verbose: <namespace name="motion"> <binding key="j" command="lines 1"/> <binding key="/(\d+)/,j" command="lines $1"/> <binding key="k" command="lines -1"/> <binding key="/(\d+)/,k" command="lines -$1"/> <binding key="d" command="lines 1"/> <binding key="/(\d+)/,d" command="lines $1"/> </namespace> hmm, it was just an example. i wanted it in a more general way, for all the key bindings. in VIM when you start with integer followed by another command it is just like repeat X, so i thought of doing it in general. thought it could be nice ability. anyway, thanks. Another one...Because of Sublime is growing up rapidly and there are a lot of packages available it causes problem with keys binding.Sometimes they are overwritten by another one and it's hard to determine which sublime-keymap and binding is used.I think it would be good idea to add something like development or test mode to Sublime (new parameter or option) to display small window or something like this where user will be able to display which sublime-keymap files are used and which binding key/keys is/are triggered when you are pressing given key. Regards,Artur ++1 I would like this too...
https://forum.sublimetext.com/t/complex-key-binding/466/5
CC-MAIN-2017-22
en
refinedweb
WATER EVERYWHERE DIGGING DEEP Isaac soaks soggy coastal areas Nation 6A Free State pulls out volleyball wins Sports 1B L A W R E NC E JOURNAL-WORLD ® 75 CENTS &2)$!9 s !5'534 s ‘Restore the promise of America’ LJWorld.com Part-time driver hired for KU’s chancellor ——— Change expected to boost executive’s efficiency on out-of-town trips By Andy Hyland ahyland@ljworld.com Charles Dharapak/AP Photo REPUBLICAN PRESIDENTIAL NOMINEE MITT ROMNEY acknowledges delegates before speaking Thursday at the Republican National Convention in Tampa, Fla. Romney stressed economic themes and shared stories showing his personal side in his speech Thursday. Romney makes case for jobs, jobs, jobs create 12 million of them in perilous economic times. “Now is the time to restore the promise of America,” Romney declared to a nation struggling with 8.3 percent unemployment and the slowest economic recovery in decades. Often viewed as a distant politician, he made a pressthe-flesh entrance into the hall, walking slowly down one of the convention aisles and shaking hands By David Espo and Robert Furlow Associated Press TAMPA, FLA. — Mitt Romney launched his fall campaign for the White House on Thursday night with a rousing, remarkably personal speech to the Republican National convention and a prime-time TV audience, proclaiming that America needs “jobs, lots of jobs” and promising to Please see ROMNEY, page 6A Blue moon to grace tonight’s sky, sort of By Sara Shepherd sshepherd@ljworld.com When it comes to blue moons, one thing is certain: No matter which definition you’re using, they don’t happen very often. Now, about that definition. If you’re content with the increasingly accepted modern definition, there’s a blue moon tonight. It’s the second full moon in a month. It isn’t blue. And it looks and acts just like any other full moon. It just happens to fall on a certain calendar day. The last such blue moon was New Year’s Eve 2009, and the next will be July 2015, according to Sky and Telescope magazine. But count Sky and Telescope among entities not content with the aforementioned definition. “ I t ’ s wrong!” cries an article on the magazine’s website. “At least if you’re a stickler about these things.” The colorful term is actually a “calendrical goof” that worked its way into the magazine back in 1946, then ballooned, senior contributing editor Kelly Beatty writes. A contributor made an incorrect assumption about the Maine Farmers’ Almanac definition, which used “Blue Moon” to describe the third full moon in a season containing four. By that definition, there’s Please see MOON, page 2A Gray-Little. Please see DRIVER, page 2A Census: Number of uninsured Kansans rising WICHITA . The census also reported for the first time on health HEALTH insurance coverage for those between 50 and 64, a group more likely to use health insurance than younger age groups. In 2010, about 60,800 people ages 50 to 64, or 11.4 percent of that group, did not have health insurance. The findings didn’t surprise Susette Schwartz, CEO of Hunter Health Clinic in Please see UNINSURED, page 2A INSIDE Storm chance Business Classified Comics Deaths High: 84 Low: 71 Today’s forecast, page 12A 8A 7B-12B 11A 2A Events listings Horoscope Movies Opinion 12A, 2B Puzzles 11B Sports 4A Television 10A 11B 1B-6B 4A, 2B, 11B Join us at Facebook.com/LJWorld and Twitter.com/LJWorld Disagreeing on benefits Vol.154/No.244 24 pages U.S. Rep. Lynn Jenkins, R-Topeka, says some people “are happy” to stay unemployed to collect benefits rather than work. Her Democratic opponent takes exception. Page 3A SUDS LAUNDRY SERVICE Laundry service for 50% Off Only reg. $30 reg 15 $ Suds Laundry Ser Service will transform your dirty duds into clean, fluffy, having to lift a finger! They can clean comforters of any size! folded clothes without ha Redeem online This Prin Print advertisement is not redeemable for advertised deal. Get your deals voucher online at Lawrencedeals.com Deal ENDS 9-6 2A | Friday, August 31, 2012 -"83&/$&t45"5& . DEATHS Journal-World obituary policy: For information about running obituaries, call 8327151. Obituaries run as submitted by funeral homes or the families of the deceased. JOHN J. FLUMMERFELTLarkin Funeral Home and Cremation Services. Condolences may be sent or to share a memory go to Please sign this guestbook at Obituaries. LJWorld.com. RUTH EVELYN WIENEKE Ruth Evelyn Wieneke, 90, Lawrence, died Aug. 28, 2012. Graveside services will be at 11 am Saturday at Woodlawn Cemetery, Pomona. Uninsured CONTINUED FROM PAGE 1A Wichita, which serves uninsured and underinsured people along with those with insurance. “It’s just horrendous,” Schwartz said. “Our number of patients has gone up, and keeps going up, and we just can’t keep doing this.” The report found that more than 17 percent of Sedgwick County residents under 65 — nearly 75,000 — did not have health insurance in 2010. That’s 10,000 more uninsured people than the previous year and nearly 30,000 more than in 2005, according to the report. “I think we all know, with the economic downturn, that a lot of folks that did have employment are now unemployed or underemployed,” said Dave Sanford, CEO of GraceMed, another health safety net clinic. “We saw our demand increase several years ago, and it really hasn’t stopped. One of the challenges we have is continuing to keep up with the demand for care.” Schwartz said the Hunter Health Clinic has seen more than 41,000 people at its five sites this year, with about 70 percent uninsured. If President Barack Obama’s national health Moon CONTINUED FROM PAGE 1A no blue moon tonight. The last one was in November 2010, and the next one will be August 2013. But even the almanac’s definition is questionable. “What’s interesting is that we use it as if it was longstanding folklore, and it’s really not,” said Barbara Anthony-Twarog, Kansas University professor of physics and astronomy. “Only in the last half-century or so have people decided to give that the title of a blue moon.” Once upon a time, once in a blue moon simply referred to something that happened extremely rarely. We’ve got a hunch that usage came from a phenomenon that actually Locally There were 16,404 Douglas County residents under age 65 who did not have health insurance in 2010, or 17.7 percent of that population, according to the U.S. Census Bureau. That compares with 14,668 in 2009, or 14.3 percent. The census also reported for the first time on health insurance coverage for those between ages 50 and 64, a group more likely to use health insurance. In Douglas County, 1,888 people in that age group, or 11.2 percent, did not have health insurance. care law takes effect — particularly expanding Medicaid for low-income Americans — about 20,000 of Hunter Health Clinic’s patients would have insurance, Schwartz said. Those patients would get more regular.” made the moon appear blue: volcanic eruptions violent enough to shoot plumes of ash to the top of Earth’s atmosphere. One such instance was the 1883 eruption of the Indonesian volcano Krakatoa, according to NASA Science News. The moon, full crescent or otherwise, appeared blue for years. Less forceful eruptions, such as Mount St. Helens in 1980, spawned reports of blue moons, too. If thinking about astronomy happens once in a blue moon for you, folks who study it for a living will take multiple definitions in stride. “I think anything that encourages people to actually pay attention to the sky is probably a plus,” Anthony-Twarog said. — Features reporter Sara Shepherd can be reached at 832-7187. Follow her at Twitter.com/KCSSara. L AWRENCE J OURNAL -W ORLD Eudora police chief resigns By Shaun Hittle sdhittle@ljworld.com E. — Reporter Shaun Hittle can be reached at 832-7173. Music site hopes to rock industry Town Talk Editor’s note: These are excerpts from Chad Lawhorn’s enormously popular Town Talk column that appears on LJWorld. com daily, Monday through Friday. The print edition of Town Talk appears frequently. it does come with occasional surprises. year industry today. Enter a Lawrence startup company that hopes to change the trend. Back in March, we reported on Audio Anywhere and its founder, Kyle Johnson. The company a second-floor office space, has launched its beta version of the streaming music site, audioanywhere.com. If you are an independent band, there are all types of freebies for you. The site is looking for independent bands to sign up and make their music available on the site. Eventually, those bands will get paid based on how often Driver CONTINUED FROM PAGE 1A timemanagement strategies that can help them be more productive. With KUMC and KU’s Chad Lawhorn clawhorn@ljworld.com. “People like to spoil their pets here, but we also have a lot of practical items that people have been coming in for,” she said. It helps that the store is right next to another dogoriented what they would make on Pandora or Spotify. was the best it had been since at least 2008. The city issued permits for 16 new homes in June, up from nine in June 2011. — City reporter Chad Lawhorn can be As we have reported, home sales in Lawrence reached at 832-6362. Look for his entire Town Talk blog on LJWorld.com daily, have picked up this sumMonday through Friday. mer, and the Realtors I. — Higher education reporter Andy Hyland can be reached at 832-6388. ljworld.com 609 N.H. (offices) • 645 N.H. (News Center) Lawrence, KS 66044 (785) 843-1000 • (800) 578-8748 EDITORS 25 28 49 54 56 (28) TUESDAY’S MEGA MILLIONS 4 9 40 45 50 (39) WEDNESDAY’S HOT LOTTO SIZZLER 5 7 8 18 39 (4) WEDNESDAY’S SUPER KANSAS CASH 6 13 19 24 31 (25) THURSDAY’S KANSAS 2BY2 Red: 4 14; White: 5 8 THURSDAY’S KANSAS PICK 3 3 9 6 Do you think humans will ever walk on the moon again? ¾Yes ¾No ¾Not sure Thursday’s poll: Will you be traveling over Labor Day weekend? No, 72%; Yes, 22%; Not sure yet, 5%. Go to LJWorld.com to see more responses and cast your vote. LAWRENCE&STATE LAWRENCE JOURNAL-WORLD LJWorld.com/local Friday, August 31, 2012 3A Morning landscapes Jenkins’ remark about unemployment rankles By Scott Rothschild srothschild@ljworld.com TOPEKA — U.S. Rep. Lynn Jenkins, R-Kan., said some people “are happy” to stay unemployed to collect benefits rather than work. Jenkins’ Democratic opponent, Tobias Schlingensiepen, criticized her remark. Jenkins’ statement about the unemployed came during a meeting on Wednesday in Columbus, which is in Cherokee County in southeast Kansas. According to The Joplin Globe, Lori Johnson, chairwoman of the Cherokee County Republican Party, said some residents remain on unemployment rather than taking a low-paying job, then they claim a deduction on their taxes. The Globe reported that Jenkins responded by saying, “Right now we Richard Gwin/Journal-World Photo WATER COLORISTS JOHN HULSEY AND ANN TRUSTY, both of Lawrence, incorporate the morning light into their landscapes Thursday near Lawrence. 2 men arrested in connection with pistol-whipping incident By George Diepenbrock gdiepenbrock@ljworld.com Lawrence police arrested two 19-year-old Topeka men Thursday morning an hour after a Kansas University student was pistol-whipped and a gun discharged in a robbery at The Reserve apartment complex, 2511 W. 31st St. Sgt. Trent in a sport-utility McKinley, a Lawvehicle in a parkrence police ing lot Wednesspokesman, said day night, and police were called another student at 11:50 p.m. to the was standing outapartment comside the vehicle, plex. when two men Police gave this account approached them and deof the incident: manded marijuana. Three KU students re“When the occupants in ported they were sitting Please see POLICE, page 4A Tree pest appears in Kansas Staff Report The U.S. Department of Agriculture reports that the emerald ash borer is now in Kansas with a case confirmed Wednesday in Wyandotte County. In a news release, state officials said the discovery of the destructive pest was made by Kansas Department of Agriculture and USDA staff during a survey being conducted as a result of the July 2012 confirmation of emerald ash borer Please see BORER, page 4A What is emerald ash borer?half inch long and they emerge in late spring. Trees infested with emerald ash borer will have canopy dieback, water sprouts, bark splitting, serpentine-like galleries and D-shaped exit holes. To learn more about the emerald ash borer, visit emeraldashborer.info. have people who are happy to collect unemployment and not work. We have a problem with peo- Jenkins ple working the system.” Schlingensiepen said Jenkins’ comment was insulting. “Congresswoman Jenkins sounds like she has spent so much time in Washington she has forgotten what real life is like,” Schlingensiepen said. “Nobody is happy to be unemployed. To offer a catchphrase instead of a solution is insulting. When I go to Washington, I’ll work with both parties to create jobs for the people of Cherokee County, not criticize them,” he said. Southeast Kansas has among the highest unemployment rates in the state. Cherokee County, which borders both Schlingensiepen M i s s o u r i and Oklahoma, had an 8.6 percent jobless rate in July compared with 6.7 percent statewide. Cherokee County also has a higher rate of people living below the poverty line: 17.5 percent compared with 12.4 percent statewide, according to U.S. Census Bureau data. Bill Roe, who is campaign manager for Jenkins, said that Jenkins recently met with a small businessman who said that when he tried to make a hire, the applicant said he couldn’t start for a few weeks because that is when his unemployment Please see JENKINS, page 4A Labor Day Sale Starts Today! Up To 25 % Off New fall arrivals! • • • • • All Levi’s Bandolino Blues NYDJ Jeans Ruby Road Woolrich & Much More Shown: Levis 512™ Bootcut Jean $ 3999. Slim flattering fit for misses with mid rise and 191/2 leg opening. Reg. $54 9th & Massachusetts • 843-6360 SHOP ‘TILL 6:00... SUNDAY 12:00-5:00 4A | Friday, August 31, 2012 -"83&/$&t45"5& . SOUND OFF Panel raises issue of public defender funding Q: TOPEKA — A group of senators used a confirmation hearing as a chance to raise their concerns about the level of support for funding the Kansas public defender system. Questions were raised Wednesday during hearings to confirm appointments by Gov. Sam Brownback to various state agency positions, including Paul Eugene Beck and Kevin Mark Smith who were named to the Board of Indigent Defense, The Topeka Capital Journal reported. Sen. Tim Owens, an Overland Park Republican, asked the pair if they would be willing to press Brownback At the bridge being replaced on 23rd Street, the construction zone says 25 mph and then it goes down to 20 mph at the bottom of the hill. Is that enforced after hours when there are no workers, or is that just the expectation when workers are there? A: Speeds in posted work zones are enforced continuously, in part, because of altered traffic patterns, lane closings, reduction of lane width and changes in lighting, according to Meagan Gilliland, the city’s communications manager. SOUND OFF If you have a question, call 832-7297 or send email to soundoff@ ljworld.com. ? ON THE STREET By Adam Strunk Read more responses and add your thoughts at LJWorld.com What comes to mind when you hear the term “blue moon”? Asked on Massachusetts Street See story, page 1A Hayley Battenberg, student, Shawnee “I think of almond ice cream from Wisconsin.” for funding to keep the public defenders’ office functioning. “Will you be willing to go to the governor and go to the Legislature and say we need more money for this?” asked Owens, a lawyer and chairman of the Senate Judiciary Committee. Beck and Smith indicated they would push for funds to ensure defendants have access to competent legal counsel. Owens, who lost a GOP primary Aug. 7 and won’t be back in Topeka next year, said the state should be concerned that a lack of adequate legal support could lead to lawsuits against the state. L AWRENCE J OURNAL -W ORLD Borer Jenkins CONTINUED FROM PAGE 3A CONTINUED FROM PAGE 3A Cherokee County thieves and happy to be unemployed,” he said. “Besides the obvious insult to people who are trying hard to get jobs, it’s that kind of tone that is never going to solve problems for the people of the 2nd District,” he added. Jenkins is seeking a third two-year term to represent the 2nd Congressional District, which includes Lawrence and much of eastern Kansas. Schlingensiepen, a Topeka pastor, is making his first run for elective office. Libertarian Dennis Hawver of Ozawkie is also on the ballot. in Platte County, Mo. The compensation ran out. staff identified a tree dur“It’s precisely this type ing the visual survey that of abuse of the system that showed symptoms of the is siphoning money away emerald ash borer. They from folks who legitiremoved a portion of the mately need it,” Roe said. tree and sent it to a USDA “Unlike Tobias Schlingenlab in Michigan for further siepen, Congresswoman analysis. Jenkins has no intention of Regulatory officials re- standing by silently while moved a live insect from thieves steal money from the sample and confirmed individuals in need,” he the presence of emerald said. ash borer. Schlingensiepen said “In Kansas, we have Roe’s comment repreworked for years on em- sented more of the same erald ash borer prevention insulting. and surveillance efforts. “In two days’ time, These vigilant surveillance Congresswoman Lynn efforts allowed us to catch Jenkins and her campaign — Statehouse reporter Scott Rothschild can be reached at 785-423-0668. the pest early,” said Jeff Vo- have called the people of ON THE RECORD LJWORLD.COM/BLOTTER gel, KDA Plant Protection and Weed Control program There were no incidents The Journal-World does not manager. “We are making to report Thursday. print accounts of all police reports additional plans right now filed. The newspaper generally reports: for increased surveillance HOSPITAL • Burglaries, only with a loss of efforts to prevent further $1,000 or more, unless there are spread of emerald ash borBIRTHS unusual circumstances. To proer.” tect victims, we generally don’t Michael Morgenstern and identify them by name. Emerald ash borer, Karissa Adams, Lawrence, a • The names and circumstanc- which is a pest of ash trees boy, Wednesday Miranda Wymer, Ozawkie, es of people arrested, only after and is native to Asia, was they are charged. a boy, Wednesday first discovered in North • Assaults and batteries, only if Ryan and Amber Luckie, major injuries are reported. America near Detroit in Lawrence, a boy, Wednesday • Holdups and robberies. summer 2002. Since that Christopher Grammer and time, the beetle species has Tracy Pressgrove, Lawrence, a girl, Thursday killed millions of ash trees in 15 states, from Minnesota PUMP PATROL to Connecticut. Financially, CORRECTIONS the United States risks an The Journal-World’s polThe Journaleconomic loss of $20 bilicy is to correct all signifiLAWRENCE World found lion to $60 billion because cant errors that are brought of this pest. gas prices as to the editors’ attention, low as $3.67 The state has impleusually in this space. If at Phillips mented an emergency inyou believe we have made 66, 1548 trastate quarantine of ash such an error, call (785) E. 23rd St. trees, firewood and other 832-7154, or email news@ If you find a ash tree materials such as ljworld.com. lower price, call 832-7154. compost or wood chips for Wyandotte County to prevent further spread of the pest. mont Street. The quarantine will re“Witnesses reported main in effect for 90 days seeing two individuals or until rescinded or modiwho matched the de- fied by the state. The quarCONTINUED FROM PAGE 3A scription of the suspects antine requires all ash trees the SUV said they didn’t involved in the incident and materials in Wyanhave any marijuana, the at The Reserve checking dotte County to be treated suspects demanded their several vehicles parked or disposed of properly. cellphones and cash,” in the city parking lot,” If Kansans think any McKinley said. “One McKinley said. “They ad- of their trees may have of the suspects circled vised the subjects made the pest, they should noaround the car and went entry to one vehicle prior tify KDA immediately at to the driver’s side where to officers arriving.” 862-2180 or at ppwc@kda. he pointed a pistol at the Police recovered a ks.gov. person sitting in the driv- handgun, holster and keys er’s seat and threatened to the robbery victim’s veto shoot him if the others hicle. didn’t give up their money Officers arrested the two and property.” suspects on charges of auto A struggle ensued be- burglary and possession of tween the armed suspect marijuana, and after an inand the male student vestigation they connected standing outside the SUV. the two men to the robThe suspect punched the bery outside the apartment victim. Then the suspect complex, the sergeant said. struck the victim on top One of the suspects, Sterof his head with the pistol. ling James Wilkins, was The pistol also fired into also detained on outstandMOVIES KIDS BEST BETS SPORTS FRIDAY Prime Time August 31, 2012 the ground. ing Shawnee County warKNO DTV DISH 7 PM 7:30 8 PM 8:30 9 PM 9:30 10 PM 10:30 11 PM 11:30 The suspects took the rants, McKinley said. victim’s keys and fled in a Douglas County Dis- Network ChannelsHigh School Football KCTV5 News at 9 (N) Raymond Raymond Inside Ed. Payne M 3 62 62 vehicle. trict Attorney Charles E Æ News News TMZ (N) Seinfeld $ 4 4 4 Bones The team investigates roadside remains. FOX 4 at 9 PM (N) News Late Show Letterman The Insider Medics treated the rob- Branson said prosecutors B % 5 5 5 Undercover Boss CSI: NY “Clean Sweep” Blue Bloods h Meet Past Charlie Rose (N) h Great Performances (N) h bery victim, but he de- would likely make a de- D 3 7 19 19 Wash. KCWIR McLaughlin Need America’s Got Talent Grimm h Dateline NBC (N) News Tonight Show w/Leno J. Fallon C ; 8 clined to be taken to the cision about filing formal A ) News Two Men Big Bang Nightline 9 9 9 Shark Tank h 20/20 h D KTWU 11 Wash. Need I’ve Got. Great Performances (N) h BBC World Business Charlie Rose (N) h hospital, McKinley said. charges today. News Nightline Jimmy Kimmel Live A Q 12 Shark Tank h 20/20 h About 12:35 a.m. ThursUndercover Boss CSI: NY “Clean Sweep” Blue Bloods h News Late Show Letterman Ferguson B ` 13 — Reporter George Diepenbrock can C I 14 41 41 America’s Got Talent Grimm h day, police received a reDateline NBC (N) News Tonight Show w/Leno J. Fallon be reached at 832-7144. Follow him at port of vehicle burglaries ’Til Death ’Til Death King King Family Guy South Park KMCI 15 38 38 ThisMinute ThisMinute The Doctors L KCWE 17 29 29 America’s Next Model Nikita “Wrath” h News Ent The Office The Office 30 Rock Chris Twitter.com/gdiepenbrock. in the 1000 block of Ver- Police Local TV LISTINGS now on… Listings for CABLE, BROADCAST & SATELLITE! Ian Gambill, cook, Lawrence “A beer or the song.” ION KPXE 18 50 Cold Case Cold Case “Fly Away” Cold Case Flashpoint Cable Channels MOVIE GUIDE 2016: OBAMA'S AMERICA AA PG Scholar and author Dinesh D'Souza delves into President Barack Obama's past for clues about America's possible future if Obama wins a second term. Hollywood Southwind Cinema 12 THE BOURNE LEGACY AAA PG-13 The actions of Jason Bourne spell the possible end of secret intelligence programs, so a specially enhanced operative goes on the run with a research scientist when it appears that their lives will become forfeit. Hollywood Southwind Cinema 12 Hannah Bassett, student, Salina “I think of a beer.” THE CAMPAIGN AAB R Hoping to gain political influence in their North Carolina district, two wealthy CEOs put up a naive candidate to challenge a longtime incumbent congressman. Hollywood Southwind Cinema 12 CELESTE AND JESSE FOREVER AAB R A divorcing couple try to maintain their friendship while harboring mixed feelings about their split and pursuing other relationships. Liberty Hall Cinema THE DARK KNIGHT RISES AAA Cheyenne Boelk, student, Walnut Creek, Calif. “I think of Indians and wolves.” HOPE SPRINGS AAA PG-13 A woman drags her skeptical husband to a renowned counselor's marriage retreat to try to put the spark back in their relationship. Hollywood Southwind Cinema 12 LAWLESS R A sadistic Chicago lawman comes to 1931 Virginia to shut down the Bondurant brothers' bootlegging business. Hollywood Southwind Cinema 12 MOONRISE KINGDOM AAAB PG-13 In 1965 New England, a peaceful island community descends into turmoil when two love-struck 12-year-olds run away together just before the approach of a violent storm. Liberty Hall Cinema THE ODD LIFE OF TIMOTHY GREEN AAB PG A boy magically appears on the doorstep of a childless couple who desperately want a family but are unable to conceive. Hollywood Southwind Cinema 12 PARANORMAN AAA PG A ghoul-whispering youngster battles zombies, ghosts, witches and ignorant adults to save his town from an ancient curse. Hollywood Southwind Cinema 12 PG-13 Eight years after he took the blame for Harvey Dent's death and vanished into the night, Batman is forced out of his self-imposed exile by a cunning cat burglar and a merciless terrorist called Bane. Hollywood Southwind Cinema 12 PG-13 Parents must work together to save their young daughter from a dybbuk, a malevolent spirit that inhabits and ultimately devours its human host. THE EXPENDABLES 2 AAB Hollywood Southwind Cinema 12 R Mercenary Barney Ross and his team cut a swath of destruction through opposing forces as they take revenge for the vicious murder of a comrade. Hollywood Southwind Cinema 12 PREMIUM RUSH AAA HIT & RUN AA R A former getaway driver finds feds and his former gang members on his tail when he breaks out of the Witness Protection Program to help his girlfriend get to Los Angeles. Hollywood Southwind Cinema 12 THE POSSESSION PG-13 A bike messenger's last delivery of the day turns into a lifeor-death chase through Manhattan. Hollywood Southwind Cinema 12 SAFETY NOT GUARANTEED AAA R A disaffected magazine intern befriends an unusual guy, who is looking for a partner to accompany him on a trip back through time. Liberty Hall Cinema Flashpoint “The Farm” Football Kitchen 6 News Home Turnpike Pets 6 News eHigh School Football Chris 307 239 How I Met How I Met How I Met How I Met WGN News at Nine (N) Funniest Home Videos Chris Heartbrkr ››‡ Heartbreakers (2001, Comedy) Sigourney Weaver. › A Guy Thing (2003) Jason Lee. City Bulletin Board, Commission Meetings City Bulletin Board, Commission Meetings School Board Information School Board Information SportsCenter (N) SportsCenter (N) 206 140 eCollege Football Boise State at Michigan State. (N) (Live) h Baseball Tonight (N) 209 144 E2012 U.S. Open Tennis Men’s Second Round and Women’s Third Round. World/Poker 672 aMLB Baseball Minnesota Twins at Kansas City Royals. (Live) h Royals Lve Tex Tech Preview Football fMLS Soccer Colorado Rapids at Portland Timbers. (N) 603 151 eCFL Football BC Lions at Montreal Alouettes. (N) Greta Van Susteren The O’Reilly Factor Hannity h 360 205 The O’Reilly Factor (N) Hannity (N) h Ultimate Factories American Greed Ultimate Factories 355 208 Costco Craze 16 Blocks (2006, Action) Bruce Willis. ››› 16 Blocks (2006) h Bruce Willis. ››› The Negotiator Law & Order: SVU Law & Order: SVU CSI: Crime Scene CSI: Crime Scene 242 105 Law & Order: SVU 265 118 Hoggers Hoggers Hoggers Hoggers Hoggers Hoggers Hoggers Hoggers Hoggers Hoggers 246 204 World’s Dumbest... World’s Dumbest... World’s Dumbest... Forensic Forensic World’s Dumbest... 254 130 ››› The Princess Bride (1987) Cary Elwes. ››› The Princess Bride (1987) Cary Elwes. ›‡ The Reaping Payne Worse Worse The Office 247 139 Payne ›› Valentine’s Day (2010) h Jessica Alba. (DVS) 237 129 ›››‡ The Aviator (2004, Biography) Leonardo DiCaprio, Cate Blanchett. ›››‡ The Aviator (2004) King King King 304 106 Home Imp. Home Imp. Raymond Raymond Raymond Raymond King American Pickers American Pickers American Pickers American Pickers 269 120 American Pickers Lost Girl (N) Alphas “Alphaville” 244 122 WWE Friday Night SmackDown! (N) h Lost Girl h 248 136 ›‡ Push (2009, Suspense) h Chris Evans. Premiere. ›‡ Push (2009) h Chris Evans, Dakota Fanning. Tosh.0 Tosh.0 Tosh.0 Daily Show Colbert The Burn Daily Show 249 107 Tosh.0 Futurama h Jonas Jonas Fashion Police (N) Chelsea E! News h Chelsea 236 114 Kardashian Reba Cowboys Cheerleaders Cowboys Cheerleaders Cowboys Cheerleaders Cowboys Cheerleaders 327 166 Reba 329 124 ›››‡ Ray (2004) Jamie Foxx. Ray Charles overcomes hardships to become a legend. Chris Bro. Wendy Williams Show Love, Hip Hop Hip Hop 335 162 ››‡ ATL (2006, Comedy-Drama) Tip Harris, Lauren London. Love, Hip Hop Ghost Adventures Ghost Adventures Ghost Adventures 277 215 Ghost Adventures The Dead Files h Gown Say Yes Say Yes Gown Gown 280 183 Say Yes Say Yes Say Yes Say Yes Gown 252 108 America’s Most Wanted America’s Most Wanted America’s Most Wanted America’s Most Wanted America’s Most Wanted Preacher’s Daughter 253 109 The Preacher’s Daughter (2012, Drama) Lies My Mother Told Me (2005) h Diners Diners Diners Diners Diners Diners Diners Diners Diners 231 110 Diners Extreme Homes (N) Hunters Hunt Intl Hunt Intl Hunt Intl Extreme Homes h 229 112 Cool Pools h Friends Friends Friends Friends 299 170 Victorious Victorious My Wife My Wife George Lopez Zeke Zeke Zeke Ninja Phineas Kings Suite Life Kickin’ It Suite/Deck 292 174 Zeke Gravity Gravity Gravity ANT Farm Good Luck Jessie Jessie ANT Farm ANT Farm 290 172 Gravity King of Hill King of Hill Amer. Dad Amer. Dad Family Guy Family Guy Chicken Squidbill. 296 176 Cartoon Planet Bering Sea Gold: Under Yukon Men (N) h Bering Sea Gold: Under Yukon Men h 278 182 Gold Rush h Prince Prince 311 180 ››› Mean Girls (2004) ››› Mean Girls (2004) h Lindsay Lohan. The 700 Club h Abandoned Abandoned The Lusitania 276 186 Titanic: Final Dark Secrets of the Lusitania (N) h Frasier Frasier Frasier Gold Girls Gold Girls 312 185 Little House on Prairie Little House on Prairie Frasier Law on the Border (N) Law on the Border Law on the Border 282 184 North Woods Law: Hunt Law on the Border H. Lindsey Harvest P. Stone Something to Sing About Praise F.K. Price Life Focus 372 260 Behind Campus Rosary Showcase The Saints Women of Daily Mass: Our Lady 370 261 Life on the Rock ››‡ Pals (1987) George C. Scott, Don Ameche. The Florence Hender ››‡ Pals (1987) George C. Scott, Don Ameche. Capital News Today 351 211 Tonight From Washington Politics & Public Policy Today 350 210 Politics & Public Policy Today Deadly Women (N) 285 192 Deadly Women h Deadly Women h Deadly Women h Deadly Women h Weapons ››‡ Ike: Countdown to D-Day (2004) 287 195 ››‡ Hamburger Hill (1987, War) Anthony Barrile. Police Women Police Women Police Women Police Women 279 189 Police Women 362 214 Twist Fate Twist Fate Ice Pilots Ice Pilots Weather Center Live Twist Fate Twist Fate Ice Pilots Ice Pilots General Hospital Young & Restless Days of our Lives General Hospital 262 253 Days of our Lives 256 132 ››› Rollerball (1975) James Caan. ››› Hide in Plain Sight (1980) James Caan. ››‡ Thief (1981) Hard Knocks Real Time/Bill Maher Real Time/Bill Maher 2 Days Hard 501 300 The Newsroom Skin-Max Strike Back Sexual 515 310 ››› X-Men: First Class (2011) James McAvoy. Strike Back (N) 545 318 ›››‡ Traffic (2000) h Michael Douglas, Don Cheadle. Katt Williams ››‡ Blitz (2011) Jason Statham. 535 340 ››‡ 30 Minutes or Less (2011) ››› Silverado (1985) Kevin Kline. ›› Assassins (1995) Boss “Ablution” 527 350 Camelot “Guinevere” Boss “Ablution” (N) ›› Colombiana (2011) Zoe Saldana. For complete listings, go to 6A | Friday, August 31, 2012 BRIEFLY Shooting suspect may have called doc Holmes. Ex-SEAL author may face legal action WASHINGTON —. NATION . L AWRENCE J OURNAL -W ORLD Weakening Isaac hovers over Drought-weary farmers await waterlogged Louisiana storm’s remnants By Cain Burdeau and Michael Kunzelman By Jim Suhr Associated Press Associated Press Gerald Herbert/AP Photo PEOPLE RESCUE COWS FROM FLOODWATERS after Isaac passed through the region Thursday in Plaquemines Parish, La. Isaac staggered toward central Louisiana early Thursday, its weakening winds still potent enough to drive storm surge into portions of the coast and the river parishes between New Orleans and Baton Rouge.. Romney droughttracking. CONTINUED FROM PAGE 1A poWikiLeaks case set litical hoopla, the evening one of a very few for trial in February marked opportunities any presiFORT MEADE, MD. — An dential challenger is grantArmy private accused of ed to appeal to millions of handing over a trove of voters in a single night. classified documents to the The two-month camwebsite WikiLeaks is sched- paign to come includes uled for trial in February. other big moments — Army Col. Denise Lind is principally a series of onethe judge handling the case. on-one debates with DemShe said Thursday that ocrat Obama — in a race Pfc. Bradley Manning’s trial for the White House that is now scheduled to take has been close for months. place between Feb. 4 and In excess of $500 million March 15. The 24-year-old has been spent on camManning faces a possible paign television commerlife sentence if convicted cials so far, almost all of it of leaking hundreds of in the battleground states thousands of documents, of Florida, North Carolina, including cables and war Virginia, New Hampshire, logs, to the secret-spilling Ohio, Iowa, Colorado and website. Nevada. Lawyers have discussed Romney holds a funvarious evidentiary issues draising advantage over during a three-day preObama, and his high comtrial hearing that concluded mand hopes to expand Thursday in Fort Meade, Md. the electoral map soon if Both sides are scheduled to post-convention polls in return to court in October for Pennsylvania, Michigan, another hearing. Wisconsin and perhaps Manning is being held elsewhere indicate it’s in Fort Leavenworth, Kan., worth the investment. ahead of his trial. Romney was often al- ADJUSTABILITY You need it - We have it! 2329 Iowa Street Lawrence 785-832-0501 J. Scott Applewhite/AP Photo FROM LEFT, REPUBLICAN VICE PRESIDENTIAL NOMINEE Rep. Paul Ryan, Janna Ryan, Ann Romney and Republican presidential nominee Mitt Romney wave to the delegates Thursday during the Republican National Convention in Tampa, Fla. most!” About Obama, Romney said, “Many Americans have given up on this president, but they haven’t ever thought about giving up. Not on themselves, Not on each other. And not on America.” It’s the economy. Romney offered no new information on what has so far been a short-ondetails. LOCALLY OWNED & OPERATED FULL SERVICE OIL CHANGE 5 $ OFF Includes: • New oil & filter • Top Off Fluids • Up to 5 quarts of featured oil • 14 point vehicle inspection • Reg. Price $36.99 No Extra charge for 4x4’s. Expires 8/31/12 -"83&/$&t"3&" L AWRENCE J OURNAL -W ORLD BRIEFLY Man, 74, faces child sex charges Law today. The Journal-World generally does not identify sex crime suspects unless they are convicted. Tongie student hit by SUV A Tonganoxie Middle School student was taken to an area hospital after being hit by a vehicle after school Thursday. Tonganoxie police Chief Jeff Brandau said Allyson Sparks, a TMS fifth-grader, was struck by a sport-utility vehicle about 3:15 p.m. The girl was crossing Washington Street and headed north on East Street, Brandau said. There is not a crosswalk at the intersection. She was taken to Children’s Mercy Hospital in Kansas City, Mo., by Leavenworth County Emergency Medical Services, Brandau said. She suffered a broken femur in the accident, Brandau said. The accident still is under investigation, but no citations currently are planned to be issued, Brandau said. County clerk offers voter assistance A voter identification and registration drive is scheduled from 3 p.m. to 4:30 p.m. Sept. 13 at Lawrence Presbyterian Manor, 1429 Kasold Drive. Staff members from the Douglas County Clerk’s office will issue voter identification cards to people who do not have the government-issued photo IDs required to vote. To get such a card, a voter must be registered. A utility bill, bank statement, government check or other government-issued document with name or address can be used to obtain a photo ID card. The public is invited to participate. Farmers markets topic of meeting. Friday, August 31, 2012 Police, firefighters prepare for flag football showdown team of firefighters. “And I’m sure there’s a little bit of pride on the other side to try to get the first win, too.” It’s a competitive, fun rivalry that both sides hope will become a longstanding By George Diepenbrock gdiepenbrock@ljworld.com After.” Eisenhower Memorial to have high-tech side WASHINGTON in- clude a mobile app using “augmented reality” technology to superimpose historic images and recordings onto Gehry’s memorial scene and tapestries. The memorial park, though, will remain a contemplative space for visitors who want a quieter experience.. Eisenhower’s family has called for a simple memorial to reflect Ike’s modesty. Susan Eisenhower, one of the president’s granddaughters, has said broader storytelling from history should be left to museums, not monuments. LOCAL • FRESH • SAVE $$ PRICES GOOD FRI AUGUST 31st THRU SUN SEPTEMBER 2nd, 2012 $AVINGS KICK-OFF! PEPSI 2 LITER LAY’S IPS H C O T A T PO 10-10.5 OZ BAG 88 ASSORTED ¢ $ 75 1 EA VARIETY EA CHEEZ-ITS 13.7 OZ BOX ORIG ONLY 2 $ 18EA VAN CAMP’S PORK AND BEANS KC MASTERPIECE BBQ SAUCE 18 OZ ORIGINAL ONLY 15 OZ 3/ 1 $ 88 ¢ BEST CHOICE CRINKLE FRIES OR TATER PUFFS 32 OZ 2/$3 BEST CHOICE BIG CHUNK CHEESE 2/ 5 AQUAFINA DRINKING WATER HALF GALLON 3 $ 88 8 CONTINENT SOYMILK TH 64 OZ 1 $ 99EA 3 2 CT 6.5-9 OZ PKG 12 OZ MICHELINA’S ENTREES 5-9 OZ 1 $ 88 OLD ORCHARD JUICE BLENDS EA 88 EA HOT POCKETS ½ LITER BIG 32 CT $ 98 EA BLUE BELL ICE CREAM 1 LB PKG $ 4/$5 EA SHASTA 24 PK 12 OZ CANS 3 ¢ $ 48 EA Your Local City Market! 23RD & LOUISIANA Locally owned & operated since 1987 | 7A EA IF YOU SEE A LOWER LOCALLY ADVERTISED PRICE, BRING THE AD IN AND CHECKERS WILL MATCH IT. SEE MGR FOR DETAILS. Visit us @ checkersfoods.com & “like” us on facebook @ Checkers Foods 8A | Friday, August 31, 2012 #64*/&44t/"5*0/ . L AWRENCE J OURNAL -W ORLD Court Republicans ignoring signs Crash involving rejects of some economic gains 100-year-old driver rekindles age debate Texas Man struck 11 people backing voter ID out of grocery store parking lot law By Tom Raum ——— Associated Press By Will Weissert Associated Press AUSTIN, TEXAS —judge Fischer,. WASHINGTON —, By John Rogers Associated Press J. Scott Applewhite/AP Photo REPUBLICAN NATIONAL COMMITTEE RNC CHAIRMAN Reince Priebus announces the display of the debt ticker Monday during the Republican National Convention in Tampa, Fla. You wouldn’t know it from listening to the Republican National Convention, but the nation’s economic picture seems to be slowly getting a little brighter.. Eco- nomic. BUSINESS AT A GLANCE Notable. Thursday’s markets Dow Industrials —106.77, 13,000.71 Nasdaq —32.48, 3,048.71 S&P 500 —11.01, 1,399.48 30-Year Treasury —0.03, 2.74% Corn (Chicago) —5 cents, $8.09 Soybeans (Chicago) +9 cents, $17.62 Wheat (Kansas City) —6.5 cents, $8.90 Oil (New York) —87 cents, $94.62 Gold —$5.90, $1,657.10 Silver —47.6 cents, $30.45 Platinum —$16.60, $1,503.70 DILBERT Retailers see best sales growth since March By Anne D’innocenzio Associated Press NEW YORK — This summer, Americans were walking contradictions: They opened their wallets despite escalating fears about the slow economic recovery and surging gas prices. A group of 18 retailers ranging from discounter Target to departmentstore chain Macy’s reported August sales on Thursday that rose 6 percent — the industry’s best performance since March — according to trade group International Council of Shopping Centers. At the same time, the government released numbers showing that 18-year 11 people, nine of them children. The accident in front of a South Los Angeles elementary school where children had lined up to buy afterschool80 drive as bad as teenagers — the nation’s riskiest drivers, he said. Pearson Collision Repair 749-4455 AUG. 30TH - SEPT. 2ND Buy 3 or more regular priced items & receive reg e by Scott Adams Pluss a Pl an addition $5 off for every $100 spent! pe ent n! Bare Escentuals excluded be a little eccentric. 716 Massachusetts | 830-9100 WORLD L AWRENCE J OURNAL -W ORLD A relaxing world record Friday, August 31, 2012 | 9A Egypt leader in Iran: World must back Syrian rebels By Brian Murphy and Nasser Karimi Associated Press Apichart Weerawong/AP Photo THAI MASSEUSES PERFORM MASS MASSAGING Thursday at a sport arena on the outskirts of Bangkok, Thailand.. BRIEFLY Message in bottle sets world record. Cooking oil fumes lead to jet landing VIENNA —.” Rowling to build tree houses LONDON —. U.N. nuke agency: Iran ‘significantly’ hampers probe “ By George Jahn Associated Press The window of opportunity to resolve this diplomatically remains open but it will not remain open indefinitely.” — White House spokesman Jay Carney. TEHRAN, IRAN — socalled nonaligned nations. His speech, delivered while seated next to Iranian President Mahmoud Ahmadinejad, prompted Syria’s delegation to walk out of the gathering. Iran’s leaders have claimed that the weeklong meeting, which wraps up today, displayed the futility of Western attempts to isolate the country over its nuclear program. But Iran also was forced to endure criticism from Morsi and another highprofile guest, U.N. Secretary-General Ban Kimoon, Raouf Mohseni/Mehr News Agency U.N. SECRETARY-GENERAL BAN KI-MOON, left, looks on as Iranian President Mahmoud Ahmadinejad, right, confers with Foreign Minister Ali Akbar Salehi, center, and an unidentified man Thursday at summit of the Nonaligned Movement in Tehran, Iran.owned. HIGH SCHOOL DIPLOMA COMPLETION PROGRAM FOR ADULTS (18 & OVER) Receive either an LHS, FSHS, or PLHS Diploma. The program is a part of the Lawrence Public Schools and open to any Kansas resident. Call 785-832-5960 Lawrence Diploma Completion Program Enroll Before 2145 Louisiana Street Sept. 20th! Our family caring for yours. WALK-IN CLINIC HOURS Weekdays: 8 a.m. to 6 p.m. Saturdays: 9 a.m. to noon OR CALL (785) 841-6540 FOR AN APPOINTMENT 4951 W. 18th Street Lawrence, KS 66047 lawrencefamilypractice.com OPINION LAWRENCE JOURNAL-WORLD LJWorld.com Friday, August 31, 2012 10A EDITORIALS Oread plan The strength of design guidelines being developed for the Oread neighborhood is that they recognize the varied needs of a diverse area. A plan to create design guidelines for the Oread neighborhood east and north of the Kansas University campus is a positive step because it recognizes that various parts of the neighborhood serve different purposes and have different needs. The Oread neighborhood wraps around campus on the north and east from Ninth to 17th streets and from Michigan Street to downtown. While areas adjacent to campus are heavily populated by students living in multi-family housing, other parts of the neighborhood still include more single-family homes, some of which have a historical character worth preserving. There always has been a certain amount of tension in Oread between single-family property owners and multi-family landlords. Recognizing and formalizing some natural divisions within the area is a valid way to stabilize the neighborhood and allow various uses to happily coexist. Although details of the plan still are being fleshed out, the basis of the new design guidelines is to separate the neighborhood into distinct areas. Most of the area just east of campus from 10th Street to 16th Street would be designated as a high-density district to accommodate the many multi-family uses that already exist in that area. An area that’s generally just east and north of the highdensity district would be designated as medium-density, and several blocks just north of Memorial Stadium would be low density. The maps currently under discussion also recognize to two historic districts: the Hancock Historic District, which covers a small area where 12th Street dead-ends west of Indiana Street; and the Oread Historic District, which covers a considerably larger area and includes a number of notable structures just north, east and south of KU’s GSP-Corbin residence hall complex. Although the plan may include some architectural guidelines, protection for mature trees and other measures to protect the character of the neighborhood, much of the document will focus on building sizes, how buildings are positioned on lots and how to incorporate parking for residents — a perennial bone of contention among Oread residents. The goal is not to downzone or reduce multi-family housing in the neighborhood, but to set standards that will allow the different uses in the neighborhood to exist in harmony. One of the signs that the city is on the right track with this plan is that both owners who occupy houses in the neighborhood and owners of rental property there seem satisfied with the direction the city is heading. Oread is an important part of the Lawrence and the Kansas University community and both entities should take an active interest in maintaining it as an attractive and active neighborhood. Election a test for conservatives WASHINGTON — George Will georgewill@washpost.com “ Twice as many Americans identify themselves as conservative as opposed to liberal. Nov. 6, we will know if they mean it. If they are ideologically conservative but operationally liberal.”. — George Will is a columnist for Washington Post Writers Group. OLD HOME TOWN 100 From the Lawrence Daily Journal-World for Aug. 31, 1912: YEARS “Fred Laptad. AGO living north of IN 1912.” — Compiled by Sarah St. John Read more Old Home Town at LJWorld.com/news/lawrence/ history/old_home_town. Air-conditioned Robinson is rec answer By Jerry Harper Forrecruits participate “on its campus or at an off-campus facility regularly used by the institution for practice and/ or competition by any of the institution’s sport programs.” It’s a transparent “fig leaf” to argue that KU isn’t “hosting” tournaments ostensibly sponsored by others, whether YOUR TURN on campus or at the sports village, given the fact that: The June 19 letter from a city-hired consulting firm says, “It is our understanding that the City of Lawrence is contemplating a public-private partnership to develop a youth sports complex with potential partners including the City, the University of Kansas, Harper the Assists Foundation (Bill and Cindy Self) and others.” The city manager says, “We think the fact Lawrence and college basketball are thought of together by so many people across the country will be a marketing advantage.” (Journal-World, Aug. 3) Roger Morningstar, former KU star and tournament expert, says, for the sports village to succeed “you have to have a tremendous amount of cooperation among the organizations that may use it. I think that is what makes Lawrence’s proposal unique. The city and the university could really work together to make this something more than a place with just a few gyms.” (Journal-World, Aug. 3) Since the Schwada-Fritzel scheme isn’t necessary, the real quandary is what to do with all of that extra sales tax/ infrastructure money. Here are some ideas (in no particular order of preference):. — Jerry Harper is a Lawrence resident and a semi-retired attorney. Friday, August 31, 2012 DEAN YOUNG/JOHN MARSHALL CHRIS BROWNE GARRY TRUDEAU MUTTS BABY BLUES GET FUZZY JERRY SCOTT & JIM BORGMAN PATRICK MCDONNELL JERRY SCOTT/RICK KIRKMAN DARBY CONLEY | 12A Friday, August 31, 2012 TODAY WEATHER . SATURDAY SUNDAY L AWRENCE J OURNAL -W ORLD DATEBOOK TUESDAY MONDAY 31 TODAY Cooler; a t-storm this afternoon Humid with clouds and sun Mostly sunny, hot and humid Bright sunshine and hot A thunderstorm possible High 84° Low 71° POP: 55% High 84° Low 65° POP: 25% High 91° Low 68° POP: 5% High 92° Low 68° POP: 10% High 84° Low 65° POP: 30% Wind E 7-14 mph Wind NNE 7-14 mph Wind WNW 3-6 mph Wind NW 6-12 mph Wind NE 6-12 mph POP: Probability of Precipitation Kearney 90/61 McCook 94/58 Oberlin 92/60 Clarinda 88/70 Lincoln 90/68 Grand Island 92/64 Beatrice 88/69 St. Joseph 84/69 Chillicothe 86/73 Sabetha 86/68 Concordia 88/66 Centerville 92/69 Kansas City Marshall Manhattan 84/72 83/72 Goodland Salina 87/70 Oakley Kansas City Topeka 92/57 88/69 92/60 88/71 Lawrence 84/71 Sedalia 84/71 Emporia Great Bend 81/72 84/70 89/65 Nevada Dodge City Chanute 82/70 90/62 Hutchinson 82/71 Garden City 86/69 90/61 Springfield Wichita Pratt Liberal Coffeyville Joplin 76/71 86/71 87/65 89/62 76/72 84/72 Hays Russell 89/63 90/66 Shown is today’s weather. Temperatures are today’s highs and tonight’s lows. LAWRENCE ALMANAC Through 8 p.m. Thursday. Temperature High/low 100°/58° Normal high/low today 85°/63° Record high today 108° in 2000 Record low today 47° in 2009 Precipitation in inches 24 hours through 8 p.m. yest. 0.00 Month to date 1.60 Normal month to date 3.91 Year to date 15.72 Normal year to date 28.45 REGIONAL CITIES Today Sat. Today Sat. Cities Hi Lo W Hi Lo W Cities Hi Lo W Hi Lo W Atchison 84 69 t 84 65 pc Independence 84 71 t 90 70 pc Fort Riley 86 70 pc 90 65 pc Belton 84 71 t 83 71 t 84 71 t 82 69 t Burlington 83 69 t 90 66 pc Olathe Coffeyville 84 72 t 90 70 pc Osage Beach 81 73 t 77 69 r 86 69 t 85 66 pc Concordia 88 66 pc 91 66 pc Osage City Ottawa 84 71 t 84 67 pc Dodge City 90 62 s 92 65 s 86 71 pc 92 72 pc Holton 86 70 t 87 66 pc Wichita Weather (W): s-sunny, pc-partly cloudy, c-cloudy, sh-showers, t-thunderstorms, r-rain, sf-snow flurries, sn-snow, i-ice. NATIONAL FORECAST SUN & MOON Aug 31 Sat. 6:50 a.m. 7:51 p.m. 8:09 p.m. 7:57 a.m. New First Sep 8 Sep 15 Sep 22 LAKE LEVELS As of 7 a.m. Thursday Lake Clinton Perry Pomona Level (ft) 873.52 888.30 972.49 Discharge (cfs) 24 25 25 Shown are today’s noon positions of weather systems and precipitation. Temperature bands are highs for today. Fronts Cold Forecasts and graphics provided by AccuWeather, Inc. ©2012 INTERNATIONAL CITIES Today Cities Hi Lo W Acapulco 91 78 t Amsterdam 63 52 sh Athens 90 73 s Baghdad 110 81 s Bangkok 92 80 t Beijing 91 73 pc Berlin 65 53 r Brussels 62 45 sh Buenos Aires 70 54 s Cairo 94 72 s Calgary 74 48 pc Dublin 61 53 c Geneva 59 50 r Hong Kong 91 81 t Jerusalem 84 65 s Kabul 95 64 s London 64 51 c Madrid 85 52 s Mexico City 77 54 pc Montreal 84 59 pc Moscow 63 41 pc New Delhi 90 77 t Oslo 65 46 pc Paris 67 48 sh Rio de Janeiro 76 64 sh Rome 82 67 sh Seoul 81 70 t Singapore 88 79 t Stockholm 63 59 r Sydney 66 41 s Tokyo 90 79 t Toronto 92 62 t Vancouver 69 53 pc Vienna 67 59 sh Warsaw 76 59 c Winnipeg 85 59 s Hi 91 66 89 106 92 86 68 67 72 94 69 64 60 90 84 94 70 82 70 75 60 90 68 69 81 78 86 88 65 66 88 83 69 69 70 83 Sat. Lo W 78 t 58 pc 72 s 81 s 78 t 70 c 53 pc 51 pc 52 s 72 s 44 pc 55 sh 48 sh 82 t 66 s 67 s 55 pc 57 s 54 t 54 s 39 pc 79 t 52 pc 51 pc 65 s 64 sh 70 pc 79 t 52 pc 45 s 77 sh 60 s 53 pc 61 r 53 pc 64 t Warm Stationary Precipitation Showers T-storms Rain Flurries Snow Ice -10s -0s 0s 10s 20s 30s 40s 50s 60s 70s 80s 90s 100s 110s National Summary: Heavy rain and flash flooding will spread northward over the Central states today. Severe storms will dot the South. Heat will build into the East as the northern Plains cool. Much of the West will be dry. Today Sat. Today Sat. Cities Hi Lo W Hi Lo W Cities Hi Lo W Hi Lo W Memphis 80 76 r 91 76 t Albuquerque 84 65 t 92 67 s 91 79 pc 90 79 t Anchorage 56 50 r 56 51 sh Miami Milwaukee 86 67 s 80 67 pc Atlanta 86 73 t 90 72 t 85 67 s 85 67 s Austin 98 73 s 97 73 pc Minneapolis 86 72 t 88 73 r Baltimore 96 68 s 90 68 pc Nashville New Orleans 86 77 t 89 76 t Birmingham 88 73 t 90 74 t New York 92 72 s 88 69 s Boise 92 61 s 87 56 s Omaha 92 69 s 89 65 pc Boston 93 68 s 79 59 s Orlando 90 74 t 90 72 t Buffalo 90 62 s 82 61 s 95 72 s 89 68 pc Cheyenne 85 57 s 87 61 pc Philadelphia Phoenix 103 87 pc 103 87 pc Chicago 95 72 s 82 72 t Pittsburgh 92 67 s 85 64 t Cincinnati 92 69 s 83 70 t Cleveland 92 67 s 83 66 pc Portland, ME 88 63 s 77 53 s Portland, OR 76 52 s 77 51 s Dallas 97 76 pc 96 77 s 88 57 s 85 56 s Denver 89 60 s 94 62 pc Reno Richmond 94 70 s 93 71 t Des Moines 92 70 s 84 67 t 85 51 s 84 56 s Detroit 94 67 s 82 65 pc Sacramento St. Louis 86 75 t 80 73 r El Paso 92 70 s 93 71 s Fairbanks 60 46 c 58 42 sh Salt Lake City 90 69 t 89 63 t San Diego 80 72 pc 82 71 pc Honolulu 87 75 s 88 73 s San Francisco 62 54 pc 66 52 pc Houston 96 77 pc 93 77 t Seattle 71 49 pc 74 51 s Indianapolis 92 71 pc 80 71 r Spokane 81 50 s 77 50 s Kansas City 84 71 t 83 68 t 96 74 t 98 73 pc Las Vegas 98 81 t 92 77 pc Tucson Tulsa 83 73 t 93 72 pc Little Rock 80 75 r 91 74 t Wash., DC 94 73 s 90 73 pc Los Angeles 87 67 pc 85 67 s National extremes yesterday for the 48 contiguous states High: Bullhead City, AZ 111° Low: West Yellowstone, MT 24° WEATHER HISTORY Hurricane Carol roared northward just off the New Jersey coast during the morning of Aug. 31, 1954. WEATHER TRIVIA™ Q: Has there ever been a season without an Atlantic hurricane? Yes, in 1907 and 1914. Full Today 6:49 a.m. 7:52 p.m. 7:39 p.m. 6:54 a.m. A: Sunrise Sunset Moonrise Moonset FRIENDS & NEIGHBORS Overbrook semi-annual 3-day flea market, 8 a.m.-5 p.m., Osage County Fairgrounds, 510 Cedar. Mike Shurtz Trio, jazz music, 10:15-11:15 a.m., Signs of Life, 722 Mass. Perry Lecompton Farmers Market, 4-6:30 p.m., U.S. Highway 24 and Ferguson Road. Read Across Lawrence: Lawrence Book Night, giveaway of “Winter’s Bone” and “Cabinet of Wonders,” 6 p.m., Lawrence Percolator, in the alley behind the Lawrence Arts Center. Harvest Time Outreach Ministry Tent Revival, 6:30-8:30 p.m., Watson Park, Seventh and Tennessee streets, free meal will be served after each service. Roving Imp Comedy Show, 8 p.m., Ecumenical Christian Ministries, 1204 Oread Ave. Final Friday 5-9 p.m. unless otherwise noted After-parties with music at Frank’s North Star Tavern, 508 Locust, and the SeedCo Studios, 826 Pa. Lawrence Arts Center, 940 N.H.: Kansas University Visual Art Faculty Exhibit; “Turkish Suburbia,” solo exhibition by Mark Slankard; “Special,” solo exhibition by Amy Kligman; “Art Tougeau Photographs,” by instructor Ann Dean, Intermediate and Darkroom Photography student work; Ice Cream Social. Blue Dot Salon, 15 E. Seventh St.: John Clayton, photos; Zane Batson. painting; Mikkell Lappin, ceramic bowls and trinkets, 5:30-8:30 p.m. Lawrence Public Library, 707 Vt.: Dream Rocket Project, 5-7 p.m. The Lawrence Art Party, 718 N.H.: live jazz and show and sale of art, 5-9:30 p.m. Lucky Paws Bakery & Unique Barktique, 4 E. Seventh St.: “Life Gone to The Dogs,” with DOGBOTS by Rebecca Jackson and art by Melissa Bee, 5-8 p.m. Teller’s Restaurant Upstairs, 746 Mass.: Jacob Burmood: “Moving at the Speed of Time.” Pachamamas, 800 N.H.: “Terrain Wreck,” works by Jeromy Morris. Wonder Fair, 803 1/2 Mass.: “The Cat, the Dish, & the Spoon: New work by Michael Krueger, Randy Bolton, & Tom Reed,” 6-10 p.m. Phoenix Gallery, 825 Mass.: Works by Gary and Sherrie Dick of Duet Designs and Cindy Buehler of Cinderelish, music by Michael Paull. Lost Art Space, 825 Mass.: Lost Art Sp_ce is opening “BOOM!”, a salon-style exhibition from the Fresh Produce Art Collective and SeedCo Studios. The Bourgeois Pig, 6 E. Ninth St.: “6x6 More or Less,” free hot dogs, Wayne Propst. TODAY’S BEST BETS Pooch Plunge, 4-7 p.m., Outdoor Aquatic Center, Eighth and Kentucky streets. League of Women Voters voter outreach at Final Fridays, 5-9 p.m., Ninth and Massachusetts streets. Final Friday events Do’s Deluxe, 416 E. Ninth St.: Photographs and paintings by Dave DeHetre, 5-8 p.m. Lawrence Percolator, in the alley behind Lawrence Arts Center: Read Across Lawrence kickoff party for ”Winter’s Bone” by Daniel Woodrell, with book giveaway and music from Americana Music Academy and the Hairy Vetch String Band. Five Bar / Ingredient, 947 Mass.: live jazz combo Blueprint, formerly the Tommy Johnson Band, plays from 7-10 p.m. Aimee’s Café & Coffee Shop, 1025 Mass.: Works by Sheila McGuire Watkins Community Museum, 1047 Mass.: Celebrating the move of the Milburn Electric Car with a family-friendly party featuring Lawrence’s Longest Toy Car Racetrack, 6-8 p.m. 1109 Gallery, 1109 Mass.: “Color Collision” featuring artists Sherrie Taylor and Pat Young in the large gallery, and works by more than 20 area artists in the small and main galleries. The Invisible Hand Gallery, 846 Pa.: Aaron Marable: “Domestic Bliss.” Flash Space, 830 Pa.: A one-night-only exhibition of works by Matt Ridgway and Charles Ray. SeedCo Studios, 826 Pa.: Open Studio & AfterOurs tour, local music by Whatever Forever, 6 p.m.-1 a.m. 8 Flavors, 2210 Iowa: Works by Matthew Obrakta. 1 SATURDAY Saturday Farmers’ Market, 7-11 a.m., 824 N.H. Red Dog’s Dog Days workout, 7:30 a.m., parking lot at Ninth and Vermont streets. Overbrook semi-annual 3-day flea market, 8 a.m.-5 p.m., Osage County Fairgrounds, 510 Cedar. Lawrence Flea, 9 a.m.4 p.m., Eighth and Pennsylvania streets. League of Women Voters voter outreach at Lawrence Flea, 9 a.m.-4 p.m., Eighth and Pennsylvania streets. Railfest 2012, 25th anniversary celebration, 9 a.m.-4 p.m., Midland Railway Depot, 1515 W. High St., Baldwin City. Soroptimist Plant Sale (Mums), 10 a.m.-2 p.m., Eagles Lodge, 1803 W. Sixth St. Great Books Discussion Group, Marcus Aurelius, “The Meditations,” 2-4 p.m., Lawrence Public Library, 707 Vt. Harvest Time Outreach Ministry Tent Revival, 6:30-8:30 p.m., Watson Park, Seventh and Tennessee streets, free meal will be served after each service. Walt Babbit performs The Roots of Country Music, 7-9 p.m., Moni’s Restaurant, 711 High St., Baldwin City The Crumpletons, 7 p.m., the Jazzhaus, 926 1/2 Mass. Wild Hayride, 8 p.m., Knights of Columbus, 2206 E. 23rd St. 2 SUNDAY Overbrook semi-annual 3-day flea market, 8 a.m.-3 p.m., Osage County Fairgrounds, 510 Cedar. Railfest 2012, 25th anniversary celebration, 9 a.m.-4 p.m., Midland Railway Depot, 1515 W. High St., Baldwin City. O.U.R.S. (Oldsters United for Responsible Service) dance, 6-9 p.m., Eagles Lodge, 1803 W. Sixth St. Poker tournament, 7 p.m., Johnny’s Tavern, 410 N. Second St. Smackdown! trivia, 8 p.m., The Bottleneck, 737 N.H. 3 MONDAY Labor Day Railfest 2012, 25th anniversary celebration, 9 a.m.-4 p.m., Midland Railway Depot, 1515 W. High St., Baldwin City.; Kansas University Visual Art Faculty Exhibit, through Sept. 22,. FOCUS FOIL (up to 10 - long hair extra) $15 PARTIAL FOIL (20 foils - long hair extra) $30 FULL FOIL (40 foils - long hair extra) $45 TOTAL BLONDE THE. Have something you’d like to see in Friends & Neighbors? Submit your photos at LJWorld.com/submit/friendsandneighbors or mail them to Friends & Neighbors, P.O. Box 888, Lawrence, KS 66044. (40 + foils - long hair extra) TOP OF THE HILL, 2005-2011 $55 & up HAIR CUTS $7 Specials on BASEBALL: Royals complete sweep of Tigers. 4B CRASH AND BURN SPORTS Nate Eachus and the Chiefs were tripped up by the Packers, 24-3. Page 6B B LAWRENCE JOURNAL-WORLD OLJWorld.com/sports OFriday, August 31, 2012 Guard Barber visiting Kansas FREE STATE VOLLEYBALL Tri-umphant John Young/Journal-World Photo —— FREE STATE’S CHASKA ROCHA (21) and Bonner Springs’ Dante Crider battle for control of a ball. The Firebirds won, 6-0, Thursday at FSHS. Coveted ‘Cat’ ‘difference-maker’ By Gary Bedore gbedore@ljworld.comAmerica.” O.” FSHS soccer nails opener By Jesse Newell jnewell@ljworld.com Mike Yoder/Journal-World Photo FREE STATE SENIOR KYLIE DEVER, CENTER, PREPARES TO BUMP during the Firebirds’ sweep of Lansing on Thursday night at FSHS. Dever is flanked by Logan Hassig, left, and Shelby Holmes. Firebirds go 2-0 in debut By Benton Smith “ A lot of times you’ve just got to dig down deep On the brink of seeing her and get that one good team lose a match in its season-opening home triangular play.” basmith@ljworld.com- — Free State senior Molly Ryan fi- nal.arching. Please see SOCCER, page 5B It’s go time: Prep football kicks off tonight Free State takes to road for opener with familiar Ravens By Benton Smith basmith@ljworld.com combina- tion Please see FSHS, page 5B LID LIFTERS Who: Free State at Olathe Northwest When: 7 tonight Where: College Boulevard Activity Center Who: SM West at LHS When: 7 tonight Where: Lawrence High Visiting SM West has Lions wary of speed, air attack By Benton Smith basmith@ljworld.com The.” Please see LHS, page 5B Sports 2 2B | LAWRENCE JOURNAL-WORLD | FRIDAY, AUGUST 31, 2012 COMING SATURDAY TWO-DAY s ! LOOK AHEAD TO +ANSAS 3OUTH $AKOTA 3TATE FOOTBALL s ,(3 &3(3 OPEN THEIR FOOTBALL SEASONS SPORTS CALENDAR KANSAS UNIVERSITY TODAY • Volleyball vs. Sam Houston (11:30 a.m.), Tulsa (7 p.m.). • Soccer vs. Creighton, 5 p.m. SATURDAY • Football vs. South Dakota State, 6 p.m. • Volleyball vs. Arkansas State, 2 p.m. • Cross country, Bob Timmons Classic. No. 9 South Carolina survives scare NASHVILLE, TENN. (AP) — No. 9 South Carolina and coach Steve Spurrier got a big scare to open the season. Marcus Lattimore and Connor Shaw helped the Gamecocks grind their way past plucky Vanderbilt. Lattimore ran for two touchdowns and 110 yards in his first game back after tearing his left ACL, and Shaw ran for 92 yards while playing the second half with an injured shoulder as No. 9 South Carolina rallied for a 1713 victory against Vanderbilt on Thursday night. Shaw bruised his right, throwing shoulder late in the first half and missed the first two series SUMMARY of the third quarter before returning. The junior drove the No. 9 South Carolina 17, Gamecocks for the go-ahead Vanderbilt 13 Carolina 7 3 0 7—17 touchdown and ran 12 yards to South Vanderbilt 0 10 3 0—13 the Vandy one before rolling in First Quarter SC-Lattimore 29 run (Yates kick), 4:55. pain in the end zone. Second Quarter Lattimore scored the goSC-FG Yates 20, 11:44. Van-Matthews 78 pass from Rodgers (Spear ahead TD on a one-yard run kick), 10:37. with 11:25 to go. Van-FG Spear 25, 6:51. Vanderbilt had plenty of time Third Quarter Van-FG Spear 44, 6:02. to attempt a comeback, the last Fourth Quarter with 5:08 left. But The CommoSC-Lattimore 1 run (Yates kick), 11:25. A-38,393. dores turned it over on downs with 1:47 to go when Jordan SC Van 17 11 Matthews couldn’t handle a First downs 47-205 36-62 fourth-down pass from Jordan Rushes-yards Passing 67 214 Rodgers. Comp-Att-Int 7-15-1 13-23-1 NFL PRESEASON Return Yards Punts-Avg. Fumbles-Lost Penalties-Yards Time of Possession 52 6-39.0 2-1 6-30 31:36 32 4-43.5 3-0 5-35 28:24 INDIVIDUAL STATISTICS RUSHING-South Carolina, Lattimore 23-110, C.Shaw 14-92, M.Davis 1-4, Miles 1-3, Team 3-0, Thompson 5-(minus 4). Vanderbilt, Stacy 13-48, Tate 7-17, Kimbrow 2-5, Grady 1-0, Rodgers 13-(minus 8). PASSING-South Carolina, C.Shaw 7-11-1-67, Thompson 0-3-0-0, Strickland 0-1-0-0. Vanderbilt, Rodgers 13-231-214. RECEIVING-South Carolina, Lattimore 3-21, Sanders 2-13, Cunningham 1-20, Byrd 1-13. Vanderbilt, Matthews 8-147, Krause 2-9, Grady 1-32, Tate 1-17, Boyd 1-9. FREE STATE HIGH TODAY • Football at Olathe Northwest (CBAC), 7 p.m. SATURDAY • Cross country at St. Thomas Aquinas, 8 a.m. LAWRENCE HIGH TODAY • Football vs. SM West, 7 p.m. SATURDAY • Cross country at Manhattan Invitational, 9 a.m. | SPORTS WRAP | The Associated Press SEABURY ACADEMY SATURDAY • Cross country at Topeka Hayden, 8:30 a.m. Jaguars 24, Falcons 14 JACKSONVILLE, FLA. — Kevin Elliott had a 77-yard touchdown reception, likely solidifying his spot on the regular-season roster, and Jacksonville won on Thursday night. The teams took vastly different approaches to the game. The Jaguars played their offensive starters into the second quarter; the Falcons played just two regulars: linebacker Akeem Dent and defensive tackle Peria Jerry. Eagles 28, Jets 10 PHILADELPHIA — Greg McElroy became the first quarterback to lead New York. Titans 10, Saints 6 NASHVILLE, TENN. — New Orleans rested all starters in a loss to Tennessee. The Saints now prepare to return home to storm-ravaged Louisiana and put a scandal-ridden offseason behind them. Steelers 17, Panthers 16 PITTSBURGH — Charlie Batch completed 11 of 14 passes for 102 yards and a touchdown to bolster his hopes of playing a 15th NFL season, and Pittsburgh beat Carolina. Batch hit Emmanuel Sanders for a 37-yard play on Pittsburgh’s first drive and later found Will Johnson for a 27yard gain to set up a field goal as the Steelers (3-1) won a battle of the backups against the Panthers (2-2). Rams 31, Ravens 17 ST. LOUIS — Sam Bradford threw three touchdown passes in 11⁄2 quarters, giving St. Louis a win over Baltimore.). Lions 38, Bills 32 DETROIT — Matthew Stafford threw a 24-yard touchdown pass to Calvin Johnson, and Detroit went on to beat Buffalo. Stafford and Johnson went to the sideline healthy after their only drive to make the Lions happy that their dynamic duo avoided injuries in the fourth and final preseason game. Texans 28, Vikings 24 HOUSTON — Trindon Holliday had his third kick return for a touchdown of the preseason, and Justin Forsett rushed for 114 yards and two more scores in Houston’s victory over Minnesota. Colts 20, Bengals 16 INDIANAPOLIS — Chandler Harnish threw a 42-yard touchdown pass to tight end Dominique Jones, leading Indianapolis over Cincinnati. Bears 28, Browns 20 CLEVELAND — Quarterback Colt McCoy did little to solidify winning Cleveland’s backup job — or impress any other NFL team — and Chicago’s Josh McCown threw two touchdown passes in the first half. VERITAS CHRISTIAN SATURDAY • Football vs. Steelville, Mo., 7 p.m. ROYALS TODAY • vs. Minnesota, 7:10 p.m. SATURDAY • vs. Minnesota, 6:10 p.m. SPORTING K.C. SATURDAY • vs. Toronto FC, 7:30 p.m. SPORTS ON TV Frank Franklin II/AP Photo ANDY RODDICK APPEARS AT A NEWS CONFERENCE Thursday in New York to announce his retirement. Baseball Roddick reveals he’ll retire after U.S. Open White Sox v. Detroit or Baltimore v. Yankees 6 p.m. MLB Minnesota v. Kansas City 7 p.m. FSN 155,242 36, 236 College Football Time Cable N.C. St. v. Tennessee Boise St. v. Mich. St. 6:30p.m. ESPNU 35, 235 7 p.m. ESPN 33, 233 NEW YORK — tonight.” TENNIS Tsonga falls at U.S. Open NEW YORK — After three days of the top players not only winning but winning decisively at the U.S. Open, fifth-seeded Jo-Wilfried Tsonga was defeated 31⁄2 hours. Time High School Football Time Net Net Net SM West v. LHS replay 10:30p.m. Knol. COLLEGE FOOTBALL Missouri QB Mauk arrested COLUMBIA, MO. — Missouri freshman quarterback Maty Mauk has been arrested on suspicion of four charges including leaving the scene of an accident after a scooter mishap. CYCLING Armstrong accused in book AUST. PRO FOOTBALL Union approves IR rule change. LATEST LINE NFL Favorite ............ Points (O/U) ........... Underdog Wednesday, Sept. 5 Week 1 NY GIANTS ...................... 4 (47) ................................ Dallas Sunday, Sept. 9 CHICAGO ........................91⁄2 (42).................. Indianapolis Philadelphia ...................8 (41) ..................... CLEVELAND NY JETS ........................... 3 (40) .............................. Buffalo NEW ORLEANS .............91⁄2 (50).................. Washington New England ...............61⁄2 (48)................... TENNESSEE MINNESOTA ..................41⁄2 (38)................. Jacksonville HOUSTON ........................10 (43) ............................... Miami DETROIT .........................81⁄2 (47)......................... St. Louis Atlanta ................... 2 (41) .......... KANSAS CITY GREEN BAY ...................51⁄2 (45).............. San Francisco Carolina ........................21⁄2 (46)................... TAMPA BAY Seattle ............................21⁄2 (41) ........................ ARIZONA DENVER ............................1 (44) ....................... Pittsburgh Monday, Sept. 10 BALTIMORE ......................6 (41) ........................ Cincinnati San Diego ......................11⁄2 (47) ....................... OAKLAND NCAA FOOTBALL Favorite ............ Points (O/U) ........... Underdog a-Tennessee ...................3 (51).......................... N.C. State MICHIGAN ST ................. 7 (46) ............................ Boise St STANFORD ......................25 (51) ................... San Jose St Saturday b-Notre Dame ............ 161⁄2 (55) ............................... Navy TODAY WEST VIRGINIA ..... 25 (67) ................. Marshall PENN ST .......................... 6 (44) ................................... Ohio Northwestern .........Pick’em (53) ................ SYRACUSE OHIO ST ........................241⁄2 (48) ................. Miami-Ohio ILLINOIS ...........................10 (49) ................ Western Mich Tulsa .....................11⁄2 (50) ................ IOWA ST CALIFORNIA ....................11 (56)............................. Nevada NEBRASKA .....................20 (53).............. Southern Miss Miami-Florida ..............21⁄2 (44)....... BOSTON COLLEGE c-Iowa ............................. 10 (50) .................... Northern Ill d-Colorado ..................... 6 (47) .................... Colorado St GEORGIA .........................38 (58)............................. Buffalo FLORIDA .........................29 (48).............. Bowling Green TEXAS ....................31 (51)................. Wyoming HOUSTON .....................361⁄2 (62)........................ Texas St a-Clemson ....................31⁄2 (55)........................... Auburn USC ..................................42 (63).............................. Hawaii e-Alabama ......................14 (46) ......................... Michigan Rutgers ..........................20 (48)........................... TULANE Oklahoma .............. 31 (63) ...................... UTEP ARIZONA .......................... 11 (62) .............................. Toledo WASHINGTON ...............141⁄2 (57) ............... San Diego St Troy .................................. 6 (62) .................................... UAB DUKE ................................. 3 (54) ...................... Florida Intl LSU ...................................43 (52)................... North Texas OREGON .......................... 37 (67) ................... Arkansas St SOUTH ALABAMA ............61⁄2 ............. Tex San Antonio Sunday LOUISVILLE .....................13 (42) ........................ Kentucky BAYLOR ................. 10 (58) ....................... Smu Monday VIRGINIA TECH ............71⁄2 (48)............... Georgia Tech a-at the Georgia Dome in Atlanta b-at Aviva Stadium in Dublin, Ireland c-at Soldier Field in Chicago d-at Sports Authority Field in Denver e-at Cowboys Stadium in Arlington, Texas MLB Favorite .................. Odds ................. Underdog National League San Francisco ..............71⁄2-81⁄2 ............ CHICAGO CUBS WASHINGTON ................ Even-6 .......................... St. Louis NY Mets ..........................61⁄2-71⁄2 .............................. MIAMI ATLANTA ........................ Even-6 .................. Philadelphia Cincinnati ......................81⁄2-91⁄2 ....................... HOUSTON MILWAUKEE ................... Even-6 ...................... Pittsburgh COLORADO ..................... Even-6 ....................... San Diego LA DODGERS .................51⁄2-61⁄2 ........................... Arizona American League Texas ...............................71⁄2-81⁄2 ................... CLEVELAND Tampa Bay ....................51⁄2-61⁄2 ....................... TORONTO NY YANKEES .................81⁄2-91⁄2 ...................... Baltimore DETROIT ..........................51⁄2-61⁄2 ............... Chi White Sox KANSAS CITY ....... 51⁄2-61⁄2 ............. Minnesota OAKLAND .......................71⁄2-81⁄2 ............................ Boston LA Angels ..........................6-7.............................. SEATTLE Home Team in CAPS (c) 2012 Tribune Media Services, Inc. Golf Time Net Cable Cable 6, 206 Cable European Masters 7:30a.m. Golf Deutsche Bank Champ. 1 p.m. Golf 156,289 156,289 Tennis Time Cable U.S. Open U.S. Open noon ESPN2 34, 234 6 p.m. ESPN2 34, 234 Auto Racing Time Net Net Cable Truck series qualifying 3:30p.m. Speed 150,227 Sprint Cup qualifying 5 p.m. Speed 150,227 Truck series 7 p.m. Speed 150,227 SATURDAY College Football Time Net Navy v. Notre Dame 8 a.m. CBS Buffalo v. Georgia 11 a.m. KSMO Ohio v. Penn State 11 a.m. ESPN N’western v. Syracuse 11 a.m. ESPN2 W. Mich. v. Illinois 11 a.m. ESPNU Appalach. St. v. E. Caro. 11 a.m. FSN Troy v. Ala.-Birm. 11 a.m. FCSP Miami (Ohio) v. Ohio St. 11 a.m. BTN S. Miss. v. Nebraska 2:30p.m. ABC Bowling Green v. Fla. 2:30p.m. ESPN Miami v. Boston Coll. or S. Miss. v. Nebraska 2:30p.m. ESPN2 Iowa v. North. Illinois 2:30p.m. ESPNU Tulsa v. Iowa St. 2:30p.m. FSN Iowa v. Wisconsin 2:30p.m. BTN S. Dakota St. v. Kansas 6 p.m. Jayh’k Cable 5, 13, 205,213 3, 203 33, 233 34, 234 35, 235 36, 236 146 147 9, 209 33, 233 Auburn v. Clemson North Texas v. LSU Jackson St. v. Miss. St. Hawaii v. USC Alabama v. Michigan Indiana St. v. Indiana Arkansas St. v. Oregon Toledo v. Arizona Oklahoma v. UTEP 6 p.m. ESPN 6 p.m. ESPNU 6 p.m. FSN 6:30p.m. Fox 7 p.m. ABC 7 p.m. BTN 9:30p.m. ESPN 9:30p.m. ESPNU 9:30p.m. FSN 34, 234 35, 235 36, 236 147 13, 37, 213 33, 233 35, 235 36, 236 4, 204 9, 209 147 33, 233 35, 235 36, 236 Baseball Time Net Cable San Francisco v. Cubs noon WGN Philadelphia v. Atlanta 2:30p.m. Fox White Sox v. Detroit 6 p.m. WGN 16 4, 204 16 High School Football Time Cable Net SM West v. LHS replay 11 a.m. Knol. Golf Time Net 6, 206 Cable European Masters 6 a.m. Golf Deutsche Bank Champ. 1 p.m. Golf 156,289 156,289 Tennis Time Cable U.S. Open 11 a.m. CBS 5, 13, 205,213 Auto Racing Time Cable Net Net Nationwide qualifying 2:30p.m. Speed 150,227 Nationwide series 6 p.m. ESPN2 34, 234 Soccer Time Net Cable Kansas City v. Toronto 7:30p.m. KSMO 3, 203 1978 2012 SAVE NOW! All-new 2013 Ford ESCAPE UP TO 33 MPG HIGHWAY UP TO • V6 • Moonroof • SYNC® • Spoiler UP TO 38 MPG HIGHWAY Stk. No. 12C923 33 MPG New 2012 Ford FOCUS SE Hatchback SALE PRICE $ HIGHWAY Stk. No. 12C775 New 2012 Ford FUSION SE 17,495 SALE PRICE Sale price plus tax, title license & $249 administrative fee. After all manufacturer rebates. With approved credit. Not all buyers will qualify. See dealer for details. $ 19,999 Sale price plus tax, title license & $249 administrative fee. After all manufacturer rebates. With approved credit. Not all buyers will qualify. See dealer for details. 0 OR % 60 UP TO APR FOR MONTHS New 2012 Ford F-150 SAVE UP TO $ ON SELECT NEW MODELS SUPERCREW 4X4 XLT 11,000 OFF MSRP Stk. No. 12T989, MSRP $43.470. Sale Price: $32,470. Price includes $1,500 National Discount, $5,750 Factory Rebate, $3,750 Noller Discount. Must qualify for Trade Assistance.** Must finance thru FMCC. With approved credit. **Must trade ‘95 or newer vehicle. UP TO UP TO HIGHWAY HIGHWAY 28 MPG 2013 Ford Explorer $ Stk. No. 13T088 189 LEASE PER MONTH Louisiana St Naismith Dr Alabama St $189 a month, 24 months, 10.5K miles per year lease. $5189 due at signing plus tax, title, license and administration fee. No security deposit required. After all manufacturer rebates. With approved credit. Not all buyers will qualify. MSRP $30,185, residual 69%, total of payments 4872. Offer ends 9/3/12. See dealer for details. W 23th St 30 MPG 2013 FORD EDGE $215 a month, 24 months, 10.5K miles per year lease. $6215 due at signing plus tax, title, license and administration fee. No security deposit required. After all manufacturer rebates. With approved credit. Not all buyers will qualify. MSRP $37,035, residual 68%, total of payments 5640 Offer ends 9/3/12. See dealer for details. $ Stk. No. 13T184 215 LEASE PER MONTH 2013 Ford Escape SE $191 a month, 24 months, 10.5K miles per year lease. $5691due at signing plus tax, title, license and administration fee. No security deposit required. After all manufacturer rebates. With approved credit. Not all buyers will qualify. MSRP $28,065, residual 68%, total of payments 5016 Offer ends 9/3/12. See dealer for details. 23rd & Alabama • Lawrence 785-843-3500 • 1-800-281-1105 10 $ Stk. No. 13T206 191 Laird Noller Guarantee “We’ll Sell You a New Vehicle for Less OR We’ll Give You $10,000” See dealer management for details. USED CAR 23rd & Alabama • 785-843-3500 2002 Ford Ranger 4x4, V6, Ext. Cab, Blue ...................................... $6,996 2005 Lincoln Aviator 1996 Plymouth Grand Voyager SE 2010 Ford Focus SE V6, Blue, 68,000 miles, Carfax 1 owner ................. $7,800 2009 Mercuryy Milan Premier 2001 GMC Sierra 1500 15,000 4x4, V8, Auto, Ext. Cab....................................... $8,995 V6, Auto, PW, Leather Carfax 1 owner 18,000 4 Cyl, Auto, Carfax 1 owner, Certified 100K Warranty 2010 Dodge Ram V8, Auto, Carfax 1 owner ................................. $15,000 2001 Kia Sportage 2011 Subaru Outback 2.5i 4 Cyl, 6 Spd, Gray, Carfax 1 owner..................... $16,000 Auto, 1 owner, Great car! Winter’s coming! ........ 2006 Hyundai Azera Ltd 2011 Hyundai Tucson GLS 2007 Honda Civic Hybrid 2011 Chevrolet Camaro 1LT 2008 Nissan Rogue SL 1990 Mazda Miata V6, Pwr Seats, Rides like new! Local trade .......... $13,500 $ 4 Cyl, Silver, Carfax 1 owner ............................ $13,765 2011 Ford Fusion SE 4x4, Leather, Extra Clean! 5 Spd. .......................... $7,288 Auto, Save on gas! Low priced!........................ $16,000 4 Cyl, 1 owner, Infinity stereo, Clean! ................. $16,500 2007 Toyota Camry Solara Convertible, Local trade, Navi, Leather................ $21,000 2829 Iowa • 785-838-2327 $12,995 AWD, V6, Auto .............................................. $12,995 2006 Ford Five Hundred AWD, V6, Green ............................................. $12,000 2011 Hyundai y Santa Fe GLS 4x4, V8, Red, Carfax 1 owner .......................... 2006 Chevy Impala V6, Auto .......................................................... $9,600 $ 2005 Dodge Dakota SLT $22,000 4x4, Auto, 100K warranty, Certified .................... $21,500 V6, Auto, Carfax 1 owner ................................. $23,000 Local car, 69,000 mi, Great price! ........................... 2011 Hyundai Genesis $7,995 V6, Auto, 100K warranty, 1 owner, Certified ......... $26,052 | 4B Friday, August 31, 2012 SPORTS . L AWRENCE J OURNAL -W ORLD BASEBALL SCOREBOARD MAJOR-LEAGUE ROUNDUP Royals complete sweep The Associated Press American League Royals 2, Tigers 1 Kplacescoring single. Detroit Kansas City ab r h bi ab r h bi AJcksn cf 5 0 0 0 L.Cain cf 3 0 10 Dirks lf 5 0 2 0 AEscor ss 4 0 00 MiCarr 3b 5 0 2 0 AGordn lf 4 1 21 Fielder 1b 4 1 3 0 Butler dh 4 0 20 DYong dh 4 0 1 0 S.Perez c 4 0 10 Berry pr-dh 0 0 0 0 Mostks 3b 4 1 10 Boesch rf 3 0 0 0 Francr rf 2 0 10 JeBakr ph-rf 1 0 0 0 Hosmer 1b 2 0 10 JhPerlt ss 4 0 2 1 Giavtll 2b 3 0 01 Infante 2b 40 1 0 Laird c 30 1 0 Avila ph 00 0 0 Totals 38 112 1 Totals 30 2 9 2 Detroit 000 000 010—1 Kansas City 000 011 00x—2 E-S.Perez 2 (4). DP-Detroit 2, Kansas City 1. LOBDetroit 11, Kansas City 7. 2B-Fielder (27), Butler (20), Moustakas (28). HR-A.Gordon (10). IP H R ER BB SO Detroit Porcello L,9-10 5 8 2 2 2 4 D.Downs 1 0 0 0 1 0 Villarreal 2 1 0 0 0 2 Kansas City 1 1 0 3 Guthrie W,3-3 71⁄3 10 1⁄3 1 0 0 0 1 Collins H,8 1⁄3 Crow H,15 0 0 0 0 0 K.Herrera S,1-2 1 1 0 0 1 0 Porcello pitched to 3 batters in the 6th. WP-Porcello. Umpires-Home, Manny Gonzalez; First, Greg Gibson; Second, Phil Cuzzi; Third, Ted Barrett. T-2:41. A-12,997 (37,903). Blue Jays 2, Rays 0 TORONTO — Carlos Villanueva pitched six sharp innings, and Toronto beat slumping Tampa, snapping a five-game losing streak against the Rays. Tampa Bay Toronto ab r h bi ab r h bi DJnngs lf 4 0 0 0 RDavis lf 4 0 10 BUpton cf 4 0 0 0 Rasms cf 4 0 00 Zobrist ss 3 0 1 0 Encrnc 1b 3 1 00 Longori 3b 4 0 1 0 Lind dh 4 1 10 Joyce dh 2 0 1 0 YEscor ss 3 0 20 RRorts ph-dh 1 0 0 0 KJhnsn 2b 4 0 32 Kppngr 2b 2 0 1 0 Sierra rf 3 0 10 Scott 1b 3 0 0 0 Mathis c 3 0 00 Loaton c 3 0 1 0 Hchvrr 3b 3 0 00 Fuld rf 30 0 0 Totals 29 0 5 0 Totals 31 2 8 2 Tampa Bay 000 000 000—0 Toronto 200 000 00x—2 DP-Toronto 1. LOB-Tampa Bay 4, Toronto 9. 2B-K. Johnson (16). CS-Zobrist (9). S-Keppinger. IP H R ER BB SO Tampa Bay M.Moore L,10-8 6 6 2 2 3 7 W.Davis 1 0 0 0 0 2 2⁄3 Howell 2 0 0 0 0 1⁄3 Badenhop 0 0 0 1 1 Toronto Villanueva W,7-4 6 5 0 0 1 7 Oliver H,14 1 0 0 0 0 2 Lincoln H,2 1 0 0 0 0 1 Janssen S,18-21 1 0 0 0 0 1 T-2:34. A-22,711 (49,260). STANDINGS American League East Division New York Baltimore Tampa Bay Boston Toronto Central Division Chicago Detroit Kansas City Cleveland Minnesota West Division National League W 75 72 71 62 59 L 55 58 60 69 71 Pct GB .577 — .554 3 .542 4½ .473 13½ .454 16 W 72 69 59 55 53 L 58 61 71 76 78 Pct GB .554 — .531 3 .454 13 .420 17½ .405 19½ W 77 73 68 64 L 53 57 62 68 Pct GB .592 — .562 4 .523 9 .485 14 Texas Oakland Los Angeles Seattle Thursday’s Games Kansas City 2, Detroit 1 Oakland 12, Cleveland 7 Baltimore 5, Chicago White Sox 3 Seattle 5, Minnesota 4 Toronto 2, Tampa Bay 0. Tampa Bay at Toronto, 12:07 p.m. L.A. Angels at Seattle, 3:05 p.m. Chicago White Sox at Detroit, 6:05 p.m. Texas at Cleveland, 6:05 p.m. Minnesota at Kansas City, 6:10 p.m. Boston at Oakland, 8:05 p.m. Orioles 5, White Sox 3 BALTIMORE — Zach Britton struck out a careerhigh 10 in eight innings, Taylor Teagarden and Adam Jones homered, and Baltimore beat Chicago for its eighth win in 11 games. Baltimore took three of four from the AL Centralleading White Sox to complete a 5-1 homestand that started with a two-game sweep of Toronto. Chicago ab r h bi 40 0 0 40 2 0 40 0 0 40 0 0 41 2 0 30 0 0 11 1 0 41 1 0 40 3 2 40 1 1 36 310 3 Baltimore ab 3 4 4 3 4 4 3 3 3 r 1 1 1 0 0 1 0 0 1 h bi 10 11 12 10 00 10 00 10 22 Wise cf Markks rf JoLopz 3b Hardy ss A.Dunn dh AdJons cf Konerk 1b MrRynl 1b Rios rf Ford dh Przyns c McLoth lf HGmnz c Machd 3b Viciedo lf Andino 2b AlRmrz ss Tegrdn c Bckhm 2b Totals Totals 31 5 8 5 Chicago 010 000 002—3 Baltimore 004 100 00x—5 DP-Chicago 1. LOB-Chicago 6, Baltimore 4. 2B-Jo. Lopez (14), Al.Ramirez (20), Markakis (25), Hardy (24), Mar.Reynolds (23), Teagarden (2). HR-Ad.Jones (26), Teagarden (2). IP H R ER BB SO Chicago 5 5 1 1 Quintana L,5-3 32⁄3 7 0 0 1 2 N.Jones 21⁄3 1 H.Santiago 1 0 0 0 0 2 Veal 1 0 0 0 0 3 Baltimore Britton W,4-1 8 7 1 1 0 10 1⁄3 2 2 2 0 0 Strop 1 0 0 0 1 Ji.Johnson S,41-44 2⁄3 T-2:31. A-10,141 (45,971). Athletics 12, Indians 7 CLEVELAND — Jarrod Parker pitched into the sixth inning, and Oakland hit four home runs, leading the Athletics to their sixth straight win. Oakland ab r h bi 52 2 1 41 1 1 51 3 3 51 2 0 41 1 0 41 0 0 32 1 1 41 1 3 32 2 2 00 0 1 10 0 0 38121312 Cleveland ab r h bi Crisp cf Kipnis 2b 4 2 22 Drew ss AsCarr ss 4 0 00 Reddck rf Choo rf 4 0 13 Cespds dh Brantly cf 4 0 10 S.Smith lf CSantn dh 3 1 00 Moss 1b Ktchm 1b 5 1 10 Dnldsn 3b Carrer lf 4 2 20 Kottars c Hannhn 3b 4 1 21 Pnngtn 2b Marson c 4 0 11 Carter ph Rosales 2b Totals Totals 36 7 10 7 Oakland 001 522 101—12 Cleveland 100 113 100—7 E-Carrera (1). DP-Oakland 1. LOB-Oakland 7, Cleveland 10. 2B-Crisp (19), S.Smith (18), Kottaras (1), Kipnis (16), Brantley (35), Carrera (4). HR-Crisp (9), Reddick (27), Donaldson (4), Pennington (4), Kipnis (13). SB-As.Cabrera (7), Carrera (4). SF-Drew, Carter. IP H R ER BB SO Oakland J.Parker W,9-7 5 8 5 5 3 3 2⁄3 Scribner 1 1 1 2 0 Blevins H,11 1 0 1 1 2 0 J.Miller 21⁄3 1 0 0 0 2 Cleveland Masterson L,10-12 4 8 8 8 2 1 Seddon 2 3 2 2 0 1 2 Sipp ⁄3 0 1 1 4 0 J.Smith 11⁄3 1 0 0 0 1 C.Perez 1 1 1 1 0 2 Masterson pitched to 3 batters in the 5th. J.Parker pitched to 2 batters in the 6th. HBP-by J.Miller (Choo). WP-Seddon. PB-Kottaras. T-3:35. A-14,500 (43,429). East Division Washington Atlanta Philadelphia New York Miami Central Division Cincinnati St. Louis Pittsburgh Milwaukee Chicago Houston West Division W 79 74 62 61 59 L 51 57 69 70 72 Pct .608 .565 .473 .466 .450 GB — 5½ 17½ 18½ 20½ W 80 71 70 62 50 40 L 52 60 60 68 80 91 Pct GB .606 — .542 8½ .538 9 .477 17 .385 29 .305 39½ W L Pct GB San Francisco 74 57 .565 — Los Angeles 70 61 .534 4 Arizona 64 67 .489 10 San Diego 61 71 .462 13½ Colorado 53 76 .411 20 Thursday’s Games Philadelphia 3, N.Y. Mets 2 Chicago Cubs 12, Milwaukee 11 Washington 8, St. Louis 1 San Francisco 8, Houston 4 Arizona at L.A. Dodgers, (n) Today’s Games San Francisco (Bumgarner 14-8) at Chicago Cubs (Volstad 1-9), 1:20 p.m. St. Louis (Wainwright 13-10) at Washington (G.Gonzalez 16-7), 6:05 p.m. N.Y. Mets (Dickey 16-4) at Miami (Eovaldi 4-9), 6:10 p.m. Philadelphia (Halladay 8-7) at Atlanta (Minor 7-10), 6:35 p.m. Cincinnati (Leake 6-8) at Houston (Abad 0-1), 7:05 p.m. Pittsburgh (Karstens 5-3) at Milwaukee (M.Rogers 2-1), 7:10 p.m. San Diego (Richard 11-12) at Colorado (White 2-6), 7:40 p.m. Arizona (Cahill 9-11) at L.A. Dodgers (Harang 9-8), 9:10 p.m. Saturday’s Games San Francisco at Chicago Cubs, 12:05 p.m. Philadelphia at Atlanta, 3:05 p.m. St. Louis at Washington, 3:05 p.m. Cincinnati at Houston, 6:05 p.m. N.Y. Mets at Miami, 6:10 p.m. Pittsburgh at Milwaukee, 6:10 p.m. San Diego at Colorado, 7:10 p.m. Arizona at L.A. Dodgers, 8:10 p.m. Mariners 5, Twins 4 MINNEAPOLIS — Blake Beavan gave up two runs in seven innings, and Trayvon Robinson drove in two runs to lift Seattle over Minnesota. Beavan (9-8) scattered five hits, walked two and struck out one. Kyle Seager also drove in two runs for the Mariners, who have won 11 of their last 15 games. Seattle h bi 0 0 2 0 1 2 1 1 0 0 0 0 0 0 1 2 0 0 1 0 Minnesota ab r h bi Revere cf 4 1 00 ACasill 2b 3 2 21 Mauer c 3 0 10 Wlngh lf 4 1 12 Mornea 1b 3 0 01 Doumit dh 3 0 10 Mstrnn pr 0 0 00 Parmel rf 4 0 10 Plouffe 3b 3 0 00 JCarrll pr 0 0 00 Flormn ss 4 0 00 Totals 31 5 6 5 Totals 31 4 6 4 Seattle 100 004 000—5 Minnesota 200 000 020—4 E-Willingham (4). DP-Seattle 1, Minnesota 1. LOBSeattle 4, Minnesota 6. 2B-Gutierrez (3), Doumit (27). HR-Willingham (33). SB-Gutierrez (3), Revere (31), A.Casilla (16), Mastroianni 2 (17). SF-Seager, J.Montero, Morneau. IP H R ER BB SO Seattle Beavan W,9-8 7 5 2 2 2 1 2⁄3 0 1 1 1 0 Furbush H,4 1⁄3 1 1 1 0 0 Pryor H,2 Wilhelmsen S,21-24 1 0 0 0 2 1 Minnesota 4 3 2 3 Duensing L,3-10 51⁄3 4 2⁄3 1 1 0 1 2 Fien Waldrop 1 0 0 0 0 0 Burton 1 1 0 0 0 1 Perkins 1 0 0 0 0 1 T-3:11. A-32,578 (39,500). Ackley 2b Gutirrz cf Seager 3b JMontr c Smoak 1b Olivo dh Jaso ph-dh TRonsn lf Thams rf Ryan ss ab r 31 42 30 31 30 20 10 40 40 41 National League Cubs 12, Brewers 11 CHICAGO — Jonathan Lucroy hit a grand slam and drove in seven runs for Milwaukee, but Alfonso Soriano’s RBI single capped a three-run comeback in the ninth inning that lifted Chicago over the Brewers. In a seesaw game featuring a combined 15 extra-base hits, the Cubs led 3-0, trailed 9-3 and were still down 11-9 going into the ninth. Milwaukee ab r 50 54 43 41 51 41 00 10 00 00 41 50 20 10 00 00 20 h bi 1 0 5 0 3 2 1 0 3 7 1 0 0 0 0 0 0 0 0 0 3 2 0 0 0 0 0 0 0 0 0 0 0 0 Chicago ab r h bi DeJess lf-rf 4 3 32 Valuen 3b 5 2 31 SCastro ss 5 3 22 Rizzo 1b 6 0 22 LaHair rf 3 0 00 ASorin ph-lf 1 1 11 Clevngr c 2 0 00 WCastll ph-c 1 0 0 1 BJcksn cf 3 2 22 Barney 2b 5 0 00 Raley p 2 0 10 Bowden p 0 0 00 Vitters ph 0 1 00 BParkr p 0 0 00 Russell p 0 0 00 T.Wood ph 1 0 00 Camp p 0 0 00 Marml p 0 0 00 Mather ph 1 0 00 Totals 42111711 Totals 39 12 1411 Milwaukee 005 202 110—11 Chicago 210 006 003—12 One out when winning run scored. DP-Chicago 1. LOB-Milwaukee 7, Chicago 13. 2B-R. Weeks 2 (28), Braun (27), Ransom (11), DeJesus 2 (25), Valbuena (16), Rizzo 2 (8), B.Jackson 2 (5). 3B-S. Castro (10). HR-Braun (36), Lucroy (9), Ransom (10). SB-Braun (21), C.Gomez (28). CS-Aoki (7). IP H R ER BB SO Milwaukee Marcum 4 5 3 3 4 4 Li.Hernandez 11⁄3 3 5 5 2 1 1⁄3 1 1 1 2 0 M.Parra BS,2-2 Veras 11⁄3 1 0 0 0 0 Henderson H,5 1 0 0 0 2 1 Fr.Rodriguez L,2-7 BS,7-10 1⁄3 4 3 3 1 1 Chicago Raley 4 10 7 7 2 2 Bowden 2 3 2 2 1 0 1⁄3 B.Parker 1 1 1 0 0 2⁄3 Russell 0 0 0 0 0 Camp 1 3 1 1 0 0 Marmol W,2-2 1 0 0 0 0 1 WP-Bowden. PB-Lucroy. T-4:09. A-28,859 (41,009). Aoki rf RWeks 2b Braun lf Hart 1b Lucroy c CGomz cf Veras p ArRmr ph Hndrsn p FrRdrg p Ransm 3b Bianchi ss Marcm p Ishikaw ph LHrndz p MParr p Morgan cf Phillies 3, Mets 2 PHILADELPHIA — Phillies standout Jimmy Rollins was benched after a pair of base-running blunders in Philadelphia’s win over New York.. New York ab r 41 40 40 40 40 41 30 00 00 00 00 40 20 10 h bi 1 1 1 0 2 0 0 0 0 0 2 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 Philadelphia ab r h bi Rollins ss 4 1 10 L.Nix rf 1 0 00 Frndsn 3b 5 1 41 Utley 2b 4 0 00 Howard 1b 3 0 01 Mayrry cf 4 1 30 Wggntn lf 3 0 21 Pierre lf 0 0 00 Mrtnz rf-ss 4 0 00 Lerud c 4 0 10 Papeln p 0 0 00 Kndrck p 2 0 10 Valdes p 0 0 00 DBrwn ph 0 0 00 Kratz ph-c 1 0 00 Totals 34 2 7 2 Totals 35 3 12 3 New York 110 000 000—2 Philadelphia 001 110 00x—3 E-Niese (2). LOB-New York 6, Philadelphia 12. 2B-D. Wright (37), Rollins (30), Frandsen (3), Mayberry (18), Wigginton (9), K.Kendrick (2). HR-Baxter (2), Hairston (15). SB-Rollins (24). S-K.Kendrick. SF-Howard. IP H R ER BB SO New York Niese L,10-8 6 9 3 3 1 4 R.Ramirez 1 2 0 0 0 2 R.Carson 1 1 0 0 0 1 Philadelphia 2 2 0 6 K.Kendrick W,8-9 72⁄3 7 1⁄3 0 0 0 0 0 Valdes H,2 Papelbon S,30-33 1 0 0 0 0 1 HBP-by Niese (Utley), by Papelbon (Ju.Turner). PB-Thole. T-2:25. A-43,141 (43,651). Baxter rf DnMrp 2b DWrght 3b I.Davis 1b Duda lf Hairstn cf RCeden ss RRmrz p RCarsn p JuTrnr ph AnTrrs pr Thole c Niese p Tejada ss Nationals 8, Cardinals 1 WASHINGTON — Bryce Harper hit his third home run in two games, Jason Werth homered for the first time since May, and Edwin Jackson struck out 10 as Washington defeated St. Louis. The Nationals opened an 11-game homestand with an overwhelming performance against a wild-card contender that failed to score an earned run for the third straight game. St. Louis Washington ab r h bi ab r h bi Jay cf 4 0 0 0 Werth rf 4 3 22 Beltran rf 4 0 1 0 Harper cf 5 1 23 T.Cruz 1b 0 0 0 0 Zmrmn 3b 4 0 10 Hollidy lf 4 0 1 0 Morse lf 4 1 30 Mujica p 0 0 0 0 LaRoch 1b 3 1 01 Craig 1b-rf 4 0 0 0 Dsmnd ss 4 0 20 YMolin c 2 0 0 0 Espinos 2b 3 1 10 Lynn p 0 0 0 0 Flores c 4 1 22 SRonsn lf 1 0 0 0 EJcksn p 4 0 00 Freese 3b 3 0 0 0 McGnzl p 0 0 00 Schmkr 2b 40 1 0 Furcal ss 20 0 0 Descals ss 10 0 0 JGarci p 20 0 0 Salas p 00 0 0 BryAnd c 11 1 0 Totals 32 1 4 0 Totals 35 8 13 8 St. Louis 000 000 010—1 Washington 201 012 20x—8 E-Zimmerman (11). DP-St. Louis 1. LOB-St. Louis 6, Washington 9. 2B-Bry.Anderson (1). HR-Werth (4), Harper (15). SF-LaRoche. IP H R ER BB SO St. Louis 6 6 2 2 J.Garcia L,3-6 51⁄3 9 2⁄3 1 0 0 0 1 Salas Lynn 1 2 2 2 2 2 Mujica 1 1 0 0 1 0 Washington E.Jackson W,8-9 8 4 1 0 2 10 Mic.Gonzalez 1 0 0 0 0 0 WP-Salas. T-2:59. A-23,269 (41,487). Giants 8, Astros 4 HOUSTON — Hunter Pence hit a go-ahead tworun single in the seventh inning, and San Francisco rallied past Houston. (19) to shallow center field that put San Francisco on top 6-4. San Francisco ab r 51 51 41 41 41 50 51 20 21 21 10 00 10 h bi 1 1 2 1 1 2 1 0 2 2 4 1 2 0 0 0 1 1 1 0 0 0 0 0 0 0 Houston ab r h bi Pagan cf Altuve 2b 4 1 10 Scutaro 2b Greene ss 4 1 11 Sandovl 3b Wallac 1b 4 0 00 Posey c JCastro c 3 1 11 Pence rf Pareds rf 4 0 21 Belt 1b FMrtnz lf 3 0 00 GBlanc lf Wrght p 0 0 00 BCrwfr ss R.Cruz p 0 0 00 Arias ph-ss MGnzlz ph 1 0 00 Vglsng p Dmngz 3b 4 1 30 HSnchz ph Bogsvc cf 2 0 11 Mota p Lyles p 1 0 00 FPegur ph BBarns ph 1 0 00 Maxwll lf 2 0 00 Totals 40 815 8 Totals 33 4 9 4 San Francisco 000 030 311—8 Houston 211 000 000—4 DP-San Francisco 1. LOB-San Francisco 8, Houston 5. 2B-Pagan (30), Scutaro (24), Belt 2 (23), J.Castro (12), Dominguez (1). 3B-Dominguez (1). HR-Arias (4), Greene (8). SB-Bogusevic (13). SF-Sandoval, Bogusevic. IP H R ER BB SO San Francisco Vogelsong W,12-7 6 7 4 4 1 7 Mota H,4 1 0 0 0 1 1 0 0 0 0 Ja.Lopez H,14 12⁄3 2 1⁄3 Romo S,8-9 0 0 0 0 1 Houston Lyles 5 7 3 3 0 2 2⁄3 X.Cedeno H,2 1 0 0 0 2 Fe.Rodriguez L,1-9 1 3 3 3 1 1 1⁄3 1 0 0 0 1 W.Wright R.Cruz 2 3 2 2 1 0 WP-Fe.Rodriguez 2. T-3:19. A-12,835 (40,981). U.S. Open Thursday At The USTA Billie Jean King National Tennis Center New York Purse: $25.5 million (Grand Slam) Surface: Hard-Outdoor Singles MenHen. Doubles Men First Round Aisam-ul-Haq Qureshi, Pakistan, and Jean-Julien Rojer (9), Netherlands, def. Mikhail Kukushkin, Kazakhstan, and Yen-hsun Lu, Taiwan, walkover. Jamie Delgado and Ken Skupski, Britain, def. Johan Brunstrom, Sweden, and James Cerretani, United States, 4-6, 6-4, 6-3. Carlos Berlocq and Leonardo Mayer, Argentina, def. Lukas Dlouhy, Czech Republic, and Alexandr Dolgopolov, Ukraine, 6-3, 7-6 (4). Robert Lindstedt, Sweden, and Horia Tecau (3), Romania, def. Daniele Bracciali, Italy, and Horacio Zeballos, Argentina, 7-6 (4), 6-4. Jesse Levine, United States, and Marinko Matosevic, Australia, def. Chase Buchanan and Bradley Klahn, United States, 6-2, 6-4. Ivan Dodig, Croatia, and Marcelo Melo (12), Brazil, def. Juan Sebastian Cabal and Robert Farah, Colombia, 6-3, 6-4. Jurgen Melzer, Austria, and Philipp Petzschner (10), Germany, def. Ashley Fisher and Jordan Kerr, Australia, 6-3, 6-4. Pablo Andujar and Guillermo GarciaLopez, Spain, def. Mark Knowles, Bahamas, and Xavier Malisse, Belgium, 1-6, 6-4, 6-3. Women First Round Renata Voracova and Klara Zakopalova, Czech Republic, def. Simona Halep, Romania, and Olga Savchuk, Ukraine, 6-3, 6-0. Natalie Grandin, South Africa, and Vladimira Uhlirova (14), Czech Republic, def. Chan Hao-ching and Chan Yung-jan, Taiwan, 7-6 (4), 6-1. Liezel Huber and Lisa Raymond (1), United States, def. Eleni Daniilidou, Greece, and Casey Dellacqua, Australia, 6-4, 6-7 (8), 6-4. Liga Dekmeijere, Latvia, and Mervana Jugic-Salkic, Bosnia-Herzegovina, def. Samantha Crawford and Alexandra Kiick, United States, 3-6, 7-6 (4), 6-4. Julia Goerges, Germany, and Kveta Peschke (11), Czech Republic, def. Kimiko Date-Krumm, Japan, and Aleksandra Wozniak, Canada, 6-1, 6-3. Eva Birnerova, Czech Republic, and Romina Oprandi, Switzerland, def. Katarina Srebotnik, Slovenia, and Zheng Jie (7), China, 6-4, 7-5. Sabine Lisicki, Germany, and Peng Shuai, China, def. Shahar Peer, Israel, and Laura Robson, Britain, 6-0, 6-3. Chuang Chia-jung, Taiwan, and Zhang Shuai, China, def. Kim Clijsters and Kirsten Flipkens, Belgium, 6-3, 6-4. Hsieh Su-wei, Taiwan, and Anabel Medina Garrigues (16), Spain, def. Michaella Krajicek, Netherlands, and Pauline Parmentier, France, 6-4, 6-0. Maria Kirilenko and Nadia Petrova (4), Russia, def. Anne Keothavong, Britain, and Anna Tatishvili, Georgia, 6-0, 6-3. NFL Preseason AMERICAN CONFERENCE East W L T Pct PF New England 1 3 0 .250 55 Buffalo 0 4 0 .000 59 N.Y. Jets 0 4 0 .000 31 Miami 0 4 0 .000 43 South W L T Pct PF Houston 3 1 0 .750 101 Jacksonville 3 1 0 .750 100 Tennessee 3 1 0 .750 89 Indianapolis 2 2 0 .500 99 North W L T Pct PF Pittsburgh 3 1 0 .750 104 Baltimore 2 2 0 .500 108 Cincinnati 2 2 0 .500 70 Cleveland 2 2 0 .500 84 West W L T Pct PF San Diego 3 0 0 1.000 61 Denver 1 2 0 .333 65 Oakland 1 2 0 .333 58 Kansas City 1 3 0 .250 61 NATIONAL CONFERENCE East W L T Pct PF Philadelphia 4 0 0 1.000 106 Dallas 3 1 0 .750 73 Washington 3 1 0 .750 98 N.Y. Giants 2 2 0 .500 80 South W L T Pct PF Carolina 2 2 0 .500 69 Tampa Bay 2 2 0 .500 60 New Orleans 2 3 0 .400 87 Atlanta 1 3 0 .250 73 North W L T Pct PF Chicago 3 1 0 .750 84 Detroit 2 2 0 .500 102 Green Bay 2 2 0 .500 74 Minnesota 1 3 0 .250 76 West W L T Pct PF Seattle 3 0 0 1.000 101 San Francisco 2 1 0 .667 55 St. Louis 2 2 0 .500 84 Arizona 1 3 0 .250 85 Wednesday’s Games Washington 30, Tampa Bay 3 N.Y. Giants 6, New England 3 Dallas 30, Miami 13 Thursday’s Games Diego at San Francisco, (n) Oakland at Seattle, (n) Denver at Arizona, (n) PA 69 119 88 96 PA 80 117 67 75 PA 71 92 72 82 PA 43 62 54 116 PA 60 60 59 58 PA 72 95 81 85 PA 99 94 72 71 PA 41 50 92 103 College EAST Delaware 41, West Chester 21 Fordham 55, Lock Haven 0 Kutztown 58, St. Anselm 6 New Hampshire 38, Holy Cross 17 UConn 37, UMass 0 SOUTH Carson-Newman 56, Glenville St. 46 McNeese St. 27, Middle Tennessee 21 Morehead St. 55, S. Virginia 0 SC State 33, Georgia St. 6 Shorter 31, Campbell 20 South Carolina 17, Vanderbilt 13 Tennessee Tech 41, Hampton 31 W. Carolina 42, Mars Hill 14 Walsh 40, Kentucky Wesleyan 10 MIDWEST Ashland 37, Indianapolis 14 Ball St. 37, E. Michigan 26 California (Pa.) 30, Hillsdale 22 Cent. Michigan 38, SE Missouri 27 Drake 28, Grand View 8 E. Illinois 49, S. Illinois 28 Ferris St. 35, St. Francis (Ill.) 24 Findlay 45, N. Michigan 10 Gannon 36, Lake Erie 33 Kent St. 41, Towson 21 Minn. Duluth 45, SW Minnesota St. 20 Minn. St.-Mankato 44, Minot St. 10 North Dakota 66, South Dakota Mines 0 Notre Dame Coll. 59, Mercyhurst 42 Sioux Falls 32, St. Cloud St. 19 St. Joseph’s (Ind.) 36, Valparaiso 34 Trine 24, Manchester 14 UCF 56, Akron 14 W. Illinois 23, Butler 15 Winona St. 58, Minn.-Crookston 6 SOUTHWEST Missouri Southern 25, Cent. Oklahoma 20 UCLA 49, Rice 24 FAR WEST New Mexico St. 49, Sacramento St. 19 Utah 41, N. Colorado 0 Utah St. 34, S. Utah 3 High School Sophomores Thursday at Olathe Free State 18, Olathe Northwest 14 FSHS scoring: Nyle Anderson 12 run, 30 run; Carson Bowen 50 pass from Anderson. FSHS highlight: Joe Lane fumble recovery. FSHS record: 1-0. Next for FSHS: Thursday at SM West. High School Scores BV West 28, Washburn Rural 7 Crest 45, Pleasanton 0 Derby 27, Salina South 14 Leavenworth 41, SM North 16 Ottawa 40, Prairie View 20 Satanta 28, Goodwell, Okla. 0 South East 34, Columbus 22 Wichita Carroll 47, Wichita Heights 16 MLS Today’s Games Colorado at Portland, 9:30 p.m. Saturday’s Games Philadelphia at New England, 6:30 p.m. Montreal at Columbus, 6:30 p.m. Toronto FC at Sporting Kansas City, 7:30 p.m. D.C. United at Real Salt Lake, 8 p.m. Vancouver at Los Angeles, 9 p.m. Sunday’s Games Seattle FC at FC Dallas, 6 p.m. Houston at Chicago, 6 p.m. Chivas USA at San Jose, 8 p.m. High School Thursday at Leavenworth LAWRENCE 3, GARDNER-EDGERTON 1 LAWRENCE 3, SM NORTH 1 LAWRENCE 3, LEAVENWORTH 1 LHS results Singles Whitney Simons 1-1, Kendal Pritchard 1-1, Katie Gaches 1-1. Doubles Abby Gillam-Zoe Schneider 2-0, Lilly Abromeit-Brooke Braman 2-0, Carly Davis-Haley Ryan 2-0. Thursday at Junction City Team results: 1. Free State. 2. Salina South. 3. Manhattan. 4. Junction City. Free State results Singles Alexis Czapinski def. Kristen Fraley, M, 6-0; def. Kenedy Obrecht, JC, 6-0; def. Katie Siemsen, SS, 6-0. Megan McReynolds def. Cathy Lei, M, 7-5; def. Alex Moore (JC) 6-0; lost to Amber Rayl, SS, 6-1. Doubles Taylor Hawkins-Caitlin Dodd def. Roberson-Stigge, Man, 7-5; def. ShaneBogen, JC, 6-0; def. Darnell-Irwin, SS, 6-3. Alyssa Raye-Rachel Walters def. Wichmann-Colburn, Man, 6-1; def. Ford-Hamilton (JC) 6-1; def. MainNowak (SS) 7-5. WNBA Thursday’s Games Atlanta 82, Washington 59 Indiana 76, New York 63 Connecticut 84, San Antonio 73 Tulsa 99, Los Angeles 85 Phoenix at Seattle, (n) Today’s Game Tulsa at Minnesota, 7 p.m. High School Thursday at Holton Seabury def. Jackson Heights, 25-17, 25-16; Seabury def. Onaga, 25-22, 25-22. Seabury highlights: vs. Jackson Heights: Ellen Almanza 6 aces, Courtney Hoag 8 kills, Alexa Gaumer 10 assists, Taylor Hodge 12 digs. vs. Onaga: Almanza 5 aces, Hoag 7 kills, Sarah McDermott 9 assists, Hodge 10 digs. Seabury record: 2-0. Freshmen Thursday at Overland Park St. Thomas Aquinas def. Lawrence High 25-14, 25-11. Roeland Park Miege def. LHS, 26-24, 25-15. HIGH SCHOOLS L AWRENCE J OURNAL -W ORLD FSHS STARTERS CONTINUED FROM PAGE 1B. LHS OFFENSE LT — Fred Wyatt. LG — Riley Buller. C — Reid Buckingham. RG — Cody Stanclift. RT — Derick Davis QB — Kyle McFarland. RB — TJ Cobbs. TE — Zach Bickling. WR — Tye Hughes. WR — Chris Heller. WR — Sam Hearnen DEFENSE DE — Cody Stanclift. NT — Riley Buller. DE — Fred Wyatt. OLB — Stan Skwarlo. LB — Kieth Loneker. MLB — Corban Schmidt. LB — Blake Winslow. OLB — Tye Hughes CB — Kyle McFarland. CB — Demarko Bobo. FS — Joe Dineen “We think we can work on our own techniques,” he said, adding that while FSHS might have lessexperienced linemen, they’re ready to learn through experience. STARTERS CONTINUED FROM PAGE 1B,” OFFENSE LT — Alex Jones. LG — Jacob Warren. C — Kyle Wittman. RG — Kharon Brown. RT — Chris Gillespie QB — Brad Strauss. RB — Tyrone Jenkins. WR — Erick Mayo. WR — Drake Hofer. WR — Will Thompson. WR — Josh Seybert. DEFENSE NG — Cole Cummins. DT — Kharon Brown. DT — Josh Seybert. LB — Jordan Brown. LB — Drew Green. LB — Asaph Jewsome. LB — Kieran Severa CB — Erick Mayo. CB — Will Thompson. SS — Tucker Sutter. FS — Brad Strauss. Lawrence, Brown added, will have to play disciplined and tenacious football, because he knows SMW hasn’t forgotten its 35-12 loss to the Lions last year..” “They’re gonna have a lot of energy coming in,” the lineman said. Most of Lawrence’s Oline week.” Friday, August 31, 2012 | 5B AREA ROUNDUP Soccer De Soto CONTINUED FROM PAGE 1B direction.” Ex-LHS coach Freeman to be honored J-W Staff Reports Former Lawrence High football coaching legend Bill Freeman will receive the Kansas Sports Hall of Fame’s “Pride of Kansas Award” prior to the opening kickoff of the Southern Coffey County High football game on Sept. 7 in Freeman’s hometown of LeRoy. Freeman compiled a 36year career record of 24281-3, with head coaching stints at Baxter Springs, Parker Rural, Nickerson, LeRoy, Osawatomie and Lawrence High. De Soto sophomore Brittani Jenson won the Cat Classic Golf Tournament on Wednesday at Alvamar Golf Course, leading her De Soto/ Mill Valley girls golf team to a first-place finish in the process. Jenson shot a 79 in the par-72 event. Mill Valley senior Hadley Tharp placed eighth with a 95. De Soto/ Mill Valley shot a combined 378 to place ahead of Topeka Hayden (407) and Silver Lake (412). O The Wildcats begin their bid for a fifth-straight football playoff appearance at 7 tonight at Holton, selected by many as the top team in 4A. O De Soto’s volleyball team opened its season Saturday in the Frontier League Invitational. The Wildcats went 3-0 in pool play, defeating Baldwin (20-25, 25-23, 26-24), Ottawa (25-21, 25-19) and Basehor-Linwood (25-16, 25-17). De Soto finished fourth overall after falling in the winners’ bracket to Paola (24-26, 20-25) and Baldwin (17-25, 25-18, 21-25). On Tuesday, the Wildcats (3-5) lost to Wellsville, Osawatomie and Olathe Northwest. Baldwin Baldwin’s volleyball team entered Saturday’s Frontier League Invitational seeded eight with few expectations. That changed by day’s end when the Bulldogs pulled off a third-place finish, going 2-1 in pool play against De Soto, Basehor-Linwood and Ottawa. After a loss to Louisburg, Baldwin recovered to avenge its prior defeat to De Soto, winning 2-1 (25-17, 18-25, 25-21) behind the leadership of players like Katie Pattrick and Morgan Lober. The Bulldogs were 4-5 after Tuesday’s 1-2 showing in Bonner Springs against the Braves, BasehorLinwood and Eudora. O Baldwin football begins at 7 tonight at home against Wellsville. The Bulldogs went 6-4 in 2011 and return four all-league players to a team that finished as 4A district runner-up. Included in that bunch is honorable mention all-state quarterback Chad Berg. Senior tight end Dayton Valentine and senior center Austin Chavez also return after earning postseason honors last year. O son on Aug. 24 against fellow 2011 state qualifier Tonganoxie. Two quick second-half goals by juniors Russell Cloon and Alec Petry (assisted by junior Nick Joslyn) gave the Bulldogs a 2-1 victory. Mill Valley Mill Valley girls tennis swept Leavenworth Immaculata, 9-0, in a dual meet Monday to begin its season. O Kaw Valley League favorite Mill Valley begins its football season on the road at 6A Manhattan at 7 tonight. Tonganoxie. O Tonganoxie begins its campaign under first-year head coach Al Troyer, who hopes to improve the Chieftains’ 1-8 record from 2011. Junior running backs Shane Levy (431 yards) and Cole Holloway return (105 yards against Bishop Ward) while junior quarterback/defensive back Tyler Ford looks to lead Tonganoxie’s offense. The Chieftains kick off the season at 7 tonight at Lansing. Eudora Eudora fall sports began Saturday at the Frontier League Invitational volleyball tournament. The Cardinals went 1-4, with a victory against Basehor-Linwood. O On Tuesday, boys soccer began the year with a 4-0 loss to Ottawa. O Eudora football kicks off at 7 tonight at K.C. Piper. In 2011, the Cardinals went 12-2 (6-0 in the Frontier League) en route to a 4A state title game appearance. Coach Gregg Webb brings back senior running back/linebacker Gabe Cleveland (1,100 rushing yards and 12 touchdowns; 97 tackles). Junior Andrew Ballock will move from wide receiver to quarterback. Tanner Tornedon (5-11, 160 pounds) transferred from Lawrence and will look to be a factor in the receiving corps. Boys soccer opened the sea- — STEPHEN MONTEMAYOR BRIEFLY Lawrence tennis goes 3-0 in quad Free State tennis cruises at J.C. LEAVENWORTH — Kendal Pritchard, Whitney Simons and Katie Gaches each went 1-1 in singles matches, and Lawrence High’s girls tennis team defeated GardnerEd. JUNCTION C ITY — Free State High dropped just one match on the day and cruised to victory in the Junction City High tennis quadrangular on Thursday. Alexis Czapinski at No. 1 singles, Taylor HawkinsCaitlin Dodd at No. 1 doubles and Alyssa RayeRachel at Lawrence High. O Results on page 4B Seabury volleyball takes 2 victories HOLTON — Seabury Academy opened its volleyball season Thursday with victories over Jackson Heights and Onaga. The Seahawks defeated Jackson Heights, 25-17, 25-16, and Onaga, 25-22, 25-22. Offer expires 9/4/12. Laird Noller Guarantee Correction “We’ll Sell You a New Vehicle for Less Or We’ll Give You $10,000” The name of a Lawrence High freshman football player was incorrect in Thursday’s Journal-World. Price Morgan was the player who caught a 21yard touchdown pass in the Lions’ freshman game against Shawnee Mission West. See Dealer Management for details. MITSUBISHI 23rd & Alabama • 843-3500 • 1-800-281-1105 SAVE NOW! UP TO UP TO 40 MPG 35 MPG HWY HWY New 2013 Hyundai Sonata New 2013 Hyundai Elantra PW, PL, Cruise, Auto, Bluetooth, Hands Free Phone System. $ 139 119 $ $ Lease Per Month Stk. No. 12H1112, 36 month HMF lease, 12,000 mile per year lease. $2,999 due at signing plus tax, license and $249 administrative fee. No deposit required. After all manufacturer rebates. With approved credit. MSRP $16,515, Residual 56%. Total Payments 4165. 28th Terr Iowa St 2829 Iowa • 838-2327 • 1-800-281-1105 HYUNDAI Lease Per Month Stk. No. 13161, 36 month HMF lease, 12,000 mile per year lease. $3,599 due at signing plus tax, license and $249 administrative fee. No deposit required. After all manufacturer rebates. With approved credit. MSRP $18,650, Residual 67%. Total Payments 5004. Laird Noller Guarantee 27th St Laird Noller 199 New 2012 Hyundai Accent Lease Per Month Stk. No. 13H209, 36 month HMF lease, 12,000 mile per year lease. $3,599 due at signing plus tax, license and $249 administrative fee. No deposit required. After all manufacturer rebates. With approved credit. MSRP $18,650, Residual 67%. Total Payments 5004. Hands Free Blue Tooth Phone System, Bluelink Navigation System and More!!! “We’ll Sell You a New Vehicle for Less OR We’ll Give You $10,000” See dealer management for details. 6B | Friday, August 31, 2012 LOCAL . L AWRENCE J OURNAL -W ORLD Harrell, Packers hammer Chiefs, 24-3 GREEN BAY, WIS. (AP) — han- Tom Lynn/AP Photo KANSAS CITY’S EDGAR JONES (71) PRESSURES GREEN BAY QUARTERBACK GRAHAM HARRELL during the second half. The Packers routed the Chiefs, 24-3, on Thursday in Green Bay, Wis. dling line showed improvement from last week. “It was a great job up front,” Hillis said. “That’s where we found the holes. I think as far as coming out and trying to work the running game and try to get that accomplished, I believe we did that.” SUMMARY-121). KU thrower Finley transfers to Wyoming J-W Staff Reports Former Kansas University track distance thrower Mason Finley has transferred to the University of Wyoming for his senior season, the Wyoming athletic department has announced. Finley, an eight-time All-American who is a four-time NCAA runnerup in the shot and discus and four-time Big 12 champion, will compete indoors in 2013 and red-shirt the outdoor season, then have one full season in 2014. The Salida, Colo., native’s dad, Jared, was an STK. NO. 12L760 KICKOFF FOR A CAUSE All-America discus thrower at Wyoming in 1979. “We are obviously thrilled to have Mason join our team, not only for his tremendous talent level but also because he is a great young man with the highest moral character,” Wyoming associate head track and field coach Paul Barrett said. “He has incredible potential as one of the best young talents in the country, so we will work together to help him achieve his goals of winning NCAA titles and making world and Olympic teams in the future.” Mike Yoder/Journal-World Photo VIN3LNDL2L32CR824365 STK. NO. 12L301 KANSAS UNIVERSITY FOOTBALL COACH CHARLIE WEIS AND HIS WIFE, MAURA, make their way to KU’s Rock Chalk BBQ event Thursday at Memorial Stadium to kick off the football season. Proceeds will go to the Weises’ foundation, Hannah & Friends, and also an organization in Kansas City, Kan., called Juniper Gardens, which provides financial support for low-income families with children with different abilities. In the background from left are Max Falkenstien and KU athletic director Sheahon Zenger. VIN3LNHL2GC3CR809854 $349 a month. 36 months, 10,500 miles per year lease. $750 due at signing plus tax, title, license and $249 administration fee. No security deposit required. After all manufacturer rebates. With approved credit. Not all buyers will qualify. MSRP $36,720, residual 52%, total of payments $13,968. Offer ends 8/31/12. See dealer for details. Laird Noller Automotive • 23rd & Alabama • 843-3500 • SAVE NOW! Automatic Stk. No. 12M660 $ 14,950 OR $ 199 MO* *72 mo @ 2.9% with approved credit. $4500 Down. $15.40 per $1000 financed. Not All Buyers Will Qualify. Offer Ends 9/4/12. New 2012 Mazda5 Sport Stk. No. 11255 $ 17,950 OR $ 259 $ MO* *72 mo @ 2.9% with approved credit. $4500 Down. $15.40 per $1000 financed. Not All Buyers Will Qualify. Offer Ends 9/4/12. 19,599 27,950 $ **2012 EPA-estimated mpg. All offers are plus tax, title, license and $249 administrative fee and include Mazda owner loyalty rebate of $500 on Mazda6. Mazda CX-9 offer includes Mazda Owner Loyalty Rebate of $500 and Trade-in Bonus Cash of $500, Mazda3 offer includes Mazda Owner Loyalty Rebate of $500 and Trade-in Bonus Cash of $500. Offers are in lieu of 0% APR financing. Photos for illustration only. See dealer for details. Offers end 9/3/12. W 23th St Louisiana St 23rd & Alabama • 843-3500 • Naismith Dr LAIRD NOLLER MAZDA Alabama St Stk. No. 12M857 10 Stk. No. 12M1121 Stk. No. 12B1110 Laird Noller Guarantee “We’ll Sell You a New Vehicle for Less OR We’ll Give You $10,000” See dealer management for details. Friday, August 31, 2012 B PLACE YOUR AD ONLINE AT SUNFLOWERCLASSIFIEDS.COM OR CALL 785.832.2222 or 866.823.8220 Announcements CNA/CMA CLASSES CNA M-TH 8am-2:30pm. Sept. 6th- Oct. 4th CNA MWF 8am-3:30pm Sept. 10th-Oct. 10th, CNA TU&TH Sept. 18th-Nov.1 Sept 8&9 CNA Refresher Sept. 8&9 CMA Update Call now 785-331-5495 trinitycareerinstitute.com Lawrence Jewish Community Congregation is now enrolling for Religious School beginning September 9th. Non-members welcome. or call 841-7636. VFW Post 852 Baked Chicken Dinner All you can Fri. Aug. 31, 6PM. $7 Donation Public Welcome VFW 852, 138 Alabama Lawrence 785-843-2078 BALD EAGLE RENDEZVOUS 19th Century Fur Trade, Living history encampment at Lecompton’s scenic Bald Eagle Park SEPT. 20-22 9am. to 5pm. FREE admission Excellent Educational experience for children 785-887-6520 consthallakshs.org Visit museum and shops Need an apartment? Place your ad at ljworld.com or email classifieds@ljworld.com Featured Ads Athletic MINDED? Factory Distributor needs immediately high energy people to fill vacancies created due to expansion and promotions. • Full Time only • Promotions possible within 90 days $400-$600/wk to start For immediate interview call 785-856-0355 Must be willing to start immediately 2BR in 4 plex, excellent Location at 1104 Tennessee. Near downtown & KU. CA, no pets, $490. 785-842-4242 PUT YOUR EMPLOYMENT AD IN TODAY!! Go to ljworld.com or call 785-832-1000. UP TO FOUR PACKAGES TO CHOOSE FROM! Days in print vary with package chosen. Found Pet/Animal Adult Care Provided FOUND German Shepherd, on Wellman 37th. Street. Has collar but no tags. Very sweet and beautiful dog. Call 785-331-5623 Found Kitten, small, orange & white. South of Douglas County Fairgrounds. 785-749-0248 Lost Item Lost in NW Lawrence, 6th St., about Aug 12. Bags contained Knipex and Snap-on tools. Both bags are 12” by 7” green camouflage, made by Klein. (785) 979-2480. Lost Pet/Animal Lost Cat, black w/ white chest, socks, and diamond on forehead. 8/28 near Providence Rd. and Princeton Blvd. Blue collar w/ yellow ID tag. (785) 979-3371 Reward. Auction Calendar ESTATE AUCTION Sun., Sept. 9th, 10:00 A.M. 4209 Wimbledon, Lawrence Guns, Furn. & Collectibles, Vintage Toys & More!! Seller: Grissett Trust Auctioneers: Mark Elston & Wayne Wischropp Home (785-594-0505) Cell ( 785-218-7851). net/elston REAL ESTATE AUCTION Fri., Sept. 14, 12:30 PM Brush Creek Community Center, 3801 Emanual Cleaver Blvd. KC, MO. 8 Single Family Homes! LIVE or ONLINE BIDDING: Auction Sat., Sept. 8, 10:00 A.M. 203 Perry St., N. Lawrence Shop Equipment, Toolboxes, Tools, Misc. Seller: Fred Inyard Paxton Auction Service Chris Paxton & Doug Riat 785-331-3131 or 785-979-6758 PUBLIC AUCTION 640 S. 138th St., BONNER SPRINGS, KS Sat., Sept. 1, 10:00 a.m. CARS, TRACTORS, TRAILERS, HOUSEHOLD, JEWELRY, COINS, GUNS Owner: BUD SCHUBERT MOORE AUCTION SERVICE, INC.. net/moore (913) 927-4708, mobile Need to Sell a Car? Place your ad at ljworld.com or email classifieds@ljworld.com Loving Caregiver Are you in need of a caregiver to maintain your quality of life? 20yrs. exp. Prof. refs. Call Yvonne 785-393-3066 Child Care Provided Stepping Stones is excited to offer a new PT preschool program. morning sessions avail. call 785-843-5919 for more info. Education Instructional Design Specialist: Neosho County Community College seeks an individual to assist faculty with instructional design, technology, and distributed learning to enhance student learning and effective teaching for new and existing courses and related activities. Master’s degree in Instructional Design Technology preferred; bachelor’s degree in an appropriate field and two years of teaching or relevant training experience required. Send resume, online application, 5 references, and unofficial transcripts to Instructional Design Specialist, NCCC, 800 W. 14th Street, Chanute, KS 66720. Full position description and online application at. This position is pending Board approval. NCCC is an AA/EEO employe Automotive General Experienced Horse Barn help needed immediately. 785-760-0526 Rockhaven Horse & Training Center. 2 Technicians Dale Willey Automotive seeks two service technicians. One for diagnostic & repair and one for light duty repairs including tires, brakes & fluid changes. Must have experience, a positive attitude, team skills, driver’s license, good driving record & pass drug screen. Contact Verlin Weber at Dale Wiley Automotive, 2840 Iowa St. Briggs Auto Body of Lawrence is now taking applications for Auto Body Techs. Good pay, benefits, etc. Some experience necessary. Please call 785-856-8889 or E-mail jhaller@briggsauto.com Childcare PT Nanny needed to care for our 3 yr. old triplets. Prior exp. Own transportion & refs. 785-760-4069 Customer Service Customer Service Representative/Sales The Eye Doctors is looking to fill a full-time customer service representative/sales position. Must have an outgoing personality and excellent work ethic. We are willing to train the right person. Please apply at The Eye Doctors 2600 Iowa St Lawrence, KS Go to & watch 9 min. video. Local training & bus. building assistance. Call Jerry Methner 913-244-7007 AccountingFinance Receptionist, Multiline phone & general office duties. Send resume to sharonholladay@west heffer.com fax to 843-4486 Thicker line? Bolder heading? Color background or Logo? Ask how to get these features in your ad TODAY!! Attention Caregivers!!! We are looking for reli a ble caregivers with hands on care experi ence as ei ther a caregiver, CNA or HHA. On-Call bonuses, train ing and various shifts available. To apply please call 785-856-0937! MA/LPN Derm experience preferred. Great benefits. M-F. Lawrence. Please fax resume to: 785-354-1255. Hotel-Restaurant Perry Unified School District #343 Perry-Lecompton High School is taking applications for immediate openings for Assistant Boys and Girls Basketball Coaches and possible Head Girls Basketball Coach. Qualified individuals should send resume and cover letter to: Theresa Beatty, Athletic Director Perry-Lecompton High School, PO Box 18, Perry, KS 66073 tbeatty@usd343.org Applications will be accepted until September 12, 2012. Office-Clerical Chiropractic Receptionist Strong computer & customer service skills. Part time. Email resume to info@backdoctorsue.com Front Desk Staff needed in busy office. Great Benefits. M-F. Lawrence. Please fax resume to: 785-354-1255 Sales-Marketing Leasing Consultants Greystar is looking for a Leasing Consultant to join our team in Lawrence, KS. Leasing Consultants should have a professional image and a strong background in sales and customer service. Weekends are required. We offer excellent pay and benefits. Send resume to afertitta@greystar.com. EOE/DFW. DriversTransportation DRIVER Wholesale greenhouse is looking for a seasonal driver -CDL -airbrakes to make local KC metro runs dropping floral loads. Some warehouse work between runs. Job is seasonal. Up to 40 hours per week during peak season, with no work during off peak. Job could lead to permanent backup driver position. Some heavy lifting is required (40-50 lbs). Ideal for a retired local driver. Call 913-301-3281 Ext. 229 for application. FOOD SERVICE WORKERS Numerous part time Food Service openings available with the KU Memorial Unions. Excellent employment for Students, flexible work schedules and hours from August to May. $7.80 per hour. Apartments Unfurnished Applications available online at or in the Human Resources Office, 3rd Floor Kansas Union, 1301 Jayhawk Blvd., Lawrence, KS 66045. EOE. 1-2 BRs, nice apts. 1 block to KU, off street pkg. $450 $500/mo. Great location. 913-963-5555, 913-681-6762. Media-Printing and Publishing is in need of Newspaper Delivery Route Drivers to deliver the Lawrence Journal-World to homes in Lawrence. We have two routes available. All available Routes are delivered 7 days per week, before 6AM. Valid driver’s license, proof of auto insurance, and a phone required. If you’d like to be considered, please email Anna Hayes at ahayes@ljworld.com and mention your name and phone number. Newspaper Route Carriers wanted to deliver the Dispatch in the city of Shawnee. For details please call Perry Lockwood at 785-832-7249 and leave a message. General 10 HARD WORKERS NEEDED NOW! Immediate Full Time Openings! 40 Hours a Week Guaranteed! Weekly Pay! 785-841-0755 Athletic MINDED? Business Opportunity Healthcare Maintenance Factory Distributor needs immediately high energy people to fill vacancies created due to expansion and promotions. • Full Time only • Promotions within 90 days possible $400-$600/wk to start For immediate interview call 785-856-0355 Must be willing to start immediately Ready for a new career? Are you a meticulous cleaner? Do you possess leadership skills? Be part of a team with 28 years of satisfied customers. Cleaning and/or 1 year of supervisory experience, good driving record. Mon-Fri 8 am-5pm, pay commensurate w. experience, benefits. Apply/resume 939 Iowa Street. 785-842-6264 1BR centrally located apt. Storage & parking. Water paid. 785-843-7815 1BR — 740-1/2 Massachusetts, above Wa Restaurant, 1 bath, CA. $650/mo. No pets. 785-841-5797 Cedarwood Apts AD ORDER & TRAFFIC COORDINATOR The World Company, a fast-paced, multi-media organization is looking for an Ad Order and Traffic Coordinator to manage all daily production deadlines while directing productivity of ad builders and quality assurance for mechanical/ technical aspects of ads. Coordinator will ensure daily ad deadlines are met by communicating with advertising sales staff and directing workflow; enter and track jobs; assign work to ad builders; enter ads from salespeople in the field; assist advertising sales reps and coordinators with special requests; general oversight of mechanical integrity of ads; accommodate late advertising needs and make certain there is a smooth production process; and provide employee performance input to manager. Ideal candidate will have minimum two years of traffic experience in a fast-paced publishing or printing operation; demonstrated leadership qualities; bachelor’s degree preferred; strong organizational with ability to meet deadlines, multitask and maintain sharp focus; strong written and verbal communication skills; demonstrated problem solving and conflict management experience; ability to achieve goals with little supervision; proficient in MS Office; and experience with basic design software including InDesign, Illustrator and Photoshop. To apply submit a cover letter and resume to: hrapplications@ljworld.com. We offer an excellent benefits package including medical insurance, 401k, paid time off, employee discounts and more! Background check, pre- employment drug screen and physical lift assessment required. EOE Place your Garage Sale Ad Today! Crew Supervisor 1BRs — 622 Schwarz. CA, laundry, off-street parking, No pets. $435/mo. Gas & water paid. 785-841-5797 For $39.95, your ad will run Wednesday- Saturdayin the Lawrence Journal -World as well as the Tonganoxie Mirror and Baldwin Signal weekly newspapers, and all of our online websites. You have up to 45 lines in print! Just go to: place/classified Sept 30, 2012 AND College Students GET 10% DISCOUNT —————————————————— CALL TODAY (Mon. - Fri.) 785-843-1116 785.843.4040 Flexible leases starting at $680 - water, trash, sewer incld. PARKWAY COMMONS 2BR: $695 * 3BR $795 W/D, Pool, Small Pet Ok! Fall KU Bus Route Avail.! 3601 Clinton Parkway 785-842-3280 2BR, 2412 Alabama, 2nd fl, roomy, CA, washer/dryer. plenty of parking, No pets. $470/mo. Call 785-841-5797 Apartments Unfurnished A GREAT PLACE TO LIVE LEASING 2BRs Units avail. NOW 2BR apts, 2BR Townhomes, 3BR Townhomes VILLA 26 APARTMENTS & Townhomes Quiet, great location on KU bus route, no pets, W/D in all units. 785-842-5227 lawrence.com 2BR — 1030 Ohio, for fall, CA, DW. $500 per month. No pets. Call 785-841-5797 2BR - 415 W. 17th, CA, wood floors, laundry, off street parking. No pets. $550/mo. Water paid. 785-841-5797 2BR — 1214 Tennessee, for fall, in 4-plex, 1 bath, CA, DW. No pets. $460/mo. Call 785-841-5797 2BR — 1315 E. 25th Terrace, for fall, 1 story, 1 bath, CA, DW, W/D hookup. No pets. $480/mo. 785-841-5797 2BR - 741 Michigan, for fall, 1.5 bath, 2 story, CA, DW, W/D hookup, full unfin. bsmt. 1 pet ok. $730/mo. Call 785-841-5797 1, 2 and 3 Bedrooms Near KU, Pool, Pet Friendly and Lease Special First Month Free Rollins PL& Briarstone- 2BR Mackenzie Place- 3 BR Bob Billings & Crestline Call or see website for current availability. $200 per person deposit No App Fee! 785-842-4200 Also, Check out our Luxury Apartments & Town Homes! 2BR, $420-$500/mo. Sm. pets ok, W/D hookup, on bus route AC Management 1815 W. 24th 785-842-4461 2,3, 5 BRs Garages - Pool - Fitness Center • Park West Gardens Apts • Park West Town Homes Call for more details 785.840.9467 Move In Specials Call for Details 625 Folks Rd • 785-832-8200 Houses Duplexes 2BR, near West turnpike, eat-in kitchen, oak cabinets, W/D, Avail. now. No pets. $585/mo. 785-423-1565 4 BR, 2 bath ranch, garage. Quiet cul-de-sac. Quick K-10 access. 2018 Barker Court. Walk to schools/KU $1400/mo. 913-626-7637 2BR, 1 bath, 1 car, 1409 E. 2-3BR, 1 bath- Clean, yard, 21st St. Terr., lawn care. new appliances, $735/mo. New vinyl, $650. No pets. +deposit. 785-841-1284 No smoking. 913-219-3863 2BR, 1 bath, country home, 2 porches, 1 deck. SE of 2BR, in a 4-plex. New carLawrence. Quiet. 1 Small pet, vinyl, cabinets, counpet ok. Call 785-838-9009 tertop. W/D is included. $575/mo. 785-865-2505 3BR Gem - S. of KU at 2213 Naismith Dr. 1.5 Bath, CA, 3BR, 2 bath duplex. 2 car wood floors, garage, DW, garage. W/D included, lg W/D hook-up, bsmt. No basement walkout on golf Smoking. $850/mo. Avail. course. 5 mins to KU. now. Call 816-835-0190 $1,200 + dep. Avail Sept 1. Please call 785-841-5010 LARGE 4BR DUPLEX 913 Christie Ct., Lawrence - New exterior. 3 full bath, 2 kitchens, 2LRs, walk-out basement, 2 car. $1,200/ mo. Rent-to-own option available. 913-687-2582 3-4BR, 3-1/2 bath homes at Candy Lane. 1,900 sq. ft., 1 car gar $995/mo. Pets ok w/pet deposit. 785-841-4785 Apartments, Houses & Duplexes. 785-842-7644 3BR, 2 bath, 2 car, close to campus, fenced yard, CA, DW, pets ok, $1000/mo. Avail. now. 785-766-7589 Townhomes 2BR, 2 bath, fireplace, CA, W/D hookups, 2 car with opener. Easy access to I-70. Includes paid cable. Pets under 20 lbs. allowed Call 785-842-2575 4BR, 2.5 bath available August at 1423 Monterey Hill Dr. (Quail Run School area) $1,500/mo. 785-218-7264 Apartments, Houses & Duplexes. 785-842-7644 ENHANCE your listing with Crescent Heights ——————————————————————————— - Saddlebrook & Overland Pointe LUXURY TOWNHOMES Loft BR, 1226 Prairie, 1.5 bath, 2 story, CA, W/D hookup, 1 pet ok. $630/mo. 2BR, 1 bath, CH, spacious Call 785-841-5797 bedrooms & LR, privately & managed. 2BRs - 27th & Ridge Court, owned Baldwin City Windmill Estates, all elec, $600/mo. 785-766-9139 2 story, 1 bath, CA, W/D 3BR, 2bath, full partially hookup, DW. $595/mo. No PARKWAY 4000 finished bsmt, covered pets. 785-841-5797 • 2BR, 2 bath avail. Sept. deck, rent w/option to buy. • W/D hookups owner financed. $850/mo. LAUREL GLEN APTS • 2 Car garage w/opener Baldwin 785-242-4844 • New appls. & carpets 2 & 3BR All Electric units. • Maintenance free Water/Trash PAID. Vinland Small Dog and 785-749-2555/785-766-2722 Students WELCOME! 2BR home avail., 1.5 Bath, Income restrictions apply Four Wheel Drive stove, refrig., W/D hookup, Now Accepting CA, electrical heat. Pets Townhomes Applications for August maybe. $700/ mo. + de2859 Four Wheel Drive Call NOW for Specials! posit. 785-594-3846 Amazing 2BR, tranquil inti785-838-9559 EOH mate setting, free standing townhome w/ court- Office Space yard, cathedral ceilings, skylights, & W/D. Most EXECUTIVE OFFICE residents professionals. AVAILABLE at WEST Pets ok. Water & trash pd. LAWRENCE LOCATION $685/mo. 785-842-5227 $525/mo., Utilities included Sunrise Place Conference Room, Fax Sunrise Village Machine, Copier Available Apartments & Townhomes LUXURY LIVING AT Call Donna at (or e-mail) AFFORDABLE PRICES $200-$400 OFF 1st month 785-841-6565 On KU Bus Route Advanco@sunflower.com RANCH WAY 2 Bedrooms at TOWNHOMES 837 MICHIGAN Office Space Available on Clinton Pkwy. Near KU. Pool, microwave, at 5040 Bob Billings Pkwy. 3BR, 2 bath, $850/mo. DW, and laundry facilities 785-841-4785 2BR, 1 bath, $780/mo. 3 & 4 Bedrooms at Half Off Deposit 660 GATEWAY COURT $300 FREE Rent FREE wireless internet, DW, W/D, pool, tennis Gage Management courts. 3BRs with garages. 785-842-7644 Call 785-841-8400 3BR, 2 bath, all amenities, CAMPUS LOCATIONS! garage. 2835 Four Wheel Drive. $795/mo. Available Mins away -Utility Pkg Avail Arkansas Villas - 3BR/3Bath Now. Call 785-766-8888 Reserve YOUR Apt. Now Call 785-842-3040 or email village@sunflower.com ——————————————————————————— - Apartments, Houses & Duplexes. 785-842-7644 Studios, 1712 W. 5th, all elec, laundry, A/C, off st. 3 BR, 2 bath, 2 car, Newer, pkg, $410, water/cable pd, I-70, Deerfield School, cul No pets, 785-841-5797 de sac. 3016 Winston. $1150/mo. 785-843-3993 Village Square Close to KU, 3 Bus Stops 785.856.7788 Studios, 2400 Alabama, all elec., A/C, laundry, off st. pkg, $490, water & cable pd, no pets, 785-841-5797 Stonecrest • Hanover YOUR PLACE, YOUR SPACE ——————————————————————————— - Parkway Terrace Apts. $450/mo 1 BDRM $500/mo 2 BDRM $300 deposit 2340 Murphy Dr. w propertiesks.com (785) 841-1155 Townhomes 3BR, 1 bath, W/D hookup, lg fenced yd, 1 car, Move-in incentives, Pets welcome. 2-4BR, 1310 Kentucky. Near 3 Bdrm, 1.5 bath, Newer $900/mo. 785-760-0595 Townhouse, great location KU. $595 - $1,200/mo. $200 $400 Deposit. 785-842-7644 by FSHS, aquatic ctr, shop- 3BR, 2 story, 2 baths, 2 car ping. 1800 sft, w&d, loft, garage, 3624 W. 7th, has lawn maint., privacy fence, study, FP, unfinished bsmt, gas fp. $1150. 785-218-7832. C/A, dw, W/D hooks, 1 pet ok, $1250, 785-841-5797 1008 Emery *785-749-7744 2BR, 2406 Alabama, bldg 10, 1.5 baths, C/A, W/D hook- 3BR — 2323 Yale, 2 story, 2 ups, DW, $570, no pets, bath, CA, DW, FP, 2 car gar785-841-5797 age, no pets. $750/mo. Call 785-841-5797 2BR, 3052 W. 7th, 2 baths, has study, 2 car garage, 3BR, 1.5 bath, 1131-35 Ohio, C/A, W/D hookups, DW, W/D, no pets. $925/mo. & $199/deposit. Close to KU $640, no pets, 785-841-5797 campus. Call 785-749-6084 2BR, 951 Arkansas, 1 month free, 2 bath, C/A, laundry, dw, microwave, $750, no Apartments, Houses & pets, 785-841-5797 Duplexes. 785-842-7644 2BR, in 4-plex, 858 Highland. $485/mo. Has DW. Quiet & clean. No pets. 1 block east of 9th & Iowa. 785-813-1344 Start at $495 One Bedroom/studio style Pool - Fitness Center - On-Site Laundry - Pet Friendly Water & Trash Paid Apartments Unfurnished ½ Month FREE NEW SPECIALS! 1,2,3 BR W/D, Pool, Gym Canyon Court Apts 700 Comet Lane, Lawrence (785) 832-8805 firstmanagementinc.com AVAILABLE NOW! 3BR, 2 or 2.5 bath- 2 car w/openers W/D hookups, FP, major appls. Lawn care & snow removal 785-865-2505 MULTIPLE PHOTOS, MAPS, EVEN VIDEO! SunflowerClassifieds WorldClassNEK.com DIGITAL ACCOUNT EXECUTIVE Account Executive is responsible for selling a platform of products including digital advertising, web banners, social marketing, search engine optimizations for Lawrence Giveback Program, Lawrence Deals, Johnson County Deals, Dotte Deals, and other World Company digital products. As an Account Executive you are accountable for meeting or exceeding sales goals; prospecting new clients and making initial contact by cold-calling either in person or by phone; and developing and building relationships with potential clients to build a large advertising client list. Ideal candidates are passionate about giving back to the community; desire to work with nonprofit organizations and local businesses to build a more sustainable local economy; two years’ experience in sales, marketing and/or advertising; experience in online media sales; demonstrated success with prospecting and cold calling; excellent verbal and written communication skills; networking, time management and interpersonal skills; regular achievement of monthly sales goals; self-motivated; proficient in Microsoft Office applications; and a valid driver’s license, reliable transportation with proof of auto insurance, and a clean driving record. To apply submit a cover letter and resume to hrapplications@ljworld.com, We offer an excellent benefits package including health, dental and vision insurance, 401k, paid time off, employee discounts, tuition reimbursement, career opportunities and more! Background check, pre-employment drug screen and physical lift assessment required. EOE 59 8B FRIDAY, AUGUST 31, 2012 Miscellaneous Lawrence Wii Rockband. We have outgrown the Rockband 2 video game, guitar, drums and microphone. To a good home. $95.00/offer. Call (785) 727-0894. 70 17 TV: Dynex, color TV, 20 inch screen and built in DVD player. $25. Call 785-749-4490 after 3 p.m. Lawrence 01 GARAGE SALE LOCATOR Tools, furniture lumber, Misc. childrens, baby clothes, and womans clothes Bank Owned Com. bldg, & Multi-family rental units for sale. all priced to sell quick. Theno R.E. 785-843-1811 Garage Sale Baby & Children Items “Little Tikes” Play Kitchen. Refrigerator, sink/stove piece (39”h) and chairdoors, buttons intact. Incl 3 (8x11) baskets of play food and dishes. $40. 785-766-4741. 3305 Riverview Rd (Near 6th & Kasold) Small entertainment center, tent, weight bench, motorcycle helmet, beer kit, lamps, books, software, George Foreman grill, pans, pictures, coffee maker, printer, bike rack, golf clubs, jack stands, household items, auto parts washer. FREE bucket with every $5 purchase, while they last! Don’t miss this sale!!! Friday, August 31 Saturday, Sept. 1 7 am - 2 pm 3406 Sweet Grass Court, Lawrence 03 Bottles, 1 pair of Whiskey political bottles (1964) They are boxers & very colorful. $20. Please call for more info. 816377-8928 Antiques, computer desk, chairs, Yakima ski carrier, microwave, end tables, housewares, small kitchen appliances, lamps, many unique items. 04 300 VCR Tapes for more information, Please call 785-838-0056 Bookcase: IKEA wood bookcase, painted black w/red and white insert doors. 4 shelves. 5’ x 32”. Like new, $40. Call 785-749-4490 after 3 p.m. Chair/ottoman: IKEA chair and ottoman, oak frame w/navy blue cushions. Used only 2 years, clean. $40. Call 785-749-4490 after 3 p.m. Coffee Table, with glass on each end & wood in the center very nice 4’3”x10” $14.00. 785-838-0056 Saturday, September 1, 8-12, and Sunday, September 2, 8-12. Lots of FURNITURE (both indoor and outdoor). 05 Beautiful breakfront hutch, bedroom furniture set including twin bed with storage drawers, desk, dresser, and tall chest, 55 gallon aquarium, metal and glass computer desk, metal and glass table with 4 chairs, scroll saw, large metal desk, kitchen table with 3 chairs, metal shelves, dog stroller, tons of books, holiday decorations, lots of clothes and some vintage clothing, much, much more 08 Miscellaneous Antique travel trunk; A nice, sturdy late 1800s or early 1900s child’s trunk; no mold smell. Lid-picture inside and border are originals. Very clean inside. Measures 28” wide. $50. Cash only please. 842-7419. Tires. Have three good cond. Goodyear Eagle tires. 225/50 R18 and tire depth of 6/32 to 5/32. $30.00 for set of 3. (785)418-1339 for info. Tom Clancy Books - Hardback $2 each, Paperback $1 each. 785-842-5069 Fri. Aug. 31st Sat. Sept. 1st Both days 8AM-2PM Lots of household and kitchen items, some are a set of nice dishes, lots of mugs, Christmas glasses, Coke glasses and other Coke collectibles, pictures, and lot of frames, home decorations, lamps, vacuums. coffee table and end table, Think holiday early, Halloween and Christmas items, table top, LED Christmas tree, bookcases, computer desk, stereo equipment, 10, 20, 50 gal. aquariums, lots of tools, tool boxes, some power tools, games, and card games, serious buyers only 1950’s child’s china tea set, Lots of Jayhawk memorabilia lots of items to numerous to mention. Something for everyone. So Big This is a 2 day sale with NEW items on Sat. HUGE SALE! Fri. Aug. 31 & Sat. Sept. 1 8am-5pm. 1219 W 27th St. Lawrence 9.5 kayak, 15ft. aluminum canoe, mitre saw w/stand, mirrors, grinder, garden tools, kitchen tools, cabinet, desk, small tables, fishing gear, house plants, (lg & small), plant stands and pots, picture frames, wall hangings, quilt rack, florescent light bank w/grow lights, scuba equipment, camera stand, American flag, card table, rotisserie, juicer, bread baking pans, wheelchair (like new), storage containers, and men’s leather coats (XL). Antiques: mirrors, wooden Pepsi crate, metal trunk, rocking chair, wash basin w/pitcher, golf clubs, Coleman camp stove and lantern, stained glass (needs work), camera and more. Nothing goes before 8:00 a.m. Saturday p.m. Make an Offer Garage Sale Sat. Sept 1 8-2pm. Sun. Sept. 2 8-12noon. Sofa: 6 ft. sofa, solid oak frame w/6 cushions. Old, clean, comfortable. $10. Call 785-749-4490 after 3 p.m. Table, round, 2’x3” circle, $6.00. Table, square, 2’2” high with shelves, black, $6.00. Table, 3’x5” with tile on top, with wood around edge, $18.00. Call for more info. 785-838-0056 Moving Sale 4113 Wimbledon Dr, Sat. Sept. 1, 7am to noon. Couch and Hide-a-bed Sofa. $50 each. You haul. 785-841-7076 STUDENT BARGAINS!!!! Sunday, August 26, 2012 8:26. 8PM. Black Leather loveseat matching ottomon. Very comfortable!! Downsizing, don’t have room for it. $125 cash u pick up and take it home.. 5 Brass glass stands, coffee table and endtable all matching set $100 cash you pick up and take it home. Down sizeing no room for these items at our new place. 785-841-1930 (home) or 785-760-0612 (cell) ANNUAL BLOW-OUT YARD SALE 09 FURNITURE/ GARAGE SALE 5202 Carson Place. Furniture 3 Patio Chairs w/cushions & on rollers, $14.00, very strong work bench, $5.00. 785-838-0056 Friday, 7-4 Saturday, 8-4 1217 Stone Meadows Drive Computer-Camera Netgear n600 wireless dual and router. Easy setup. Works great, used for 6 mos. $50/offer. 785-312-9215. 09 Multi-Family Garage Sale Clothing Bottles, 1 pair of Jim Beam Whiskey political bottles (1968) They are clowns & very colorful. $20. Please call for more info. 816377-8928 Lawrence 1842 W 27th Terrace For cribs or toddler bed, in great shape, includes mattress pad and eight fitted sheets, $10. Call 749-7984. Collectibles 10 Garage Sale 848 Broadview Drive, Lawrence, Ks 3306 Yellowstone Dr. (off Kasold) 8 AM - 2 PM Saturday, Sept. 1 XL mens & womens clothes, Coke stuff, furniture, and lots of misc. items. 90 years of stuff Antiques and newer. 16 08 E 23rd St 59 07 10 08 09 01 Men’s new Birkenstock Papillio shoes, size 43, teal green. $55 or best offer. 785-843-5396 14 $29.95 for Thurs. - Sat. (Sun) LJW ONLY or EAST Communities. $39.95 for West Communities with Wed. - Sat. in LJW. $49.95 for Full Coverage (all 6 papers) with Wed. - Sat. in LJW. $10 more for color background or color logo. 8AM – 2PM Housewares, decor, kitchen appliances, pots and pans, kids clothing, toys, patio furniture, books, lamps, fitness equipment, bedding, stereo, pet supplies, luggage, storage containers, and much more! Baby Things! Swing $15, Walker $10, Bouncer $15. 785-842-5069 15th St / N 1400 Rd WEST Community Papers - Lawrence Journal-World (LJW), Tonganoxie Mirror, & Baldwin Signal. EAST Community Papers - Basehor Sentinel, Bonner Springs Chieftain, & Shawnee Dispatch. Ads online also. Saturday Sept. 1st ONLY! Family size George Foreman Lean Mean Fat Reducing Grilling Machine 360 Grill & Griddle. Excellent cond. Removable upper/ lower plates. Cook pizzas & bake. $60. Serious inquiries only. 785-550-1768 19th St 13 CHECK OUT OUR GARAGE SALE SPECIALS – UP TO 4 COLUMN INCHES -$29.95, $39.95 OR $49.95 01 Appliances s Riv er Haskell Ave Commercial Real Estate 423B E 4th Street Tonganoxie, KS 66086 913-704-5037 Antiques, Collectibles, Glass, Furniture, Treasures 06 Kans a W Clinton Pkwy 3528 Morning Dove Circle KIPP’S TREASURES 05 10 Garage Sale Friday & Saturday & Sunday 8-5 Antiques Bob Billings 02 Louisiana St Old farmstead on 6 acres, includes all utils., 3 Morton bldgs, 4 lg. barns, silo, stone smoke house. No house. Repo, assume owner financing, no down payment, $975 monthly. 785-554-9663 04 11 40 03 70 12 Iowa St Farms-Acreage 18 Kasold Dr 10 Acreage-Lots 3 Acre Lot, partly wooded, rural subdivision, West Lawrence schools, on pvmt, $53,900. 785-841-0250 01 W 6th St Wakarusa Dr OWNER WILL FINANCE 2BR, 2 bath, stove, fridge, dishwasher, washer/dryer, large storage building. Lawrence. 816-830-2152 Folks Rd Mobile Homes Peterson Rd Massachusetts St 2 BR, 1.5 bath, 2-story Music-Stereo Townhome. 1 car grg, bsmt w W/D and framed/plum- Kimball Consolette Piano, bed for another bathroom. mahogany finish, good Kitchen incl all appliances, cond. tuning pins blue new coutnertops. Sunken steel, Needs tuning. Good living room has fireplace, for beginning student. fenced yard & patio. CA, Hasn’t been abused. Certinew storm door. Newly fied appraisal for $400. You painted exterior. 1,129 sq haul. 913-441-6798 ft. Asking $114,900. 3720 Westland Place, Lawrence. TV-Video 785-766-9337 40 24 GARAGE SALE 2828 Meadow Dr. Thurs 12:00om-5:00pm. Fri. & Sat. 8:00am-1:00pm. Large assortment of Collectibles - Red Wing and western Bowls, jars, and jugs, pottery dinnerware, pigs, duck cookie jar, Jewel Tea and Jadite Bowls, whiskey decanters, Budweiser Millennium limited Edition Bottle and glass set, and other beer items, US Postal stamp sets including Railroad and Marilyn Monroe. Marilyn collectors plate, Ertl trucks, knives, bottle & beer openers and pocket knives. old books, magazines and sheet music nd Santa Fe calendars.. Ladies long black leather coat, old records including LP’s, 78’s and 45’s and CDs. Beautiful Antique Victorian “ dressing table with swinging mirror, matching bench and dresser in excellent condition! Ethan Allen Head and foot board, mini motorcycle, (needs work) household items including mirrors, pictures, silverware, cooking items, bar ware, and much more women & girls clothing and toys. Lots of misc! Huge 3-Family Garage Sale Saturday September 1, 2012 7 a.m.-Noon 1909 E 24th Terrace. Four 285-16 inch tires, tool boxes and hand tools, drill press, power tools, inflatable raft, fishing poles, golf balls and clubs, desk, table, night stands, dressers, 2 end tables, TVs, radio, printer and electronics, 36 inch jeans, craft supplies and kits, sewing machines, lots of fabric, candles, lots of snacks and misc. Priced to sell. Cash only please. Place your Garage Sale Ad Today! Go to: place/classifieds/ You have up to 45 lines in print! Click on “place an ad” under the blue garage sale box and follow the step by step process! Lawrence 16 Piper Super Sale (Area North of Leavenworth Rd.) Friday, Aug. 31 8am-5pm, Saturday, Sept. 1 8am-2pm 3 sales in one place Sale Sale #2: Vintage stuff, Dishes, kitchen items, nik-naks, wall and home decor, small furniture items, old books, postcards,tablecloths, Small linens, records, cds, videos, barware, toys, wood rocking chair, 50’s kitchen table, Japanese tea set, Sale #3: Weedeater, old and new hand tools, toolboxes, jars of nuts, bots, screws, etc, work gloves, Sawzall, roll of wood-look flooring. KU items, 2 large area rugs, upholstered L.R. chair, wicker chair & table, new books, fabric, craft items, holiday decorations, wading pool, roller blades, much more! Lawrence-Rural BARN SALE Fri. Aug 31, Sat. Sept 1 8-5, Both days 1431 N. 1900 Rd. Two Royal cash registers, framed pictures, ladies jeans and clothing size 8-12, mens denim shirts, men & ladies western and reenactment clothing, massage table, ladies shoes size 6, Roy Rogers collectible VCR tapes, assorted glassware, baskets, books, Christmas and Fall decorations. Antiques, linens, dishes, all size NAME BRAND clothing, shoes, jeans, insulated coveralls and military clothing, small furniture, MY COUNTRY CUPBOARD jam and jelly. MUCH MORE. Low prices. Cars-Domestic Cars-Domestic Chevrolet 1968 Camaro SS Price $8,200. Get in touch with me at: esthertevez@gmail.com for more information (Mongold/Roe) 3550 N. 123rd St., Piper Saturday 7-2, 1708 Hampton St. Near 27 th & Harper, follow signs. #1: Lots of nice lightly used women’s name brand (Gap, JCREW, Banana Republic, etc.) clothing for all seasons, sizes 8 to 14, stylish shoes and boots, purses, hats, scarves, jewelry, make-up, everything else for your wardrobe! Cars-Domestic Garage Sale Chevrolet 1970 Chevelle SS LS5 454/360HP, asking $7000, AC, Automatic, low miles, contact me at dixon9h@msn.com or 913-416-1424. Chevrolet 2008 Impala LT, alloy wheels, power equipment, remote start, great gas mileage! Only $11,781.00 stk#159541 Dale Willey 785-843-5200 Tonganoxie Chevrolet 2009 Cobalt LT, automatic, FWD, alloy wheels, power equipment, GM certified with 2 years of maintenance included! Stk#171411 only $11,815.00 Dale Willey 785-843-5200 Garage Sale Friday August 31 9am - 3pm & Saturday September 1 9am - 3pm 23262 Woodend Rd Tonganoxie, Ks N.E on Hwy 24/40 to Woodend Rd. (Reno) Air compressors, tools, extension and step ladders, BBQ grill, camping equipment, office chair, household and miscellaneous items Chevrolet 2008 Cobalt LT sedan, 4cyl, great gas mileage, spoiler, power equipment, GM certified, stk#337913 only $11,222. Dale Willey 785-843-5200 Pets Beagle puppy, miniature male. Gorgeous, loveable, tri-colored. 7 weeks old. $150. 785-255-4447 Boxers, 3yr. old Brindle, 3 yr. fawn, male boxers. spayed, kind & gentle, to a good home. $100 each. 785-608-8516 Care-ServicesSupplies Chevrolet 2007 Impala LT, alloy wheels, power equipment, cruise control, remote start, alloy wheels, steering wheels controls. Stk#139161 only $8,888. Dale Willey 785-843-5200 2012 Buick Regal Sharp sedan from long time luxury car maker. Low miles and great on gas. Must See. $21,000 23rd & Alabama 843-3500 Cadillac 2008 CTS AWD, luxury package, leather heated/cooled seats, ultra sunroof, remote start, Bose sound, On Star, stk#616681 only $25,884.00 Dale Willey 785-843-5200 Training Classes - Lawrence Cadillac 2007 STS, CTS grill, miles, excellent Jayhawk Kennel Club, 6 34,000 wks. $75. Enroll online, condition,. $22,000. Please or call call 785-979-3808 785-842-5856 Chevrolet 2000 Corvette, targa roof, heads up display, manual, leather memory seats, alloy wheels, V8, low miles, sweet! Stk#15617A only $21,500. Moving Sale Fri & Sat: 7:30 - 1:00 Saturday - all 1/2 OFF! Furniture, tools, work bench with vice, garden equip, dishes, lamps, kitchen, electronics, nintendo, games, much, much misc. Livestock Cattle, High quality yearling Angus steers for sale. will deliver, please call 785-760-2215 1609 E. 686 Rd. Westpointe Subdivision 1blk west of Hwy 40 on Stull Rd (5-6 min west of 6th & Wakarusa) Washington Creek Church Community Garage Sale Aug. 31 & Sept. 1 8AM-6PM. 609 E 550 Road Lawrence, Ks Once again the families of the Washington Creek Church and friends of the area are having their 5th annual garage sale. With clothes, books, small appliances, collectibles, and fun stuff. There are games and toys, videos, and DVDS, glassware and doghouses. A refinished oak dinette table & four chairs. baby strollers and blankets. Stop by and check it out. Chevrolet 2011 Aveo LT, power equipment, sunroof, leather, fantastic gas mileage, GM certified, stk#19399 only $14,917 Dale Willey 785-843-5200 2011 Chevrolet Cruze Low miles with gas saving 4-cylinder engine. Excellent mid-size sedan and a great color. $16,500 23rd & Alabama 843-3500 Boats-Water Craft Boat - 16 ft, 1988 Scroca. Sail/row/paddle. Ex Cond. Trailer. $850. 913-248-1446 RV 2001 Winnebago Rialta 22 QD. $24,900. TV, microwave, fridge, bath, dinette, generator. Beds - one double, one twin, 68,340 miles. Great for tailgating! 785-841-8481 Chevrolet 2009 Aveo LT, sunroof, power equipment, On Star, GM certified with 2 yrs of scheduled maintenance, stk#19353 only $12,744. Dale Willey 785-843-5200 2010 Chevrolet Equinox 2LT package with AWD, leather seats, and back-up camera. Priced very low. $23,000 23rd & Alabama 843-3500 HUMMER Trucks PUT YOUR EMPLOYMENT AD IN TODAY!! Go to ljworld.com or call 785-832-1000. UP TO FOUR PACKAGES TO CHOOSE FROM! All packages include AT LEAST 7 days online, 2 photos online, 4000 chracters online, and one week in top ads. 2009 Chevrolet Malibu LS-69K, AT, CD, Cruise, Keyless Entry, OnStar, 2-owner, Steal at $13,900. View pictures at 785.856.0280 845 Iowa St. Lawrence, KS 66049 Hummer 2008 H3, 4wd, GM certified, running boards, tow package, alloy wheels, leather heated seats, On Star, power equipment, stk#538992 only $19,977. Dale Willey 785-843-5200 Chevrolet 2010 Camaro 2LT, GM certified, leather heated seats, remote start, On Star, Boston premium sound, stk#10451B only $22888.00 Dale Willey 785-843-5200 2006 Chevrolet Impala Great back to school car for high school or college students. Good gas mileage and plenty of room. $10,191 23rd & Alabama 843-3500 Chevrolet 2012 Traverse LT, AWD, room for 8, remote start, heated seat, power equipment, stk#10560A only $27,500. Find Jobs & More SunflowerClassifieds FRIDAY, AUGUST 31, 2012 9B INC. Your local concrete Repair Specialists Sidewalks, Patios, Driveways, Waterproofing, Basement, Crack repair 888-326-2799 Toll Free lawrencemarketplace.com/ dalerons FACTORYDIRECT INVENTORY BUY-OUT! Across The Bridge In North Lawrence 903 N 2nd St | 785-842-2922 lawrencemarketplace.com/ battery All Your Banking Needs Famous Brand Overstocks BIG SELECTION NOW IN STOCK! Decorative & Regular Drives, Walks & Patios Custom Jayhawk Engraving Jayhawk Concrete 785-979-5261 Driveways, Parking Lots, Paving Repair, Sidewalks, Garage Floors, Foundation Repair 785-843-2700 Owen 24/7 LAMINATE Wood & Tile Designs! CERAMIC TILE Many Sizes & Styles! CARPET TILE 19”x19” Heavy-Duty! Decks & Fences Looking for Something Creative? Call Billy Construction Decks, Fences, Etc. Insured. (785) 838-9791 Stacked Deck ALL KINDS OF FLOORING From only • Decks • Gazebos • Framing • Siding • Fences • Additions • Remodel • Weatherproofing & Staining Insured, 20 yrs. experience. 785-550-5592 NOW from 69c sq ft! Dirt-Manure-Mulch REMNANTS Carpet, Vinyl, Tile, Laminate. All Sizes! Many priced BELOW wholesale! Installer-Direct Plan saves you even MORE on professional, installation! Jennings’ Floor Trader 3000 Iowa - 841-3838 See what’s new and on sale at Dave’s Construction Topsoil Clean, Fill Dirt 913-724-1515 Electrical Eudora Montessori K Prep-1st, 2 Openings Half day $75, Full day $100/wk Aug. Special 1 FREE week Near Eudora Elementary 785-542-1364 Artisan Floor Company Hardwood Floor Installation, Refinishing and Repair Locally Owned, Insured, Free Estimates 785-691-6117 785-838-4488 lawrencemarketplace.com/ harrisauto Full service preschool & licensed childcare center for children ages 1-12. Open year-round, Monday- Friday, from 7 am to 6 pm Licensed In-home daycare Now enrolling Children of all ages in Tonganoxie Call Kristal 913-593-8651 Carpet Cleaning Kansas Carpet Care, Inc. Your locally owned and operated carpet and upholstery cleaning company since 1993! • 24 Hour Emergency Water Damage Services Available By Appointment Only Specializing in Carpet, Tile & Upholstery cleaning. Carpet repairs & stretching, Odor Decontamination, Spot Dying & 24 hr Water extraction. 785-840-4266 Precision Carpet Cleaning Kansas 785-250-4369 cleaningkansas.com/ BACK TO SCHOOL SPECIAL Newest & most innovative rotary cleaning system. STARTING or BUILDING a Business? 785-832-2222 classifieds@ljworld.com lawrencemarketplce.com/ lynncommunications Employment Services Office* Clerical* Accounting Light Industrial* Technical Finance* 785-842-3311 For Promotions & More Info: lawrencemarketplace.com/ kansas_carpet_care Get Lynn on the line! 785-843-LYNN Cleaning Westside 66 & Car Wash Full Service Gas Station 100% Ethanol-Free Gasoline Auto Repair Shop - Automatic Car Washes Starting At Just $3 2815 W 6th St | 785-843-1878 lawrencemarketplace.com/ westside66 lawrencemarketplace.com/ scotttemperature Home Improvements Full Remodels & Odd Jobs, Interior/Exterior Painting, Installation & Repair of: Deck Drywall Siding Replacement Gutters Privacy Fencing Doors & Trim Commercial Build-out Build-to-suit services Fully Insured 22 yrs. experience Janitorial Services Business-Commercial-Industrial Housecleaning Carpet Cleaning Tile & Grout Cleaning The “Greener Cleaner” Locallly Owned Since 1983 Free Estimates 785-842-6264 LawrenceMarketplace.com/ bpi Apply at eapp.adecco.com Or Call (785) 842-1515 BETTER WORK BETTER LIFE lawrencemarketplace.com/ adecco JASON TANKING CONSTRUCTION New Construction Framing, Remodels, Additions, Decks Fully Ins. & Lic. 785.760.4066 lawrencemarketplace.com/ jtconstruction Int. & Ext. Remodeling All Home Repairs Mark Koontz (785) 550-1565 General Services 785-856-GOLD(4653) Jewelry, coins, silver, watches. Earn money with broken & Unwanted jewelry Retired Carpenter, Deck Repairs, Home Repairs, Interior Wall Repair & Painting, Doors, Wood Rot, Powerwash 785-766-5285 Insurance NOT Your ordinary bicycle store! Guttering Services Banquet Room Available for Corporate Parties, Wedding Receptions, Fundraisers Bingo Every Friday Night 1803 W 6th St. (785) 843-9690 lawrencemarketplace.com /Eagles_Lodge STARTING or BUILDING a Business? Landscaping Low Maintenance Landscape, Inc. 1210 Lakeview Court, Innovative Planting Design Construction & Installation lawrencemarketplace.com/ lml Events/Entertainment Eagles Lodge TWO GOOD PAINTERS 785-424-5860 Husband & wife team excellent refs. 20yrs. exp. Mark & Carolyn Collins Advertising that works for you! Drury Place Live More Pay Less Worry-free life at an affordable price 1510 St. Andrews Pet Services 785-841-6845 Lawrencemarketplace.com/ druryplace Roofing 785-865-0600 Big/Small Jobs Dependable Service Mowing Clean Up Tree Trimming Plant Bed Maint. Whatever U Need Mowing...like Clockwork! Honest & Dependable Mow~Trim~Sweep~Hedges Steve 785-393-9152 Lawrence Only ROCK-SOD-SOIL-MULCH I COME TO YOU! Dependable & Reliable pet sitting, feeding, walks, overnights, and more! References! Insured! 785-550-9289 Complete Roofing Services Professional Staff Quality Workmanship lawrencemarketplace.com/ lawrenceroofing Complete Roofing Professional Service with a Tender Touch Stress Free for you and your pet. Call Calli 785-766-8420 Tearoffs, Reroofs, Redecks * Storm Damage * Leaks * Roof Inspections We’re There for You! 785-749-4391 Lawrencemarketplace.com/ksrroofing Plumbing Precision Plumbing New Construction Service & Repair Commercial & Residential FREE ESTIMATES Licensed & Insured Prompt Superior Service Residential * Commercial Tear Off * Reroofs Free Estimates Insurance Work Welcome 785-764-9582 Lawrencemarketplace.com/ mclaughlinroofing 785-856-6315 lawrencemarketplace.com/ precisionplumbing RETIRED MASTER PLUMBER & Handyman needs small work. Bill Morgan 816-523-5703 Re-Roofs: All Types Roofing Repairs Siding & Windows FREE Estimates (785) 749-0462 STARVING ARTISTS MOVING 15yr. locally owned and operated company. Professionally trained staff. We move everything from fossils to office and household goods. Call for a free estimate. 785-749-5073 lawrencemarketplace.com/ starvingartist Taking Care of Lawrence’s Plumbing Needs for over 35 Years (785) 841-2112 lawrencemarketplace.com /kastl Real Estate Services Travel Services Lawrence First Class Transportation Limos Corporate Cars Drivers available 24/7 785-841-5466 Lawrencemarkeptlace.com /firstclass Music Lessons JAYHAWK GUTTERING 785-550-5610 PIANO LESSONS Learn to play 30-50 songs in the first year with Simply Music! Keys of Joy 785-331-8369 Karla’s Konservatory 785-865-4151 Lawrencemarketplace.com/ keysofjoy 785-842-0094 Heating & Cooling A. B. Painting & Repair Int/ext. Drywall, Tile, Siding, Wood rot, & Decks 30 plus yrs. Refs. Free Est. Al 785-331-6994 Realty Executives - Hedges Joy Neely 785-371-3225 Recycling Services 12th & Haskell Recycle Center, Inc. No Monthly Fee Always been FREE! Cash for all Metals 1146 Haskell Ave, Lawrence 785-865-3730 lawrencemarketplace.com/ recyclecenter albeil@aol.com Lonnie’s Recycling Inc. Buyers of aluminum cans, all type metals & junk vehiA. F. Hill Contracting cles. Mon.-Fri. 8-5, Sat. 8-4, Call a Specialist! 501 Maple, Lawrence. We are the area exclusive ex785-841-4855 terior only painters. Insured. lawrencemarketplace.com/ Free est. call for $300discount lonnies 785-841-3689 anytime Inside - Out Painting Service Complete interior & exterior painting Siding replacement Plan Now For Next Year • Custom Pools, Spas & Water Features • Design & Installation • Pool Maintenance (785) 843-9119 midwestcustompools.com “Your Comfort Is Our Business.” Installation & Service Residential & Commercial (785) 841-2665 lawrencemarketplace.com/ rivercityhvac HIRING? Best Deal We’re cheaper Free estimates Mowing, trimming Bushes & trees 785-505-8697 Trimming, removal, & stump grinding by Lawrence locals Certified by Kansas Arborists Assoc. since 1997 “We specialize in preservation and restoration” Ins. & Lic. visit online 785-843-TREE (8733) Repairs and Services Utility Trailers 785-766-2785 inside-out-paint@yahoo.com Free Estimates Fully Insured Lawrencemarketplace.com/ inside-out-paint Seamless aluminum guttering. Many colors to choose from. Install, repair, screen, clean-out. Locally owned. Insured. Free estimates. jayhawkguttering.com Tree/Stump Removal EAGLE TRAILER CO. Unsightly black streaks of mold & dirt on your roof? Mold/Mildew on your house? Int/Ext/Specialty Painting Siding, Wood Rot & Decks Kate, 785-423-4464 Is winter salt intrusion causing your concrete to flake? Mobile Enviro-Wash 785-842-3030 Manufacturing Quality Flatbed Trailers 20 years SALES SERVICE PARTS WE SELL STEEL WELDING SERVICES (785) 841-3200 Window Installation/Service Martin Windows & Doors Lawn, Garden & Nursery 785-832-2222 classifieds@ljworld.com Retirement Community Landscape Maintenance Painting LawrenceMarketplace.com/ kansasinsurance Temporary or Contract Staffing Evaluation Hire, Direct Hire Professional Search Onsite Services (785) 749-7550 1000 S Iowa, Lawrence KS lawrencemarketplace.com/ express Green Grass Lawn Care Mowing, Yard Clean-up, Tree Trimming, Snow Removal. Insured all jobs considered 785-312-0813/785-893-1509 Painting mmdownstic@hotmail.com Lawrencemarketplace.com/tic Computer/Internet Computer Running Slow? Viruses/Malware? Troubleshooting? Lessons? Computer Questions, Advise? We Can Help — 785-979-0838 Renovations Kitchen/Bath Remodels House Additions & Decks Quality Work Affordable Prices Serving individuals, farmers & business owners 785-331-3607 Housecleaner Honest & Dependable Free estimate, References Call Linda 785-691-7999 • Garage Doors • Openers • Service • Installation Call 785-842-5203 or visit us at Lawrencemarketplace.com /freestategaragedoors Golden Rule Lawncare Complete Lawncare Service Family owned & operated Eugene Yoder Call for Free Est. Insured. 785-224-9436 913-488-7320 No Job Too Big or Small Garage Doors Lawn, Garden & Nursery Marty Goodwin 785-979-1379 Bus. 913-269-0284 Tiny Tots Tires, Alignment, Brakes, A/C, Suspension Repair Financing Available 785-841-6050 1828 Mass. St lawrencemarketplace.com/ performancetire Free Estimates on replacement equipment! Ask us about Energy Star equipment & how to save on your utility bills. Foundation Repair Harris Auto Repair Domestics and Imports Brake repair Engine repair AC repair / service Custom exhaust systems Shock & Struts Transmissions Tire sales / repairs Air Conditioning/ & Heating/Sales & Srvs. 785-843-2244 Flooring Installation Wagner’s 785-749-1696 For Everything Electrical Committed to Excellence Since 1972 Full Service Electrical Contractor Heating & Cooling Roger, Kevin or Sarajane CARPET Stain-Resistant Styles! Child Care Provided For All Your Battery Needs Financial Your Local Lawrence Bank VINYL Rolls & Planks! Automotive Services Concrete Call 866-823-8220 to advertise. Supplying all your Painting needs. Serving Lawrence and surrounding areas for over 25 years. Locally owned & operated. Free estimates/Insured. Water, Fire & Smoke Damage Restoration • Odor Removal • Carpet Cleaning • Air Duct Cleaning • One Company Is All You Need and One Phone Call Is All You Need To Make (785) 842-0351 Milgard replacement windows Free est. 15 yrs. exp. Locally owned & operated Great prices! 785-760-3445 Reach thousands of readers across Northeast Kansas in print and online. Schedule your help wanted ad today! Find the best candidates with 1-785-832-2222 or 1-866-823-8220 10B FRIDAY, AUGUST 31, 2012 Cars-Domestic Cars-Domestic Cars-Imports Cars-Imports Cars-Imports Motorcycle-ATV Sport Utility-4x4 Sport Utility-4x4 2006 Honda Interceptor Low miles, extras, well maintained. $6,800/offer. 785-766-1431 Dodge 2010 Challenger SE V6, alloy wheels, ABS, power equipment, very nice! Stk#18493 only $22,815. Dale Willey 785-843-5200 2004 Pontiac Grand Prix GT2-122K, AT, Cruise, Moon, CD Changer, Lots of Records, 1-owner, Nice $7,900. View pictures at 785.856.0280 845 Iowa St. Lawrence, KS 66049 Pontiac 2008 Grand Prix GXP, remote start, heads up display, On Star, sunroof, leather heated seats, V8, traction control, stk#349631 only $14,815 Dale Willey 785-843-5200 2011 Ford Fiesta Hatchback with extra cargo room and great gas mileage. CARFAX 1-owner. $16,000 23rd & Alabama 843-3500 2010 Ford Fusion SE -88K, AT, Cruise, CD Changer, Keyless Entry, 2-owner, Wow $12,900. View pictures at 785.856.0280 845 Iowa St. Lawrence, KS 66049 2006 Ford 500 Limited package with leather and AWD and V-6 engine. Easy to maneuver in bad weather and comfortable ride in all weather. $12,000 23rd & Alabama 843-3500 2005 Toyota Corolla CE-136K, AT, AC, CD, Tinted Windows, Power Doors, 3-owner, Clean $8,500 . View pictures at 785.856.0280 845 Iowa St. Lawrence, KS 66049 Cars-Imports Acura 2004 MDX AWD, heated leather seats, Bose sound, navigation, alloy wheels, sunroof, all the luxury without the price, only $12,845. stk#153911 Dale Willey 785-843-5200 Ford 2000 Mustang. ONE owner. NO accident beautiful Mustang. Bright white with clean tan interior! Great condition, looks and runs super. See website for photos. Rueschhoff Automobiles rueschhoffautos.com 2441 W. 6th St. /7 785-856-6100 24/ Ford 2008 Mustang, alloy wheels, spoiler, power equip, V6, stk#142722 only $15,316. Dale Willey 785-843-5200 2002 Honda Accord EX-118K, AT, Leather, Moonroof, CD Changer, 2-owner, Save $8,200. View pictures at 785.856.0280 845 Iowa St. Lawrence, KS 66049 2007 Ford 500 SEL package with low miles. V-6 engine with plenty of power in this comfortable cruiser. $11,987 23rd & Alabama 843-3500 Honda 2008 Accord EXL, leather heated seats, sunroof, alloy wheels, navigation, XM radio, one owner, stk#365121 only $18,733. 2009 Hyundai Sonata Certified! Warranty until 2019 or 100k miles, Currently has 42k miles, V6, $13,900 Crossovers Hyundai 2011 Sonata GLS fwd, V6, power equipment, steering wheels controls, great commuter car! Stk#16471 only $17,850 Dale Willey 785-843-5200 Infiniti 2003 FX45 1-owner, well-maintained, 98,700 miles, AWD, leather, sunroof. Premium sound. $15,700.00. 785-550-0504. Dale Willey Automotive 2840 Iowa Street (785) 843-5200 Jaguar 2007 S type AWD 3.0, very nice! Alloy wheels, leather, sunroof, discover luxury without the luxury price! Stk#19206A3 only $13,444. Motorcylce 1996 BMW, 1100R, $3,000, located in Lawrence, KS. 785-550-2897 2012 Ford Explorer XLT 4x4, Like new with a lot of factory warranty left. $34,395 23rd & Alabama 843-3500 2008 Mitsubishi Lancer Red, Very clean, Alloy wheels, 97k miles, Auto trans, $10,500 Call 785-727-0244 LAIRD NOLLER HYUNDAI 2829 Iowa St. Lawrence 2001 Kia Sportage 4X4, 99,802 miles. Manual transmission, Evergreen exterior with grey leather interior, Local trade $7,288 Call 785-838-2327 LAIRD NOLLER HYUNDAI 2829 Iowa St. Lawrence Sport Utility-4x4 Nissan 2000 Quest SE white (170,000 miles) Loaded, looks nice & runs great, must see. Front & rear A/C, gray leather, alloy wheels, AM/FM w/rear contl $4,995. 913-620-5000 Nissan 2001 Sentra. 124,000 miles. Car serviced regularly. Tires purchased 2yrs ago. $1000/offer. Baldwin City. Call Nick @ 620-921-5531 for appt. Serious Inquiries Rueschhoff Automobiles rueschhoffautos.com 2441 W. 6th St. 785-856-6100 24/7 GMC 2010 Terrain AWD SLE, local trade, bought here, serviced here. You won’t find a nicer one! GM certified, alloy wheels, remote, On Star, stk#596551 only $20,755. Dale Willey 785-843-5200 2005 Honda CR-V EX SE 4WD-127K, AT, CD Changer, Leather Heated Seats, Moonroof, 2-owner, Save $11,900. View pictures at 785.856.0280 845 Iowa St. Lawrence, KS 66049 2007 Chevrolet Tahoe LTZ package with captain’s chairs and rear entertainment system. Sunroof, leather, price slashed. $23,000 23rd & Alabama 843-3500 2008 Ford Edge SEL with leather and power seats. Local trade in and very clean. $19 2006 Toyota Avalon XLS Silver Pine Metallic with 62,864 miles, Nice, dependable sedan. Just $17,500. Call 785-550-6464 LAIRD NOLLER HYUNDAI 2829 Iowa St. Lawrence We are now your Chevrolet dealer, call us for your service or sales needs! Dale Willey Automotive Hyundai 2011 Santa Fe GLS FWD, V6, power equipment, alloy wheels, steering wheel controls, keyless remote, stk#19890 only $19,415 Dale Willey 785-843-5200 The Selection Premium selected automobiles Specializing in Imports 785-856-0280 “We can locate any vehicle you are looking for.” 2002 Cadillac Escalade Base Leather, Automatic with 112,683 miles, AWD in Black, Nice quality SUV and only $12,500! Call 785-550-6464 LAIRD NOLLER HYUNDAI 2829 Iowa St. Lawrence 2002 Lexus ES 300 Fully loaded, Leather seats, Power front seats, Moon roof, Heated seats, Very clean 152,205 miles $8,200 Call 785-838-2327 LAIRD NOLLER HYUNDAI 2829 Iowa St. Lawrence 2011 Ford Explorer XLT with leather and dual headrest DVD players for those long drives. Very nice inside and out. $31,000 23rd & Alabama 843-3500 2004 Mazda 6 Sport Wagon S-94K, AT, CD Changer, Cruise, Bose Sound, 3-owner, Rare $9,900. View pictures at 785.856.0280 845 Iowa St. Lawrence, KS 66049 2005 Ford Expedition Eddie Bauer with heated and cooled leather seats. Fully loaded and family priced SUV. JAZZ HANDS. S13,995 23rd & Alabama 843-3500 2011 Ford Flex SEL All-Wheel-Drive makes for a comfortable and very safe ride for 7 passengers. Fun crossover alternative. $25,000 23rd & Alabama 843-3500 1999 Toyota 4-Runner Loaded, 4X4, Leather, Wood trim, Automatic trans, Manual transfer case, Sunroof, V6, Local trade, 186k miles $8,000 Call 785-838-2327 LAIRD NOLLER HYUNDAI 2829 Iowa St. Lawrence Truck-Pickups 2007 Chevrolet Colorado Z71 4x4 with the 3.7L I5 engine. Automatic with low mileage. A really great truck you must see. $16,000 23rd & Alabama 843-3500 Chevrolet 2011 Equinox LTZ, one owner, GM certified, sunroof, leather heated memory seats, alloy wheels, remote start, stk#435222 only $27,450. Dale Willey 785-843-5200 Need to Sell a Car? Place your ad at ljworld.com or email classifieds@ljworld.com 2009 Ford Flex SEL with leather and captain’s chairs. Easy access to the 3rd row seat for extra passengers makes this a rare and convenient vehicle. $22,000 23rd & Alabama 843-3500 Chevrolet 2006 HHR LT FWD, 4cyl, leather heated seats, cruise control, power equipment, remote start, alloy wheels, stk#194041 only $11,9448 Dale Willey 785-843-5200 GMC 2006 Envoy SLT, 4WD, Beige color, Fully Loaded, Power everything, Sunroof, Heated leather seats, V6 Inline motor, 96,000 miles, good condition. Call or text 785-331-6063/email lndaniels@hotmail.com for more info or to come see. 2006 Chevrolet Silverado 1500 LT Extended Cab, Tow package, 4x4, Leather, 155,849 miles $10,500 Call 785-838-2327 LAIRD NOLLER HYUNDAI 2829 Iowa St. Lawrence Ford 2009 Taurus Limited, leather heated memory seats, alloy wheels, ABS, CD changer, very nice! Stk#15708 only $17,444. 1992 Lexus LS400 Affordable Luxury, One owner, Very clean, Loaded, ONLY 82K MILES, V8, Auto trans $8,000 Call 785-838-2327 LAIRD NOLLER HYUNDAI 2829 Iowa St. Lawrence 2004 Toyota Camry LE-181K, AT, AC, CD, Cassette, Cruise, 1-owner, Steal at $7,500. View pictures at 785.856.0280 845 Iowa St. Lawrence, KS 66049 2009 Nissan Murano SL AWD-97K, AT, CD, Dual Zone AC, Cruise, CD Changer, 2-owner, Clean $15,900. View pictures at 785.856.0280 845 Iowa St. Lawrence, KS 66049 We Buy all Domestic cars, trucks, and suvs. Call Jeremy 785-843-3500 Hyundai 2011 Accent GLS, power equipment, steering wheel controls, great commuter car! Stk#19070 only $13,444. Dale Willey 785-843-5200 2005 Pontiac G6 3.5L, V6 Remote keyless entry, Clean Carfax, 98,386 miles $9,000 Call 785-838-2327 2rd & Iowa St. 2003 TOYOTA Corolla LE 182K Highway Miles, Silver, Well Maintained, Tinted Windows, Cruise Control, New Tires, Photo is Available Online, $4600. Price is Negotiable, Very Nice Car! Call 785-727-9389 Hyundai 2011 Elantra GLS save thousands over new! Great rates and payments are available! Stk#11530 only $15,9974. Dale Willey 785-843-5200 2005 Toyota Corolla Local trade, Very clean, 62k miles, Manual trans, White, $10,000 Call 785-727-0244 LAIRD NOLLER HYUNDAI 2829 Iowa St. Lawrence Chevrolet 2007 Silverado Ext cab LT, 4wd, tow package, remote start, alloy wheels, power equipment, very affordable! Stk#340441 only $20,445. Dale Willey 785-843-5200 2006 Hyundai Tucson Good MPG small SUV, 4cyl, Clean, Blue, 97k miles, $10,900 Motorcycle-ATV 2011 Hyundai Santa Fe Certified! Warranty until 2021 or 100k miles, Currently has 30k miles, VERY clean, Silver, $18,000 Call 785-727-0244 LAIRD NOLLER HYUNDAI 2829 Iowa St. Lawrence Ford 2003 Explorer Eddie Bauer, ONE owner, beautiful True Blue Metallic Blue, third row seat and moonroof. Awesome condition and all wheel drive. NO accident history, and only 105K miles. Loaded like all Eddie Bauers! See website for photos. Rueschhoff Automobiles rueschhoffautos.com 2441 W. 6th St. 56-6100 24/7 785-85 We are now your Chevrolet dealer, call us for your service or sales needs! Dale Willey Automotive 785-843-5200 2002 Mazda ProtegeWell below average miles at only 63k, Well maintained Local trade, Automatic, 4cyl, Good MPG $9,000 Call 785-838-2327 LAIRD NOLLER HYUNDAI 2829 Iowa St. Lawrence 2007 Toyota Camry XLE Nice, clean sedan with leather and 102,600 miles. Just getting broken in! Only $15,000. Call 785-550-6464 Genuine 2008 Stella 150cc Scooter with Cozy Sidecar. This is a 2 cycle with 4 speed transmission. Great around town vehicle. No worries about sand, oil or water on the road with 3 wheels!!! Daily driver to school, starts everytime!! Comes with lap cover for cold weather and half case of oil. $4800/offer. 785-218-4772 Chevrolet 2007 Silverado Ext cab LT, 4wd, tow package, GM certified with two years of maintenance included! Stk#345911 only $22,416. Dale Willey 785-843-5200 2002 Ford Explorer Sport Trac 4x4 with V6 power. Excellent small pickup with SUV comfort. $9,980 23rd & Alabama 843-35. Jeep 2011 Wrangler Sahara Unlimited 4WD, one owner, running boards, premium alloy wheels, heated seats, power equipment, very sharp!! You have got to see this one! Stk#310461 only $32,845 Dale Willey 785-843-5200 Need an apartment? Place your ad at ljworld.com or email classifieds@ljworld.com 2012 Chevrolet Silverado Only 3800 miles and 4x4 with V8 power. Great looking truck. Must see. $28,000 23rd & Alabama 843-3500 46 Pop s pop (Var.). Dad has no interest in going places due to his health issues, and Mom doesn’t want him driving much or staying home alone. The constant nagging has created an unhealthy environment. It is difficult to visit because we don’t like to see and hear Annie’s Mailbox Marcy Sugar and Kathy Mitchell anniesmailbox@comcast.net them like this. Counseling is not an option, as Mom seems overly concerned with what others know about her or will think of her. Dad doesn’t seem to be concerned about anything. Any suggestions? — The Girls Dear Girls: When couples retire, they can fall into the trap of doing nothing and getting on each other’s nerves. Mom resents Dad invading her domestic domain, and Dad is depressed because his identity was tied up in his job. And if they have YouTube offers first scripted series The U.S. Open Tennis continues (6 p.m., ESPN 2). A male would-be singer has a Cinder-fella story come true in the 2012 musical “Rags” (7 p.m., Nickelodeon). “Dark Secrets of the Lusitania” (8 p.m., National Geographic) recalls the sinking of the passenger liner by a German submarine in 1915. Kane finds it hard to control his feelings for his new aide on “Boss” (8 p.m., Starz). “Deadly Women” (9 p.m., ID) profiles killers inspired by greed. BIRTHDAYS Baseball Hall-of-Famer Frank Robinson is 77. Violinist Itzhak Perlman is 67. Singer Van Morrison is 67. Actor Richard Gere is 63. Olympic gold medal track and field athlete Edwin Moses is 57. Rock musician Gina Schock (The Go-Go’s) is 55. Singer Tony DeFranco (The DeFranco Family) is 53. Singer-composer Deborah Gibson is 42. Golfer Padraig Harrington is 41. Actor Chris Tucker is 40. health issues, it can exacerbate the problem because getting out of the house can be problematic or exhausting.: I’m 13 and live with my mom. She always overreacts when my room is not absolutely spotless, which leaves me wondering whether she has OCD. She doesn’t seem to care that the rest of the house is a mess. She seems to magnify the messiness of my room only. When I confront her JACQUELINE BIGAR’S STARS For) Today’s Full Moon throws you into a tizzy of sorts. You might choose not to share what is going on. Tonight: Hang out your “Not Available” sign. Taurus (April 20-May 20) You could encounter strong reactions from others. As you witness a lot of changes, you’ll feel at peace. Tonight: Where people are. Gemini (May 21-June 20) You want to help people, but in order for that to happen, they need to listen to your ideas. You are not in the mood to debate. Tonight: To the wee hours. Cancer (June 21-July 22) Reach out for someone and understand where he or she is coming from. If this person becomes difficult, do not push. Tonight: Hang out. Leo (July 23-Aug. 22) Today’s Full Moon is far more challenging than you realize, especially concerning others and anything involving finances. Be direct in your dealings. Tonight: Dance away your problems. Virgo (Aug. 23-Sept. 22) Your energy could determine the near future in certain jacquelinebigar.com relationships. The good news is that generally you’ll err on the side of caution. Tonight: Out with that favorite person. Libra (Sept. 23-Oct. 22) You might be struggling with all the demands of your daily life and the added high-voltage energy of today’s Full Moon. Know that this, too, will pass. Tonight: Easy works. Scorpio (Oct. 23-Nov. 21) You enjoy living to the utmost and seeing what will happen. The Full Moon emphasizes this gregarious quality and your love for life. Tonight: Live it up. Sagittarius (Nov. 22-Dec. 21) You could see this moment as critical. Understand that the Full Moon is adding to the sense that this moment might be more important than it really is. Tonight: Make a favorite meal. Capricorn (Dec. 22-Jan. 19) You know what you want to do, and the Full Moon cheers you on. Do what you feel is needed, but save part of the day for you. Tonight: At a favorite spot. Aquarius (Jan. 20-Feb. 18) Observe what is happening with others. You do not need to play into the commotion; you actually might want to distance yourself. Tonight: Join friends. Pisces (Feb. 19-March 20) The Full Moon in your sign throws certain opportunities and people in your direction. Even something that might feel problematic could work out fine. Tonight: Zero in on what you want. — The astrological forecast should be read for entertainment only. © 2012 Universal Uclick FRIDAY , AUGUST 31, 2012 11B UNIVERSAL CROSSWORD ALL KINDS OF MONEY By Gary Cooper 8/31: If your mom is only concerned with the mess in your room, it’s probably not OCD. It’s more likely your room is a little messy. Mom sounds stressed. If you find yourself arguing with her a lot, please consider that the two of you may be pushing each other’s buttons more than you intend. Try talking to her when you are both calm. Explain that you don’t want to fight. Ask how to make things better. If it doesn’t help, please discuss it with your school counselor. ACROSS — Send questions to anniesmailbox@comcast.net, or Annie’s Mailbox, P.O. Box 118190 Chicago, IL 60611. 8/30 Universal Crossword Edited by Timothy E. Parker August 31, 2012 1 Part of Einstein’s famous equation 5 Metallic fabrics 10 Part of a crescent moon 14 Opera solo 15 Ammonia compound 16 Turkish honorific 17 Split apart 18 Last Greek letter 19 Hunk of dirt 20 Retire from the snack food industry? 23 Employ for a purpose 24 ___ and cry (public clamor) 25 Hoedown honey 28 Didn’t drink daintily 32 Successful solver’s shout 35 “Bye-bye, Brigitte” 37 Square fare? 38 Pastrami or salami 39 Game that begins with a break 42 Cleveland’s lake 43 Powerful impulse 44 Library no-no 45 U.S. Open component 46 Pop’s pop (Var.) 48 Airline’s best guess (Abbr.) 49 A Bobbsey twin 50 Pastoral place 52 Very poor alibi 61 Gem with colored bands 62 Fabled tale teller 63 Fed. mail agency 64 Land of the alpaca 65 “When ___ Eyes Are Smiling” 66 British break beverages 67 Meadow mamas 68 Comes up short 69 Marine eagle DOWN 1 Chagall or Connelly 2 A, in geometry 3 Confession components 4 Hindu holy man 5 Hampton of jazz fame 6 Rounds and clips, for short 7 Demeanor or manner 8 Sword feature 9 Beachcomber’s find 10 Stashed supply 11 Tangelo relative 12 Hunt for bargains 13 Goalie protectors 21 Time edition 22 Ruminant’s chew 25 Stares in wonder 26 Be taken with 27 Within the law 29 Shaded area 30 Rule the kingdom 31 ___ in comparison 32 Eagle’s home 33 Possessed, Scripturesstyle 34 “Victory ___” (1954 film) 36 ___ out a win (barely beat) 38 Famous Chinese chairman 40 Run away 41 Back-of-thebook section 46 Long-jawed fish 47 Hebrew alphabet openers 49 Connecting link 51 Angle between 0 and 90 degrees 52 John or Paul, but not Ringo 53 In a different form 54 Funeral fire 55 Medal winner 56 Wife of Osiris 57 Schnozz 58 Microsoft customer 59 Distance between wingtips 60 Start of North Carolina’s motto PREVIOUS PUZZLE ANSWER 8/30 © 2012 Universal Uclick THAT SCRAMBLED WORD GAME by David L. Hoyt and Jeff Knurek Unscramble these four Jumbles, one letter to each square, to form four ordinary words. GAIME GENNIB CIYPAR Print your answer here: Yesterday’s Find us on Facebook Parents need help adjusting to retirement 10 Stashed supply Now arrange the circled letters to form the surprise answer, as suggested by the above cartoon. (Answers tomorrow) Jumbles: HYPER PARCH WANTED TAMPER Answer: After he pitched a perfect game, he — THREW A PARTY BECKER ON BRIDGE 12B FRIDAY, AUGUST 31, 2012 Truck-Pickups Truck-Pickups Vans-Buses Chevrolet 2006 Silverado LT3, V8, crew cab, leather heated seats, sunroof, Bose sound, tow package, stk#185221 only $22,995.0 Dale Willey 785-843-5200 2009 Ford F-150 Platinum Loaded with navigation and leather. All the toys from Ford and a local trade. $31,000 23rd & Alabama 843-3500 Chrysler 2008 Town & Country, one owner, power sliding doors, leather heated seats, quad seating, DVD, alloy wheels, stk#358361 only $$18,841. Dale Willey 785-843-5200 2005 Ford F-150 SuperCrew-XLT package and 4x4. Clean truck and very well taken care of. Good truck at a good price. $17,995 23rd & Alabama 843-3500 Dodge 2009 Ram Diesel Big Horn 4wd, power equipment, crew cab, bed liner, running boards, low miles, ready to get any job done! Stk#503462 only $33,847. 2010 Dodge Ram Lot of engine for a small truck. HEMI power and great looking. Needs an owner. $15,000 23rd & Alabama 843-3500 Dodge 2003 Ram 3500 SLT Diesel, crew cab, running boards, chrome alloy wheels. This is a very nice looking truck and only $18,844. Dale Willey stk#330942 785-843-5200 GMC 2004 Envoy XUV SLT, 4wd, V6, part truck part SUV, bed liner, running boards, alloy wheels, CD changer, leather heated seats. Stk#560912 only $10,888. Ford 2005 Escape 4wd Limited, V6, sunroof, leather, alloy wheels, CD changer, stk#548411 only $12,444. Dale Willey 785-843-5200 GMC 2007 Sierra SLE1 Z71, 4wd, tow package, leather power seat, alloy wheels, stk#551461 only $22,718. Dale Willey 785-843-5200 2012 Dodge Grand Caravan Great family van from the original minivan maker. MyGig system with navigation. Low miles. This one is for you. $23,000 23rd & Alabama 843-3500 Massachusetts Street, Law- US-56 Highway; FOURTH rence, Kansas 66044. COURSE, thence South 88 degrees 18 minutes 00 secA PERMANENT EASEMENT onds West, 69.47 feet along for highway right of way, said Northerly right of way removal of borrow mate- line to the POINT OF BEGINrial, or for other highway NING. The above described purposes over and upon a tract contains 1455 square tract of land in the North- feet, more or less. west Quarter of Section 4, Township 15 South, Range This easement expires 20 East of the 6th P.M., de- three (3) years after legal scribed as follows: COM- possession through conMENCING at the Northeast demnation or ninety (90) corner of said Quarter Sec- days after completion of tion; thence on an assumed the highway construction bearing of South 88 degrees for which this easement is 19 minutes 06 seconds acquired, or whichever coWest, 396.28 feet along the mes first. North line of said Quarter Section to the Easterly right 5. That pursuant to K.S.A. of way line of the former 68-412 and K.S.A. 68-406, the Atchison, Topeka and Santa City of Baldwin City, Kansas Fe Railroad and the POINT has entered into an agreeOF BEGINNING; FIRST ment with the Secretary of COURSE, thence on a curve Transportation for the conof 1482.69 feet radius to the struction, reconstruction right, an arc distance of and maintenance of a city 145.66 feet along said East- connecting link within its erly line with a chord which corporate limits. That the bears South 31 degrees 09 City has seen and approved minutes 20 seconds East, the plans for the project. 145.63 feet; SECOND The City, pursuant to its COURSE, thence North 85 agreement set forth above, degrees 03 minutes 47 sec- verifies the Petition, and onds West, 121.54 feet to that it has requested the the Westerly right of way Secretary of Transportation line of said former railroad; to acquire this property. It THIRD COURSE, thence on a also affirms the fact that curve of 1382.69 feet radius the project is necessary for to the left, an arc distance city connecting link purof 133.27 feet along said poses, and that the interWesterly line with a chord ests in real property set which bears North 33 de- forth herein are required by grees 52 minutes 01 second the City and the Kansas DeWest, 133.25 feet to said partment of TransportaNorth line; FOURTH tion. COURSE, thence North 88 degrees 19 minutes 06 sec- 6. That no right, title, or inonds East, 120.05 feet along terest in or to the oil and said North line to the POINT gas minerals, under or in OF BEGINNING. The above the lands described herein described tract contains is to be condemned. 0.32 acre, which includes 0.22 acre of existing right of 7. Reasonable ingress and way, resulting in an acqui- egress to the property resition of 0.10 acre, more or maining shall be afforded less. by Plaintiff’s contractor at all times during the period Tract 6 - 0032-01 of the temporary construction easements. Ingress Ames High LC, a Kansas and egress over and across Limited Liability Company, temporary construction owner, c/o James Hicks, easements will be mainresident agent, 2330 West tained at all times except 31st Street, Lawrence, Kan- during actual entrance consas 66044; Central National struction or reconstruction Bank, mortgage interest activities. In the event the holder, 800 Massachusetts, property has more than one Lawrence, Kansas 66044; entrance to be constructed Board of County Commis- or reconstructed, not more sioners of Douglas County, than one entrance to the Kansas, tax lien holder, c/o property will be closed for Paul Gilchrist, Courthouse, the construction or recon100 Massachusetts Street, struction of the entrance at Lawrence, Kansas 66044. any one time. Temporary surfacing will be applied A TEMPORARY EASEMENT and maintained to allow for the construction of an reasonable ingress and entrance over and upon a egress to the property durtract of land in Lot 3A, ing times of inclement Block 2, Firetree Estates weather. Phase I, a subdivision of Baldwin City, Douglas 8. The owners, tenants and County, Kansas, according easement holders may fully to the recorded plat use and enjoy the land thereof, situated in the within the temporary conSoutheast Quarter of Sec- struction easement, protion 33, Township 14 South, vided such use shall not inRange 20 East of the 6th terfere with the construcP.M., described as follows: tion of the improvement. COMMENCING at the South- All areas disturbed will be west corner of said Quarter restored by seeding or reSection; thence on an as- placement of sod, or the sumed bearing of North 88 placement of surfacing to a degrees 18 minutes 00 sec- condition as good as, or onds East, 1175.19 feet better than before. No part along the South line of said of any building or strucQuarter Section; thence ture, including any eaves, North 01 degree 44 minutes awnings or other overhang16 seconds West, 33.00 feet ing attachment, either to the Southwest corner of within or partly within temsaid Lot 3A and the POINT porary easements shall be OF BEGINNING; FIRST damaged or removed unCOURSE, thence continuing less specifically stated. AcNorth 01 degree 44 minutes cess to the property will be 16 seconds West, 31.96 feet maintained at all times exalong the West line of said cept during actual entrance Lot 3A; SECOND COURSE, construction or reconstructhence South 74 degrees 07 tion activities. minutes 02 seconds East, 72.90 feet; THIRD COURSE, WHEREFORE, Plaintiff thence South 01 degree 40 hereby respectfully prays minutes 56 seconds East, that the Court set a hearing 9.93 feet to the Northerly to consider this Verified Peright of way line of existing tition, and that at such Lot 12, Block 1, in DEERFIELD WOODS SUBDIVISION NO. 2, a subdivision in the City of Lawrence, Douglas County, Kansas, commonly known as 2900 Winston Drive, Lawrence, KS 66049 (the “Property”) and all those defendants who have not otherwise been served are required to plead to the Petition on or before the 11th day of October, 2012, in the District Court of Douglas County,Kansas. If you fail to plead, judgment and decree will be entered in due course upon the Petition. NOTICE OF PROCEEDING TO CONDEMN LAND FOR STATE HIGHWAY PURPOSES GMC 2008 Sierra W/T, regular cab, bought new here, serviced here! One owner, low miles, GM certified! Stk#10194 only $14,877. 2000 Toyota Tacoma 136K, 5-speed, AC, CD, Cruise, Save $7,500. View pictures at 785.856.0280 845 Iowa St. Lawrence, KS 66049 Lawrence (Published in the Lawrence Daily Journal-World August 31, 2012) The goods of William and April Colette were confiscated pursuant to a Writ of Eviction executed on April 9, 2012, regarding 1309 W. 4th, Apt. C., Lawrence, KS. Said personal property will be sold on Sept. 9, 2012, by the Landlord, Charles Gruber, for partial satisfaction of the rent and other monies owed. ________ (First published in the Lawrence Daily Journal-World Toyota 2009 Tacoma August 31, 2012) pickup. SR5, Pre-Runner, Double Cab, V6, Automatic, IN THE DISTRICT COURT OF 6 ft. Bed, Local One Owner, DOUGLAS COUNTY, KANSAS 45,850 miles, Excellent CIVIL DEPARTMENT Shape, $22,500.00, Dealer Financing Available. Bank of America, N.A. 785-691-8918 Plaintiff, vs. Lucy M. Turner; Asrie Turner; John Doe (Tenant/Occupant); Mary Doe (Tenant/Occupant); Unknown Spouse, if any, of Lucy M. Turner; Unknown Spouse, if any, of Asrie Turner, Defendants. 1993 Toyota T-100 Clean truck, 4X4, Single cab, Long bed, Manual transmission, Manual transfer case $8,000 Call 785-838-2327 LAIRD NOLLER HYUNDAI 2829 Iowa St. Lawrence Case No. 2012CV435 PURSUANT TO CHAPTER 26 KANSAS STATUTES ANNOTATED TITLE TO REAL ESTATE INVOLVED EMINENT DOMAIN PETITION Comes now Michael S. King, Secretary of Transportation for the State of Kansas, and for his cause alleges and states as follows: 1. Plaintiff is the duly-appointed Secretary of Transportation for the State of Kansas. The named defendants are hereby notified that on August 15, 2012, Michael S. King, Secretary of Transportation of the State of Kansas, filed a Petition in the District Court of Douglas County, Kansas, seeking the condemnation of certain lands and/or interest and/or rights therein described in the Petition. The Court has ordered that the Petition be considered by the Court on September 27, 2012, at 10:00 a.m., in the Douglas County Courthouse, Lawrence, Kansas. Lawrence Lawrence hearing the Court enter an Order finding from this Verified Petition that the Plaintiff has the power to exercise the right of eminent domain for the purposes stated herein; that the titles or easements to or upon lands, or interests or rights therein, and other property and rights described herein are necessary to carry out the Plaintiff’s lawful powers and duties; that three disinterested residents of Douglas County be appointed to view and appraise the value of the titles or easements to or upon lands or interest or rights therein and other property and rights described herein and to determine just compensation to the parties named herein; and for such further appropriate relief as the Court deems just and equitable. Prepared by: BARBARA W. RANKIN Chief Counsel /s/ Russell K. Ash RUSSELL K. ASH, No. 07555 GELENE SAVAGE, No. 15491 Michael S. King, Secretary of Transportation for the State of Kansas VERIFICATION STATE OF KANSAS COUNTY OF SHAWNEE ss: I, Michael S. King, Secretary of Transportation of the State of Kansas, being first duly sworn, state that I have read the foregoing Petition and that the facts stated therein are true and correct. /s/ Michael S. King Michael S. King Secretary of Transportation Subscribed and sworn to before me this 8th day of August, 2012. /s/ Peggy S. Hansen-Nagy Notary Public My Commission Expires: 03/12/2013. VERIFICATION STATE OF KANSAS COUNTY OF DOUGLAS ss: I, Ken Wagner, Mayor, of the City of Baldwin City, Douglas County, Kansas, being first duly sworn, state that I have read the foregoing Petition, approved the plans for work to be done in conjunction with this condemnation, and that the facts stated therein are true and correct. /s/ Ken Wagner Mayor Ken Wagner Subscribed and sworn to before me this 16th day of July, 2012. Darcy Higgins Notary Public My Commission Expires: August 5, 2015 ________ NEED TO SELL YOUR CAR? Notice of abandoned property: Robert Gandy’s abandoned things (several CRT televisions, art and craft supplies, miscellaneous furniture, & his “artwork”) will be sold or destroyed Sat., the 15th of Sept. For more info., contact Justin at junkmail.jat@gmail.com. BARBARA W. RANKIN ________ Chief Counsel (Published in the Lawrence Daily Journal-World August GELENE SAVAGE Managing Attorney 31, 2012) Case No. 12CV437 Court Number: 1 2010 Ford F-150 Platinum Fully Loaded with leather seats, Navigation, MyFordTouch with SYNC voice activation and low miles. $36,500 23rd & Alabama 843-3500 Lawrence sioners of Douglas County, minor or are in anywise under legal disability; The unknown officers, successors, trustees, creditors and assigns of such defendants as are existing, dissolved or dormant corporations, and any unknown persons in possession of the real property described herein, Defendants. 2. Pursuant to K.S.A. 68-404 and 68-406 Plaintiff has been delegated the statutory power and authority to designate, construct, maintain, design, locate and esNOTICE tablish highways in the Pursuant to the Fair Debt State of Kansas. Collection Practices Act, 15 U.S.C. §1692c(b), no infor- 3. Pursuant to K.S.A. 68-413 mation concerning the col- Plaintiff is authorized, in Dodge 2008 Grand Caralection of this debt may be the name of the state of van SXT, stow n’ go with given without the prior con- Kansas, to acquire by the swivel n’ go, alloy sent of the consumer given exercise of the right of emiwheels, leather heated directly to the debt collec- nent domain title or easeseats, sunroof, DVD, navtor or the express permis- ments to or upon any lands igation, stk#308381 only sion of a court of compe- or interest to or rights $18,715 tent jurisdiction. The debt therein and other property Dale Willey 785-843-5200 collector is attempting to and rights as more fully de collect a debt and any in- scribed in K.S.A. 68-413 as formation obtained will be may be necessary for the construction, reconstrucused for that purpose. tion, improvement, maintenance or drainage of the Prepared By: state highway system. South & Associates, P.C. Megan Cello (KS # 24167) 6363 College Blvd., Suite 100 4. Pursuant to his lawful powers and duties as Overland Park, KS 66211 stated herein Plaintiff is un(913)663-7600 dertaking a highway im(913)663-7899 (Fax) provement project upon the Attorneys For Plaintiff state highway system (147671) (designated as KDOT Proj________ ect No. 56-23 KA-0032-01) in County Kansas, (Published in the Lawrence Douglas 2007 Ford E-350 Super Daily Journal-World August and has determined that in Duty van order for him to carry out 31, 2012) with V8 power. 15 passuch project and his lawful senger with dual DVD IN THE DISTRICT COURT OF powers and duties it is necplayers and navigation. DOUGLAS COUNTY, KANSAS essary for him to hereby Hard to find. $15,000 acquire, in the name of the 23rd & Alabama 843-3500 state of Kansas and by the IN THE MATTER OF THE exercise of his power of CONDEMNATION OF LAND eminent domain and pursuFOR STATE HIGHWAY ant to the procedures set PURPOSES, forth in the Kansas Eminent Procedure Act, MICHAEL S. KING, Secretary Domain of Transportation for the K.S.A. 26-501, et seq., the following titles, easements, State of Kansas, or other interests to or Plaintiff, upon the following dev. Midland Railway Historical scribed lands located in Association, a Missouri Douglas County, Kansas: Corporation, owner, c/o MiTract 2 - 0032-01 chael Pratt, resident agent, 2997 Riley Terrace, Wellsville, Kansas 66092; Ames Midland Railway Historical a Missouri High LC, a Kansas Limited Association, Liability Company, owner, Corporation, owner, c/o Michael Pratt, resident agent, c/o James Hicks, resident 2001 Honda Odyssey agent, 2330 West 31st 2997 Riley Terrace, WellsEX-153K, AT, AC, CD, Street, Lawrence, Kansas ville, Kansas 66092; Board Leather, Power Doors, 66044; Central National of County Commissioners 2-owner, Save $7,500 . Bank, mortgage interest of Douglas County, Kansas, View pictures at holder, 800 Massachusetts, tax lien holder, c/o Paul Lawrence, Kansas 66044; Gilchrist, Courthouse, 100 785.856.0280 Board of County Commis845 Iowa St. sioners of Douglas County, Lawrence, KS 66049 miNissan 2008 Quest 3.5 SL nor or are in anywise under fwd, power sliding door, legal disability; The unsteering wheel controls, known officers, successors, power equipment, trustees, creditors and assigns of such defendants stk#652591 only $17,426. as are existing, dissolved Dale Willey 785-843-5200 or dormant corporations, and any unknown persons Pontiac 2006 Montana EXT in possession of the real SV6. Nice loaded family property described herein, Defendants. van in nice navy blue with clean gray cloth. DVD, dual Case No. 2012CV435 sliding doors, rear air, new tires, and MUCH more. Clean mini-van. See PURSUANT TO CHAPTER 26 KANSAS STATUTES website for photos. ANNOTATED Rueschhoff Automobiles rueschhoffautos.com TITLE TO REAL ESTATE 2441 W. 6th St. INVOLVED 785-856-6 6100 24/7 2010 Ford F-150 King Ranch 1-owner and low miles. Fully loaded with leather and navigation. Priced to sell. $36,000 23rd & Alabama 843-3500 2010 Ford F-150 One owner with factory 20” wheels. 5.4L Triton power and 4x4. Sharp truck. $31,775 23rd & Alabama 843-3500 Lawrence: 2012 Chevrolet Silverado Work truck with the V6 that saves on gas. Long bed and really low miles. $19,380 23rd & Alabama 843-3500 Dodge 2007 Ram 2500 Diesel, 4wd, one owner, crew cab, running boards, bed liner, power equipment, stk#104711 only $31,851. Dale Willey 785-843-5200 Lawrence Pursuant to K.S.A. Chapter 60 NOTICE OF SUIT THE STATE OF KANSAS, to the above-named defendants and the unknown heirs, executors, administrators, devisees, trustees, creditors and assigns of /s/ Russell K. Ash RUSSELL K. ASH, No. 07555 Staff Attorney ** IN THE DISTRICT COURT OF DOUGLAS COUNTY, KANSAS Create your ad in minutes today on SunflowerClassifieds.com Reach readers in print and online across Northeast Kansas! IN THE MATTER OF THE CONDEMNATION OF LAND FOR STATE HIGHWAY PURPOSES, MICHAEL S. KING, Secretary of Transportation for the State of Kansas, Plaintiff, v. Midland Railway Historical Association, a Missouri Corporation, owner, c/o Michael Pratt, resident agent, 2997 Riley Terrace, Wellsville, Kansas 66092; Ames High LC, a Kansas Limited Liability Company, owner, c/o James Hicks, resident agent, 2330 West 31st Street, Lawrence, Kansas 66044; Central National Bank, mortgage interest holder, 800 Massachusetts, Lawrence, Kansas 66044; Board of County Commis- 1-785-832-2222 or 1-866-823-8220
https://issuu.com/lawrencejournal-world/docs/ljw08-31-12
CC-MAIN-2017-22
en
refinedweb
Code: #include <stdio.h> main () { int unitssold[10]; int empindex; float basal[] = {158.00, 147.75, 315.00, 162.25, 220.00, 181.60, 376.90, 168.70, 293.00, 214.30} printf( "Please enter the Units Sold for Each of the 10 Sales Persons\n" ); for ( empindex = 0 ; empindex < 10 ; empindex ++ ) { scanf( "%d", &unitssold[empindex] ); } End of Code (For Now )) Hey guys im kinda new to arrays and functions i need this program to calculate wages based on Commission ie, the more certain emplyees sell the more money they make, i have to use a certain commission rate as follows, Number of Units sold Commission Rate 1 - 5 INclusive 3.80 6 - 10 Inclusive 5.40 11 - 15 Inclusive 8.60 > 15 15.20 Im just wondering how to implement this into the program, and also if the program so far is looking ok? any advice would be appreciatede guys thanks..
https://cboard.cprogramming.com/c-programming/36293-arrays-functions.html
CC-MAIN-2017-22
en
refinedweb
Hi all. I'm writing a program to convert degrees F to degrees C by calling user-defined functions. I'm trying to get the hang of it, but I'm definately doing something wrong here. Suggestions?? Code:#include <iostream> using namespace std; int GetFahrenheit(); double ComputeCentigrade(); double DisplayCentigrade(); int F; int main() { cout << "Enter temperature in Fahrenheit: "; cin >> F; cout << endl; cout << "Current temperature = " << GetFahrenheit() << endl; cout << "Current temperature = " << DisplayCentigrade() << endl; return 0; } int GetFahrenheit() { int F; return F; } double ComputeCentigrade() { double C; int F; C = (5/9) * (F-32); return C; } double DisplayCentigrade() { double C; return C; }
https://cboard.cprogramming.com/cplusplus-programming/67481-user-defined-functions.html
CC-MAIN-2017-22
en
refinedweb
Tip Try the Microsoft Azure Storage Explorer Microsoft Azure Storage Explorer is a free, standalone app from Microsoft that enables you to work visually with Azure Storage data on Windows, macOS, and Linux. Overview This article will show you how to perform common scenarios using File storage. The samples are written in Python and use the Microsoft Azure Storage SDK for Python. The scenarios covered include uploading, listing, downloading, and deleting files. share The FileService object lets you work with shares, directories and files. The following code creates a FileService object. Add the following near the top of any Python file in which you wish to programmatically access Azure Storage. from azure.storage.file import FileService The following code creates a FileService object using the storage account name and account key. Replace 'myaccount' and 'mykey' with your account name and key. file_service = **FileService** (account_name='myaccount', account_key='mykey') In the following code example, you can use a FileService object to create the share if it doesn't exist. file_service.create_share('myshare') Upload a file into a share An Azure File Storage')) How to: Create a Directory You can also organize storage by putting files inside sub-directories instead of having all of them in the root directory. The Azure file storage service allows you to create as many directories as your account will allow. The code below will create a sub-directory named sampledir under the root directory. file_service.create_directory('myshare', 'sampledir') How to: List files and directories in a) Download files') Next steps Now that you've learned the basics of File storage, follow these links to learn more.
https://docs.microsoft.com/en-us/azure/storage/storage-python-how-to-use-file-storage
CC-MAIN-2017-22
en
refinedweb
mprpc 0.1.2 A fast MessagePack RPC library mprpc is a lightweight MessagePack RPC library. It enables you to easily build a distributed server-side system by writing a small amount of code. It is built on top of gevent and MessagePack. Installation To install mprpc, simply: $ pip install Cython $ pip install mprpc Alternatively, $ easy_install Cython $ easy_install mprpc Examples RPC server from gevent.server import StreamServer from mprpc import RPCServer class SumServer(RPCServer): def sum(self, x, y): return x + y server = StreamServer(('127.0.0.1', 6000), SumServer) server.serve_forever() RPC client from mprpc import RPCClient client = RPCClient('127.0.0.1', 6000) print client.call('sum', 1, 2) RPC client with connection pooling import gsocketpool.pool from mprpc import RPCPoolClient client_pool = gsocketpool.pool.Pool(RPCPoolClient, dict(host='127.0.0.1', port=6000)) with client_pool.connection() as client: print client.call('sum', 1, 2) Performance mprpc significantly outperforms the official MessagePack RPC (1.8x faster), which is built using Facebook’s Tornado and MessagePack, and ZeroRPC (14x faster), which is built using ZeroMQ and MessagePack. Results mprpc % python benchmarks/benchmark.py call: 9508 qps call_using_connection_pool: 10172 qps Official MesssagePack RPC % pip install msgpack-rpc-python % python benchmarks/benchmark_msgpackrpc_official.py call: 4976 qps ZeroRPC % pip install zerorpc % python benchmarks/benchmark_zerorpc.py call: 655 qps Documentation Documentation is available at. - Author: Studio Ousia - Keywords: rpc,msgpack,messagepack,msgpackrpc,messagepackrpc,messagepack rpc,gevent - License: Copyright 2013 Studio Ous: ousia, ikuyamada - DOAP record: mprpc-0.1.2.xml
https://pypi.python.org/pypi/mprpc/0.1.2
CC-MAIN-2017-22
en
refinedweb
Another common use for wildcards is with covariant returns. The same rules apply to covariant returns as assignments. If you want to return a more specific generic type in an overridden method, the declaring method must use wildcards: public interface NumberGenerator { public List<? extends Number> generate(); } public class FibonacciGenerator extends NumberGenerator { public List<Integer> generate() { ... } } If this were to use arrays, the interface could return Number[] and the implementation could return Integer[]. We've talked mostly about upper bounded wildcards. There is also a lower bounded wildcard. A List<? super Number> is a list whose exact "element type" is unknown, but it is M Number or a super type of Number. So it could be a List<Number> or a List<Object>. Lower bounded wildcards are not nearly as common as upper bounded wildcards. But when you need them, they are essential. List<? extends Number> readList = new ArrayList<Integer>(); Number n = readList.get(0); List<? super Number> writeList = new ArrayList<Object>(); writeList.add(new Integer(5)); The first list is a list that you can read numbers from. The second list is a list that you can write numbers to. Finally, the List<?> is a list of anything and is almost the same as List<? extends Object>. You can always read Objects, but you cannot write to the list. To summarize, wildcards are great for hiding implementation details from callers as we saw a few sections back, but even though lower bounded wildcards appear to provide read-only access, they do not, due to non-generic methods such as remove(int position). If you want a truly immutable collection, use the methods on java.util.Collections, like unmodifiableList(). Be aware of wildcards when writing APIs. In general, you should try to use wildcards when passing generic types. It makes the API accessible to a wider range of callers. In this example, by accepting a List<? extends Number> instead of List<Number>, the method below can be called with many different types of Lists: void removeNegatives(List<? extends Number> list); Now we'll cover constructing your own generic types. We'll show example idioms where type safety can be improved by using generics, as well as common problems that occur when trying to implement generic types. This first example of a generic class is a collection-like example. Pair has two type parameters, and the fields are instances of the types: public final class Pair<A,B> { public final A first; public final B second; public Pair(A first, B second) { this.first = first; this.second = second; } } This makes it possible to return two items from a method without having to write special-purpose classes for each two-type combo. The other thing you could have done is return Object[], which isn't type-safe or pretty. In the usage below, we return a File and a Boolean from a method. The client of the method can use the fields directly without casting: public Pair<File,Boolean> getFileAndWriteStatus(String path){ // create file and status return new Pair<File,Boolean>(file, status); } Pair<File,Boolean> result = getFileAndWriteStatus("..."); File f = result.first; boolean writeable = result.second; In this example generics are used for additional compile-time safety. By parameterizing the DBFactory class by the type of Peer it creates, you are forcing Factory subclasses to return a specific subtype of Peer: public abstract class DBFactory<T extends DBPeer> { protected abstract T createEmptyPeer(); public List<T> get(String constraint) { List<T> peers = new ArrayList<T>(); // database magic return peers; } } By implementing DBFactory<Customer> the CustomerFactory is forced to return a Customer from createEmptyPeer(): public class CustomerFactory extends DBFactory<Customer>{ public Customer createEmptyPeer() { return new Customer(); } } Whenever you want to place constraints on a generic type between parameters or a parameter and a return type, you probably want to use a generic method. For example, if you write a reverse function that reverses in place, you don't need a generic method. However, if you want reverse to return a new List, you'd like the element type of the new List to be the same as the List that was passed in. In that case, you need a generic method: <T> List<T> reverse(List<T> list) When implementing a generic class, you may want to construct an array, T[]. Because generics is implemented by erasure, this is not allowed. You may try to cast an Object[] to T[]. This is not safe. The solution, courtesy of the generics tutorial, is to use a "Type Token." By adding a Class<T> parameter to the constructor, you force clients to supply the correct class object for the type parameter of the class: public class ArrayExample<T> { private Class<T> clazz; public ArrayExample(Class<T> clazz) { this.clazz = clazz; } public T[] getArray(int size) { return (T[])Array.newInstance(clazz, size); } } To construct an ArrayExample<String>, the client would have to pass String.class to the constructor because the type of String.class is Class<String>. Having the class objects makes it possible then to construct an array with exactly the right element type. In summary, the new language features make for a substantial change to Java. By understanding when and how to use them, you'll write better code. Jess Garms is the Javelin compiler team lead at BEA Systems. Prior to that, Jess worked on BEA's Java IDE, WebLogic Workshop. Additionally, he has a great deal of experience with cryptography, and co-authored Professional Java Security, published by Wrox Press. Tim Hanson is the Javelin compiler architect at BEA Systems. Tim developed much of BEA's Java compiler - one of the earliest 1.5-compliant implementations. He has written numerous other compilers, including a CORBA/IDL compiler while at IBM, and an XQuery compiler.
http://www.oracle.com/technetwork/articles/entarch/java-5-features4-082549.html
CC-MAIN-2017-22
en
refinedweb
#include <stdint.h>int speakerPin = 3; // Can be either 3 or 11, two PWM outputs connected to Timer 2void startPlayback(){ pinMode(speakerPin, OUTPUT); // Set up Timer 2 to do pulse width modulation on the speaker // pin. // Use internal clock (datasheet p.160) ASSR &= ~(_BV(EXCLK) | _BV(AS2)); // Set fast PWM mode (p.157) TCCR2A |= _BV(WGM21) | _BV(WGM20); TCCR2B &= ~_BV(WGM22); // Do non-inverting PWM on pin OC2B (p.155) // On the Arduino this is pin 3. TCCR2A = (TCCR2A | _BV(COM2B1)) & ~_BV(COM2B0); TCCR2A &= ~(_BV(COM2A1) | _BV(COM2A0)); // No prescaler (p.158) TCCR2B = (TCCR2B & ~(_BV(CS12) | _BV(CS11))) | _BV(CS10);}void setup(){ pinMode(ledPin, OUTPUT); OCR2B=0; startPlayback(); ADMUX = (1<<REFS0)|(1<<ADLAR); ADCSRA = (1<<ADEN)|(1<<ADPS2);}void loop(){ // start single convertion // write '1? to ADSC ADCSRA |= (1<<ADSC); // wait for conversion to complete // ADSC becomes '0? again // till then, run loop continuously while(ADCSRA & (1<<ADSC)); OCR2B=ADCH;} uint8_t temp=ADCL; uint16_t output=temp|(ADCH<<8); ADCL|(ADCH<<8) receiving an noisy audio signal from a mic and filter such signal and then output a clear audio signal The first nonlinear technique is used for reducing wideband noise in speech signals. This type of noise includes: magnetic tape hiss, electronic noise in analog circuits, wind blowing by microphones, cheering crowds, etc. Linear filtering is of little use, because the frequencies in the noise completely overlap the frequencies in the voice signal, both covering the range from 200 hertz to 3.2 kHz. How can two signals be separated when they overlap in both the time domain and the frequency domain?Here's how it is done. In a short segment of speech, the amplitude of the frequency components are greatly unequal. As an example, Fig. 22-10a illustrates the frequency spectrum of a 16 millisecond segment of speech (i.e., 128 samples at an 8 kHz sampling rate). Most of the signal is contained in a few large amplitude frequencies. In contrast, (b) illustrates the spectrum when only random noise is present; it is very irregular, but more uniformly distributed at a low amplitude.Now the key concept: if both signal and noise are present, the two can be partially separated by looking at the amplitude of each frequency. If the amplitude is large, it is probably mostly signal, and should therefore be retained. If the amplitude is small, it can be attributed to mostly noise, and should therefore be discarded, i.e., set to zero. Mid-size frequency components are adjusted in some smooth manner between the two extremes.
http://forum.arduino.cc/index.php?topic=173077.msg1286170
CC-MAIN-2017-22
en
refinedweb
To check if a number is a power of two, the instruction BLSR from BMI1 extension can be used. The instruction resets the least set bit of a number, i.e. calculates (x - 1) and x. A sample C procedure that use the bit-trick: bool is_power_of_two(int x) { return (x != 0) && (x == ((x - 1) & x)); } If a number has exactly one bit set then BLSR yields zero. However, when input of BLSR is zero, the instruction also yields zero. Fortunately, BLSR sets CPU flags in following way: Thanks to that we can properly handle all cases. Below is an assembly code: blsr %eax, %eax // result = (ZF == 1) and (CF == 0) setz %al // al = ZF sbb $0, %al // al = ZF - CF movzx %al, %eax // cast Sample program is available.
http://0x80.pl/notesen/2018-03-11-is-power-of-two-bmi1.html
CC-MAIN-2020-16
en
refinedweb
Vector3 geometrycentral::Vector3 is the basic 3D vector type in geometry central. There are many like it, but this one is ours. #include "geometrycentral/vector3.h" Construction Vector3 is a POD type, so you should use brace-initialization sytax: #include "geometrycentral/vector3.h using namespace geometrycentral; Vector3 myVec{3.8, 2.9, 1.1}; //create myVec = Vector3{1.1, 2.2, 3.3}; // reassign Factory methods can construct a few common values: static Vector3 Vector3::zero() Returns the zero vector static Vector3 Vector3::constant(double c) Returns a vector with all components set to c static Vector3 Vector3::infinity() Returns the infinite vector (\infty, \infty, \infty). static Vector3 Vector3::undefined() Returns the undefined vector (NaN, NaN, NaN). Access The three elements of the vector can be accessed as vec.x and vec.y and vec.z. Alternately, the elements can be indexed as vec[0] and vec[1] and vec[2]. Conversion Vector3::operator<<() Vector3 can be serialized. Vector3 v{1.2, 3.4, 5.6}; std::cout << v << std::endl; // prints something like: <1.2, 3.4, 5.6> Arithmetic Vector3 supports the element-wise addition, subtraction, and scalar multiplication you would probably expect. Member operations These methods do not change the underlying Vector3, but return a new Vector3. Vector3 vec{1., 2., 3.}; vec.normalize(); // does nothing vec = vec.normalize(); // much better Vector3 Vector3::normalize() Returns a unit-norm vector pointing in the same direction. If the input is the zero vector, the result will contain NaNs. Vector3 Vector3::rotateAround(Vector3 axis, double theta) Rotate the vector by angle \theta around axis in the right-handed direction. axis need not be a unit vector. Vector3 Vector3::removeComponent(Vector3 unitDir) Removes any component of this vector in the direction unitDir, making the result orthogonal to unitDir. As the name suggests, unitDir must be a unit vector. double Vector3::norm() Returns the magnitude of the vector. Also available as norm(v). double Vector3::norm2() Returns the squared magnitude of the vector. Also available as norm2(v). Function operations These operations do not change the vector on which they are called. double norm(Vector3 v) Returns the magnitude of the vector. Also available as v.norm(). double norm2(Vector3 v) Returns the squared magnitude of the vector. Also available as v.norm2(). Vector3 unit(Vector3 v) Returns normalized copy of the vector. double dot(Vector3 u, Vector3 v) Returns the dot product between two vectors. double sum(Vector3 u) Returns the sum of the coordinates of a vector Vector3 cross(Vector3 u, Vector3 v) Returns the cross product between two vectors. double angle(Vector3 u, Vector3 v) Returns the angle between two not-necessarily-unit vectors. Output in the range [0, \pi]. double angleInPlane(Vector3 u, Vector3 v, Vector3 normal) Returns the signed angle between two not-necessarily-unit vectors, measured in the plane defined by normal (which need not be a unit vector). Output is in the range [-\pi, \pi], as in atan2. Vector3 clamp(Vector3 val, Vector3 low, Vector3 high) Returns returns a a vector where each component has been clamped to be between the corresponding compnents of low and high. Vector3 componentwiseMin(Vector3 u, Vector3 v) Returns a new vector, each component of which is the minimum of that component in u and v. Vector3 componentwiseMax(Vector3 u, Vector3 v) Returns a new vector, each component of which is the maximum of that component in u and v. Properties bool isfinite(Vector3 u) Returns true if all of the components of the vector are finite. Note: this function is intentionally not camel-cased out of solidarity with std::isfinite(). Also available as u.isFinite(). bool isDefined(Vector3 u) Returns true if all of the components of the vector are not NaN. Also available as u.isDefined().
https://geometry-central.net/utilities/vector3/
CC-MAIN-2020-16
en
refinedweb
import "github.com/coreos/etcd/clientv3/balancer" Package balancer implements client balancer. RegisterBuilder creates and registers a builder. Since this function calls balancer.Register, it must be invoked at initialization time. type Balancer interface { // Balancer is called on specified client connection. Client initiates gRPC // connection with "grpc.Dial(addr, grpc.WithBalancerName)", and then those resolved // addresses are passed to "grpc/balancer.Balancer.HandleResolvedAddrs". // For each resolved address, balancer calls "balancer.ClientConn.NewSubConn". // "grpc/balancer.Balancer.HandleSubConnStateChange" is called when connectivity state // changes, thus requires failover logic in this method. balancer.Balancer // Picker calls "Pick" for every client request. picker.Picker } Balancer defines client balancer interface. type Config struct { // Policy configures balancer policy. Policy picker.Policy // Name defines an additional name for balancer. // Useful for balancer testing to avoid register conflicts. // If empty, defaults to policy name. Name string // Logger configures balancer logging. // If nil, logs are discarded. Logger *zap.Logger } Config defines balancer configurations. Package balancer imports 15 packages (graph) and is imported by 26 packages. Updated 2020-03-14. Refresh now. Tools for package owners.
https://godoc.org/github.com/coreos/etcd/clientv3/balancer
CC-MAIN-2020-16
en
refinedweb
_This article focuses on the JDK9 feature Reactive Stream Responsive Stream, describes what the Reactive Stream is and what the backpressure is, and the interface and two use cases for Reactive Stream provided in JDK9, including how to use Processor. _1.Reactive Stream concept _Reactive Stream is a set of standards introduced by JDK9 and a set of data processing specifications based on publish/subscribe mode.Responsive streaming has been an initiative since 2013 to provide an asynchronous stream processing standard for non-blocking backpressure.It is designed to solve the problem of handling element streams -- how to pass element streams from the publisher to the subscriber without blocking the publisher or having the subscriber have an unlimited buffer or discard.More precisely, Reactive streams are designed to "find the smallest set of interfaces, methods, and protocols that describe the operations and entities necessary to achieve the goal of asynchronous streaming of data in a non-blocking backpressure manner". _Reactive Stream specification was born, which defines the following four interfaces: The Subscription interface defines how publishers and subscribers are connected The Publisher<T>interface defines the publisher's method The Subscriber<T>interface defines the method of the subscriber Processor<T,R>Interfaces define processors _Since the Reactive Stream specification was born, RxJava has implemented the Reactive Stream specification since RxJava 2, and the Reactor framework provided by Spring (the basis of WebFlux) has also implemented the Reactive Stream specification successively. _The following diagram shows the interaction between subscribers and publishers _2. back pressure concept _If producers send more messages than consumers can handle, consumers may be forced to keep catching them, consuming more and more resources and burying the potential risk of collapse.To prevent this, a mechanism is needed that allows consumers to notify producers and slow down message generation.Producers can adopt a variety of strategies to achieve this requirement, a mechanism called backpressure. _Simply put, it is - Backpressure refers to the interaction between publishers and subscribers - Subscribers can tell the publisher how much data they need, adjust data traffic, and not cause the publisher to publish too much data, which can waste data or overwhelm subscribers _3. Implementation of Reactive Stream specification in JDK9 _The implementation specification for Reactive Stream in JDK9 is often referred to as the Flow API, which implements responsive streaming through the java.util.concurrent.Flow and java.util.concurrent.SubmissionPublisher classes _In JDK9, Reactive Stream's primary interface declaration In the Flow class, the Flow class defines four nested static interfaces that are used to build components of traffic control in which the publisher generates one or more data items for subscribers: - Publisher: Publisher, Producer of Data Items - Subscriber: Data item subscribers, consumers - Subscription: The relationship between publisher and subscriber, subscription token - Processor: Data Processor _3.1 Publisher Publisher publishes data streams to registered Subscriber s.It typically publishes items to subscribers asynchronously using Executor.Publisher needs to ensure that the ubscriber method for each subscription is called strictly in sequence. subscribe: subscriber subscribes to publisher @FunctionalInterface public static interface Flow.Publisher<T> { public void subscribe(Subscriber<? super T> subscriber); } _3.2 Subscriber Subscriber subscribes to Publisher's streams and accepts callbacks.If Subscriber does not make a request, it will not receive data.For a given Subscription, the method of calling Subscriber is strictly sequential. - onSubscribe: The publisher calls the subscriber's method to deliver the subscription asynchronously, which is executed after the publisher.subscribe method is called - onNext: The publisher calls this method to pass data to the subscriber - onError: Call this method when Publisher or Subscriber encounters an unrecoverable error, and then no other method is called - onComplete: Call this method when the data has been sent and no errors have caused the subscription to terminate, then no other methods will be called _3.3 Subscription Contract Subscription Subscription is used to connect Publisher to Subscriber.Subscriber receives items only when requested and can unsubscribe through Subscription.Subscription has two main methods: Request: Subscribers call this method to request data cancel: Subscribers invoke this method to unsubscribe and dissociate subscribers from publishers public static interface Flow.Subscription { public void request(long n); public void cancel(); } _3.4 Processor _Processor is located between Publisher and Subscriber for data conversion.Multiple Processors can be used together to form a processing chain in which the results of the last processor are sent to Subscriber.The JDK does not provide any specific processors.Processors are both subscribers and publishers, and interface definitions inherit both as subscribers and as publishers, receiving data as subscribers, processing it, and publishing it as publishers. /** * A component that acts as both a Subscriber and Publisher. * * [@param]() <T> the subscribed item type * [@param]() <R> the published item type */ public static interface Processor<T,R> extends Subscriber<T>, Publisher<R> { } _4. Reactive Stream (Flow API) specification call flow in JDK9 _Publisher is the publisher who can emit elements, and Subscriber is the subscriber who receives elements and responds.When the subscribe method in Publisher is executed, the publisher calls back the subscriber's onSubscribe method, in which the subscriber usually requests n data from the publisher with the help of an incoming Subscription.The publisher then sends out up to N data to the subscriber by constantly calling the subscriber's onNext method.If all the data is sent out, onComplete is called to inform subscribers that the stream has been sent out; if an error occurs, error data is sent through onError and the stream is terminated as well. _Subscription is the "link" (contract) between Publisher and Subscriber.Because when the publisher calls the subscribe method to register the subscriber, the Subscription object is passed in through the subscriber's callback method onSubscribe, and the subscriber can then "ask" the publisher for data using the request method of the Subscription object.The mechanism of back pressure is based on this. _5. Case-Response Basic Use Cases _5.1 The following code briefly demonstrates SubmissionPublisher and the basic usage of this publish-subscribe framework: _Note to use versions above JDK9 /** * [@author]() johnny * [@create]() 2020-02-24 5:44 p.m. **/ @Slf4j public class ReactiveStreamTest { public static void main(String[] args) throws InterruptedException { //1. Create Producer Publisher JDK9's own implementation of the Publisher interface SubmissionPublisher<Integer> publisher = new SubmissionPublisher<>(); //2. Creating Subscriber requires implementing internal methods on your own Flow.Subscriber<Integer> subscriber = new Flow.Subscriber<>() { private Flow.Subscription subscription; @Override public void onSubscribe(Flow.Subscription subscription) { this.subscription = subscription; System.out.println("Subscription succeeded."); subscription.request(1); System.out.println("Request a data in a subscription method"); } @Override public void onNext(Integer item) { log.info("[onNext Receive data item : {}] ", item); subscription.request(1); } @Override public void onError(Throwable throwable) { log.info("[onError Exception occurred)"); subscription.cancel(); } @Override public void onComplete() { log.info("[onComplete All data receipt completed)"); } }; //3.Publishers and subscribers establish a subscription relationship by calling back the subscriber's onSubscribe method to pass in the subscription contract publisher.subscribe(subscriber); //4. Publisher Generates Data for (int i = 1; i <= 5; i++) { log.info("[production data {} ]", i ); //submit is a blocking method, which calls the subscriber's onNext method publisher.submit(i); } //5. When publisher data has been published, close the send and the subscriber's onComplete method will be callback publisher.close(); //Main thread sleeps for a while Thread.currentThread().join(100000); } } _Print Output Results _It looks like we can't see what Reactive Stream does, but the key point is publisher.submit(i); submit is a blocking method Let's modify the code a little 1. Add time-consuming operations to onNext to simulate business time-consuming logic 2. Increase the number of publishers publishing data to simulate real-world unlimited data @Override public void onNext(Integer item) { log.info("[onNext Receive data item : {}] ", item); try { TimeUnit.SECONDS.sleep(1); } catch (InterruptedException e) { e.printStackTrace(); } subscription.request(1); } //Publisher Generates Data for (int i = 1; i <= 1000; i++) { log.info("[production data {} ]", i ); //submit is a blocking method, which calls the subscriber's onNext method publisher.submit(i); } _Direct-view printing _You will find that the publisher will stop production after generating data for 256 because the publisher.submit(i) method is blocked. There is an internal buffer array with a maximum capacity of 256, only if the subscription.request (1) is sent by the subscriber;After a request, the onNext method takes the data out of the buffer array sequentially and passes it to the subscriber for processing. When the subscription.request(1) method is called, the publisher discovers that the array is not full to reproduce the data, which prevents the producer from generating too much data at once and crushing the subscriber, thereby implementing a backpressure mechanism _6. Case 2 Responsive Use Case with Processor _6.1 Create Custom Processor package com.johnny.webflux.webfluxlearn.reactivestream; import lombok.extern.slf4j.Slf4j; import java.util.concurrent.Flow; import java.util.concurrent.SubmissionPublisher; /** * Custom Processor * * @author johnny * @create 2020-02-25 1:56 p.m. **/ @Slf4j public class MyProcessor extends SubmissionPublisher<Integer> implements Flow.Processor<Integer, Integer> { private Flow.Subscription subscription; @Override public void onSubscribe(Flow.Subscription subscription) { log.info("[Processor Receive Subscription Request)"); //Save the subscription relationship and use it to correspond to the publisher this.subscription = subscription; this.subscription.request(1); } @Override public void onNext(Integer item) { log.info("[onNext Receive publisher data : {} ]", item); //Do business processing. if (item % 2 == 0) { //Filter even numbers sent to subscribers this.submit(item); } this.subscription.request(1); } @Override public void onError(Throwable throwable) { // We can tell the publisher that we won't accept the data later this.subscription.cancel(); } @Override public void onComplete() { log.info("[Processor Finished)"); this.close(); } } _6.2 Run demo to associate publisher with Processor and subscriber package com.johnny.webflux.webfluxlearn.reactivestream; import lombok.extern.slf4j.Slf4j; import java.util.concurrent.Flow; import java.util.concurrent.SubmissionPublisher; import java.util.concurrent.TimeUnit; /** * Case with Processor * * @author johnny * @create 2020-02-25 2:17 p.m. **/ @Slf4j public class ProcessorDemo { public static void main(String[] args) throws InterruptedException { //Create Publisher SubmissionPublisher<Integer> publisher = new SubmissionPublisher<>(); //Creating a Processor is both a publisher and a subscriber MyProcessor myProcessor = new MyProcessor(); //Create Final Subscriber Flow.Subscriber<Integer> subscriber = new Flow.Subscriber<>() { private Flow.Subscription subscription; @Override public void onSubscribe(Flow.Subscription subscription) { this.subscription = subscription; this.subscription.request(1); } @Override public void onNext(Integer item) { log.info("[onNext from Processor Receive filtered data item : {}] ", item); this.subscription.request(1); } @Override public void onError(Throwable throwable) { log.info("[onError Exception occurred)"); subscription.cancel(); } @Override public void onComplete() { log.info("[onComplete All data receipt completed)"); } }; //Establish a relationship between the publisher and the processor, where the processor acts as a subscriber publisher.subscribe(myProcessor); //Establish a relationship between the processor and the subscriber where the processor acts myProcessor.subscribe(subscriber); //Publisher Publishes Data publisher.submit(1); publisher.submit(2); publisher.submit(3); publisher.submit(4); publisher.close(); TimeUnit.SECONDS.sleep(2); } } _7. Summary _This article focuses on the JDK9 feature Reactive Stream Responsive Stream, describes what the Reactive Stream is and what the backpressure is, and the interface and two use cases for Reactive Stream provided in JDK9, including how to use Processor. _Just pay attention to the four interfaces provided by JDK9 and the internal methods. Knock through the code for the case is actually a very simple process to refuel!!! Personal blogging system: Welcome! This article is published by blog OpenWrite Release!
https://programmer.group/jdk9-new-feature-reactive-stream-responsive-stream.html
CC-MAIN-2020-16
en
refinedweb
Release helper script which offers a simple release process Project description Summary install Download and unzip this package next to your other packages in your local svn folder structure. After that, install the p01.releaser package by running the following commands on linux: python bootstrap.py bin/buildout windows: python bootstrap.py binbuildout.exe release You can use the release method with the following command for make a new or next release. nix: bin/release <package-name> windows: binrelease.exe <package-name> With this command the release script will do the following for the package with the given name: - check for pending local modification - find existing versions - get next version based on options (-n, –next-version) - guess next version if nothing defined in options - ask for confirm guessed version or set explicit/initial version - ask for CHANGES.txt release text confirmation if already exist - or offer inplace CHANGES.txt editing if empty confirmed After this, the srcipt will start an automated build process and abort on any error. Note an error could end in partial commited svn data or a missing release file. But this should be simple to check and correct. The steps are: - update version in CHANGES.txt if not already updated during editing - update version in setup.py - commit version change (local pkg dir) - create release based on setup.py (local pkg dir) - ensure tags folder if new package get release - tag package (svn cp tags/pkgName/version) - guess next release version - add next version and unreleased marker in CHANGES.txt - add next version including dev marker in setup.py - commit setup.py and CHANGES.txt dev marker update Now you are done and the release should be ready. in short In short, the releae script should normal only do the following steps: - ask for new guessed version confirmation - ask for CHANGES.txt confirmation or offer editing and the release should just start. credits This package is a kind of simple version of keas.build for one package. The keas.build package offers support for build several release based on configuration files. This is usefull if you need to make several releases based on deifferent packages but not for release the package itself. README This package provides a release helper script which can get used for svn based repository development. The script will do all the steps which are required forrelease a package and add a dev marker if done. A new package release will get uploaded to the right pypi based on the package url. If authentication is required, the script will find them in your HOME/.pypirc configuration file. This means there is no configuration required if your package meta data is correct defined and your “python setup.py sdist upload” command works. Requirement Before using the script make sure the following requirements are fine: - correct <HOME>/.pypirc setup - pypi package server tweaks in setup.py (see Server Lock below) - working "python setup.py sdist upload" command - correct meta data (url, version) in <package>/setup.py - existing CHANGES.txt file in your package Setup You can setup the p01.releaser as a buildout part using the offered entry_point. See setup.py. But I recommend not using the script as a buildout part in your package because it will include the part in your deployment. The recommended way to use the script is to link the p01.releaser package as an svn external in your package <root> next to your other packages. It doesn’t matter which svn layout structure you are using. The release script will automaticaly detect the svn repository layout and find the relevant folders. With such a setup, you can go to the p01.releaser package and call the release command. Of corse, you have to run: python bootstrap.py bin/buildout before you can use the method: bin/release The releaser script will find the correct package and tag folder based on your svn layout. See below for more information about the common svn repository layout structure. Note The release method will only release the package if something changed since the last release. The release method will also not start the release process if there is pending (not commited) code in your package. And the release method supports you by adding comments to the CHANGES.txt. SVN We support 2 kind of svn layout. The first layout is the default layout used for independent python libraries. Each package provdies own branches, tags and trunk folders: - <root> (svn layout detection rule: can't use trunk as name) | |- p01.releaser (cwd location) | | | - bin | | | - releaser.py (releaser.exe) | |- package1 | | | - branches | | | - tags | | | | | - 0.5.0 (version) | | | |- trunk | | | - src ... | - package1 | - branches | - tags | | | - 0.5.0 (version) | |- trunk | - src ... The second svn layout is used for frameworks or other group of packages. Each package is located in the same trunk folder and they share branches and tags folders. This layout provides the option to simply tag all packages in one step: - <root> | - branches | - tags | | | - package1 | | | - 0.5.0 (version) | - trunk | |- p01.releaser (cwd location) | | | - bin | | | - releaser.py (releaser.exe) | |- package1 | | | - src .. | - package2 | - src .. Server Lock The p01.releaser script will upload a relase to the pypi server found based on the <HOME>/.pypirc information. This should prevent that a release accidently get uploaded to the official public pypi server. But remember, the package meta data in <package>/setup.py must proide te correct url. And if you start the release process by hand with the command “python.setup.py sdist upload” you will release to the public pypi server which is probably not what you want. Our solution which we use for private packages is the following. We use a mypypi server and a locker.py script in each of our private packages. This script provides the following content: import sys import os.path from ConfigParser import ConfigParser #---[ repository locking ]----------------------------------------------------- def getRepository(name): """Return repository server defined in .pypirc file""" server = None # find repository in .pypirc file rc = os.path.join(os.path.expanduser('~'), '.pypirc') if os.path.exists(rc): config = ConfigParser() config.read(rc) if 'distutils' in config.sections(): # let's get the list of servers index_servers = config.get('distutils', 'index-servers') _servers = [s.strip() for s in index_servers.split('\n') if s.strip() != ''] for srv in _servers: if srv == name: repos = config.get(srv, 'repository') print "Found repository %s for %s in '%s'" % (repos,name,rc) server = repos break if not server: print "No repository for %s found in '%s'" % (name, rc) sys.exit(1) else: return server def lockRelease(name): """Lock repository if we use the register or upload command""" COMMANDS_WATCHED = ('register', 'upload') changed = False server = None for command in COMMANDS_WATCHED: if command in sys.argv: # now get the server from pypirc if server is None: server = getRepository(name) # found one command, check for -r or --repository commandpos = sys.argv.index(command) i = commandpos+1 repo = None while i<len(sys.argv) and sys.argv[i].startswith('-'): # check all following options (not commands) if (sys.argv[i] == '-r') or (sys.argv[i] == '--repository'): #next one is the repository itself try: repo = sys.argv[i+1] if repo.lower() != server.lower(): print "You tried to %s to %s, while this package "\ "is locked to %s" % (command, repo, server) sys.exit(1) else: #repo OK pass except IndexError: #end of args pass i += 1 if repo is None: #no repo found for the command print "Adding repository %s to the command %s" % ( server, command ) sys.argv[commandpos+1:commandpos+1] = ['-r', server] changed = True if changed: print "Final command: %s" % (' '.join(sys.argv)) With this locker.py script, you can simply lock the release to an own pypi server with the following command in your setup.py file: import locker locker.lockRelease("projekt01") The single lockRelease method argument must be an existing index-servers name defined in your <HOME>/.pypirc file. The .pypirc file could look like: [distutils] index-servers = pypi mypypi [pypi] repository: username:<username> password:<password> [mypypi] repository: username:<username> password:<password> This locker.py script concept is a seatbelt an prevents any release file upload to a wrong pypi server with or without the p01.releaser scipt. Remember the releaser script will find it’s correct server without this script. But it’s allwayys a good idea to backup the concept if you have important libraries. Issues Just like to remember that distutils is broken because of a bad re pattern. It is not possible to include buildout.cfg or other files starting with build on windows. This is only relevant if you need to include additional package data with include_package_data=True. After patching your pyhton installation it should be fine to include a MANIFST.in file with: include buildout.cfg see: CHANGES 0.6.0 (2012-11-16) - added comment about distutils issue - added strict connection error handling - implemented checking externals - implemented better edit option - improve tests, fix test condition - fix changed marker - replace CHANGES.txt wrapper class ChangeDoc with a simpler implementation and API 0.5.4 (2011-08-27) - new version did not get added to CAHNGES.txt before release 0.5.3 (2011-08-27) - bugfix broken back to dev step 0.5.2 (2011-08-27) - improve version/date parsing. Something like this window.open(‘’) was parsed as version headline - skip inline editing, just open the CHANGES.txt file and abort. I will probably bring back the CHANGES.txt fiel editing back if I’ll find a way to open the file in an editor an block the subprocess till the editor will get closed. This is not so simple because opening a file in an already open editor will not block a subprocess.call 0.5.1 (2011-08-25) - added missing register argument in setup.py call. Seems that the pypi index needs this option or a package will not show up in th index 0.5.0 (2011-08-25) - initial release done with p01.releaser Project details Release history Release notifications Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/p01.releaser/
CC-MAIN-2020-16
en
refinedweb
How To Use Command Line Arguments in .NET The steps below describe how to use command-line arguments in .NET Command line arguments are values passes to an application at execution time. All .Net applications (except services and class libraries) are capable of receiving command line arguments. In most cases, the implementation of command line arguments is incududed in the templates used to create the application. The way we handle command line arguments in C# and Visual Basic is slightly different, so we will examine each language seperately. Lets begin by looking at C#. - In C#, the templates generally include an array of type string in the Main method entry point. For example, here we see the Main method signature in a standard console application. static void Main(string[] args) { } - In order to access the values passed as parameters, we simply need to iterate through the collection and extract the values. For Example: static void Main(string[] args) { foreach (string param in args) { Console.WriteLine(param); } Console.ReadLine(); } - Once compiled, we can execute the application using the following command: ConsoleApp3.exe alpha bravo charlie delta - This will return the following result: Now, lets take a look at Visual Basic. - VB is slightly different because there is no array explicitly declared as a parameter to the Main method. If we build a default VB console application, we will see a method signature that looks like this; Sub Main() End Sub - But even though the array is not explicitly passed as a parameter, the argument values are still available. We simply access them using the "My" namespace. For example; Sub Main() For Each param As String In My.Application.CommandLineArgs Console.WriteLine(param) Next param Console.ReadLine() End Sub - Once compiled, we can execute the application using the following command: ConsoleApp1.exe alpha bravo charlie delta - This will return the following result:
https://www.webucator.com/how-to/how-use-command-line-arguments-net.cfm
CC-MAIN-2020-16
en
refinedweb
import "github.com/mediocregopher/mediocre-go-lib/mcfg" Package mcfg implements the creation of different types of configuration parameters and various methods of filling those parameters from external configuration sources (e.g. the command line and environment variables). Parameters are registered onto a Component, and that same Component (or one of its ancestors) is used later to collect and fill those parameters. cli.go env.go mcfg.go param.go source.go func AddParam(cmp *mcmp.Component, param Param, opts ...ParamOption) AddParam adds the given Param to the given Component. It will panic if a Param with the same Name already exists in the Component. Bool returns a *bool which will be populated once Populate is run on the Component, and which defaults to false if unconfigured. The default behavior of all Sources is that a boolean parameter will be set to true unless the value is "", 0, or false. In the case of the CLI Source the value will also be true when the parameter is used with no value at all, as would be expected. CLISubCommand establishes a sub-command which can be activated on the command-line. When a sub-command is given on the command-line, the bool returned for that sub-command will be set to true. Additionally, the Component which was passed into Parse (i.e. the one passed into Populate) will be passed into the given callback, and can be modified for subsequent parsing. This allows for setting sub-command specific Params, sub-command specific runtime behavior (via mrun.WithStartHook), support for sub-sub-commands, and more. The callback may be nil. If any sub-commands have been defined on a Component which is passed into Parse, it is assumed that a sub-command is required on the command-line. When parsing the command-line options, it is assumed that sub-commands will be found before any other options. This function panics if not called on a root Component (i.e. a Component which has no parents). Code: var ( cmp *mcmp.Component foo, bar, baz *int aFlag, bFlag *bool ) // resetExample re-initializes all variables used in this example. We'll // call it multiple times to show different behaviors depending on what // arguments are passed in. resetExample := func() { // Create a new Component with a parameter "foo", which can be used across // all sub-commands. cmp = new(mcmp.Component) foo = Int(cmp, "foo") // Create a sub-command "a", which has a parameter "bar" specific to it. aFlag = CLISubCommand(cmp, "a", "Description of a.", func(cmp *mcmp.Component) { bar = Int(cmp, "bar") }) // Create a sub-command "b", which has a parameter "baz" specific to it. bFlag = CLISubCommand(cmp, "b", "Description of b.", func(cmp *mcmp.Component) { baz = Int(cmp, "baz") }) } // Use Populate with manually generated CLI arguments, calling the "a" // sub-command. resetExample() args := []string{"a", "--foo=1", "--bar=2"} if err := Populate(cmp, &SourceCLI{Args: args}); err != nil { panic(err) } fmt.Printf("foo:%d bar:%d aFlag:%v bFlag:%v\n", *foo, *bar, *aFlag, *bFlag) // reset for another Populate, this time calling the "b" sub-command. resetExample() args = []string{"b", "--foo=1", "--baz=3"} if err := Populate(cmp, &SourceCLI{Args: args}); err != nil { panic(err) } fmt.Printf("foo:%d baz:%d aFlag:%v bFlag:%v\n", *foo, *baz, *aFlag, *bFlag) Output: foo:1 bar:2 aFlag:true bFlag:false foo:1 baz:3 aFlag:false bFlag:true CLITail modifies the behavior of SourceCLI's Parse. Normally when SourceCLI encounters an unexpected Arg it will immediately return an error. This function modifies the Component to indicate to Parse that the unexpected Arg, and all subsequent Args (i.e. the tail), should be set to the returned []string value. The descr (optional) will be appended to the "Usage" line which is printed with the help document when "-h" is passed in. This function panics if not called on a root Component (i.e. a Component which has no parents). Code: cmp := new(mcmp.Component) foo := Int(cmp, "foo", ParamDefault(1), ParamUsage("Description of foo.")) tail := CLITail(cmp, "[arg...]") bar := String(cmp, "bar", ParamDefault("defaultVal"), ParamUsage("Description of bar.")) err := Populate(cmp, &SourceCLI{ Args: []string{"--foo=100", "arg1", "arg2", "arg3"}, }) fmt.Printf("err:%v foo:%v bar:%v tail:%#v\n", err, *foo, *bar, *tail) Output: err:<nil> foo:100 bar:defaultVal tail:[]string{"arg1", "arg2", "arg3"} Duration returns an *mtime.Duration which will be populated once Populate is run on the Component. Float64 returns a *float64 which will be populated once Populate is run on the Component Int returns an *int which will be populated once Populate is run on the Component. Int64 returns an *int64 which will be populated once Populate is run on the Component. func JSON(cmp *mcmp.Component, name string, into interface{}, opts ...ParamOption) JSON reads the parameter value as a JSON value and unmarshals it into the given interface{} (which should be a pointer) once Populate is run on the Component. The receiver (into) is also used to determine the default value. ParamDefault should not be used as one of the opts. Populate uses the Source to populate the values of all Params which were added to the given Component, and all of its children. Populate may be called multiple times with the same Component, each time will only affect the values of the Params which were provided by the respective Source. Source may be nil to indicate that no configuration is provided. Only default values will be used, and if any parameters are required this will error. Populating Params can affect the Component itself, for example in the case of sub-commands. String returns a *string which will be populated once Populate is run on the Component. TS returns an *mtime.TS which will be populated once Populate is run on the Component. type Param struct { // How the parameter will be identified within a Component. Name string // A helpful description of how a parameter is expected to be used. Usage string // If the parameter's value is expected to be read as a go string. This is // used for configuration sources like CLI which will automatically add // double-quotes around the value if they aren't already there. IsString bool // If the parameter's value is expected to be a boolean. This is used for // configuration sources like CLI which treat boolean parameters (aka flags) // differently. IsBool bool // If true then the parameter _must_ be set by at least one Source. Required bool // The pointer/interface into which the configuration value will be // json.Unmarshal'd. The value being pointed to also determines the default // value of the parameter. Into interface{} // The Component this Param was added to. NOTE that this will be // automatically filled in by AddParam when the Param is added to the // Component. Component *mcmp.Component } Param is a configuration parameter which can be populated by Populate. The Param will exist as part of a Component. For example, a Param with name "addr" under a Component with path of []string{"foo","bar"} will be setable on the CLI via "--foo-bar-addr". Other configuration Sources may treat the path/name differently, however. Param values are always unmarshaled as JSON values into the Into field of the Param, regardless of the actual Source. CollectParams gathers all Params by recursively retrieving them from the given Component and its children. Returned Params are sorted according to their Path and Name. ParamOption is a modifier which can be passed into most Param-generating functions (e.g. String, Int, etc...) func ParamDefault(value interface{}) ParamOption ParamDefault returns a ParamOption which ensures the parameter uses the given default value when no Sources set a value for it. If not given then mcfg will use the zero value of the Param's type as the default value. If ParamRequired is given then this does nothing. func ParamDefaultOrRequired(value interface{}) ParamOption ParamDefaultOrRequired returns a ParamOption whose behavior depends on the given value. If the given value is the zero value for its type, then this returns ParamRequired(), otherwise this returns ParamDefault(value). func ParamRequired() ParamOption ParamRequired returns a ParamOption which ensures the parameter is required to be set by some configuration source. The default value of the parameter will be ignored. func ParamUsage(usage string) ParamOption ParamUsage returns a ParamOption which sets the usage string on the Param. This is used in some Sources, like SourceCLI, when displaying information about available parameters. type ParamValue struct { Name string Path []string Value json.RawMessage } ParamValue describes a value for a parameter which has been parsed by a Source. type ParamValues []ParamValue ParamValues is simply a slice of ParamValue elements, which implements Parse by always returning itself as-is. func (pvs ParamValues) Parse(*mcmp.Component) ([]ParamValue, error) Parse implements the method for the Source interface. type Source interface { Parse(*mcmp.Component) ([]ParamValue, error) } Source parses ParamValues out of a particular configuration source, given the Component which the Params were added to (via WithInt, WithString, etc...). CollectParams can be used to retrieve these Params. It's possible for Parsing to affect the Component itself, for example in the case of sub-commands. Source should not return ParamValues which were not explicitly set to a value by the configuration source. The returned []ParamValue may contain duplicates of the same Param's value. in which case the latter value takes precedence. It may also contain ParamValues which do not correspond to any of the passed in Params. These will be ignored in Populate. SourceCLI is a Source which will parse configuration from the CLI. Possible CLI options are generated by joining a Param's Path and Name with dashes. For example: cmp := new(mcmp.Component) cmpFoo = cmp.Child("foo") cmpFooBar = foo.Child("bar") addr := mcfg.String(cmpFooBar, "addr", "", "Some address") // the CLI option to fill addr will be "--foo-bar-addr" If the "-h" option is seen then a help page will be printed to stdout and the process will exit. Since all normally-defined parameters must being with double-dash ("--") they won't ever conflict with the help option. SourceCLI behaves a little differently with boolean parameters. Setting the value of a boolean parameter directly _must_ be done with an equals, or with no value at all. For example: `--boolean-flag`, `--boolean-flag=1` or `--boolean-flag=false`. Using the space-separated format will not work. If a boolean has no equal-separated value it is assumed to be setting the value to `true`. Parse implements the method for the Source interface. type SourceEnv struct { // In the format key=value. Defaults to os.Environ() if nil. Env []string // If set then all expected Env options must be prefixed with this string, // which will be uppercased and have dashes replaced with underscores like // all the other parts of the option names. Prefix string } SourceEnv is a Source which will parse configuration from the process environment. Possible Env options are generated by joining a Param's Path and Name with underscores and making all characters uppercase, as well as changing all dashes to underscores. cmp := new(mcmp.Component) cmpFoo := cmp.Child("foo") cmpFooBar := cmp.Child("bar") addr := mcfg.String(cmpFooBar, "srv-addr", "", "Some address") // the Env option to fill addr will be "FOO_BAR_SRV_ADDR" Parse implements the method for the Source interface. Sources combines together multiple Source instances into one. It will call Parse on each element individually. Values from later Sources take precedence over previous ones. Parse implements the method for the Source interface. Package mcfg imports 13 packages (graph) and is imported by 8 packages. Updated 2019-07-10. Refresh now. Tools for package owners.
https://godoc.org/github.com/mediocregopher/mediocre-go-lib/mcfg
CC-MAIN-2020-16
en
refinedweb
strcmp() is enough for most string comparisons, but when dealing with unicode characters, sometimes there are certain nuances that make byte-to-byte string comparison incorrect. For instance, if you are comparing two strings in Spanish language, they can contain accentuated characters like á, é, í, ó, ú, ü, ñ, ¿, ¡ etc. By default, such accentuated characters come after the whole alphabet of a,b,c...z. Such comparison would be faulty because the different accents of a should actually come before b. strcoll() uses the current locale to perform the comparison giving a more accurate result in such cases. It is defined in <cstring> header file. strcoll() prototype int strcoll( const char* lhs, const char* rhs ); The strcoll() function takes two arguments: lhs and rhs. It compares the contents of lhs and rhs based on the current locale of LC_COLLATE category. strcoll() Parameters - lhs and rhs: Pointer to the null terminated strings to compare. strcoll() Return value The strcoll() function returns a: - positive value if the first differing character in lhs is greater than the corresponding character in rhs. - negative value if the first differing character in lhs is less than the corresponding character in rhs. - 0 if lhs and rhs are equal. Example: How strcoll() function works? #include <cstring> #include <iostream> using namespace std; int main() { char lhs[] = "Armstrong"; char rhs[] = "Army"; int result; result = strcoll(lhs,rhs); cout << "In the current locale "; if(result > 0) cout << rhs << " precedes " << lhs << endl; else if (result < 0) cout << lhs << " precedes " << rhs << endl; else cout << lhs << " and " << rhs << " are same" << endl; return 0; } When you run the program, the output will be: In the current locale Armstrong precedes Army
https://www.programiz.com/cpp-programming/library-function/cstring/strcoll
CC-MAIN-2020-16
en
refinedweb
Contents Interpolation search is the modification of binary search, where the index of a "middle" key is obtained from linear interpolation of values at start & end of a processed range: a := start b := end key := searched key while a <= b loop t := (key - array[a])/(array[b] - array[a]) c := a + floor(t * (b - a)) -- in binary search just: c := (a + b)/2 if key = array[c] then return c else if key < array[c] then b := c - 1 else a := c + 1 endif end loop The clear advantage over basic binary search is complexity O( log logn). When size of array is 1 million, then average number of comparison in binary search is
http://0x80.pl/articles/interpolation-search.html
CC-MAIN-2020-16
en
refinedweb
Shortcut to plotting in Matplotlib Matplotlib has a well deserved reputation for being a labyrinth. It has dark twisted passageways. Some lead to dead ends. It also has shortcuts to glory and freedom. In this series I’ll help you get where you want to go as directly as I know how, while steering clear of horned monsters. Part of Matplotlib is a simplified interface called pyplot that doesn't require you to know about the innards of the system. For the most common tasks, it gives a convenient set of commands. We'll take advantage of it. import numpy as np import matplotlib.pyplot as plt To use pyplot in a script, first we import it. We also import numpy, a numerical processing library that is great for working with data of any sort. Plotting curves The most common thing to do in Matplotlib is plot a line. First we have to create one. x_curve = np.linspace(-2 * np.pi, 2 * np.pi, 500) y_curve = np.sinc(x_curve) We create the x values for our curve using numpy's linspace() function. The way we called it, it returns an array of 500 evenly spaced values between -2 pi and 2 pi. Then we create the y values using numpy's sinc() function. It's a curve with some personality, so it illustrates the plotting quite well. plt.figure() plt.plot(x_curve, y_curve) plt.show() This bit - initializes the figure, - draws the curve described by our x and y, and - displays it on the screen. And that's it! You just plotted a curve in python. One of the very best things about Matplotlib is how it streamlines common tasks. Once we had our data, it only took three lines of code to make a plot. Plotting points The second most common plotting task is show points in a scatter plot. This is streamlined too. x_scatter = np.linspace(-1, 1) y_scatter = x_scatter + np.random.normal(size=x_scatter.size) The first step again is create some fake data. Here we generate some evenly-spaced x values between -1 and 1 and make some y values that are similar, but have some normally-distributed random noise added to them (zero mean and unit variance). plt.figure() plt.scatter(x_scatter, y_scatter) plt.show() Like before, this bit - initializes the figure, - plots each of the points described by our x's and y's, and - displays the resulting plot on the screen. And there you have your scatterplot! With just this little bit of Matplotlib, you already know enough to be dangerous.
https://e2eml.school/matplotlib_just_plot_it.html
CC-MAIN-2020-16
en
refinedweb
span8 span4 span8 span4 I want to upload some files for a specific workbench using HTTP. I'm using Python for this with requests library, but I'm having difficulty getting it to work. My python code is as follows: with open(file_path, 'rb') as file: url = f"{fme_url}/fmedataupload/{script_location}" data = { 'custom_file': { 'value': file.read(), 'options': { 'contentType': 'application/octet-stream', 'filename': file_name, } }, 'opt_namespace': directory, 'opt_fullpath': True } token = get_token() headers = { 'content-type': 'application/x-www-form-urlencoded', 'Authorization': f"fmetoken token={token}", } return requests.post(url, data=data, headers=headers, json=True) I do get a 200 OK response, so authorization is fine, but the file isn't uploaded. It even creates a directory at the right spot. The response is as follows <Response [200]> {"serviceResponse": { "statusInfo": {"status": "success"}, "session": "foo", "files": { "path": "", "folder": [{ "path": "$(FME_DATA_REPOSITORY)/main_repository/script_name.fmw/foo", "name": "" }] } }} The data upload documentation () simply lists 'Upload a single file or multiple files using a simple form submission that uses HTTP POST.', which is not helpful. I'm using Python 3.8 and FME server 2019.2 My goal is to upload the required files, then start a workbench, and finally retrieve the result, all using Python. Preferably, the FME server account that Python uses has as little permissions as possible. @arjanboogaart, Can you confirm the version of FME Server you are using here? Python 3? It could be an issue we need to look at for sure. Thanks for posting! Python 3.8 and FME server 2019.2. I have a suspicion it has to do with how python requests sends the form data, as we have a similar setup in Node.js which is working. It's strange how it sends a 200 OK response, I would expect a 400 if there's something wrong with my request. @arjanboogaart, I suspect you are not using python3.8 with FME... it won't work as we've not added support for that yet. Here's the export. UploadExamplePostman.json.zip Let us know if this helps. I got curious about your mention that the file was NOT being uploaded so I did go test this in Postman and used the POST. In my testing, using FME Server 2019.2, I can confirm a file is uploaded and a folder is created. Are you looking in the /system/temp/upload location? Does the token you are using have enough permissions (I would expect an error if so - so this is likely not your problem) Because the files are being uploaded by the Data Upload Service, they are volatile (non-persistent) you will find them in the FME Server temp file location. That location is determined in the service properties file. By default: # UPLOAD_DIR UPLOAD_DIR=<System share>/resources/system/temp/upload Do you mind sharing the postman request? I believe there is an 'export' option. That might help me recreate it in Python This example uses HTTP PUT instead of POST. PUT doesn't support custom directory names. Hi @arjanboogaart, So you want to upload some files for a particular workspace... are they to persist? I'm not sure if this is going to help you but you could try the FME Server API for uploading a resource for a workspace and then delete the resource at some time later when no longer needed. I can only share this as guidance and unfortunately, I've not attempted to port it to python... but you could likely have some success. From Postman POST the resource or file: DELETE the resource or file: @arjanboogaart Have you tried FMEServerResourceConnector or HTTPCaller? My question is about using the FME server rest API in python. This is not from inside an FME workbench, but as a standalone python application. Answers Answers and Comments 25 People are following this question.
https://knowledge.safe.com/questions/109808/uploading-files-using-python-via-data-upload-servi.html
CC-MAIN-2020-16
en
refinedweb
2715/store-string-inside-a-file-using-java I have some value inside a String variable by the name 'text'. How can I save the value from this variable. We can use Apache Commons IO. It has some great methods to do this. static void writeStringToFile(File file, String data) FileUtils.writeStringToFile(new File("test.txt"), "Hello File"); If you're looking for an alternative that ...READ MORE We can find out the no. of ...READ MORE // Open the file FileInputStream file = new ...READ MORE public void saveUrl(final String filename, final String ...READ MORE try { final Path ...READ MORE List<String> results = new ArrayList<String>(); File[] files = ...READ MORE You can use Scanner class to read ...READ MORE import java.io.File; import java.nio.file.Files; File file; // ...(file is initialised)... byte[] ...READ MORE // File (or directory) with old name File ...READ MORE Using nio we can check whether file ...READ MORE OR Already have an account? Sign in.
https://www.edureka.co/community/2715/store-string-inside-a-file-using-java
CC-MAIN-2020-16
en
refinedweb
Extracting all links of a web page is a common task among web scrapers, it is useful to build advanced scrapers that crawl every page of a certain website to extract data, it can also be used for SEO diagnostics process or even information gathering phase for penetration testers. In this tutorial, you will learn how you can build a link extractor tool in Python from Scratch using only requests and BeautifulSoup libraries. Let's install the dependencies: pip3 install requests bs4 colorama Open up a new Python file and follow along, let's import the modules we need: import requests from urllib.request import urlparse, urljoin from bs4 import BeautifulSoup import colorama We are going to use colorama just for using different colors when printing, to distinguish between internal and external links: # init the colorama module colorama.init() GREEN = colorama.Fore.GREEN GRAY = colorama.Fore.LIGHTBLACK_EX RESET = colorama.Fore.RESET We gonna need two global variables, one for all internal links of the website and the other for all the external links: # initialize the set of links (unique links) internal_urls = set() external_urls = set() Since not all links in anchor tags (a tags) are valid (I've experimented with this), some are links to parts of the website, some are javascript, so let's write a function to validate URLs: def is_valid(url): """ Checks whether `url` is a valid URL. """ parsed = urlparse(url) return bool(parsed.netloc) and bool(parsed.scheme) This will make sure that a proper scheme (protocol, e.g http or https) and domain name exists in the URL. Now let's build a function to return all the valid URLs of a web page: def get_all_website_links(url): """ Returns all URLs that is found on `url` in which it belongs to the same website """ # all URLs of `url` urls = set() # domain name of the URL without the protocol domain_name = urlparse(url).netloc soup = BeautifulSoup(requests.get(url).content, "html.parser") First, I initialized the urls set variable, I've used Python sets here because we don't want redundant links. Second, I've extracted the domain name from the URL, we gonna need it to check whether the link we grabbed is external or internal. Third, I've downloaded the HTML content of the web page and wrapped it with a soup object to ease HTML parsing. Let's get all HTML a tags (anchor tags that contains all the links of the web page): for a_tag in soup.findAll("a"): href = a_tag.attrs.get("href") if href == "" or href is None: # href empty tag continue So we get the href attribute and check if there is something there. Otherwise, we just continue to the next link. Since not all links are absolute, we gonna need to join relative URLs with its domain name (e.g when href is "/search" and url is "google.com", the result will be "google.com/search"): # join the URL if it's relative (not absolute link) href = urljoin(url, href) Now we need to remove HTTP GET parameters from the URLs, since this will cause redundancy in the set, the below code handles that: parsed_href = urlparse(href) # remove URL GET parameters, URL fragments, etc. href = parsed_href.scheme + "://" + parsed_href.netloc + parsed_href.path Let's finish up the function: if not is_valid(href): # not a valid URL continue if href in internal_urls: # already in the set continue if domain_name not in href: # external link if href not in external_urls: print(f"{GRAY}[!] External link: {href}{RESET}") external_urls.add(href) continue print(f"{GREEN}[*] Internal link: {href}{RESET}") urls.add(href) internal_urls.add(href) return urls All we did here is checking: Finally, after all checks, the URL will be an internal link, we print it and add it to our urls and internal_urls sets. The above function will only grab the links of one specific page, what if we want to extract all links of the entire website ? Let's do this: # number of urls visited so far will be stored here total_urls_visited = 0 def crawl(url, max_urls=50): """ Crawls a web page and extracts all links. You'll find all links in `external_urls` and `internal_urls` global set variables. params: max_urls (int): number of max urls to crawl, default is 30. """ global total_urls_visited total_urls_visited += 1 links = get_all_website_links(url) for link in links: if total_urls_visited > max_urls: break crawl(link, max_urls=max_urls) This function crawls the website, which means it gets all the links of the first page and then call itself recursively to follow all the links extracted previously. However, this can cause some issues, the program will get stuck on large websites (that got many links) such as google.com, as a result, I've added a max_urls parameter to exit when we reach a certain number of URLs checked. Alright, let's test this, make sure you use this on a website you're authorized to, otherwise I'm not responsible for any harm you make. if __name__ == "__main__": crawl("") print("[+] Total External links:", len(external_urls)) print("[+] Total Internal links:", len(internal_urls)) print("[+] Total:", len(external_urls) + len(internal_urls)) I'm testing on this website. However, I highly encourage you to not to do that, that will cause a lot of requests and will crowd the web server and may block your IP address. Here is a part of the output: Awesome, right ? I hope this tutorial was a benefit for you to inspire you to build such tools using Python. Requesting the same website many times in a short period of time may cause the website to block your IP address, in that case, you need to use a proxy server for such purposes. If you're interested in grabbing images instead, check this tutorial: How to Download All Images from a Web Page in Python, or if you want to extract HTML tables, check this tutorial. I edited the code a little bit, so you will be able to save the output URLs in a file, check the full code. Happy Scraping ♥View Full Code
https://www.thepythoncode.com/article/extract-all-website-links-python
CC-MAIN-2020-16
en
refinedweb
In the previous tutorial, we have built an ARP spoof script using Scapy that once it is established correctly, any traffic meant for the target host will be sent to the attacker's host, now you are maybe wondering, how can we detect this kind of attacks ? well, that's what we are going to do in this tutorial. The basic idea behind the script that we're going to build is to keep sniffing packets (passive monitoring or scanning) in the network, once an ARP packet is received, we analyze two components: And then we compare the two. If they are not the same, then we are definitely under an ARP spoof attack! First let's import what we gonna need (you need to install Scapy first, head to this tutorial or the official Scapy documentation for installation): from scapy.all import Ether, ARP, srp, sniff, conf Then we need a function that given an IP address, it makes an ARP request and retrieves the real MAC address the that IP address: def get_mac(ip): """ Returns the MAC address of `ip`, if it is unable to find it for some reason, throws `IndexError` """ p = Ether(dst='ff:ff:ff:ff:ff:ff')/ARP(pdst=ip) result = srp(p, timeout=3, verbose=False)[0] return result[0][1].hwsrc After that, the sniff() function that we gonna use, takes a callback (or function) to apply to each packet sniffed, let's define it: def process(packet): # if the packet is an ARP packet if packet.haslayer(ARP): # if it is an ARP response (ARP reply) if packet[ARP].op == 2: try: # get the real MAC address of the sender real_mac = get_mac(packet[ARP].psrc) # get the MAC address from the packet sent to us response_mac = packet[ARP].hwsrc # if they're different, definetely there is an attack if real_mac != response_mac: print(f"[!] You are under attack, REAL-MAC: {real_mac.upper()}, FAKE-MAC: {response_mac.upper()}") except IndexError: # unable to find the real mac # may be a fake IP or firewall is blocking packets pass Note: Scapy encodes the type of ARP packet in a field called "op" which stands for operation, by default the "op" is 1 or "who-has" which is an ARP request, and 2 or "is-at" is an ARP reply. As you may see, the above function checks for ARP packets. More precisely, ARP replies, and then compares between the real MAC address and the response MAC address (that's sent in the packet itself). All we need to do now is to call the sniff() function with the callback written above: sniff(store=False, prn=process) Note: store=False tells sniff() function to discard sniffed packets instead of storing them in memory, this is useful when the script runs for a very long time. When you try to run the script, nothing will happen obviously, but when an attacker tries to spoof your ARP cache like in the figure shown below: The ARP spoof detector (ran on another machine, obviously) will automatically respond: Alright that's it! To prevent such man-in-the-middle attacks, you need to use Dynamic ARP Inspection, which is a security feature that automatically rejects malicious ARP packets we just detected. Here are some further readings: Happy Crafting ♥View Full Code
https://www.thepythoncode.com/article/detecting-arp-spoof-attacks-using-scapy
CC-MAIN-2020-16
en
refinedweb
Let’s Get Rid of the Thick Yellow Cable Whenever I write about the crazy things vendors are trying to sell us, and the kludges we have to live with, I keep wondering, “Is it just me, or is the whole industry really as ridiculous as it seems?” It’s so nice to see someone else coming to the same conclusions, like Mark Burgess (the author of CFEngine and the Promise Theory) did in a lengthy essay on whether SDN makes sense. Long story short: there’s no need for layer-2 in the data center beyond the virtual link between a VM (or container) and the virtual switch. We should stop emulating the thick yellow cable. The diagram is from the IPv6 Microsegmentation Done Right presentation that I’ll present @ Troopers 2015. There are still a few seats left, so make sure you register ASAP. You can also attend the IPv6 Micro-segmentation webinar, but it would be so much better to meet you in person. Instead of desperately trying to emulate 40-year-old technology, we should strive to make data center networks layer-3-only networks and use DNS for service location. Interested? Go and read what Mark Burgess has to say on the topic. We migrated our blog a few days ago, and the commenting functionality is not there yet. In the meantime enjoy the older comments, or find our content on LinkedIn and comment there. From the linked article: "Virtualization of addresses does not need to interfere with that, if we simply use a service that implements private namespaces on top of public IP addresses, there is no need even for tunnels (Russian doll encapsulation)." Isn't he basically describing NAT/PAT without using those terms? Am I missing something here? I had the same interpretation. Overlays are an abomination so let's replace them with... NAT. Need some more coffee but think he was just suggesting multi-tenancy should be enforced with other things besides L2 and the network should focus on routing traffic to endpoints. The interwebs works because it focuses on L3, why all the duct tape and bandaids inside the datacenter. However, if everything was truly peer-to-peer on a shared L3 medium, multi-tenancy would require encryption at the endpoints (ex. SSL vs. L2 vlan). i.e. EndpointA in Tenant Group 1 in DC1 needs to securely communicate with Endpoint B also in Tenant Group 1. Industry came close to a *full* peer-to-peer world with IPV6 as one of goals was to increase address space and reduce need for multiplexing with PAT/SNAT (i.e. let go of idea of end nodes in tenant domains sharing same address space ... they all get their own unique address which are ephemeral as mac addresses). However, until the holy grail where all endpoints (client/server apps) let go of L2, assume L3 world and use encryption on all their communication (both intra-tenant and public), PAT/SNAT (which requires the "dreaded" state) is a pretty nice fenced house with one way door. Although I have to say, as a tenant, even if I have my own firewall (iptables) and diligently communicate securely with all my peers (SSL), being bare on exposed on the internet taking 10G of L7 attacks on my own front door doesn't sound so fun :-). As the old saying goes, the internet is unfortunately a lot nastier than most neighborhoods so even though a lock on your door is good, sometimes funneled ( or dreaded "state" ) access through a gated community is certainly nice. back to coffee++ You have to ensure that a tenant retains its privacy (which can be done with packet filters), not that it gets private IP addresses (which would require NAT or some sort of tunnels). Like Mark wrote in his essay, you cannot dictate what your room# will be in the hotel. We should stop assigning overloaded meanings to IP addresses. True. Side effect of multiplexing. The point was really more agreeing to the premise that application owners have come around and want less L2 (spanning), not more. They’ve practiced, taken off their training wheels and quickly adapted to the stateless L3 chaos monkey AWS environments and are looking for network as simple transit. See you are definitely not alone Will only be a matter of time until they self service their own tenancy/privacy/group membership up the stack (ex. whether peer-to-peer encryption or identity based/certificate trust domains/etc.). By the time everyone finally deploys overloaded L2 architectures, app-owners will probably have moved on. Anyway, exciting times. As always, thanks for sharing! Mark Burgess's blog seems to be a tirade against overlays and slavishly virtualizing the physical network, whereas your blog seems to suggest that overlays might the solution with L2 only going to the vswitch, with presumably L3 in physical network (and the overlay slavishly virtualizing the physical)-- so these are completely opposite conclusions! Or am I missing something? Imagine if we could just get rid of L2. Every network device did L3. No need to ARP for your gateway, just put the packet on the wire and let the next device figure it out. Well, your article as well as Mark Burgess's blog both point to a serious failure of the networking industry to deliver ubiquitous state-of-the art L2 technology. Back in 2001 (i.e. long before VPLS), the group of IXP operators proposed to redesign L2 according to L3 best-practices - by replacing thick yellow cable emulation and all Spanning tree stuff with IS-IS based routing of ethernet packets. Guess what - no vendor was interested. When Radia Perlman in 2004 presented the first TRILL proposal to IEEE, it was - rejected. Several years later, two next-gen L2 standards were finalized - TRILL (based on L3 principles) and SPB (enhanced STP). Yet worse, every major vendor implemented its own next-gen L2 solution, of course totally incompatible with any of the standards. In the meantime, hacks like M-LAG (vPC) were developed to workaround apparent defficiences of outdated L2 technology. In such a mess, it's no surprise that people consider L2 unusable and try to avoid it whereever possible. Hypervisor vendors decided to fix the problem in their own domain by yet another method (VXLAN, NVGRE,...) but this requires new NIC hardware and only works in DC environment. As such it's no solution to the main problem, since advanced L2 is needed in every LAN to get rid of thick yellow cable emulation and STP limitations. Just FYI, in September 2014 we implemented TRILL-based infrastructure in the Slovak Internet eXchange. After 5 months of production, our experience shows that even though TRILL delivers standard L2 interface on the edge ports, inside it operates on exactly the same principles as any L3 network, utilizing field-proven IS-IS protocol for ethernet frame routing and TTL & RPF checks to prevent network meltdown. So decent L2 technology definitely exists today - but the main problem is that the networking industry is unable to converge on single technical solution due to commercial reasons. This couldn't be fixed by resorting to L3. Wasn't LISP an attempt to separate the IP address from the Location Identifier? Best quote from the essay: "I see people arguing for a dynamic infrastructure. That is wrong thinking. Infrastructure and foundations are meant to be stable. What you build on top of that can be dynamic." I wrote a much longer post which got lost but to summarize. L2 overlays came from VMs moving or being spun up wherever there was free compute. It needed L2 because somewhere else another application or network element like a FW or LB was setup based on that VM/service having a specific IP. That's the crux of the issue. The L2 overlays are getting better. Vendors are solving tromboning using anycasted default GWs, VXLAN can work with L3 directly into a VRF instead of just building a L2 overlay. VXLAN-GPE supports direct v4/v6 encapsulation. We have moved the underlay from L2 to L3. However it will take an orchestration system to redirect endpoints as needed, or at least reprogram DNS entries if that's a valid option. Contextream has a solution based on NVO3 overlays coupled with LISP. So LISP takes care of the endpoint moving behind another IP address. However LISP doesn't have great support, so maybe DNS is the answer, along with more intelligent upstream devices which don't care what IP something is at. Junos Contrail is a L3 overlay, not a L2 overlay, and does so by using host routing over tunnels with a VNID/MPLS label as an identifier. A firewall instance doesn't need to be in the same subnet as the end host, it just needs to know the tunnel to get to it. Now it would be easier without tunnels altogether, but we aren't there yet. Hi, your post intrigues me... How can you get rid of L2 when you have remote sides that need DHCP over public internet .. or when you run hotspots ?
https://blog.ipspace.net/2015/02/lets-get-rid-of-thick-yellow-cable.html
CC-MAIN-2020-16
en
refinedweb
The Future Futhark Package Manager There was a time when it was considered crucial for a programming language to come bundled with a large standard library. Python is perhaps the most famous example of this idea of “batteries included”. Unfortunately, this approach ties development of the libraries to the release schedule of the language implementation (and in particular makes it impossible to upgrade one without the other), which often leads to these “standard” libraries falling behind compared to independently developed libraries. Another issue is that the standard libraries typically must be general enough to cover everyones needs, which easily makes them convoluted. For more, see the post Where modules go to die and its corresponding discussion on Hacker News. Instead of having “batteries included”, many recent languages instead contain minimal standard libraries, and instead provide a standard package manager that makes it easy to use third party packages. Rust is perhaps the most prominent example of this approach. Since reusable Futhark code is beginning to crop up, and I’d rather not stuff all of it into futlib itself, it is time to design and implement a simple package manager for Futhark. There are lots of things we would like the package manager to do. Unfortunately, because we live in an imperfect world, there are also things we cannot expect it to do. - It must be simple to implement. There are lots of improvements to make to Futhark, and we cannot afford to spend too long on introducing and fixing bugs in a package manager. The best way to avoid complexity is to avoid features, so we will keep the package manager as bare-bones as possible. It must be simple to use. Futhark lives in the desert, and our users are not interested in learning a new complicated package manager. It is better to restrict the flexibility of the package manager than to introduce complexity in the user experience. This also extends to package authors - ideally, issuing a release should be as simple as tagging a revision in a source control system. - No user configuration required. Whatever format a package lists its dependencies in must contain enough information to also allow the package manager to fetch them. If you download a Futhark program, all you should do is run futhark-pkg get(or whatever the command ends up being), and then you can compile it. This is in contrast to a system like Smackage, which requires package URLs to be manually added to a user-global configuration file. - The compiler should not be modified. In particular, the compiler does not support include paths, so the package manager must make the fetched packages available as files in the file system, such that they can be directly imported by the program. Concretely, I think the package manager will simply put everything it fetches in a lib/sub-directory, such that a program will have to say import "lib/foo/bar"to import file bar.futfrom the foopackage. The package manager will not be a build system. You will still need to manually invoke the compilers after fetching the dependencies. This creates operational simplicity. - No need for a central server. We do not want more infrastructure maintenance burdens. However, the design should permit us to later add such a server as a package registry, as this seems likely to be useful for documentation and discoverability. - The package manager must fetch only pre-packaged tarballs. Interfacing directly with version control systems is too complex. Fortunately, code hosting sites like GitHub make it trivial to generate tarballs corresponding to revision tags. - It need not be possible for a program to simultaneously depend on multiple versions of the same package. Such support might be difficult to implement, and is anyway only needed for large programs. Futhark programs are supposed to be small, so this is complexity that is not worth it. - “Vendoring” (copying the dependencies directly into your own source repository) must still be possible for those who prefer that. Ideally, just by committing the lib/directory alluded to in point 4. - Initially focus on supporting a GitHub-centred workflow. This is only relevant if it becomes necessary to make a choice about what code hosting service to support initially (if a generic solution is not possible). Ultimately it is not appropriate to depend exclusively on a centralised proprietary service, but we need to start somewhere. I have never implemented a package manager before, although I have used a few. To figure out how to turn the above wish list into a program, I have been doing some reading. In particular, I have found this series of blog posts by Russ Cox very interesting. Russ is also working on adding a new package manager, called vgo, to the Go language. While I am not a big fan of Go as such, I have a lot of respect for the thoughtfulness that tends to go into the language and its tools. For example, Minimum Version Selection seems like an elegant solution that both manages to avoid complex version resolution, and provides reproducible builds without the need for a lock file - and all by preferring to use the oldest compatible packages rather than the newest. Let’s look at how vgo addresses the requirements for Futhark. Simplicity of use is certainly the case, as vgo automatically reads import statements in the source code and downloads necessary dependencies before building. It also does not require any user configuration, but does use the $GOPATH directory as sort of a cache. vgo does not require a central server, but inherits a nice scheme from go get whereby any HTML page can act as a sort of package proxy by serving appropriate META tags. Previous package managers for Go interacted directly with the zoo of different version systems, but Russ Cox explains how this is far too much complexity, so vgo fetches only tarballs. In fact, his explanation is why I added this requirement for the Futhark package manager. Multiple versions of the same package are explicitly supported by vgo via semantic import versioning. This makes sense for Go, which must scale to gigantic programs, but less so for Futhark. Finally, vendoring is unaffected, and vgo seems to also initially support GitHub import paths, and uses the GitHub web API to fetch tags and tarballs instead of invoking git directly. Looks like a pretty close match for our needs! The only missing piece is the question about simplicity of implementation, so I guess the only option is to start writing some code and see what happens.
https://futhark-lang.org/blog/2018-07-20-the-future-futhark-package-manager.html
CC-MAIN-2020-16
en
refinedweb
0x01 one sentence solution 1. If the python version is more than 2.7 and the ipv6 url meets the RFC3986 specification, the urlparse can be used directly 2. If the version is low or the url containing ipv6 does not conform to the specification, urlparse cannot be used for parsing. You need to customize a method implementation, as follows import socket from urlparse import urlparse def is_ipv6(ip): try: socket.inet_pton(socket.AF_INET6, ip) except socket.error: return False return True def extract_host_from_url(url): host = urlparse(url).netloc print 'netloc = ', host if not is_ipv6(host): last_colon_index = host.rfind(':') print 'last_colon_index is ', last_colon_index if last_colon_index == -1: return host host = host[:last_colon_index] print 'extract host from url is : ', host return host 0x02 background ipv4 is about to run out, ipv6 has arrived, many companies should be doing ipv6 adaptation work or have already done it. Recently, there is a problem of url parsing in development, which needs to consider the url of ipv6 address, so it is simply combed as follows. 0x03 RFC3986 compliant scenario What is RFC3986 The notation in that case is to encode the IPv6 IP number in square brackets: http://[2001:db8:1f70::999:de8:7648:6e8]:100/ That's RFC 3986, section 3.2.2: Host. According to the RFC document, in order to standardize, the url of ipv6 address must be enclosed with brackets. Therefore, when parsing, we need to take this as a feature. If we do not meet the requirements, we will not parse Realization Urlparse version 2.7 adds support for ipv6 parsing, which is implemented as follows: The usage code is as follows # coding: utf-8 from urlparse import urlparse def test(): url1 = '' url2 = 'http://[fe80::240:63ff:fede:3c19]:8080' url3 = 'http://[2001:db8:1f70::999:de8:7648:6e8]:100/' url4 = 'http://[2001:db8:1f70::999:de8:7648:6e8]' urls = [url1, url2, url3, url4] for url in urls: up = urlparse(url) print up.scheme, up.netloc, up.hostname, up.port if __name__ == '__main__': test() Operation result http None http [fe80::240:63ff:fede:3c19]:8080 fe80::240:63ff:fede:3c19 8080 http [2001:db8:1f70::999:de8:7648:6e8]:100 2001:db8:1f70::999:de8:7648:6e8 100 http [2001:db8:1f70::999:de8:7648:6e8] 2001:db8:1f70::999:de8:7648:6e8 None 0x04 RFC3986 compliant scenario Because of the particularity of ipv6 expression: The leading 0 of each digit can be omitted. If the leading digit is still 0 after omission, it will continue. For example, the following groups of IPv6 are equivalent: 2001:0DB8:02de:0000:0000:0000:0000:0e13 2001:DB8:2de:0000:0000:0000:0000:e13 2001:DB8:2de:000:000:000:000:e13 2001:DB8:2de:00:00:00:00:e13 2001:DB8:2de:0:0:0:0:e13 A double colon '::' can be used to represent one or more consecutive sets of 0, but only once: 2001:DB8:2de:0:0:0:0:e13 2001:DB8:2de::e13 So the problem comes. If the abbreviation in ipv6, such as 2001:0DB8::1428:57ab, is added to port 2001:0DB8::1428:57ab:443, it is still the legal expression of ipv6, because two colons can represent the original four groups of 0, or three groups of 0, and the last 443 port is regarded as a part of ipv6. 0x05 summary 1. If the URL containing ipv6 does not conform to the RFC standard, the ": port" method will make the abbreviation expression of ipv6 ambiguous. 2. The solution is either to modify ipv6 expression conforming to RFC specification, or not to use ipv6 abbreviation, and then extract host operation with the code in the first section one sentence solution 0x06 reference
https://programmer.help/blogs/get-host-compatible-ipv6-in-url-quickly.html
CC-MAIN-2020-16
en
refinedweb
I am using the pretrained VGG network to implement Fast-RCNN. As mentioned in the paper, i divided the VGG into two parts, the feature and the classifier, Here’s my code. def extract_vgg16(model): """). So this function returns the feature extractor from vgg but removing the last max pooling layer. And the classifier removing the last fc layer :param model: :return: """ features = list(model.features.children()) features.pop() # remove the last max pooling layer classifier = list(model.classifier.children()) classifier.pop() # remove the last fully connected layer and softmax return nn.Sequential(*features), nn.Sequential(*classifier) And I use it like this: extractor, classifier = extract_vgg16(models.vgg16(pretrained=True)) head = VGG16RoIHead(classifier, 3, (7, 7)) model = FastRCNN(extractor, head) However, when I check the output of the feature extractor, it turns to be all zero. Here’s the screenshots As you can see, the input tensor has no problem, but the feature extractor’s output seems to be strange.
https://discuss.pytorch.org/t/all-zero-output-after-pretrained-vgg-networks-features/23923
CC-MAIN-2019-30
en
refinedweb
Neural Network Lab Deep Neural Networks are the more computationally powerful cousins to regular neural networks. Learn exactly what DNNs are and why they are the hottest topic in machine learning research. The term deep neural network can have several meanings, but one of the most common is to describe a neural network that has two or more layers of hidden processing neurons. This article explains how to create a deep neural network using C#. The best way to get a feel for what a deep neural network is and to see where this article is headed is to take a look at the demo program in Figure 1 and the associated diagram in Figure 2. Both figures illustrate the input-output mechanism for a neural network that has three inputs, a first hidden layer ("A") with four neurons, a second hidden layer ("B") with five neurons and two outputs. "There are several different meanings for exactly what a deep neural network is, but one is just a neural network with two (or more) layers of hidden nodes." 3-4-5-2 neural network requires a total of (3 * 4) + 4 + (4 * 5) + 5 + (5 * 2) + 2 = 53 weights and bias values. In the demo, the weights and biases are set to dummy values of 0.01, 0.02, . . . , 0.53. The three inputs are arbitrarily set to 1.0, 2.0 and 3.0. Behind the scenes, the neural network uses the hyperbolic tangent activation function when computing the outputs of the two hidden layers, and the softmax activation function when computing the final output values. The two output values are 0.4881 and 0.5119. Research in the field of deep neural networks is relatively new compared to classical statistical techniques. The so-called Cybenko theorem states, somewhat loosely, that a fully connected feed-forward neural network with a single hidden layer can approximate any continuous function. The point of using a neural network with two layers of hidden neurons rather than a single hidden layer is that a two-hidden-layer neural network can, in theory, solve certain problems that a single-hidden-layer network cannot. Additionally, a two-hidden-layer neural network can sometimes solve problems that would require a huge number of nodes in a single-hidden-layer network. This article assumes you have a basic grasp of neural network concepts and terminology and at least intermediate-level programming skills. The demo is coded using C#, but you should be able to refactor the code to other languages such as JavaScript or Visual Basic .NET without too much difficulty. Most normal error checking has been omitted from the demo to keep the size of the code small and the main ideas as clear as possible. The Input-Output Mechanism The input-output mechanism for a deep neural network with two hidden layers is best explained by example. Take a look at Figure 2. Because of the complexity of the diagram, most of the weights and bias value labels have been omitted, but because the values are sequential -- from 0.01 through 0.53 -- you should be able to infer exactly what the unlabeled values are. Nodes, weights and biases are indexed (zero-based) from top to bottom. The first hidden layer is called layer A in the demo code and the second hidden layer is called layer B. For example, the top-most input node has index [0] and the bottom-most node in the second hidden layer has index [4]. In the diagram, label iaW00 means, "input to layer A weight from input node 0 to A node 0." Label aB0 means, "A layer bias value for A node 0." The output for layer-A node [0] is 0.4699 and is computed as follows (first, the sum of the node's inputs times associated with their weights is computed): (1.0)(0.01) + (2.0)(0.05) + (3.0)(0.09) = 0.38 Next, the associated bias is added: 0.38 + 0.13 = 0.51 Then, the hyperbolic tangent function is applied to the sum to give the node's local output value: tanh(0.51) = 0.4699 The three other values for the layer-A hidden nodes are computed in the same way, and are 0.5227, 0.5717 and 0.6169, as you can see in both Figure 1 and Figure 2. Notice that the demo treats bias values as separate constants, rather than the somewhat confusing and common alternative of treating bias values as special weights associated with dummy constant 1.0-value inputs. The output for layer-B node [0] is 0.7243. The node's intermediate sum is: (0.4699)(0.17) + (0.5227)(0.22) + (0.5717)(0.27) + (0.6169)(0.32) = 0.5466 The bias is added: 0.5466 + 0.37 = 0.9166 And the hyperbolic tangent is applied: tanh(0.9166) = 0.7243 The same pattern is followed to compute the other layer-B hidden node values: 0.7391, 0.7532, 0.7666 and 0.7794. The values for final output nodes [0] and [1] are computed in a slightly different way because softmax activation is used to coerce the sum of the outputs to 1.0. Preliminary (before activation) output [0] is: (0.7243)(0.42) + (0.7391)(0.44) + (0.7532)(0.46) + (0.7666)(0.48) + (0.7794)(0.50) + 0.52 = 2.2536 Similarly, preliminary output [1] is: (0.7243)(0.43) + (0.7391)(0.45) + (0.7532)(0.47) + (0.7666)(0.49) + (0.7794)(0.51) + 0.53 = 2.3012 Applying softmax, final output [0] = exp(2.2536) / (exp(2.2536) + exp(2.3012)) = 0.4881. And final output [1] = exp(2.3012) / (exp(2.2536) + exp(2.3012)) = 0.5119 The two final output computations are illustrated using the math definition of softmax activation. The demo program uses a derivation of the definition to avoid arithmetic overflow. Overall Program Structure The overall structure of the demo program, with a few minor edits to save space, is presented in Listing 1. To create the demo, I launched Visual Studio and created a new project named DeepNeuralNetwork. The demo has no significant Microsoft .NET Framework version dependencies, so any relatively recent version of Visual Studio should work. After the template-generated code loaded into the editor, I removed all using statements except the one that references the top-level System namespace. In the Solution Explorer window I renamed the file Program.cs to the slightly more descriptive DeepNetProgram and Visual Studio automatically renamed class Program for me. using System; namespace DeepNeuralNetwork { class DeepNetProgram { static void Main(string[] args) { Console.WriteLine("Begin Deep Neural Network demo"); Console.WriteLine("Creating a 3-4-5-2 network"); int numInput = 3; int numHiddenA = 4; int numHiddenB = 5; int numOutput = 2; DeepNeuralNetwork dnn = new DeepNeuralNetwork(numInput, numHiddenA, numHiddenB, numOutput); double[] weights = new double[] { }; dnn.SetWeights(weights); double[] xValues = new double[] { 1.0, 2.0, 3.0 }; Console.WriteLine("Dummy weights and bias values are:"); ShowVector(weights, 10, 2, true); Console.WriteLine("Dummy inputs are:"); ShowVector(xValues, 3, 1, true); double[] yValues = dnn.ComputeOutputs(xValues); Console.WriteLine("Computed outputs are:"); ShowVector(yValues, 2, 4, true); Console.WriteLine("End deep neural network demo"); Console.ReadLine(); } static public(""); } } // Program public class DeepNeuralNetwork { . . } } The program class consists of the Main entry point method and a ShowVector helper method. The deep neural network is encapsulated in a program-defined class named DeepNeuralNetwork. The Main method instantiates a 3-4-5-2 fully connected feed-forward neural network and assigns 53 dummy values for the network's weights and bias values using method SetWeights. After dummy inputs of 1.0, 2.0 and 3.0 are set up in array xValues, those inputs are fed to the network via method ComputeOutputs, which returns the outputs into array yValues. Notice that the demo illustrates only the deep neural network feed-forward mechanism, and doesn't perform any training. The Deep Neural Network Class The structure of the deep neural network class is presented in Listing 2. The network is hard-coded for two hidden layers. Neural networks with three or more hidden layers are rare, but can be easily created using the design pattern in this article. A challenge when working with deep neural networks is keeping the names of the many weights, biases, inputs and outputs straight. The input-to-layer-A weights are stored in matrix iaWeights, the layer-A-to-layer-B weights are stored in matrix abWeights, and the layer-B-to-output weights are stored in matrix boWeights. public class DeepNeuralNetwork { private int numInput; private int numHiddenA; private int numHiddenB; private int numOutput; private double[] inputs; private double[][] iaWeights; private double[][] abWeights; private double[][] boWeights; private double[] aBiases; private double[] bBiases; private double[] oBiases; private double[] aOutputs; private double[] bOutputs; private double[] outputs; private static Random rnd; public DeepNeuralNetwork(int numInput, int numHiddenA, int numHiddenB, int numOutput) { . . } private static double[][] MakeMatrix(int rows, int cols) { . . } private void InitializeWeights() { . . } public void SetWeights(double[] weights) { . . } public double[] ComputeOutputs(double[] xValues) { . . } private static double HyperTanFunction(double x) { . . } private static double[] Softmax(double[] oSums) { . . } } The two hidden layers and the single output layer each have an array of associated bias values, named aBiases, bBiases, and oBiases respectively. The local outputs for the hidden layers are stored in class-scope arrays named aOutputs and bOutputs. These two arrays could have been defined locally to the ComputeOutputs method. Most forms of neural networks use some type of randomization during initialization and training. Static class member rnd is used by the demo network to initialize the weights and bias values. The class exposes three public methods: a constructor, method SetWeights and method ComputeOutputs. Private methods MakeMatrix and InitializeWeights are helpers used by the constructor. Private methods HyperTanFunction and Softmax are the activation functions used by method ComputeOutputs. The Deep Neural Network Constructor The deep neural network constructor begins by copying its input parameter values to the corresponding class members: public DeepNeuralNetwork(int numInput, int numHiddenA, int numHiddenB, int numOutput) { this.numInput = numInput; this.numHiddenA = numHiddenA; this.numHiddenB = numHiddenB; this.numOutput = numOutput; Next, space for the inputs array is allocated: inputs = new double[numInput]; Then space for the three weights matrices is allocated, using helper method MakeMatrix: iaWeights = MakeMatrix(numInput, numHiddenA); abWeights = MakeMatrix(numHiddenA, numHiddenB); boWeights = MakeMatrix(numHiddenB, numOutput); Method MakeMatrix is defined as: private static double[][] MakeMatrix(int rows, int cols) { double[][] result = new double[rows][]; for (int r = 0; r < result.Length; ++r) result[r] = new double[cols]; return result; } The constructor allocates space for the three biases arrays: aBiases = new double[numHiddenA]; bBiases = new double[numHiddenB]; oBiases = new double[numOutput]; Next, the local and final output arrays are allocated: aOutputs = new double[numHiddenA]; bOutputs = new double[numHiddenB]; outputs = new double[numOutput]; The constructor concludes by instantiating the Random object and then calling helper method InitializeWeights: rnd = new Random(0); InitializeWeights(); } // ctor Method InitializeWeights is defined as: private void InitializeWeights() { int numWeights = (numInput * numHiddenA) + numHiddenA + (numHiddenA * numHiddenB) + numHiddenB + (numHiddenB * numOutput) + numOutput; double[] weights = new double[numWeights]; double lo = -0.01; double hi = 0.01; for (int i = 0; i < weights.Length; ++i) weights[i] = (hi - lo) * rnd.NextDouble() + lo; this.SetWeights(weights); } Method InitializeWeights assigns random values in the interval -0.01 to +0.01 to each weight and bias variable. You might want to pass the interval values in as parameters to InitializeWeights. If you refer back to Listing 1, you'll notice that the Main method calls method SetWeights immediately after the DeepNeuralNetwork constructor is called, which means that initial random weights and bias values are immediately overwritten. However, in a normal, non-demo scenario where SetWeights isn't called in the Main method, initializing weights and bias values is almost always necessary. Setting the Network Weights The code for method SetWeights is presented in Listing 3. Method SetWeights accepts an array of values that represent both weights and bias values. The method assumes the values are stored in a particular order: input-to-A weights followed by A-layer biases, followed by A-to-B weights, followed by B-layer biases, followed by B-to-output weights, followed by output biases. public void SetWeights(double[] weights) { int numWeights = (numInput * numHiddenA) + numHiddenA + (numHiddenA * numHiddenB) + numHiddenB + (numHiddenB * numOutput) + numOutput; if (weights.Length != numWeights) throw new Exception("Bad weights length"); int k = 0; for (int i = 0; i < numInput; ++i) for (int j = 0; j < numHiddenA; ++j) iaWeights[i][j] = weights[k++]; for (int i = 0; i < numHiddenA; ++i) aBiases[i] = weights[k++]; for (int i = 0; i < numHiddenA; ++i) for (int j = 0; j < numHiddenB; ++j) abWeights[i][j] = weights[k++]; for (int i = 0; i < numHiddenB; ++i) bBiases[i] = weights[k++]; for (int i = 0; i < numHiddenB; ++i) for (int j = 0; j < numOutput; ++j) boWeights[i][j] = weights[k++]; for (int i = 0; i < numOutput; ++i) oBiases[i] = weights[k++]; } Method SetWeights also assumes the weights are stored in row-major form, where the row indices are the "from" indices and to column indices are the "to" indices. For example, if iaWeights[0] [2] = 1.23, then the weight from input node [0] to layer-A node [2] has value 1.23. An alternative design for method SetWeights is to pass the weights and bias values in as six separate parameters rather than as a single-array parameter. Or you might want to overload SetWeights to accept either a single array parameter or six weights and bias value parameters. Computing Deep Neural Network Outputs Merhod ComputeOutputs begins by setting up scratch arrays to hold preliminary (before activation) sums: public double[] ComputeOutputs(double[] xValues) { double[] aSums = new double[numHiddenA]; double[] bSums = new double[numHiddenB]; double[] oSums = new double[numOutput]; These scratch arrays could have been declared as class members; if so, remember to zero out each array at the beginning of ComputeOutputs. Next, the input values are copied into the corresponding class array: for (int i = 0; i < xValues.Length; ++i) this.inputs[i] = xValues[i]; An alternative is to use the C# Array.Copy method here. Notice the input values aren't changed by ComputeOutputs, so an alternative design is to eliminate the class member array named inputs, and to eliminate the need to copy values from the xValues array. In my opinion, the explicit inputs array makes a slightly clearer design and is worth the overhead of an extra array copy operation. The next step is to compute the preliminary sum of weights times inputs for the layer-A nodes, add the bias values, then apply the activation function: for (int j = 0; j < numHiddenA; ++j) // weights * inputs aSums[j] += this.inputs[i] * this.iaWeights[i][j]; for (int i = 0; i < numHiddenA; ++i) // add biases aSums[i] += this.aBiases[i]; for (int i = 0; i < numHiddenA; ++i) // apply activation this.aOutputs[i] = HyperTanFunction(aSums[i]); In the demo, I use a WriteLine statement along with helper method ShowVector to display the pre-activation sums and the local layer-A outputs. Next, the layer-B local outputs are computed, using the just-computed layer-A outputs as local inputs: for (int j = 0; j < numHiddenB; ++j) for (int i = 0; i < numHiddenA; ++i) bSums[j] += aOutputs[i] * this.abWeights[i][j]; for (int i = 0; i < numHiddenB; ++i) bSums[i] += this.bBiases[i]; for (int i = 0; i < numHiddenB; ++i) this.bOutputs[i] = HyperTanFunction(bSums[i]); Next, the final outputs are computed: for (int j = 0; j < numOutput; ++j) for (int i = 0; i < numHiddenB; ++i) oSums[j] += bOutputs[i] * boWeights[i][j]; for (int i = 0; i < numOutput; ++i) oSums[i] += oBiases[i]; double[] softOut = Softmax(oSums); Array.Copy(softOut, outputs, softOut.Length); The final outputs are computed into the class array named outputs. For convenience, these values are also returned by method: double[] retResult = new double[numOutput]; Array.Copy(this.outputs, retResult, retResult.Length); return retResult; } An alternative to explicitly returning the weights and bias values as an array is to return void and implement a public method GetWeights. Method HyperTanFunction is defined as: private static double HyperTanFunction(double x) { if (x < -20.0) return -1.0; // correct to 30 decimals else if (x > 20.0) return 1.0; else return Math.Tanh(x); } And method Softmax is defined as: private static double[] Softmax(double[] oSums) { double max = oSums[0]; for (int i = 0; i < oSums.Length; ++i) if (oSums[i] > max) max = oSums[i]; } Method Softmax is quite subtle, but it's unlikely you'd ever want to modify it, so you can usually safely consider the method a magic black box function. Go Exploring The code and explanation presented in this article should give you a good basis for understanding neural networks with two hidden layers. What about three or more hidden layers? The consensus in research literature is that two hidden layers is sufficient for almost all practical problems. But I'm not entirely convinced, and fully connected feed-forward neural networks with more than two hidden layers are relatively unexplored. Training a deep neural network is much more difficult than training an ordinary neural network with a single layer of hidden nodes, and this factor is the main obstacle to using networks with multiple hidden layers. Standard back-propagation training often fails to give good results. In my opinion, alternate training techniques, in particular particle swarm optimization, are promising. However, these alternatives haven't been studied much. Printable Format I agree to this site's Privacy Policy. > More Webcasts
https://visualstudiomagazine.com/articles/2014/06/01/deep-neural-networks.aspx
CC-MAIN-2019-30
en
refinedweb
Quickstart: Synthesize speech in C++ on Linux by using the Speech SDK Quickstarts are also available for speech-recognition. In this article, you create a C++ console application for Linux (Ubuntu 16.04, Ubuntu 18.04, Debian 9). You use the Cognitive Services Speech SDK to synthesize speech from text in real time and play the speech on your PC's speaker. The application is built with the Speech SDK for Linux and your Linux distribution's C++ compiler (for example, g++). Prerequisites You need a Speech Services subscription key to complete this Quickstart. You can get one for free. See Try the Speech Services for free for details. Install Speech SDK Important By downloading any of the Speech SDK for Azure Cognitive Services components on this page, you acknowledge its license. See the Microsoft Software License Terms for the Speech SDK. The current version of the Cognitive Services Speech SDK is 1.6.0. The Speech SDK for Linux can be used to build both 64-bit and 32-bit applications. The required libraries and header files can be downloaded as a tar file from. Download and install the SDK as follows: Make sure the SDK's dependencies are installed. On Ubuntu: sudo apt-get update sudo apt-get install build-essential libssl1.0.0 libasound2 wget On Debian 9: sudo apt-get update sudo apt-get install build-essential libssl1.0.2 libasound2 wget Choose a directory to which the Speech SDK files should be extracted, and set the SPEECHSDK_ROOTenvironment variable to point to that directory. This variable makes it easy to refer to the directory in future commands. For example, if you want to use the directory speechsdkin your home directory, use a command like the following: export SPEECHSDK_ROOT="$HOME/speechsdk" Create the directory if it doesn't exist yet. mkdir -p "$SPEECHSDK_ROOT" Download and extract the .tar.gzarchive containing the Speech SDK binaries: wget -O SpeechSDK-Linux.tar.gz tar --strip 1 -xzf SpeechSDK-Linux.tar.gz -C "$SPEECHSDK_ROOT" Validate the contents of the top-level directory of the extracted package: ls -l "$SPEECHSDK_ROOT" The directory listing should contain the third-party notice and license files, as well as an includedirectory containing header ( .h) files and a libdirectory containing libraries. Add sample code Create a C++ source file named helloworld.cpp, and paste the following code into it. #include <iostream> // cin, cout #include <speechapi_cxx.h> using namespace std; using namespace Microsoft::CognitiveServices::Speech; void synthesizeSpeech() { // Creates an instance of a speech config with specified subscription key and service region. // Replace with your own subscription key and service region (e.g., "westus"). auto config = SpeechConfig::FromSubscription("YourSubscriptionKey", "YourServiceRegion"); // Creates a speech synthesizer using the default speaker as audio output. The default spoken language is "en-us". auto synthesizer = SpeechSynthesizer::FromConfig(config); // Receive a text from console input and synthesize it to speaker. cout << "Type some text that you want to speak..." << std::endl; cout << "> "; std::string text; getline(cin, text); auto result = synthesizer->SpeakTextAsync(text).get(); // Checks result. if (result->Reason == ResultReason::SynthesizingAudioCompleted) { cout << "Speech synthesized to speaker for text [" << text << "]" << std::endl; } else if (result->Reason == ResultReason::Canceled) { auto cancellation = SpeechSynthesisCancellationDetails::FromResult(result); cout << "CANCELED: Reason=" << (int)cancellation->Reason << std::endl; if (cancellation->Reason == CancellationReason::Error) { cout << "CANCELED: ErrorCode=" << (int)cancellation->ErrorCode << std::endl; cout << "CANCELED: ErrorDetails=[" << cancellation->ErrorDetails << "]" << std::endl; cout << "CANCELED: Did you update the subscription info?" << std::endl; } } // This is to give some time for the speaker to finish playing back the audio cout << "Press enter to exit..." << std::endl; cin.get(); } int main(int argc, char **argv) { setlocale(LC_ALL, ""); synthesizeSpeech(); return 0; } In this new file, replace the string YourSubscriptionKeywith your Speech Services subscription key. Replace the string YourServiceRegionwith the region associated with your subscription (for example, westusfor the free trial subscription). Build the app Note Make sure to enter the commands below as a single command line. The easiest way to do that is to copy the command by using the Copy button next to each command, and then paste it at your shell prompt. On an x64 (6464" -l:libasound.so.2 On an x86 (3286" -l:libasound.so.2 Run the app Configure the loader's library path to point to the Speech SDK library. On an x64 (64-bit) system, enter the following command. export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$SPEECHSDK_ROOT/lib/x64" On an x86 (32-bit) system, enter this command. export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$SPEECHSDK_ROOT/lib/x86" Run the application. ./helloworld In the console window, a prompt appears, prompting you to type some text. Type a few words or a sentence. The text that you typed is transmitted to the Speech Services and synthesized to speech, which plays on your speaker. Type some text that you want to speak... > hello Speech synthesized to speaker for text [hello] Press enter to exit... Next steps See also Feedback
https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/quickstart-text-to-speech-cpp-linux
CC-MAIN-2019-30
en
refinedweb
Common Mistakes Junior Developers Make When Writing Unit Tests Common Mistakes Junior Developers Make When Writing Unit Tests Join the DZone community and get the full member experience.Join For Free teams and I had the chance to review a lot of test code. In this post I’m summarizing the most common mistakes that in-experienced developers usually do when writing unit tests. Let’s take a look at the following simple example of a class that collects registration data, validates them and performs a user registration. Clearly the method is extremely simple and its purpose is to demonstrate the common mistakes of unit tests and not to provide a fully functional registration example public class RegistrationForm { private String name,email,pwd,pwdVerification; // Setters - Getters are ommitted public boolean register(){ validate(); return doRegister(); } private void validate () { check(name, "email"); check(email, "email"); check(pwd, "email"); check(pwdVerification, "email"); if (!email.contains("@")) { throw new ValidationException(name + " cannot be empty."); } if ( !pwd.equals(pwdVerification)) throw new ValidationException("Passwords do not match."); } private void check(String value, String name) throws ValidationException { if ( value == null) { throw new ValidationException(name + " cannot be empty."); } if (value.length() == 0) { throw new ValidationException(name + " is too short."); } } private boolean doRegister() { //Do something with the persistent context return true; } Here’s a corresponding unit test for the register method to intentionally show the most common mistakes in unit testing. Actually I’ve seen many times very similar test code, so it’s not what I’d call science fiction: @Test public void test_register(){ RegistrationForm form = new RegistrationForm(); form.setEmail("Al.Pacino@example.com"); form.setName("Al Pacino"); form.setPwd("GodFather"); form.setPwdVerification("GodFather"); assertNotNull(form.getEmail()); assertNotNull(form.getName()); assertNotNull(form.getPwd()); assertNotNull(form.getPwdVerification()); form.register(); } Now, this test, obviously will pass, the developer will see the green light so thumbs up! Let’s move to the next method. However this test code has several important issues. The first one which is in my humble opinion, the biggest misuse of unit tests is that the test code is not adequately testing the register method. Actually it tests only one out of many possible paths. Are we sure that the method will correctly handle null arguments? How the method will behave if the email doesn’t contain the @ character or passwords don’t match? Developers tend to write unit tests only for the successful paths and my experience has shown that most of the bugs discovered in code are not related to the successful paths. A very good rule to remember is that for every method you need N numbers of tests where N equals to the cyclomatic complexity of the method adding the cyclomatic complexity of all private method calls. Next is the name of the test method. For this one I partially blame all these modern IDEs that auto-generate stupid names for test methods like the one in the example. The test method should be named in such a way that explains to the reader what is going to be tested and under which conditions. In other words it should describe the path under testing. In our case a better name could be :should_register_when_all_registration_data_are_valid. In this article you can find several approaches on naming unit tests but for me the ‘should’ pattern is the closest to the human languages and easier to understand when reading test code. Now let’s see the meat of the code. There are several assertions and this violates the rule that each test method should assert one and only one thing. This one asserts the state of four(4) RegistrationForm attributes. This makes the test harder to maintain and read (oh yes, test code should be maintainable and readable just like the source code. Remember that for me there’s no distinction between them) and it makes difficult to understand which part of the test fails. This test code also asserts setters/getters. Is this really necessary? To answer that I will quote Roy Osherove’s saying from his famous book :” The Art of Unit Testing” Properties (getters/setters in Java) are good examples of code that usually doesn’t contain any logic, and doesn’t require testing. But watch out: once you add any check inside the property, you’ll want to make sure that logic is being tested. In our case there’s no business logic in our setters/getters so these assertions are completely useless. Moreover they wrong because they don’t even test the correctness of the setter. Imagine that an evil developer changes the code of the getEmail method to always return a constant String instead of the email attribute value. The test will still pass because it asserts that the setter is not null and it doesn’t assert for the expected value. So here’s a rule you might want to remember. Always try to be as much as specific you can when you assert the return value of a method. In other words try to avoid assertIsNull, assertIsNotNull unless you don’t care about the actual return value. The last but not least problem with the test code we’re looking at is that the actual method (register) that is under test, is never asserted. It’s called inside the test method but we never evaluate its result. A variation of this anti-pattern is even worse. The method under test is not even invoked in the test case. So just keep in mind that you should not only invoke the method under test but you should always assert the expected result, even if it’s just a Boolean value. One might ask : “what about void methods?”. Nice question but this is another discussion – maybe another post, but to give you a couple of tips testing of a void method might hide a bad design or it should be done using a framework that verifies method invocations ( such as Mockito.Verify ) As a bonus here’s a final rule you should remember. Imagine that the doRegister is actually implemented and do some real work with an external database. What will happen if some developer that has no database installed in her local environment tries to run the test. Correct! Everything will fail. Make sure that your test will have the same behavior even if it runs from the dumpiest terminal that has access only to the code and the JDK. No network, no services, no databases, no file system. Nothing! }}
https://dzone.com/articles/common-mistakes-junior
CC-MAIN-2019-30
en
refinedweb
Alternative C++ Code for GPIO Access (now with Interrupts and EXPLED) -! - Lazar Demin administrators that! @Mark-Alexander Regarding the use of the various sys/class/gpio/gpioN/* files : these are created by writing the pin number to /sys/class/gpio/export (and removed by writing the pin number to /sys/class/gpio/unexport) - see: I looked at using these but there seems to be some issue with the sysfs on the Omega that I couldn't figure out. No matter what I did, there was never any /sys/class/gpio/gpioN/edge file. So, most of the code for accessing the pins goes via direct access to the memory registers for the GPIO pins (as in the original fast-gpio) - dealing with interrupts makes use of writing to the /sys/kernel/debug/gpio-irq file which causes entries to be added/removed from the /sys/kernel/debug/gpio file to control the interrupt handling. @Lazar-Demin All good. I will see what I can do. Though there are some aspects of dealing with git and setting up building of modules under the OpenWrt build environment that I am unfamiliar with and will need help from @Kevin-Sidwar or yourself. @Kevin-Sidwar and I have already been in brief e-mail communication on this. It might help (and remove pollution from this post if you sent us a brief e-mail describing what you would see as the aims and end product of the amalgamation including any specific features you would want to see. - Mark Alexander @Kit. @Mark-Alexander Some points regarding the /sys/class/gpio/gpioN/* files etc: - Not sure why you would want to put these files elsewhere. They are already accessible from anywhere (once they have been created by writing to /sys/class/gpio/export) - These are NOT files that my code creates they are part of the standard sysfs for GPIO - If you really want to be able to access these files in a different directory, you could always set up symbolic links to them - they themselves are already accessed via symbolic links: e.g. /sys/class/gpio/gpioN is just a symbolic link to /sys/devices/virtual/gpio/gpioN - I contemplated implementing my C++ code by making access to these files, but decided on the direct access to the memory registers for GPIO pins as was already being done by the standard Omega fast-gpio since it is faster and more compact - As explained earlier, I spent quite a lot of time in looking at the possible usage of /sys/class/gpio/gpioN/edge along with the use of POLLPRI. However, I got nowhere with this since the Omega does not create the edge file for GPIO pins. Neither I nor the Omega people can figure out why it doesn't, hence my eventual use of the /sys/kernel/debug/gpio-irq to register interrupt handlers. Not sure what you mean by not have the need of 'massive' code changes. No matter what method is used to access the functionality, one is going to need write some code to access that functionality. The libnew-gpio.so library I provided is a C++ library that can easily be linked to and called from any C++ code you may write. E.G. some sample code snippets: - // Include necessary header #include "GPIOPin.h" // // Declare function to handle interrupt void handleInterrupt (int pin, GPIO_Irq_Type type) { // code to be executed on interrupt } // Some code snippets: // 1. Create a pin instance for pin 6 GPIOPin * pin6 = new GPIOPin(6); // 2. Set it for output pin6->setDirection(GPIO_OUTPUT); // 3. Set output value pin6->set(1); // 4. Setup for and register interrupt to be trigger on rising edge pin6->setDirection(GPIO_INPUT); pin6->setIrq(GPIO_IRQ_RISING, handleInterrupt); I have made various updates to my new-gpio code and documentation. The archive file (new-gpio-1.3.tar.bz2) with all the changes is attached. Attaches file:new-gpio-1.3.tar.bz2 In particular, you should read the file new-gpio.pdf that is included in the archive file. In summary, the changes in this latest version are: - Some re-organisation of packaged components and component renaming - The new-gpiotest program is now just named new-gpio - Provided Makefile files for all components - Added both static and dynamic link versions of all components - Added new class, RGBLED, for control of RGB leds (e.g. as in expansion led) - Changed syntax of parameters to new-gpio program to be the same (where relevant) as is used for the existing fast-gpio program - Added additional operations to new-gpio program for control of expansion led - Added new program new-expled, that does the same as the existing expled script but written in C++ using libnew-gpio library -.
http://community.onion.io/topic/143/alternative-c-code-for-gpio-access-now-with-interrupts-and-expled/23
CC-MAIN-2019-30
en
refinedweb
Web Skills Part 1: Databases and Persistent Welcome to our Haskell Web Skills series! In these tutorials, we'll explore a bunch of different libraries you can use for some important tasks in backend web development. We'll start by looking at how we store our Haskell data in databases. If you're already familiar with this, feel free to move on to part 2, where we'll look at building an API using Servant. If you want a larger listing of the many different libraries available to you, be sure to download our Production Checklist! It'll tell you about other options for databases, APIs and more! As a final note, all the code for this series is on Github! To follow along with part 1, take a look at the persistent branch. The Persistent Library There are many Haskell libraries that allow you to make a quick SQL call. But Persistent does much more than that. With Persistent, you can link your Haskell types to your database definition. You can also make type-safe queries to save yourself the hassle of decoding data. All in all, it's a very cool system. Let's start our journey by defining the type we'd like to store. Our Basic Type Consider a simple user type that looks like this: data User = User { userName :: Text , userEmail :: Text , userAge :: Int , userOccupation :: Text } Imagine we want to store objects of this type in an SQL database. We’ll first need to define the table to store our users. We could do this with a manual SQL command or through an editor. But regardless, the process will be at least a little error prone. The command would look something like this: create table users ( name varchar(100), email varchar(100), age bigint, occupation varchar(100) ) When we do this, there's nothing linking our Haskell data type to the table structure. If we update the Haskell code, we have to remember to update the database. And this means writing another error-prone command. From our Haskell program, we’ll also want to make SQL queries based on the structure of the user. We could write out these raw commands and execute them, but the same issues apply. There would be a high probability of errors. Persistent helps us solve these problems. Persistent and Template Haskell We can get these bonuses from Persistent without all that much extra code! To do this, we’re going to use Template Haskell (TH). There are a few pros and cons of TH. It does allow us to avoid writing some boilerplate code. But it will make our compile times longer as well. It will also make our code less accessible to inexperienced Haskellers. With Persistent however, the amount of code generated is substantial, so the pros out-weigh the cons. To generate our code, we’ll use a language construct called a “quasi-quoter”. This is a block of code that follows some syntax designed by the programmer or in a library, rather than normal Haskell syntax. It is often used in libraries that do some sort of foreign function interface. We delimit a quasi-quoter by a combination of brackets and pipes. Here’s what the Template Haskell call looks like. The quasi-quoter is the final argument: import qualified Database.Persist.TH as PTH PTH.share [PTH.mkPersist PTH.sqlSettings, PTH.mkMigrate "migrateAll"] [PTH.persistLowerCase| |] The share function takes a list of settings and then the quasi-quoter itself. It then generates the necessary Template Haskell for our data schema. Within this section, we’ll define all the different types our database will use. We notate certain settings about those types. In particular we specify sqlSettings, so everything we do here will focus on an SQL database. More importantly, we also create a migration function, migrateAll. After this Template Haskell gets compiled, this function will allow us to migrate our DB. This means it will create all our tables for us! But before we see this in action, we need to re-define our user type. Instead of defining User in the normal Haskell way, we’re going to define it within the quasi-quoter. Note that this level of Template Haskell requires many compiler extensions. Here’s our definition: {-# LANGUAGE TemplateHaskell #-} {-# LANGUAGE QuasiQuotes #-} {-# LANGUAGE TypeFamilies #-} {-# LANGUAGE MultiParamTypeClasses #-} {-# LANGUAGE GADTs #-} {-# LANGUAGE GeneralizedNewtypeDeriving #-} {-# LANGUAGE RecordWildCards #-} {-# LANGUAGE FlexibleInstances #-} {-# LANGUAGE OverloadedStrings #-} PTH.share [PTH.mkPersist PTH.sqlSettings, PTH.mkMigrate "migrateAll"] [PTH.persistLowerCase| User sql=users name Text email Text age Int occupation Text UniqueEmail email deriving Show Read |] There are a lot of similarities to a normal data definition in Haskell. We’ve changed the formatting and reversed the order of the types and names. But you can still tell what’s going on. The field names are all there. We’ve still derived basic instances like we would in Haskell. But we’ve also added some new directives. For instance, we’ve stated what the table name should be (by default it would be user, not users). We’ve also created a UniqueEmail constraint. This tells our database that each user has to have a unique email. The migration will handle creating all the necessary indices for this to work! This Template Haskell will generate the normal Haskell data type for us. All fields will have the prefix user and will be camel-cased, as we specified. The compiler will also generate certain special instances for our type. These will enable us to use Persistent's type-safe query functions. Finally, this code generates lenses that we'll use as filters in our queries, as we'll see later. Entities and Keys Persistent also has a construct allowing us to handle database IDs. For each type we put in the schema, we’ll have a corresponding Entity type. An Entity refers to a row in our database, and it associates a database ID with the object itself. The database ID has the type SqlKey and is a wrapper around Int64. So the following would look like a valid entity: import Database.Persist (Entity(..)) sampleUser :: Entity User sampleUser = Entity (toSqlKey 1) $ User { userName = “admin” , userEmail = “admin@test.com” , userAge = 23 , userOccupation = “System Administrator” } This nice little abstraction that allows us to avoid muddling our user type with the database ID. This allows our other code to use a more pure User type. The SqlPersistT Monad So now that we have the basics of our schema, how do we actually interact with our database from Haskell code? As a specific example, we’ll be accessing a PostgresQL database. This requires the SqlPersistT monad. All the query functions return actions in this monad. The monad transformer has to live on top of a monad that is MonadIO, since we obviously need IO to run database queries. If we’re trying to make a database query from a normal IO function, the first thing we need is a ConnectionString. This string encodes information about the location of the database. The connection string generally has 4-5 components. It has the host/IP address, the port, the database username, and the database name. So for instance if you’re running Postgres on your local machine, you might have something like: {-# LANGUAGE OverloadedStrings #-} import Database.Persist.Postgresql (ConnectionString) connString :: ConnectionString connString = “host=127.0.0.1 port=5432 user=postgres dbname=postgres password=password” Now that we have the connection string, we’re set to call withPostgresqlConn. This function takes the string and then a function requiring a backend: -- Also various constraints on the monad m withPostgresqlConn :: (IsSqlBackend backend) => ConnectionString -> (backend -> m a) -> m a The IsSqlBackend constraint forces us to use a type that conforms to Persistent’s guidelines. The SqlPersistT monad is only a synonym for ReaderT backend. So in general, the only thing we’ll do with this backend is use it as an argument to runReaderT. Once we’ve done this, we can pass any action within SqlPersistT as an argument to run. import Control.Monad.Logger (runStdoutLoggingT) import Database.Persist.Postgresql (ConnectionString, withPostgresqlConn, SqlPersistT) … runAction :: ConnectionString -> SqlPersistT a -> IO a runAction connectionString action = runStdoutLoggingT $ withPostgresqlConn connectionString $ \backend -> runReaderT action backend Note we add in a call to runStdoutLoggingT so that our action can log its results, as Persistent expects. This is necessary whenever we use withPostgresqlConn. Here's how we would run our migration function: migrateDB :: IO () migrateDB = runAction connString (runMigration migrateAll) This will create the users table, perfectly to spec with our data definition! Queries Now let’s wrap up by examining the kinds of queries we can run. The first thing we could do is insert a new user into our database. For this, Persistent has the insert function. When we insert the user, we’ll get a key for that user as a result. Here’s the type signature for insert specified to our particular User type: insert :: (MonadIO m) => User -> SqlPersistT m (Key User) Then of course we can also do things in reverse. Suppose we have a key for our user and we want to get it out of the database. We’ll want the get function. Of course this might fail if there is no corresponding user in the database, so we need a Maybe. get :: (MonadIO m) => Key User -> SqlPersistT m (Maybe User) We can use these functions for any type satisfying the PersistRecordBackend class. This is included for free when we use the template Haskell approach. So you can use these queries on any type that lives in your schema. But SQL allows us to do much more than query with the key. Suppose we want to get all the users that meet certain criteria. We’ll want to use the selectList function, which replicates the behavior of the SQL SELECT command. It takes a couple different arguments for the different ways to run a selection. The two list types look a little complicated, but we’ll examine them in more detail: selectList :: PersistRecordBackend backend val => [Filter val] -> [SelectOpt val] -> SqlPersistT m [val] As before, the PersistRecordBackend constraint is satisfied by any type in our TH schema. So we know our User type fits. So let’s examine the first argument. It provides a list of different filters that will determine which elements we fetch. For instance, suppose we want all users who are younger than 25 and whose occupation is “Teacher”. Remember the lenses I mentioned that get generated? We’ll create two different filters on this by using these lenses. selectYoungTeachers :: (MonadIO m, MonadLogger m) => SqlPersistT m [User] selectYoungTeachers = select [UserAge <. 25, UserOccupation ==. “Teacher”] [] We use the UserAge lens and the UserOccupation lens to choose the fields to filter on. We use a "less-than" operator to state that the age must be smaller than 25. Similarly, we use the ==. operator to match on the occupation. Then we provide an empty list of SelectOpts. The second list of selection operations provides some other features we might expect in a select statement. First, we can provide an ordering on our returned data. We’ll also use the generated lenses here. For instance, Asc UserEmail will order our list by email. Here's an ordered query where we also limit ourselves to 100 entries. selectYoungTeachers’ :: (MonadIO m) => SqlPersistT m [User] selectYoungTeachers’ = selectList [UserAge <=. 25, UserOccupation ==. “Teacher”] [Asc UserEmail] The other types of SelectOpts include limits and offsets. For instance, we can further modify this query to exclude the first 5 users (as ordered by email) and then limit our selection to 100: selectYoungTeachers' :: (MonadIO m) => SqlPersistT m [Entity User] selectYoungTeachers' = selectList [UserAge <. 25, UserOccupation ==. "Teacher"] [Asc UserEmail, OffsetBy 5, LimitTo 100] And that’s all there is to making queries that are type-safe and sensible. We know we’re actually filtering on values that make sense for our types. We don’t have to worry about typos ruining our code at runtime. Conclusion Persistent gives us some excellent tools for interacting with databases from Haskell. The Template Haskell mechanisms generate a lot of boilerplate code that helps us. For instance, we can migrate our database to create the correct tables for our Haskell types. We also can perform queries that filter results in a type-safe way. All in all, it’s a fantastic experience. You should now move on to part 2 of this series, where we'll make a Web API using Servant. If you want to check out some more potential libraries for all your production needs, take a look at our Production Checklist!
https://mmhaskell.com/web-skills-1
CC-MAIN-2019-30
en
refinedweb
Reminder : You can find all the DarkRift2 related articles here You can find the entire project on my official GitHub Tell the client to listen messages First thing to perform, is to update our client to tell him to listen when a message is arrived. We will tell him to connect in background instead of auto-connect (on the inspector). Why ? Just to show you that we can connect easely with code. There is two way of connecting : - Client.Connect() : Connect to the server, but keep “freezed” until the connection is made. It must be used when a connection to the server is mandatory or when you need to be sure that the connection is made. - Client.ConnectInBackground() : Performs the connection asynchronously. Means that the game won’t freeze, even if the connection is not made. Disable the auto connect in the client : So, let’s update the ClientManager script with this code : void Start() { ////////////////// /// Load the game scene SceneManager.LoadScene("MainGameScene", LoadSceneMode.Additive); ////////////////// /// Suscribe to events clientReference.MessageReceived += SpawnGameObjects; ////////////////// /// Connect to the server manually clientReference.ConnectInBackground( IPAddress.Parse("127.0.0.1"), 4296, DarkRift.IPVersion.IPv4, null ); } I’ve put the server adress without variables, but in your game, you should use another way. That’s only for the example. As you can see, we’ve added a listener on message received called SpawnGameObjects. Let’s now create this function ! Create the spawn function ok, what wee need to do ? i will sum up this function feature : - Check if the received message is tagged as SPAWN_OBJECT (see Part 9) - Load the specified game object - Instantiate him at the specified position And that’s all. Are you ready ? … but wait a minute. How can i know what is the desired ressource ? Yes, you’re right, at this point we can’t know. We need to create our NetworkDictionnary ! Network Game Object Dictionnary The dictionnary will be useful to determines wich resource is focus for a specified ID. He will looks like this : As said before, for loading object, we’ll use the resource folder wich is a special folder within unity3D (see this for more information). I don’t encourage to use this in a production environnement ! But for prototypes, it’s ok. Let’s create a new script called NetworkObjectDictionnary in the Scripts/Network folder : Here is the code : using System.Collections; using System.Collections.Generic; using UnityEngine; public static class NetworkObjectDictionnary { /// <summary> /// Dictionnary that contains all gameobjects spawnable /// </summary> private static readonly Dictionary<int, string> dictionnary = new Dictionary<int, string> { {1, "BouncyBall" } }; /// <summary> /// Returns the specified object name /// </summary> /// <param name="pID"></param> /// <returns></returns> public static string GetResourcePathFor(int pID) { string objectName; dictionnary.TryGetValue(pID, out objectName); return objectName; } } The dictionnary cannot be modified during run time, that’s why we specify the readonly keyword. We create a function to get the resource path for a specific if (GetResourcePathFor) Doing it with a function enable us to modify the way how the path is retrieved whitout modifyng everywhere the dictionnary is used. Finally, just need to create the Resources folder and add our BouncyBall as a prefab (you will need to open the MainGameScene) : Perfect ! we propably may now writing our SpawnFunction ? not yet, because, if you take a look on the message receveid, we only have 2 informations : - NetworkID : wich is the network identifier - Position : Position where the object has to be spwaned We are missing one information : the ressourceID wich is defined in our dictionnary. So, i think you guessed, we need to update our message model and the way we send the message. Update the server message Let’s firstly update our message spawn model wich contains the data structure of the message. The script to update is SpawnMessageModel : We just will add a new property called RessourceID : ... /// <summary> /// Resource to spawn /// </summary> public int resourceID { get; set; } #region DarkRift IDarkRiftSerializable implementation public void Deserialize(DeserializeEvent e) { networkID = e.Reader.ReadInt32(); resourceID = e.Reader.ReadInt32(); x = e.Reader.ReadSingle(); y = e.Reader.ReadSingle(); } public void Serialize(SerializeEvent e) { e.Writer.Write(networkID); e.Writer.Write(resourceID); e.Writer.Write(x); e.Writer.Write(y); } #endregion Now, we need to add this property into the NetworkObject Script. This property will be manually filled within the inspector : /// <summary> /// Resource identifier for the spawner /// </summary> public int resourceId; ); //Test for resource ID if (resourceId == 0) throw new System.Exception(string.Format("There is no resource id for {0} gameobject", name)); } } #endregion Just look on the Start() function, i added a check to inform us if we forget to set the resourceId for a NetworkObject. So let’s do this now. Open the prefab and set the resourceID to 1 : And now, we’ll simply modifiy the SendObjectToSpawnTo function within the GameServerManager script : /// , resourceID = pNetworkObject.resourceId,); } } The spawn function (finally) Here we are, we now can write our function within the ClientManager script. There is nothing complicated with this and the code speaks by itself : /// <summary> /// Spawn object if message received is tagged as SPAWN_OBJECT /// </summary> /// <param name="sender"></param> /// <param name="e"></param> private void SpawnGameObjects(object sender, MessageReceivedEventArgs e) { if (e.Tag == NetworkTags.InGame.SPAWN_OBJECT) { //Get message data SpawnMessageModel spawnMessage = e.GetMessage().Deserialize<SpawnMessageModel>(); //Spawn the game object string resourcePath = NetworkObjectDictionnary.GetResourcePathFor(spawnMessage.resourceID); GameObject go = Resources.Load(resourcePath) as GameObject; go.GetComponent<NetworkObject>().id = spawnMessage.networkID; Instantiate(go, new Vector3(spawnMessage.x, spawnMessage.y, 0), Quaternion.identity); } } Build server and try the client Let’s try our implementation. Here is how you need to process : - Build the server by selecting these 2 scenes : MainServerScene and MainGameScene - Launch the exe previously builded - Open your client scene in unity : MainClientScene and start the game. If you need more information about how to build, see the part You should see the ball appears in the scene ! That’s cool but the position of the ball should be random because as the ball bounce in the server scene, the spawn message will send the current position of the ball. That’s why when you lauch the game several time, the ball is not spawned on the same position ! What’s next ? On the next article, we’ll now synchronize the position of the ball with the server position. Exciting, isn’t it ? Thanks for reading.
http://materiagame.com/2019/02/21
CC-MAIN-2019-30
en
refinedweb
An ultrasonic water level controller is a device which can detect water levels in a tank without a physical contact and send the data to a distant LED indicator in a wireless GSM mode. In this post we are going to construct a ultrasonic based solar powered wireless water level indicator using Arduino in which the Arduinos would be transmitting and receiving at 2.4 GHz wireless frequency. We will be detecting the water level in the tank using ultrasonics instead of traditional electrode method. Overview Water level indicator is a must have gadget, if you own a house or even living in a rented house. A water level indicator shows one important data for your house which is as important as your energy meter’s reading, that is, how much water is left? So that we can keep track of water consumption and we don’t need to climb upstairs to access the water tank to check how much water left and no more sudden halt of water from faucet. We are living at 2018 (at the time of writing of this article) or later, we can communicate to anywhere in the world instantly, we launched an electric race car to space, we launched satellites and rovers to mars, we even able land human beings on moon, still no proper commercial product for detecting how much water left in our water tanks? We can find water level indicators are made by 5th grade students for science fair at school. How such simple projects didn’t make into our everyday life? The answer is water tank level indicators are not simple projects that a 5th grader can make one for our home. There are many practical considerations before we design one. • Nobody wants to drill a hole on water tank’s body for electrodes which might leak water later on. • Nobody wants to run 230 / 120 VAC wire near water tank. • Nobody wants to replace batteries every month. • Nobody wants to run additional long wires hanging on a room for water level indication as it is not pre-planned while building the house. • Nobody wants to use the water which is mixed with metal corrosion of the electrode. • Nobody wants to remove the water level indicator setup while cleaning the tank (inside). Some of the reasons mentioned above may look silly but, you will find less satisfactory with commercially available products with these cons. That’s why penetration of these products are very less among the average households*. *On Indian market. After considering these key points, we have designed a practical water level indicator which should remove the cons mentioned. Our design: • It uses ultrasonic sensor to measure the water level so no corrosion problem. • Wireless indication of water level real time at 2.4 GHz. • Good wireless signal strength, enough for 2 story high buildings. • Solar powered no more AC mains or replacing battery. • Tank full / overflow alarm while filling the tank. Let’s investigate the circuit details: Transmitter: The wireless transmitter circuit which is placed on the tank will send water level data every 5 seconds 24/7. The transmitter consists of Arduino nano, ultrasonic sensor HC-SR04, nRF24L01 module which will connect the transmitter and receiver wirelessly at 2.4 GHz. A Solar panel of 9 V to 12 V with current output of 300mA will power the transmitter circuit. A battery management circuit board will charge the Li-ion battery, so that we can monitor the water level even when there is no sunlight. Let us explore how to place the ultrasonic sensor at water tank: Please note that you have to use your creativity to mound the circuit and protect from rain and direct sunlight. Cut a small hole above the tank’s lid for placing the Ultrasonic sensor and seal it with some kind of adhesive you can find. Now measure the full height of the tank from bottom to lid, write it down in meters. Now measure the height of water holding capacity of tank as shown in the above image and write in down in meters. You need to enter these two values in the code. Schematic diagram of Transmitter: NOTE: nRF24L01 uses 3.3V as Vcc do not connect to 5V output of Arduino. Power supply for transmitter: Make sure that your solar panel’s output power i.e. output (volt x current) is greater than 3 watts. The solar panel should be 9V to 12V. 12V and 300mA panel is recommended which you can find easily on market. Battery should be around 3.7V 1000 mAh. 5V 18650 Li-ion charging module: The following image shows a standard 18650 charger circuit The input can be USB (not used) or external 5V from LM7805 IC. Make sure that you get the correct module as shown above, it should have TP4056 protection, which has low battery cut-off and short circuit protection. The output of this should to be fed to XL6009’s input which will boost to higher voltage, using a small screw driver output of XL6009 should be adjusted to 9V for Arduino. Illustration of XL6009 DC to DC boost converter: That concludes the transmitter’s hardware. Code for Transmitter: // ----------- Program Developed by R.GIRISH / Homemade-circuits .com ----------- // #include <RF24.h> #include<SPI.h> RF24 radio(9, 10); const byte address[6] = "00001"; const int trigger = 3; const int echo = 2; const char text_0[] = "STOP"; const char text_1[] = "FULL"; const char text_2[] = "3/4"; const char text_3[] = "HALF"; const char text_4[] = "LOW"; float full = 0; float three_fourth = 0; float half = 0; float quarter = 0; long Time; float distanceCM = 0; float distanceM = 0; float resultCM = 0; float resultM = 0; float actual_distance = 0; float compensation_distance = 0; // ------- CHANGE THIS -------// float water_hold_capacity = 1.0; // Enter in Meters. float full_height = 1.3; // Enter in Meters. // ---------- -------------- // void setup() { Serial.begin(9600); pinMode(trigger, OUTPUT); pinMode(echo, INPUT); digitalWrite(trigger, LOW); radio.begin(); radio.openWritingPipe(address); radio.setChannel(100); radio.setDataRate(RF24_250KBPS); radio.setPALevel(RF24_PA_MAX); radio.stopListening(); full = water_hold_capacity; three_fourth = water_hold_capacity * 0.75; half = water_hold_capacity * 0.50; quarter = water_hold_capacity * 0.25; } void loop() { delay(5000); digitalWrite(trigger, HIGH); delayMicroseconds(10); digitalWrite(trigger, LOW); Time = pulseIn(echo, HIGH); distanceCM = Time * 0.034; resultCM = distanceCM / 2; resultM = resultCM / 100; Serial.print("Normal Distance: "); Serial.print(resultM); Serial.println(" M"); compensation_distance = full_height - water_hold_capacity; actual_distance = resultM - compensation_distance; actual_distance = water_hold_capacity - actual_distance; if (actual_distance < 0) { Serial.print("Water Level:"); Serial.println(" 0.00 M (UP)"); } else { Serial.print("Water Level: "); Serial.print(actual_distance); Serial.println(" M (UP)"); } Serial.println("============================"); if (actual_distance >= full) { radio.write(&text_0, sizeof(text_0)); } if (actual_distance > three_fourth && actual_distance <= full) { radio.write(&text_1, sizeof(text_1)); } if (actual_distance > half && actual_distance <= three_fourth) { radio.write(&text_2, sizeof(text_2)); } if (actual_distance > quarter && actual_distance <= half) { radio.write(&text_3, sizeof(text_3)); } if (actual_distance <= quarter) { radio.write(&text_4, sizeof(text_4)); } } // ----------- Program Developed by R.GIRISH / Homemade-circuits .com ----------- // Change the following values in the code which you measured: // ------- CHANGE THIS -------// float water_hold_capacity = 1.0; // Enter in Meters. float full_height = 1.3; // Enter in Meters. // ---------- -------------- // That concludes the transmitter. The Receiver: The receiver can show 5 levels. Alarm, when the tank reached absolute maximum water holding capacity while filling tank. 100 to 75 % - All four LEDs will glow, 75 to 50 % three LEDs will glow, 50 to 25 % two LEDs will glow, 25% and less one LED will glow. The receiver can be powered from 9V battery or from smartphone charger to USB mini-B cable. Code for Receiver: // ----------- Program Developed by R.GIRISH / Homemade-circuits .com ----------- // #include <RF24.h> #include<SPI.h> RF24 radio(9, 10); int i = 0; const byte address[6] = "00001"; const int buzzer = 6; const int LED_full = 5; const int LED_three_fourth = 4; const int LED_half = 3; const int LED_quarter = 2; char text[32] = ""; void setup() { pinMode(buzzer, OUTPUT); pinMode(LED_full, OUTPUT); pinMode(LED_three_fourth, OUTPUT); pinMode(LED_half, OUTPUT); pinMode(LED_quarter, OUTPUT); digitalWrite(buzzer, HIGH); delay(300); digitalWrite(buzzer, LOW); digitalWrite(LED_full, HIGH); delay(300); digitalWrite(LED_three_fourth, HIGH); delay(300); digitalWrite(LED_half, HIGH); delay(300); digitalWrite(LED_quarter, HIGH); delay(300); digitalWrite(LED_full, LOW); delay(300); digitalWrite(LED_three_fourth, LOW); delay(300); digitalWrite(LED_half, LOW); delay(300); digitalWrite(LED_quarter, LOW); Serial.begin(9600); radio.begin(); radio.openReadingPipe(0, address); radio.setChannel(100); radio.setDataRate(RF24_250KBPS); radio.setPALevel(RF24_PA_MAX); radio.startListening(); } void loop() { if (radio.available()) { radio.read(&text, sizeof(text)); Serial.println(text); if (text[0] == 'S' && text[1] == 'T' && text[2] == 'O' && text[3] == 'P') { digitalWrite(LED_full, HIGH); digitalWrite(LED_three_fourth, HIGH); digitalWrite(LED_half, HIGH); digitalWrite(LED_quarter, HIGH); for (i = 0; i < 50; i++) { digitalWrite(buzzer, HIGH); delay(50); digitalWrite(buzzer, LOW); delay(50); } } if (text[0] == 'F' && text[1] == 'U' && text[2] == 'L' && text[3] == 'L') { digitalWrite(LED_full, HIGH); digitalWrite(LED_three_fourth, HIGH); digitalWrite(LED_half, HIGH); digitalWrite(LED_quarter, HIGH); } if (text[0] == '3' && text[1] == '/' && text[2] == '4') { digitalWrite(LED_full, LOW); digitalWrite(LED_three_fourth, HIGH); digitalWrite(LED_half, HIGH); digitalWrite(LED_quarter, HIGH); } if (text[0] == 'H' && text [1] == 'A' && text[2] == 'L' && text[3] == 'F') { digitalWrite(LED_full, LOW); digitalWrite(LED_three_fourth, LOW); digitalWrite(LED_half, HIGH); digitalWrite(LED_quarter, HIGH); } if (text[0] == 'L' && text[1] == 'O' && text[2] == 'W') { digitalWrite(LED_full, LOW); digitalWrite(LED_three_fourth, LOW); digitalWrite(LED_half, LOW); digitalWrite(LED_quarter, HIGH); } } } // ----------- Program Developed by R.GIRISH / Homemade-circuits .com ----------- // That concludes the receiver. NOTE: if no LEDs are glowing, which means the receiver can’t get signal from transmitter. You should wait 5 seconds to receive the signal from transmitter after turning on the receiver circuit. Author’s prototypes: Transmitter: Receiver: If you have any questions regarding this solar powered ultrasonic wireless water level controller circuit, please feel free to express in the comment, you can expect to get a quick reply. Search Related Posts for Commenting Can this circuit be used to control the motor as soon as the tank gets full. If yes then how to connect. If not then kindly suggest me a circuit for the same. Regards, A. Malik Yes it is possible! Disconnect the “FULL” green LED negative pin from the ground, and connect it to the base of a relay driver stage: Are these commercially available.pls share contact details. Thanks in advance yes available through amazon, ebay etc Where I can purchase Can we add two or more receiver to the 1 transmitter yes it may be possible The ultrasonic sensor, HC-SR04, will be continuously exposed to the very humid atmosphere in the tank. Will the sensor be able to survive this humid environment for more than a few months? Will this also work with tubes smaller than 1 meter? I am testing with a Dopper bottle and set the size as follows: float water_hold_capacity = 0.14; // up to 14 centimeter can be with water float full_height = 0.19; // the maximum size of the Dopper bottle The Transmitter is registering the correct measurements in the serial monitor. The problem I have is, that the LED / LED’s never turn on (except on startup of the Arduino). The buzzer is working because I have added a testing code in the void setup() to check it for 5 seconds. How should I change the code of the receiver for this bottle? Sorry, I have no idea regarding code modification. Should the LED/LED’s always be on, or will it/they only be on when filling the tank/bottle? When I use the serial monitor for the receiver, I get the question marks again. You can keep it always ON so that you can continuously get the information about the level of the liquid. I am not Arduino expert so I can’t troubleshot the issue! I followed the circuit design and uploaded the codes. However, when I open the serial monitor, I only get question marks: ? ???????? In the monitor I set the baud rate to 9600 which is the same as in the code. What am I doing wrong? I checked the code in my Arduino ID and it is compiling perfectly: Done Compiling! Sketch uses 5998 bytes (18%) of program storage space. Maximum is 32256 bytes. Global variables use 369 bytes (18%) of dynamic memory, leaving 1679 bytes for local variables. Maximum is 2048 bytes. Solved: I build the circuit without the buzzer and the LEDS. Now, I get normal output in the serial monitor. Glad it is solved! Could you please add a LCD display in receiver end in this project where showing the date, time, motor status and water level (as percentage e.g. 5%, 10% ……. 100%). Sorry, My arduino coding is not good, so I won’t be able to help you in this regard, however there are some information in the following links, which you can refer and try the procedures by yourself: Hi Swagatam, I am Anupam Dasgupta from Dhubri, Assam. The project is very nice and in my mind I was actually looking for such a project for my overhead water tank. Though everything is fine in the code, I could not follow one thing. The messages that will be send from the transmitter section are : text_0[] = “STOP”; text_1[] = “FULL”; text_2[] = “3/4”; text_3[] = “HALF”; text_4[] = “LOW”; But, in the receiver section, the messages that will be received shows; if (text[0] == ‘S’ && text[1] == ‘T’ && text[2] == ‘O’ && text[3] == ‘P’) etc.etc. would you kindly explain it. With regards. Hi Anupam, I am sorry I won’t be able to provide any suggestions, because my Arduino coding knowledge is not good at this moment. sir what is the design if it’s just battery as power source instead of solar power? Because i want to design a portable device that could measure the volume of liquid inside the tank using ultrasonic sensor. Hope you could help me, thank you! Mark, As indicated in the 2nd diagram, you can use a 9V battery instead of a solar panel…. Is the 9V battery connected to a transmitter and another battery to the receiver sir? And what should be the design if i don’t use LEDs? Just like an ultrasonic sensor that indicates a volume sensor. Hoping for your response sir thanks! Yes, since the units are to be operated remotely, the power sources will need to be separate. Hi Swagatam, This is Ramana, from Hosur, Tamilnadu. I am very much impressed on your Ultrasonic Water level controller. But my doubt is whether 2nos. of Audrino is required(one each for Transmitter and Receiver or one is enough). Also if you can clarify the total Parts list for this project it would be very helpful. Please share your response. Thanks in advance, please keep innovating with new projects. Regards, Ramana Thanks Ramana, I am glad you liked the concept. Yes you will require two separate Arduino boards as shown in the transmitter/receiver set ups. The parts will be exactly as given in the two diagrams. Sir I like this project but one thing I want to know that can motor be connected for automatic filling of overhead tank. Waiting for your kind reply. Regards, Mallik Hi Anand, yes it is possible by configuring a set-reset relay circuit with the upper GREEN LED, and the lower RED LED. Sir good day. Can you help me design a cricuit that uses ultrasonic sensor to measure a liquid volume in a tank without the use of arduino? It must be dc powered also sir. Thank you Hi Mark, sorry presently we do not have a non-Arduino version of the above concept… hi guys i need 2v input solar panel charger, is there any recommendation you can try this: Hello Mr Girish. I am Younus from Australia. I would propose to ask you to design and make a Prototype for our Project. Its a portable oil Level Detection hand held unit capable to measure level remotely from a small opening say 2 inch steel pipe till a depth of say 80-100 meters with measuring accuracy of 0.1% Maximum. Preferably using Ultrasonic waves but not necessary if have a robust and cheaper version. We can discuss in detail If you can Possibly spare some time please. Thanks and Regards Younus. thanks for making user friendly circuits. can we buy circuit or pcb made by you because we don’t have knowledge of electronics I appreciate your interest very much, however no longer manufacture PCBs or circuit modules, so can’t help in this regard. what is max distance between sender and receiver? Not sure about the exact distance, but since it works using radio frequency on ISM bands, the distance can be significantly long: Sir what would be the cost over all approximately to this project Could be below Rs.1000/-, you can confirm it by searching in online stores. Ultrasonic Wireless Water Level Indicator and controller using solenoid with arduino i have 2 over head tank please give me circuit diagram with code mail id yousufh2@gmail.com COMMENT BOX IS MOVED AT THE TOP
https://www.homemade-circuits.com/ultrasonic-wireless-water-level-indicator-using-arduino-solar-powered/
CC-MAIN-2021-10
en
refinedweb
Allocator proxy. Allocators are copied by value when containers are created, which means that each container would normally have its own allocator. However, sometimes, like for pool allocators, performance can be better if multiple containers could share an allocator. An allocator proxy allows containers to share allocators by having the proxy reference the allocator. When the proxy is copied, the new copy continues to point to the original allocator. Example usage: Definition at line 60 of file DefaultAllocator.h. #include <DefaultAllocator.h> Constructor. The constructor stores a reference to the allocator within the new proxy object. All copies of the proxy will refer to the same allocator. Definition at line 67 of file DefaultAllocator.h. Allocate memory. Allocates new memory by delegating to the underlying allocator. Definition at line 74 of file DefaultAllocator.h. Deallocate memory. Deallocates memory by delegating to the underlying allocator. Definition at line 81 of file DefaultAllocator.h.
http://rosecompiler.org/ROSE_HTML_Reference/classSawyer_1_1ProxyAllocator.html
CC-MAIN-2021-10
en
refinedweb
A minimalistic framework-agnostic library for showing info messages without interrupting the overall flow on web applications. Usage A simple usage example: import 'package:toast/toast.dart'; main() { Toast( title: 'Info', text: 'Your changes has been saved!', duration: const Duration(seconds: 3)); } Features and bugs Please file feature requests and bugs at the issue tracker.
https://pub.dev/documentation/web_toast/latest/
CC-MAIN-2021-10
en
refinedweb
Batch inserts for SQLAlchemy on PostgreSQL with psycopg2 Project description Benchling uses SQLAlchemy and psycopg2 to talk to PostgreSQL. To save on round-trip latency, we batch our inserts using this code. Typically, creating and flushing N models in SQLAlchemy does N roundtrips to the database if the model has an autoincrementing primary key. This module improves creating N models to only require 2 roundtrips, without requiring any other changes to your code. Is this for me? You may find use for this module if: - You are using SQLAlchemy - You are using Postgres - You sometimes need to create several models at once and care about performance Usage from sqlalchemy import create_engine from sqlalchemy.orm import sessionmaker from sqlalchemy_batch_inserts import enable_batch_inserting engine = create_engine("postgresql+psycopg2://postgres@localhost", executemany_mode="values") # SQLAlchemy < 1.3.7 needs use_batch_mode=True instead Session = sessionmaker(bind=engine) session = Session() enable_batch_inserting(session) If you use Flask-SQLALchemy, from flask_sqlalchemy import SignallingSession from sqlalchemy_batch_inserts import enable_batch_inserting # Make sure that you've specified executemany_mode or use_batch_mode when creating your engine! Otherwise # this library will not have any effect. enable_batch_inserting(SignallingSession) Acknowledgements This is all possible thanks to @dvarrazzo's psycopg2 execute_batch and @zzzeek's SQLAlchemy support for the same and helpful advice on the mailing list. Project details Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/sqlalchemy-batch-inserts/
CC-MAIN-2021-10
en
refinedweb
Another Issue complete. Unfortunately so much of what I wanted to put in hasn't made it, Mainly through time constraints. In the next issue will be a review of Borland's OS/2 compiler. I have had no exposure to OS/2 prior to using this compiler, and not wishing to do a poor job of the review, I have postponed the publication until it is completed. One of the other subjects I was going to write about was namespaces. Bjarne beat me there by giving us an outline of namespaces in the interview. He has done a better job than I could; he is, after all, the author of the namespace paper. I have received several letters commenting on the style of Overload Issue 1, in particular the variety of fonts used and the hyphenations at end-of-line wrap-around, together with like sections being scattered throughout the magazine. These faults have, been eradicated in this issue, even if it (IMHO) looks sparse. One of the side-effects of keeping all the like sections together, is that I can no longer keep articles to page boundaries. As this is not done in CVu, I must assume that it is an acceptable practice. I have this policy of only keeping general PC magazines for about 6 months; after that period of time I remove any articles that interest me and file them. The remaining 'husk' gets filed in the little grey round thing at the bottom of my desk. It was during such a pruning that I chanced across an article in PC Answers (Dec 1992 Page 174) where someone is asking for a recommended package for learning C/C++. The person answering the questions (Steve Patient) states that: "I think that C++ is a red herring. If you want Object-Oriented programming (not asked for by the person) there are more appropriate ways to get it. Fortunately, all C++ compilers support C as a subset of the language." As you can imagine, this made my blood boil. The article was consigned to the bin His photo now occupies the centre of my dartboard! I don't like to be over-protective of any language, they all have their place, but for someone to simply dismiss C++ in a magazine like PC Answers is a silly thing to do. I know I'm probably preaching to the converted, but C++ is certainly here to stay, and will probably end up displacing C. It's amazing how easy it is to forget things. I had a little problem with the pre-processor the other day. After a little while, struggling to understand what stupid mistake I had made, I remembered the CPP.EXE program. This little gem is the pre-processor and if run against a .CPP file, a file (of type .I) is produced that contains all the post pre-processor source. Damn! It's obvious what that mistake was. I got asked a difficult question when someone asked me, "If you could only keep one C++ book, which one would it be?" What dirty low-down kind of question is that? I have struggled with the answer to this. Should it be one of the standard books like Bjame's "The C++ Programming Language" or the "Annotated Reference Manual"., should it be one of the three books that rarely leaves my side (C++ Programming Style - Cargill, Effective C++ - Meyers & C++ Strategies and Tactics -Murray). Or any one of the myriad's of books bending my shelving. It should, as well as covering C++ as a language, cover the Borland compiler. The choice then becomes obvious. What book is on the top of the desk most of the time? Answer: Ted Faison's Borland C++ Object-Oriented Programming published by SAMS. What? Never heard of it! Go have a look in your book shop, it's probably not the best read in the world, but the amount of detail it covers streams and the Borland class libraries (including OWL and TurboVision) make it my number one book. See you all soon - Mike Toms
https://accu.org/journals/overload/1/2/toms_1360/
CC-MAIN-2021-10
en
refinedweb
I am using Python3 to learn distributed programming there are two python file,one's name is main.py, it distributes information, the other one manipulation data, and the name is worker.py. everything goes well when I run this two file in one computer[set server address = 127.0.0.1, port = 5000] but when i run these two files in seperate computers, they cannot connect to each other, and TimeoutError was encoutered. I don't know why. one computer is Win10 at my home, the other is a linux cloud server which I baught. the code works in one computer. but when I ran main.py in linux, and ran worker.py{change server to linux's ip address} in win10, then the worker.py encounter a TimeoutError I know nothing about the linux, is there some security settings I need to open or close? """main.py""" import queue from multiprocessing.managers import BaseManager import datetime import time TASK_QUEUE = queue.Queue() RESULT_QUEUE = queue.Queue() def get_task_queue(): """set TASK_QUEUE as a function""" global TASK_QUEUE return TASK_QUEUE def receive_result_queue(): """set RESULT_QUEUE as a function""" global RESULT_QUEUE return RESULT_QUEUE class QueueManager(BaseManager): """inherit BaseManager from multiprocessing.managers""" pass if __name__ == '__main__': QueueManager.register('distribute_task_queue', callable=get_task_queue) QueueManager.register('receive_result_queue', callable=receive_result_queue) # bind port 5000, set verification code = 'abc' MANAGER = QueueManager(address=('127.0.0.1', 5000), authkey=b'abc') # start manager MANAGER.start() TASK = MANAGER.distribute_task_queue() RESULT = MANAGER.receive_result_queue() # put each line into manager`enter code here` with open("C:/Users/dayia/Desktop/log.20170817") as f: for line in f: TASK.put(line) # try receive result while 1: try: r = RESULT.get(timeout=1) if r[0] == r[1] and r[0] == "done": break else: print(datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S"),"line %s\'s length is %s" % (r[0], r[1])) except queue.Empty: print('result queue is empty.') """worker.py""" import datetime from multiprocessing.managers import BaseManager import queue import time class QueueManager(BaseManager): """inherit BaseManager from multiprocessing.managers""" pass QueueManager.register('distribute_task_queue') QueueManager.register('receive_result_queue') server_addr = '127.0.0.1' print(datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S"), 'Connect to server %s...' % server_addr) m = QueueManager(address=(server_addr, 5000), authkey=b'abc') m.connect() TASK = m.distribute_task_queue() RESULT = m.receive_result_queue() def parse_line(line): return len(line) C = 0 while not TASK.empty(): try: n = TASK.get(timeout=1) r = parse_line(n) print(datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S"), 'running line %s, length is %s' % (C+1, r)) C += 1 RESULT.put([r, C]) except queue.Empty: print('task queue is empty.') RESULT.put(["done", "done"]) enter code here print('worker exit') The address 127.0.0.1 very specifically refers to the same computer where the code is running.
https://codedump.io/share/lZVVwtMz7YV1/1/timeouterror-when-two-computers-comunicated
CC-MAIN-2021-10
en
refinedweb
class new in Git master #include <Magnum/Vk/Framebuffer.h> Framebuffer Framebuffer. Contents Wraps a VkFramebuffer, which connects a RenderPass together with concrete ImageViews for attachments. Framebuffer creation A framebuffer is created using FramebufferCreateInfo that takes a previously-created RenderPass together with ImageViews onto Images of desired sizes and compatible formats for all its attachments: #include <Magnum/Vk/FramebufferCreateInfo.h> … Vk::Image color{device, Vk::ImageCreateInfo2D{ /* created before */ Vk::ImageUsage::ColorAttachment, Vk::PixelFormat::RGBA8Unorm, size, 1}, …}; Vk::Image depth{device, Vk::ImageCreateInfo2D{ Vk::ImageUsage::DepthStencilAttachment, Vk::PixelFormat::Depth24UnormStencil8UI, size, 1}, …}; Vk::ImageView colorView{device, Vk::ImageViewCreateInfo2D{color}}; Vk::ImageView depthView{device, Vk::ImageViewCreateInfo2D{depth}}; Vk::RenderPass renderPass{device, Vk::RenderPassCreateInfo{} /* created before */ .setAttachments({ Vk::AttachmentDescription{color.format(), …}, Vk::AttachmentDescription{depth.format(), …}, }) … }; Vk::Framebuffer framebuffer{device, Vk::FramebufferCreateInfo{renderPass, { colorView, depthView }, size}}; Public static functions - static auto wrap(Device& device, VkFramebuffer handle, const Vector3i& size, HandleFlags flags = {}) -> Framebuffer - Wrap existing Vulkan handle. Constructors, destructors, conversion operators - Framebuffer(Device& device, const FramebufferCreateInfo& info) explicit - Constructor. - Framebuffer(NoCreateT) explicit - Construct without creating the framebuffer. - Framebuffer(const Framebuffer&) deleted - Copying is not allowed. - Framebuffer(Framebuffer&& other) noexcept - Move constructor. - ~Framebuffer() - Destructor. - operator VkFramebuffer() Public functions - auto operator=(const Framebuffer&) -> Framebuffer& deleted - Copying is not allowed. - auto operator=(Framebuffer&& other) -> Framebuffer& noexcept - Move assignment. - auto handle() -> VkFramebuffer - Underlying VkFramebuffer handle. - auto handleFlags() const -> HandleFlags - Handle flags. - auto size() const -> Vector3i - Framebuffer size. - auto release() -> VkFramebuffer - Release the underlying Vulkan framebuffer. Function documentation static Framebuffer Magnum:: Vk:: Framebuffer:: wrap(Device& device, VkFramebuffer handle, const Vector3i& size, HandleFlags flags = {}) Wrap existing Vulkan handle. The handle is expected to be originating from device. The size parameter is used for convenience RenderPass recording later. If it's unknown, pass a default-constructed value — you will then be able to only being a render pass by specifying a concrete size in RenderPassBeginInfo. Unlike a framebuffer created using a constructor, the Vulkan framebuffer is by default not deleted on destruction, use flags for different behavior. Magnum:: Vk:: Framebuffer:: Framebuffer(Device& device, const FramebufferCreateInfo& info) explicit Constructor. Magnum:: Vk:: Framebuffer:: Framebuffer(NoCreateT) explicit Construct without creating the framebuffer. The constructed instance is equivalent to moved-from state. Useful in cases where you will overwrite the instance later anyway. Move another object over it to make it useful. Magnum:: Vk:: Framebuffer:: ~Framebuffer() Destructor. Destroys associated VkFramebuffer handle, unless the instance was created using wrap() without HandleFlag:: Magnum:: Vk:: Framebuffer:: operator VkFramebuffer() This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts. VkFramebuffer Magnum:: Vk:: Framebuffer:: release() Release the underlying Vulkan framebuffer. Releases ownership of the Vulkan framebuffer and returns its handle so vkDestroyFramebuffer() is not called on destruction. The internal state is then equivalent to moved-from state.
https://doc.magnum.graphics/magnum/classMagnum_1_1Vk_1_1Framebuffer.html
CC-MAIN-2021-10
en
refinedweb
/* Parameters and display hooks for terminal devices. <>. */ /* Miscellanea. */ #include "systime.h" /* for Time */ INLINE_HEADER_BEGIN #ifndef TERMHOOKS_INLINE # define TERMHOOKS_INLINE INLINE #endif) (struct frame *f); /* Input queue declarations and hooks. */def HAVE_NTGUI LANGUAGE_CHANGE_EVENT, /* A LANGUAGE_CHANGE_EVENT is generated when HAVE_NTGUI or HAVE_GPM , GPM_CLICK_EVENT #endif #ifdef HAVE_DBUS , DBUS_EVENT #endif , CONFIG_CHANGED_EVENT /*def HAVE_NS /* Generated when native multi-keystroke input method is used to modify tentative or indicative text display. */ , NS_TEXT_EVENT /* Non-key system events (e.g. application menu events) */ , NS_NONKEY_EVENT #if defined (HAVE_INOTIFY) || defined (HAVE_NTGUI) /* File or directory was changed. */ , FILE_NOTIFY_EVENT #endif /*. For a HELP_EVENT, this is the position within the object (stored in ARG below) where the help was found. */ /* In WindowsNT, for a mouse wheel event, this is the delta. */ ptrdiff_t code; enum scroll_bar_part part; int modifiers; /* See enum below for interpretation. */ Lisp_Object x, y; Time timestamp; /* This field is copied into a vector while the event is in the queue, so that garbage collections won't kill it. */ Lisp_Object frame_or_window; /* Additional event argument. This is used for TOOL_BAR_EVENTs and HELP_EVENTs and avoids calling Fcons during signal handling. */ Lisp_Object arg; }; #define EVENT_INIT(event) memset (&(event), 0,^28 bit for any modifier. It may or may not be the sign bit, depending on FIXNUM_BITS, so using it to represent a modifier key means that characters thus modified have different integer equivalents depending on the architecture they're running on. Oh, and applying XINT to a character whose 2^28 bit is set might sign-extend *); #ifndef HAVE_WINDOW_SYSTEM extern void term_mouse_moveto (int, int); /* The device for which we have enabled gpm support. */ extern struct tty_display_info *gpm_tty; struct ns_display_info; struct x_display_info; struct w32_display_info; /* Terminal-local parameters. */ struct terminal { /* This is for Lisp; the terminal code does not refer to it. */ struct vectorlike_header header; /* Parameter alist of this terminal. */ Lisp_Object param_alist; /* List of charsets supported by the terminal. It is set by Fset_terminal_coding_system_internal along with the member terminal_coding. */ Lisp_Object charset_list; /* This is an association list containing the X selections that Emacs might own on this terminal. Each element has the form (SELECTION-NAME SELECTION-VALUE SELECTION-TIMESTAMP FRAME) SELECTION-NAME is a lisp symbol, whose name is the name of an X Atom. SELECTION-VALUE is the value that emacs owns for that selection. It may be any kind of Lisp object. SELECTION-TIMESTAMP is the time at which emacs began owning this selection, as a cons of two 16-bit numbers (making a 32 bit time.) FRAME is the frame for which we made the selection. If there is an entry in this alist, then it can be assumed that Emacs owns that selection. The only (eq) parts of this list that are visible from Lisp are the selection-values. */ Lisp_Object Vselection; /* The terminal's keyboard object. */ struct kboard *kboard; ns_display_info *ns; /* nster */ /* Window-based redisplay interface for this device (0 for tty devices). */ struct redisplay_interface *rif; /* Frame-based redisplay interface. */ /* Text display hooks. */ void (*cursor_to_hook) (struct frame *f, int vpos, int hpos); void (*raw_cursor_to_hook) (struct frame *, int, int); void (*clear_to_end_hook) (struct frame *); void (*clear_frame_hook) (struct frame *); void (*clear_end_of_line_hook) (struct frame *, int); void (*ins_del_lines_hook) (struct frame *f, int, int); void (*insert_glyphs_hook) (struct frame *f, struct glyph *s, int n); void (*write_glyphs_hook) (struct frame *f, struct glyph *s, int n); void (*delete_glyphs_hook) (struct frame *, int); void (*ring_bell_hook) (struct frame *f); void (*toggle_invisible_pointer_hook) (struct frame *f, int invisible); void (*reset_terminal_modes_hook) (struct terminal *); void (*set_terminal_modes_hook) (struct terminal *); void (*update_begin_hook) (struct frame *); void (*update_end_hook) (struct frame *); void (*set_terminal_window_hook) . */ void (*mouse_position_hook) (struct frame **f, int, Lisp_Object *bar_window, enum scroll_bar_part *part, Lisp_Object *x, Lisp_Object *y, Time *); /* When a frame's focus redirection is changed, this hook tells the window system code to re-decide where to put the highlight. Under X, this means that Emacs lies about where the focus is. */ void (*frame_rehighlight_hook) _FLAG is non-zero, F is brought to the front, before all other windows. If RAISE_FLAG is zero, F is sent to the back, behind all other windows. */ void (*frame_raise_lower_hook) (struct frame *f, int raise_flag); /* If the value of the frame parameter changed, whis hook is called. For example, if going from fullscreen to not fullscreen this hook may do something OS dependent, like extended window manager hints on X11. */ void (*fullscreen_hook) judgment.) (struct frame *frame); /* Unmark WINDOW's scroll bar for deletion in this judgment cycle. Note that it's okay to redeem a scroll bar that is not condemned. */. void (*judge_scroll_bars_hook) (struct frame *FRAME); /* Called to read input events. TERMINAL indicates which terminal device to read from. Input events should be read into HOLD_QUIT. A positive return value indicates that that many input events were read into BUF. Zero means no events were immediately available. A value of -1 means a transient read error, while -2 indicates that the device was closed (hangup), and it should be deleted. */ int (*read_socket_hook) (struct terminal *terminal, struct input_event *hold_quit); /* Called when a frame's display becomes entirely up to date. */ void (*frame_up_to_date_hook) (struct frame *); /* Called to delete the device-specific portions of a frame that is on this terminal device. */ void (*delete_frame_hook) . delete_frame ensures that there are no live frames on the terminal when it calls this hook, so infinite recursion is prevented. */ void (*delete_terminal_hook) (struct terminal *); /* Most code should use these functions to set Lisp fields in struct terminal. */ TERMHOOKS_INLINE void tset_charset_list (struct terminal *t, Lisp_Object val) { t->charset_list = val; } TERMHOOKS_INLINE void tset_selection_alist (struct terminal *t, Lisp_Object val) { t->Vselection_alist = val; } /*) /* Return true if the terminal device is not suspended. */ #define TERMINAL_ACTIVE_P(d) (((d)->type != output_termcap && (d)->type !=output_msdos_raw) || (d)->display_info.tty->input) extern struct terminal *get_terminal (Lisp_Object terminal, int); extern struct terminal *create_terminal (void); extern void delete_terminal (struct terminal *); /* The initial terminal device, created by initial_term_init. */ extern struct terminal *initial_terminal; extern unsigned char *encode_terminal_code (struct glyph *, int, struct coding_system *); extern void close_gpm (int gpm_fd); INLINE_HEADER_END
https://emba.gnu.org/emacs/emacs/-/blame/819e2da92a18d7af03ccd9cf0a2e5b940eb7b54f/src/termhooks.h
CC-MAIN-2021-10
en
refinedweb
Sorting a vector in C++. we will discuss how to sort a vector of integers in C++ in ascending order. C++ tutorial for sort(). We are going to see the basic use of sort() as well as some other things we can do with it. Lets say we want to sort this array of integers in ascending order. So all we need to do is use this sort() function available in algorithm header file. You can see, Now the array is sorted. Let’s say we wanted this array to be sorted in descending order instead of ascending order. We can either reverse the (sorted) array or we can sort the array in descending order to begin with. We can do so by passing a* boolean function that checks if that elements are in descending order as argument for sort function. Or even better we can pass this argument (greater()) which is already present in c++ library. So we can do a lots of crazy things using this boolean function. Let’s say I wanted my array to be sorted according to the digit in one’s place. For that, I can do something like this. As You can see this sorted my array as I wanted. What if I wanted, my array to be sorted lexicographically. It is a sorting method used for making dictionaries. for example, ab comes before b in a dictionary. Similarly, 12 will come before 2 even though twelve is greater than two For this type of sorting we can use the array of string instead of an array of integers. This will work with all the alpha-numeric characters. Sort Vector C++ Descending Given an array how to sort all even numbers in ascending order and odd numbers in descending order. Here’s the problem, given an integer array we need to sort all the odd integers in decreasing order first and then all the even integers in increasing order. Let’s look at examples for this problem, if this is the input array, then we can see that the descending order of odd numbers is 7,5,3,1 and ascending order of even numbers is 2,4,10. So, the output is this. Similarly for the second case, we will get this as output The first method to solve this problem is to move all odd numbers to the left and even numbers to the right. After this, we individually sort the left and right parts. Here is an implementation of the given problem in C++. We keep 2 integers for left and right extremities. Then iterate and partition the odd and even numbers. At last, we sort the odd numbers in descending order and even numbers in ascending order. Let’s look at another solution to this problem, we multiply all the odd numbers with -1 and then sort the entire array. At last, we revert back to the changes made in step 1. Here’s the implementation for this problem. We first multiply all the odd numbers with -1. Then sort then entire array and atlast revert back the normal array. The time complexity for both the solutions is O(n log n). Sort Vector of Strings C++ This is C++, not C. Sorting an array of strings is easy. #include <string> #include <vector> #include <algorithm> std::vector<std::string> stringarray; std::sort(stringarray.begin(), stringarray.end()); Sort Vector of Vector C++ Sort a string vector. Actually, the vector is being sorted if all the strings start with the a capital/lower letter. Sure it is. std::sort can take a third parameter which is the comparison function to use when sorting. For example, you could use a lambda function: std::vector<std::vector<int>> vec; // Fill it std::sort(vec.begin(), vec.end(), [](const std::vector<int>& a, const std::vector<int>& b) { return a[2] < b[2]; }); Alternatively, you can pass anything else callable with signature bool(const std::vector<int>&, const std::vector<int>&), such as a functor or function pointer. Response to edit: Simply apply your COST function to a and b: std::sort(vec.begin(), vec.end(), [](const std::vector<int>& a, const std::vector<int>& b) { return COST(a) < COST(b); }); Quick Sort Vector C++ - Always pick the first element as pivot. - Always pick the last element as the pivot (implemented below) - Pick a random element as a pivot. - Pick median as a pivot. #include <vector> using namespace std; void swap(vector<int>& v, int x, int y); void quicksort(vector<int> &vec, int L, int R) { int i, j, mid, piv; i = L; j = R; mid = L + (R - L) / 2; piv = vec[mid]; while (i<R || j>L) { while (vec[i] < piv) i++; while (vec[j] > piv) j--; if (i <= j) { swap(vec, i, j); //error=swap function doesnt take 3 arguments i++; j--; } else { if (i < R) quicksort(vec, i, R); if (j > L) quicksort(vec, L, j); return; } } } void swap(vector<int>& v, int x, int y) { int temp = v[x]; v[x] = v[y]; v[y] = temp; } int main() { vector<int> vec1; const int count = 10; for (int i = 0; i < count; i++) { vec1.push_back(1 + rand() % 100); } quicksort(vec1, 0, count - 1); } C++ Sort Vector of Objects I would overload operator < and then sort using greater. Greater essentially means rhs < lhs. You then sort using sort(listOfPeople.begin(), listOfPeople.end(), greater<Information>()); If you decide to add operator< your class would play nicely with std::set and as a key in std::map in addition to allowing sort. class Information { private: int age; string name; friend bool operator< (Information const& lhs, Information const& rhs){ return lhs.age < rhs.age; } }; Sort Array C++ Learn about searching and sorting in an array. First let’s learn sorting. We do not need to write our own sort function for this. C++ comes with function to do so. To sort an array we use the sort function which is present the algorithm library. The sort function takes 2 parameters, starting address and ending address of the array to be sorted. Here, we have an array of length 10 and we are sorting the first 6 indices of this array using this statement. Let’s run this code, we can see that the first 6 elements of the array are sorted. To sort the complete array, we just need to write the array name s starting index and ending index as starting index + length of the array. Let’s run this code again, we can see that the entire array is sorted. We know that, we can search in a sorted array using binary search in O(log n).1 C++ has a function binary_search for searching. Binary_search takes 3 arguments, starting address, ending address and the value to be searched. Let’s suppose we need to search for 7 in this array in the entire array. We can do it by writing binary_search (array name, arrayname + size, 7), let’s search for 8, 9 and 10 too like this. Let’s run this code, we can see that 7 and 8 are present in the array but 9 and 10 are not.
https://epratap.com/sort-vector-cpp-descending-strings-quick-objects-array/
CC-MAIN-2021-10
en
refinedweb
How to use resource requests and limits to manage resource usage of your Kubernetes cluster How can you control resource usage of containers? Especially when you run containers in production at scale? In this article, Peter Arijs explains how you can utilize container resource limits and resource quotas to keep things in order. When people start looking at running containers in production at scale, they quickly realize they will need an orchestrator such as Kubernetes to efficiently schedule and orchestrate containers on the underlying shared set of physical resources. However, how do you control the resource usage of containers so that different images and projects each get their fair share of the resources? This is where things like container resource limits and resource quotas come in. How to limit container resource usage in Kubernetes? Within Kubernetes, containers are scheduled as pods. By default, a pod in Kubernetes will run with no limits on CPU and memory in a default namespace. This can create several problems related to contention for resources, the two main ones being: - There is no control of how much resources each pod can use. Some images might be more resource heavy or have certain “minimum resource” requirements that we would like to see guaranteed. - When different teams run different projects on the same cluster, there is no control how much resources each team can use. These issues can be addressed respectively in Kubernetes in the following way: - Developers can control the amount of CPU and memory resources per pod or container by setting resource requests and limits in the pod configuration file. - Cluster administrators can create namespaces for different teams and set resource quota (defined by a ResourceQuota object) per namespace. This limits the amount of objects that can be created in a namespace, as well as the total amount of resources that may be consumed by pods in that namespace. In this blog post we will focus on the first aspect: how to set resource constraints on pods and containers, and equally important, how to keep track of these. In a follow up blog post, we will discuss similar aspects for resource quotas. Setting container resource constraints Within the pod configuration file cpu and memory are each a resource type for which constraints can be set at the container level. A resource type has a base unit. CPU is specified in units of cores, and memory is specified in units of bytes. Two types of constraints can be set for each resource type: requests and limits. A request is the amount of that resources that the system will guarantee for the container, and Kubernetes will use this value to decide on which node to place the pod. A limit is the maximum amount of resources that Kubernetes will allow the container to use. In the case that request is not set for a container, it defaults to limit. If limit is not set, then if defaults to 0 (unbounded). Setting request < limits allows some over-subscription of resources as long as there is spare capacity. This is part of the intelligence built into the Kubernetes scheduler. Below is an example of a pod configuration file with requests and limits set for CPU and memory of two containers in a pod. CPU values are specified in “millicpu” and memory in MiB. apiVersion: v1 kind: Pod metadata: name: demo spec: containers: - name: demo1 image: demo/demo1 resources: requests: memory: "16Mi" cpu: "100m" limits: memory: "32Mi" cpu: "200m" - name: demo2 image: demo/demo2 resources: requests: memory: "64Mi" cpu: "200m" limits: memory: "128Mi" cpu: "400m" You can now save this YAML to a file and create this pod: $ kubectl apply -f demo.yaml --namespace=demo-example Analyzing container resource usage Once you have set the resource requests and limits, you also want to check how much actual resources the containers are using. At the time writing this blogpost, Kubernetes is missing a command in kubectl to show the resource usage in an easy way, it’s an open ticket. Fortunately the Kubernetes community is awesome and there some great clever commands we can use to give us an idea. To see the how much of its quota each node is using we can use this command, with example output for a 3 node cluster: $ kubectl get nodes --no-headers | awk '{print $1}' | xargs -I {} sh -c 'echo {}; kubectl describe node {} | grep Allocated -A 5 | grep -ve Event -ve Allocated -ve percent -ve -- ; echo' gke-rel3170-default-pool-3459fe6a-n03g CPU Requests CPU Limits Memory Requests Memory Limits 358m (38%) 138m (14%) 516896Ki (19%) 609056Ki (22%) gke-rel3170-default-pool-3459fe6a-t3b3 CPU Requests CPU Limits Memory Requests Memory Limits 460m (48%) 0 (0%) 310Mi (11%) 470Mi (17%) gke-rel3170-default-pool-3459fe6a-vczz CPU Requests CPU Limits Memory Requests Memory Limits 570m (60%) 110m (11%) 430Mi (16%) 790Mi (29%) To see the pods that use the most cpu and memory you can use the kubectl top command but it doesn’t sort yet and is also missing the quota limits and requests per pod. You only see the current usage: $ kube-system kube-dns-3468831164-v2gqr 1m 26Mi kube-system event-exporter-v0.1.7-1642279337-180db 0m 13Mi kube-system kube-proxy-gke-rel3170-default-pool-3459fe6a 1m 12Mi kube-system l7-default-backend-3623108927-tjm9z 0m 1Mi kube-system kube-dns-3468831164-cln0p 1m 25Mi kube-system fluentd-gcp-v2.0.9-sj3rh 9m 84Mi kube-system kube-dns-autoscaler-244676396-00btn 0m 7Mi kube-system kubernetes-dashboard-1265873680-8prcm 0m 18Mi kube-system heapster-v1.4.3-3980146296-33tmw 0m 42Mi Because of these limitations, but also because you want to gather and store this resource usage information on an ongoing basis, a monitoring tool comes in handy. This allows you to analyze resource usage both in real time and historically, and also lets you alert on capacity bottlenecks. Resource requests and limits in CoScale CoScale was built specifically for container and Kubernetes monitoring. It integrates with Docker, Kubernetes and other container technologies to collect container-specific metrics and events. In CoScale you can also check the container resource requests and limits. Below is an example of a dashboard that shows per node how much resources have been reserved (in this example requests and limits defaut to the same value) and how much has actually been used. This high-level view gives you an idea how much resources in your cluster are currently reserved and used. This helps you to determine whether you need to add new nodes, or perhaps adapt the resource requests. You can also expand the node view to see details about the individual containers and their resource requests and usage. This allows you to identify which containers are using most resources. These dashboards help you to do a high level analysis, but CoScale also allows you to alert on these values to get real-time notifications, for example when CPU resource usage reaches 90% of its limits. Conclusion Setting up container resource requests and limits is a first step towards effectively using resources in your Kubernetes cluster. Make sure you always set these values appropriately for your application. And after you set them, make sure you have monitoring and alerting in place to determine if you need to adapt these values or upgrade your cluster. This post was originally published on The Container Monitoring Blog. Nice topic. THX a lot Hi Peter, thanks for the great article. Unfortunately the “Quota Overview” image is low quality and cannot be zoomed in. Nice article Peter, Thanks alot for sharing this information
https://jaxenter.com/manage-container-resource-kubernetes-141977.html
CC-MAIN-2021-10
en
refinedweb
A class is called an Abstract class if it contains one or more abstract methods. An abstract method is a method that is declared, but contains no implementation. Abstract classes may not be instantiated, and its abstract methods must be implemented by its subclasses. Abstract base classes provide a way to define interfaces when other techniques like hasattr() would be clumsy or subtly wrong (for example with magic methods). ABCs introduce virtual subclasses, which are classes that don’t inherit from a class but are still recognized by isinstance() and issubclass() functions. There are many built-in ABCs in Python. ABCs for Data structures like Iterator, Generator, Set, mapping etc. are defined in collections.abc module. The numbers module defines numeric tower which is a collection of base classes for numeric data types. The 'abc' module in Python library provides the infrastructure for defining custom abstract base classes. 'abc' works by marking methods of the base class as abstract. This is done by @absttractmethod decorator. A concrete class which is a sub class of such abstract base class then implements the abstract base by overriding its abstract methods. The abc module defines ABCMeta class which is a metaclass for defining abstract base class. Following example defines Shape class as an abstract base class using ABCMeta. The shape class has area() method decorated by abstractmethod. A Rectangle class now uses above Shape class as its parent and implementing the abstract area() method. Since it is a concrete class, it can be instantiated and imlemented area() method can be called. import abc class Shape(metaclass=abc.ABCMeta): @abc.abstractmethod def area(self): pass class Rectangle(Shape): def __init__(self, x,y): self.l = x self.b=y def area(self): return self.l*self.b r = Rectangle(10,20) print ('area: ',r.area()) Note the abstract base class may have more than one abstract methods. The child class must implement all of them failing which TypeError will be raised. the abc module also defines ABC helper class which can be used instead of ABCMeta class in definition of abstract base class. class Shape(abc.ABC): @abc.abstractmethod def area(self): pass Instead of subclassing from abstract base class, it can be registered as abstract base by register class decorator. class Shape(abc.ABC): @abc.abstractmethod def area(self): pass @Shape.register class Rectangle(): def __init__(self, x,y): self.l = x self.b=y def area(self): return self.l*self.b You may also provide class methods and static methods in abstract base class by decorators @abstractclassmethod and @abstractstatic method decorators respectively.
https://www.tutorialspoint.com/abstract-base-classes-in-python-abc
CC-MAIN-2021-10
en
refinedweb
class new in Git master #include <Magnum/Vk/Instance.h> Instance Instance. Contents Wraps a VkInstance and stores instance-specific Vulkan function pointers. Instance creation While an Instance can be default-constructed without much fuss, it's recommended to pass a InstanceCreateInfo with at least the argc / argv pair, which allows you to use various --magnum-* command-line options: #include <Magnum/Vk/InstanceCreateInfo.h> … Vk::Instance instance{{argc, argv}}; In addition to command-line arguments, setting application info isn't strictly required either, but may be beneficial for the driver: using namespace Containers::Literals; Vk::Instance instance{Vk::InstanceCreateInfo{argc, argv} .setApplicationInfo("My Vulkan Application"_s, Vk::version(1, 2, 3)) }; The above won't enable any additional layers or extensions except for what the engine itself needs or what's supplied on the command line. Use InstanceCreateInfo:: Vk::Instance instance{Vk::InstanceCreateInfo{argc, argv} … .addEnabledLayers({"VK_LAYER_KHRONOS_validation"_s}) .addEnabledExtensions< // predefined extensions Vk::Extensions::EXT::debug_report, Vk::Extensions::KHR::external_fence_capabilities>() .addEnabledExtensions({"VK_KHR_xcb_surface"_s}) // can be plain strings too }; However, with the above approach, if any layer or extension isn't available, the instance creation will abort. The recommended workflow is thus first checking layer and extension availability using enumerateLayerProperties() and enumerateInstanceExtensionProperties(): /* Query layer and extension support */ Vk::LayerProperties layers = Vk::enumerateLayerProperties(); Vk::InstanceExtensionProperties extensions = /* ... including extensions exposed only by the extra layers */ Vk::enumerateInstanceExtensionProperties(layers.names()); /* Enable only those that are supported */ Vk::InstanceCreateInfo info{argc, argv}; if(layers.isSupported("VK_LAYER_KHRONOS_validation"_s)) info.addEnabledLayers({"VK_LAYER_KHRONOS_validation"_s}); if(extensions.isSupported<Vk::Extensions::EXT::debug_report>()) info.addEnabledExtensions<Vk::Extensions::EXT::debug_report>(); … Vk::Instance instance{info}; Next step after creating a Vulkan instance is picking and creating a Device. Command-line options The Instance is configurable through command-line options that are passed through the InstanceCreateInfo argc / argv parameters. If those are not passed, only the environment variables are used. A subset of these options is reused by a subsequently created Device as well. <application> [--magnum-help] [--magnum-disable-workarounds LIST] [--magnum-disable-layers LIST] [--magnum-disable-extensions LIST] [--magnum-enable-layers LIST] [--magnum-enable-instance-extensions LIST] [--magnum-enable-extensions LIST] [--magnum-vulkan-version X.Y] [--magnum-log default|quiet|verbose] [--magnum-device ID|integrated|discrete|virtual|cpu] ... Arguments: ...— main application arguments (see -hor --helpfor details) --magnum-help— display this help message and exit --magnum-disable-workarounds LIST— Vulkan driver workarounds to disable (see Driver workarounds for detailed info) (environment: MAGNUM_DISABLE_WORKAROUNDS) --magnum-disable-layers LIST— Vulkan layers to disable, meaning InstanceCreateInfo:: addEnabledLayers() will skip them (environment: MAGNUM_DISABLE_LAYERS) --magnum-disable-extensions LIST— Vulkan instance or device extensions to disable, meaning InstanceCreateInfo:: addEnabledExtensions() and DeviceCreateInfo:: addEnabledExtensions() will skip them (environment: MAGNUM_DISABLE_EXTENSIONS) --magnum-enable-layers LIST— Vulkan layers to enable in addition to InstanceCreateInfo defaults and what the application requests (environment: MAGNUM_ENABLE_LAYERS) --magnum-enable-instance-extensions LIST— Vulkan instance extensions to enable in addition to InstanceCreateInfo defaults and what the application requests (environment: MAGNUM_ENABLE_INSTANCE_EXTENSIONS) --magnum-enable-extensions LIST— Vulkan device extensions to enable in addition to DeviceCreateInfo defaults and what the application requests (environment: MAGNUM_ENABLE_EXTENSIONS) --magnum-vulkan-version X.Y— force Instance and Device Vulkan version instead of using what the instance / device reports as supported, affecting what entrypoints and extensions get used (environment: MAGNUM_VULKAN_VERSION) --magnum-log default|quiet|verbose— console logging (environment: MAGNUM_LOG) (default: default) --magnum-device ID|integrated|discrete|virtual|cpu— device ID or kind to pick in pickDevice(); if a device is selected through enumerateDevices() or any other way, this option has no effect (environment: MAGNUM_DEVICE) Interaction with raw Vulkan code In addition to the common properties explained in Common interfaces for interaction with raw Vulkan code, the Instance contains instance-level Vulkan function pointers, accessible through operator->(): Vk::Instance instance{…}; VkPhysicalDeviceGroupPropertiesKHR properties[10]; UnsignedInt count = Containers::arraySize(properties); instance->EnumeratePhysicalDeviceGroupsKHR(instance, &count, properties); These functions are by default not accessible globally (and neither there is a global "current instance"), which is done in order to avoid multiple independent instances affecting each other. Sometimes it is however desirable to have global function pointers — for example when a 3rd party code needs to operate on the same instance, or when writing quick prototype code — and then it's possible to populate those using populateGlobalFunctionPointers(). Compared to the above, the same custom code would then look like this: #include <MagnumExternal/Vulkan/flextVkGlobal.h> … instance.populateGlobalFunctionPointers(); VkPhysicalDeviceGroupPropertiesKHR properties[10]; UnsignedInt count = Containers::arraySize(properties); vkEnumeratePhysicalDeviceGroupsKHR(instance, &count, properties); Similarly you can use Device:: Disabled move and delayed instance creation Similarly to Device, for safety reasons as all instance-dependent objects internally have to keep a pointer to the originating Instance to access Vulkan function pointers, the Instance class is not movable. This leads to a difference compared to other Vulkan object wrappers, where you can use the NoCreate tag to construct an empty instance (for example as a class member) and do a delayed creation by moving a new instance over the empty one. Here you have to use the create() function instead: class MyApplication { public: explicit MyApplication(); private: Vk::Instance _instance{NoCreate}; }; MyApplication::MyApplication() { // decide on layers, extensions, ... _instance.create(Vk::InstanceCreateInfo{…} … ); } Similar case is with wrap() — instead of being static, you have to call it on a NoCreate'd instance. Constructors, destructors, conversion operators - Instance(const InstanceCreateInfo& info = InstanceCreateInfo{}) explicit - Constructor. - Instance(NoCreateT) explicit - Construct without creating the instance. - Instance(const Instance&) deleted - Copying is not allowed. - Instance(Instance&&) deleted - Moving is not allowed. - ~Instance() - Destructor. - operator VkInstance() Public functions - void wrap(VkInstance handle, Version version, Containers:: ArrayView<const Containers:: StringView> enabledExtensions, HandleFlags flags = {}) - Wrap existing Vulkan handle. - void wrap(VkInstance handle, Version version, std:: initializer_list<Containers:: StringView> enabledExtensions, HandleFlags flags = {}) - auto operator=(const Instance&) -> Instance& deleted - Copying is not allowed. - auto operator=(Instance&&) -> Instance& deleted - Moving is not allowed. - auto handle() -> VkInstance - Underlying VkInstance handle. - auto handleFlags() const -> HandleFlags - Handle flags. - void create(const InstanceCreateInfo& info = InstanceCreateInfo{}) - Create an instance. - auto tryCreate(const InstanceCreateInfo& info = InstanceCreateInfo{}) -> Result - Try to create an instance. - auto version() const -> Version - Version supported by the instance. - auto isVersionSupported(Version version) const -> bool - Whether given version is supported on the instance. - template<class E>auto isExtensionEnabled() const -> bool - Whether given extension is enabled. - auto isExtensionEnabled(const InstanceExtension& extension) const -> bool - auto operator*() const -> const FlextVkInstance& - Instance-specific Vulkan function pointers. - auto operator->() const -> const FlextVkInstance* - auto release() -> VkInstance - Release the underlying Vulkan instance. - void populateGlobalFunctionPointers() - Populate global instance-level function pointers to be used with third-party code. Function documentation Magnum:: Vk:: Instance:: Instance(const InstanceCreateInfo& info = InstanceCreateInfo{}) explicit Constructor. Equivalent to calling Instance(NoCreateT) followed by create(const InstanceCreateInfo&). Magnum:: Vk:: Instance:: Instance(Instance&&) deleted Moving is not allowed. See Disabled move and delayed instance creation for more information. Magnum:: Vk:: Instance:: ~Instance() Destructor. Destroys associated VkInstance handle, unless the instance was created using wrap() without HandleFlag:: Magnum:: Vk:: Instance:: operator VkInstance() This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts. void Magnum:: Vk:: Instance:: wrap(VkInstance handle, Version version, Containers:: ArrayView<const Containers:: StringView> enabledExtensions, HandleFlags flags = {}) Wrap existing Vulkan handle. The handle is expected to be of an existing Vulkan instance. The version and enabledExtensions parameters populate internal info about supported version and extensions and will be reflected in isVersionSupported() and isExtensionEnabled(), among other things. If enabledExtensions empty, the instance will behave as if no extensions were enabled. Note that this function retrieves all instance-specific Vulkan function pointers, which is a relatively costly operation. It's thus not recommended to call this function repeatedly for creating short-lived instances, even though it's technically correct. Unlike an instance created using a constructor, the Vulkan instance is by default not deleted on destruction, use flags for different behavior. void Magnum:: Vk:: Instance:: wrap(VkInstance handle, Version version, std:: initializer_list<Containers:: StringView> enabledExtensions, HandleFlags flags = {}) This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts. Instance& Magnum:: Vk:: Instance:: operator=(Instance&&) deleted Moving is not allowed. See Disabled move and delayed instance creation for more information. void Magnum:: Vk:: Instance:: create(const InstanceCreateInfo& info = InstanceCreateInfo{}) Create an instance. Meant to be called on a NoCreate'd instance. After creating the instance, populates instance-level function pointers and runtime information about enabled extensions based on info. If instance creation fails, a message is printed to error output and the application exits — if you need a different behavior, use tryCreate() instead. Result Magnum:: Vk:: Instance:: tryCreate(const InstanceCreateInfo& info = InstanceCreateInfo{}) Try to create an instance. Unlike create(), instead of exiting on error, prints a message to error output and returns a corresponding result value. On success returns Result:: Version Magnum:: Vk:: Instance:: version() const Version supported by the instance. Unless overriden using --magnum-vulkan-version on the command line, corresponds to enumerateInstanceVersion(). bool Magnum:: Vk:: Instance:: isVersionSupported(Version version) const Whether given version is supported on the instance. Compares version against version(). template<class E> bool Magnum:: Vk:: Instance:: isExtensionEnabled() const Whether given extension is enabled. Accepts instance extensions from the Extensions namespace, listed also in the Vulkan support tables. Search complexity is . Example usage: if(instance.isExtensionEnabled<Vk::Extensions::EXT::debug_utils>()) { // use the fancy debugging APIs } else if(instance.isExtensionEnabled<Vk::Extensions::EXT::debug_report>()) { // use the non-fancy and deprecated debugging APIs } else { // well, tough luck } Note that this returns true only if given extension is supported by the driver and it was enabled via InstanceCreateInfo:: bool Magnum:: Vk:: Instance:: isExtensionEnabled(const InstanceExtension& extension) const This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts. const FlextVkInstance& Magnum:: Vk:: Instance:: operator*() const Instance-specific Vulkan function pointers. Function pointers are implicitly stored per-instance, use populateGlobalFunctionPointers() to populate the global vk* functions. const FlextVkInstance* Magnum:: Vk:: Instance:: operator->() const This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts. VkInstance Magnum:: Vk:: Instance:: release() Release the underlying Vulkan instance. Releases ownership of the Vulkan instance and returns its handle so vkDestroyInstance() is not called on destruction. The internal state is then equivalent to moved-from state. void Magnum:: Vk:: Instance:: populateGlobalFunctionPointers() Populate global instance-level function pointers to be used with third-party code. Populates instance-level global function pointers so third-party code is able to call global instance-level vk* functions. See Interaction with raw Vulkan code for more information.
https://doc.magnum.graphics/magnum/classMagnum_1_1Vk_1_1Instance.html
CC-MAIN-2021-10
en
refinedweb
Spark+DynamoDBSpark+DynamoDB Plug-and-play implementation of an Apache Spark custom data source for AWS DynamoDB. We published a small article about the project, check it out here: NewsNews - 2021-01-28: Added option inferSchema=falsewhich is useful when writing to a table with many columns - 2020-07-23: Releasing version 1.1.0 which supports Spark 3.0.0 and Scala 2.12. Future releases will no longer be compatible with Scala 2.11 and Spark 2.x.x. - 2020-04-28: Releasing version 1.0.4. Includes support for assuming AWS roles through custom STS endpoint (credits @jhulten). - 2020-04-09: We are releasing version 1.0.3 of the Spark+DynamoDB connector. Added option to deleterecords (thank you @rhelmstetter). Fixes (thank you @juanyunism for #46). - 2019-11-25: We are releasing version 1.0.0 of the Spark+DynamoDB connector, which is based on the Spark Data Source V2 API. Out-of-the-box throughput calculations, parallelism and partition planning should now be more reliable. We have also pulled out the external dependency on Guava, which was causing a lot of compatibility issues. FeaturesFeatures - Distributed, parallel scan with lazy evaluation - Throughput control by rate limiting on target fraction of provisioned table/index capacity - Schema discovery to suit your needs - Dynamic inference - Static analysis of case class - Column and filter pushdown - Global secondary index support - Write support Getting The DependencyGetting The Dependency The library is available from Maven Central. Add the dependency in SBT as "com.audienceproject" %% "spark-dynamodb" % "latest" Spark is used in the library as a "provided" dependency, which means Spark has to be installed separately on the container where the application is running, such as is the case on AWS EMR. Quick Start GuideQuick Start Guide ScalaScala import com.audienceproject.spark.dynamodb.implicits._ import org.apache.spark.sql.SparkSession val spark = SparkSession.builder().getOrCreate() // Load a DataFrame from a Dynamo table. Only incurs the cost of a single scan for schema inference. val dynamoDf = spark.read.dynamodb("SomeTableName") // <-- DataFrame of Row objects with inferred schema. // Scan the table for the first 100 items (the order is arbitrary) and print them. dynamoDf.show(100) // write to some other table overwriting existing item with same keys dynamoDf.write.dynamodb("SomeOtherTable") // Case class representing the items in our table. import com.audienceproject.spark.dynamodb.attribute case class Vegetable (name: String, color: String, @attribute("weight_kg") weightKg: Double) // Load a Dataset[Vegetable]. Notice the @attribute annotation on the case class - we imagine the weight attribute is named with an underscore in DynamoDB. import org.apache.spark.sql.functions._ import spark.implicits._ val vegetableDs = spark.read.dynamodbAs[Vegetable]("VegeTable") val avgWeightByColor = vegetableDs.agg($"color", avg($"weightKg")) // The column is called 'weightKg' in the Dataset. PythonPython # Load a DataFrame from a Dynamo table. Only incurs the cost of a single scan for schema inference. dynamoDf = spark.read.option("tableName", "SomeTableName") \ .format("dynamodb") \ .load() # <-- DataFrame of Row objects with inferred schema. # Scan the table for the first 100 items (the order is arbitrary) and print them. dynamoDf.show(100) # write to some other table overwriting existing item with same keys dynamoDf.write.option("tableName", "SomeOtherTable") \ .format("dynamodb") \ .save() Note: When running from pyspark shell, you can add the library as: pyspark --packages com.audienceproject:spark-dynamodb_<spark-scala-version>:<version> ParametersParameters The following parameters can be set as options on the Spark reader and writer object before loading/saving. regionsets the region where the dynamodb table. Default is environment specific. roleArnsets an IAM role to assume. This allows for access to a DynamoDB in a different account than the Spark cluster. Defaults to the standard role configuration. The following parameters can be set as options on the Spark reader object before loading. readPartitionsnumber of partitions to split the initial RDD when loading the data into Spark. Defaults to the size of the DynamoDB table divided into chunks of maxPartitionBytes maxPartitionBytesthe maximum size of a single input partition. Default 128 MB defaultParallelismthe number of input partitions that can be read from DynamoDB simultaneously. Defaults to sparkContext.defaultParallelism targetCapacityfraction of provisioned read capacity on the table (or index) to consume for reading. Default 1 (i.e. 100% capacity). stronglyConsistentReadswhether or not to use strongly consistent reads. Default false. bytesPerRCUnumber of bytes that can be read per second with a single Read Capacity Unit. Default 4000 (4 KB). This value is multiplied by two when stronglyConsistentReads=false filterPushdownwhether or not to use filter pushdown to DynamoDB on scan requests. Default true. throughputthe desired read throughput to use. It overwrites any calculation used by the package. It is intended to be used with tables that are on-demand. Defaults to 100 for on-demand. The following parameters can be set as options on the Spark writer object before saving. writeBatchSizenumber of items to send per call to DynamoDB BatchWriteItem. Default 25. targetCapacityfraction of provisioned write capacity on the table to consume for writing or updating. Default 1 (i.e. 100% capacity). updateif true items will be written using UpdateItem on keys rather than BatchWriteItem. Default false. throughputthe desired write throughput to use. It overwrites any calculation used by the package. It is intended to be used with tables that are on-demand. Defaults to 100 for on-demand. inferSchemaif false will not automatically infer schema - this is useful when writing to a table with many columns System PropertiesSystem Properties The following Java system properties are available for configuration. aws.profileIAM profile to use for default credentials provider. aws.dynamodb.regionregion in which to access the AWS APIs. aws.dynamodb.endpointendpoint to use for accessing the DynamoDB API. aws.sts.endpointendpoint to use for accessing the STS API when assuming the role indicated by the roleArnparameter. AcknowledgementsAcknowledgements Usage of parallel scan and rate limiter inspired by work in
https://index.scala-lang.org/audienceproject/spark-dynamodb/spark-dynamodb/1.1.2?target=_2.12
CC-MAIN-2021-10
en
refinedweb
- edited description [ShowIf] Attribute not respected on property with reference copy. [DISCORD USERNAME: ColdPixel] Windows 10 / Unity 2020.2.0f1 I have outlined exactly how to reproduce it below including code and video screen capture. I have noted that it applies to SerializedScriptableObject and ShowIf attribute, but it could be more broadly applicable (e.g. to Monobehaviours and other attributes). Description: When a class has a [ShowIf] attribute, and a ScriptableObject includes two properties (e.g. 'myObject' and 'myReferenceCopy'), one of which is a reference to the other (e.g. 'myReferenceCopy = myObject'), then in the Inspector, the [ShowIf] attribute is not respected on the reference copy (e.g. 'myReferenceCopy'). See example code and screen capture below: using Sirenix.OdinInspector; public class TestSO : SerializedScriptableObject { public MyClass myObject; public MyClass myReferenceCopy; private void OnEnable() { myReferenceCopy = myObject; } } public class MyClass { public bool showIt; [ShowIf("showIt")] public int myInt; } Video screen capture (GIF) is attached. - assigned issue to - Log in to comment
https://bitbucket.org/sirenix/odin-inspector/issues/735/showif-attribute-not-respected-on-property
CC-MAIN-2021-10
en
refinedweb
It works all fine if I interact with my ESP8266 via my pc browsers. But when I open Chrome on my wifi connected smartphone it can’t connect to the web server. My phone and the ESP8266 are on the same network. What am I doing wrong? Thanks for any hint! Enzo Hi Enzo. What web server are you running? What browser are you using on your smartphone? Do you have the web server opened on both browsers at the same time? Regards, Sara Hi Sara, I’m running the one showed in “MicroPython_Programming_with_ESP32_and_ESP8266_V1_2” starting from page 144. I’m using Chrome on my smartphone. I tried with both browser clients at the same time and with my smartphone browser only. The pc browser (Firefox and Chrome, it’s the same) always works fine, the smartphone browser (Firefox and Chrome) are always “unable to connect”. Of course the same smartphone browsers are ok for every other internet site connection. Thanks for your help! Enzo Hi Enzo. I’ve tested the example and what is happening is that Google Chrome opens an additional connection that remains open. Which makes it impossible to make another connection to the web server while it is not closed. To solve that, we need to add some lines to close any connection after 3 seconds, so that sockets are free for another connection. Here’s the new main.py code. Can you try it and see if that solves the problem? We’ll update that issue in the next eBook update. Regards, Sara Hi Sara, I’ll try it now but I don’t think this works: the smartphone doesn’t connect at first try (after resetting the ESP) so it can’t be blocked from anything else previously. I forgot to tell you I tried with different smartphones, too and the result is always the same. It seems the smartphones can’t reach the web servers: Thonny’s console doesn’t show anything while at every try with pc browsers (wired to the LAN) it shows the connection. Hi Sara, just an update: I tried with your updated version but nothing changes. As I told you the problem seems to be the connection itself. Have a nice day. Enzo Hi Enzo. That’s very weird. I can access the web server from computer and smartphone and both at the same time. I don’t know why you’re having that issue. Are you sure you are really connected to the same network that the ESP32? What is exactly the error message that you get? Regards, Sara Hi Sara, It seems so strange to me, too! The error is a classic “This site can’t be reached” (ERR_ADDRESS_UNREACHABLE). I guessed it could be something in my network but what? The same smartphones which can’t connect to ESP8266 are usually connected, e.g, to a Xiomi robot for floor cleaning. Yes I checked the IP address of the smartphone wifi and it’s in the same net. It couldn’t be otherwise: I’ve a single DHCP server distributing the addresses. The last 2 IP in the connection answer of the ESP8266 (after the subnet mask) what represent? DNS? Default gateway? Regards, Enzo Update: checking the nets on my smartphone I notice a net “MicroPython-ea8efc” protected by a password: shouldn’t my ESp8266 act as a client only? Why do I see it as if it were a hot spot?!? And btw how can I know the AMC address of the ESP8266? Thanks! Enzo Hi Enzo. It’s ok to have the MicroPython “access point”. I have that too and it works fine. You can use the following lines to get the mac address import network import ubinascii mac = ubinascii.hexlify(network.WLAN().config('mac'),':').decode() print (mac) Hi Sara, sorry for this very late answer. I wish to share with you that now I can see the ESP 8266 web server on my wireless connect smartphones! It was not an ESP issue: a flag in my router set the separation among wireless devices connected to it. It had to be unchecked for devices to communicate each other but it was deeply “hidden” in others menu. Thanks for your hints. Nice regards 🙂 Enzo Hi Enzo. I’m glad it is solved now. Many times it can be really difficult to figure out what is causing those issues. I’ll mark this question as resolved. If you need further help, you just need to open a new question in our forum. Regards, Sara I would like to know what flag was checked in your router. I am having a similar issue where my ESP8266 is connected to my SSID but I can not get to the web server with any browser. Sara, you are familiar with this issue from my post at my own thread. I have not been able to resolve. It looks as thought there is something blocking me from connecting to more than one ESP 8266 Web server( I already have one running at another IP address on the same network) Hi David, in my Fritz router it was a setting in the config file: “user_isolation = 1”, when I set it to “user_isolation = 0” my smartphone was able to see the ESP. Enzo, Thanks for your response, I am not familiar with that router setting? Ironically I have one ESP8266 board that works with every platform, and two that don’t work with any? I will do some more research and report what I am able to find. Dave Did you check if you had any access list or other security active on your router? Maybe you inserted the working ESP in an Access Control List but didn’t insert the other two, for instance. I had similar problem with my wifi network when I forgot to insert the MAC address of my ESPs in my router’s ACL.
https://rntlab.com/question/why-cant-i-see-the-web-server-on-esp8266-by-my-smartphone-browser/
CC-MAIN-2021-10
en
refinedweb
#include <deal.II/base/tensor_function_parser.h> This class implements a tensor function object that gets its value by parsing a string describing this function. It is a wrapper class for the muparser library (see). This class is essentially an extension of the FunctionParser class to read in a TensorFunction. The class reads in an expression of length dimrank (separated by a semicolon) where the components of the tensor function are filled according to the C++ convention (fastest index is the most right one). A minimal example for the usage of the class would be: See also the documentation of the FunctionParser class. This class overloads the virtual method value() and value_list() of the TensorFunction . Vector-valued functions can either be declared using strings where the function components are separated by semicolons, or using a vector of strings each defining one vector component. Definition at line 111 of file tensor_function_parser.h. Type for the constant map. Used by the initialize() method. Definition at line 169 of file tensor_function_parser. Standard constructor. Only set initial time. This object needs to be initialized with the initialize() method before you can use it. If an attempt to use this function is made before the initialize() method has been called, then an exception is thrown. Definition at line 45 of file tensor_function_parser.cc. Constructor for parsed functions. This object needs to be initialized with the initialize() method before you can use it. If an attempt to use this function is made before the initialize() method has been called, then an exception is thrown. Takes a semicolon separated list of expressions (one for each component of the tensor function), an optional comma-separated list of constants. Definition at line 55 of file tensor_function_parser.cc. Copy constructor. Objects of this type can not be copied, and consequently this constructor is deleted. Move constructor. Objects of this type can not be moved, and consequently this constructor is deleted. Destructor. Copy operator. Objects of this type can not be copied, and consequently this operator is deleted. Move operator. Objects of this type can not be moved, and consequently this operator is deleted. Initialize the tensor function. This method accepts the following parameters: Definition at line 91 of file tensor 272 of file tensor_function_parser.cc. A function that returns default names for variables, to be used in the first argument of the initialize() functions: it returns "x" in 1d, "x,y" in 2d, and "x,y,z" in 3d. Definition at line 340 of file tensor_function_parser.h. Return the value of the tensor function at the given point. Reimplemented from TensorFunction< rank, dim, Number >. Definition at line 288 of file tensor_function_parser.cc. Return the value of the tensor function at the given point. Definition at line 333 of file tensor_function_parser.cc. Return an array of function expressions (one per component), used to initialize this function. Definition at line 37 of file tensor_function_parser.cc. Initialize tfp and vars on the current thread. This function may only be called once per thread. A thread can test whether the function has already been called by testing whether 'tfp.get().size()==0' (not initialized) or >0 (already initialized). Definition at line 146 of file tensor_function_parser.cc.. Place for the variables for each thread Definition at line 278 of file tensor_function_parser.h. The muParser objects for each thread (and one for each component). We are storing a unique_ptr so that we don't need to include the definition of mu::Parser in this header. Definition at line 286 of file tensor_function_parser.h. An array to keep track of all the constants, required to initialize tfp in each thread. Definition at line 292 of file tensor_function_parser.h. An array for the variable names, required to initialize tfp in each thread. Definition at line 298 of file tensor_function_parser.h. An array of function expressions (one per component), required to initialize tfp in each thread. Definition at line 314 of file tensor_function_parser.h. State of usability. This variable is checked every time the function is called for evaluation. It's set to true in the initialize() methods. Definition at line 320 of file tensor 329 of file tensor_function_parser.h. Number of components is equal dimrank. Definition at line 334 of file tensor_function_parser.h.
https://dealii.org/developer/doxygen/deal.II/classTensorFunctionParser.html
CC-MAIN-2021-10
en
refinedweb
Post-Init Processing In Python Data Class In the previous blog, we learned about Fields in Python Data Classes. This time we are going to learn about post-init processing in python data class. Let’s first understand the problem. Python Data Classes from dataclasses import dataclass @dataclass() class Student(): name: str clss: int stu_id: int marks: [] avg_marks: float student = Student('HTD', 10, 17, [11, 12, 14], 50.0) >>> print(student) Student(name='HTD', clss=10, stu_id=17, marks=[11, 12, 14], avg_marks=50.0) The above code is a simple python data class example. The data fields of the class are initiated from the __init__ function. In this example, we are initiating the value of the avg_marks while initiating the object, but we want to get the average of the marks after the marks have been assigned. This can be done by the __post_init__ function in python. Post-Init Processing in Python Data Class The post-init function is an in-built function in python and helps us to initialize a variable outside the __init__ function. from dataclasses import dataclass, field @dataclass() class Student(): name: str clss: int stu_id: int marks: [] avg_marks: float = field(init=False) def __post_init__(self): self.avg_marks = sum(self.marks) / len(self.marks) student = Student('HTD', 10, 17, [98, 85, 90]) >>> print(student) Student(name='HTD', clss=10, stu_id=17, marks=[98, 85, 90], avg_marks=91.0) To achieve this functionality you will also need to implement the field and set the init parameter as false to the variable which you want to set in the post-init function. When the field init parameter is set to false for a variable we don’t need to provide value for it while the object creation. If you are confused about what are fields in python data class and how to set the field parameters read the previous blog on the python data class fields and learn how they can be used to optimize the python class. def __post_init__(self): self.avg_marks = sum(self.marks) / len(self.marks) In the __post_init__ function in python Data Class, the avg_marks is set by adding all the marks and dividing it by the total length of the list. Hope You Like It! Learn more about Post-Init Processing in Python Data Class from the official Documentation. In the next blog, we will learn about Inheritance in Python Data Class.
https://hackthedeveloper.com/python-post-init-data-class/
CC-MAIN-2021-10
en
refinedweb
Search found 1 match Search found 1 match • Page 1 of 1 - Fri Nov 19, 2010 11:01 pm - Forum: Volume 4 (400-499) - Topic: 410 - Station Balance - Replies: 41 - Views: 16466 I got WA I test all the test cases I found here and all of them are right. it's my code, it's make me happy if you can find any problem #include<iostream> #include<fstream> #include<math.h> #include<string> #include<algorithm> #include <iomanip> using namespace std; int main() { int n=0; int m=0; int c=1; wh... Search found 1 match • Page 1 of 1
https://onlinejudge.org/board/search.php?author_id=62636&sr=posts
CC-MAIN-2021-10
en
refinedweb
Updating Go import statements using CombyUpdating Go import statements using Comby IntroductionIntroduction This campaign rewrites Go import paths for the log15 package from gopkg.in/inconshreveable/log15.v2 to github.com/inconshreveable/log15 using Comby. It can handle single-package import statements like this one: import "gopkg.in/inconshreveable/log15.v2" Single-package imports with an alias: import log15 "gopkg.in/inconshreveable/log15.v2" And multi-package import statements with our without an alias: import ( "io" "github.com/pkg/errors" "gopkg.in/inconshreveable/log15.v2" )-log15-import.campaign.yaml: name: update-log15-import description: This campaign updates Go import paths for the `log15` package from `gopkg.in/inconshreveable/log15.v2` to `github.com/inconshreveable/log15` using [Comby]() # Find all repositories that contain the import we want to change. on: - repositoriesMatchingQuery: lang:go gopkg.in/inconshreveable/log15.v2 # In each repository steps: # we first replace the import when it's part of a multi-package import statement - run: comby -in-place 'import (:[before]"gopkg.in/inconshreveable/log15.v2":[after])' 'import (:[before]"github.com/inconshreveable/log15":[after])' .go -matcher .go -exclude-dir .,vendor container: comby/comby # ... and when it's a single import line. - run: comby -in-place 'import:[alias]"gopkg.in/inconshreveable/log15.v2"' 'import:[alias]"github.com/inconshreveable/log15"' .go -matcher .go -exclude-dir .,vendor container: comby/comby # Describe the changeset (e.g., GitHub pull request) you want for each repository. changesetTemplate: title: Update import path for log15 package to use GitHub body: Updates Go import paths for the `log15` package from `gopkg.in/inconshreveable/log15.v2` to `github.com/inconshreveable/log15` using [Comby]() branch: campaigns/update-log15-import # Push the commit to this branch. commit: message: Fix import path for log15 package published: false Create the campaignCreate the campaign In your terminal, run this command: src campaign preview -f update-log15-import.
https://docs.sourcegraph.com/campaigns/tutorials/updating_go_import_statements
CC-MAIN-2021-10
en
refinedweb
I have a list of excel files and their corresponding sheet number. I need python to go to those sheets and find out the cell location for a particular content. Can someone point out the error in my code? Thanks in advance import xlrd value = 'Avg.' filename = ('C:/002 Av SW W of 065 St SW 2011-Jul-05.xls', 'C:/003 Avenue SW West of 058 Street SW 2012-Jun-23.xls') sheetnumber = ('505840', '505608') dictionary = dict(zip(filename, sheetnumber)) for item in dictionary: book = xlrd.open_workbook(item) sheet = book.sheet_by_name(dictionary[key]) for row in range(sheet.nrows): for column in range(sheet.ncols): if sheet.cell(row,column).value == value: print row, column You don’t need to make a dictionary. Iterate over zip(filename, sheetnumber): for name, sheet_name in zip(filename, sheetnumber): book = xlrd.open_workbook(name) sheet = book.sheet_by_name(sheet_name) for row in range(sheet.nrows): for column in range(sheet.ncols): if sheet.cell(row,column).value == value: print row, column Tags: excel, file, pythonpython
https://exceptionshub.com/python-open-specific-excel-file-sheet-to-get-a-specific-content-cell-location.html
CC-MAIN-2021-10
en
refinedweb
45849/how-get-travis-fail-tests-not-have-enough-coverage-for-python For Python 3, try doing this: import urllib.request, ...READ MORE Inline if-else expression must always contain the else ...READ MORE Hi. Good question! Well, just like what ...READ MORE You need to set up the path ...READ MORE suppose you have a string with a ...READ MORE You can also use the random library's ...READ MORE Syntax : list. count(value) Code: colors = ['red', 'green', ...READ MORE can you give an example using a ...READ MORE Understand that every 'freezing' application for Python ...READ MORE If you only have one reference to ...READ MORE OR Already have an account? Sign in.
https://www.edureka.co/community/45849/how-get-travis-fail-tests-not-have-enough-coverage-for-python
CC-MAIN-2021-10
en
refinedweb
Reads vtkArrayData written by vtkArrayDataWriter. More... #include <vtkArrayDataReader.h> Reads vtkArrayData written by vtkArrayDataWriter. Reads vtkArrayData data written with vtkArrayDataWriter. Outputs: Output port 0: vtkArrayData containing a collection of vtkArrays. Definition at line 42 of file vtkArrayDataReader.h. Definition at line 46 of file vtkArrayArrayDataAlgorithm. Reimplemented from vtkArrayDataAlgorithm. Methods invoked by print to print information about the object including superclasses. Typically not called by the user (use Print() instead) but used in the hierarchical print process to combine the output of several classes. Reimplemented from vtkArrayDataAlgorithm. Set the filesystem location from which data will be read. The input string to parse. If you set the input string, you must also set the ReadFromInputString flag to parse the string instead of a file. Whether to read from an input string as opposed to a file, which is the default. Read an arbitrary array from a stream. Note: you MUST always open streams in binary mode to prevent problems reading files on Windows. Read an arbitrary array from a string. This is called by the superclass. This is the method you should override. Reimplemented from vtkArrayDataAlgorithm. Definition at line 93 of file vtkArrayDataReader.h. Definition at line 94 of file vtkArrayDataReader.h. Definition at line 95 of file vtkArrayDataReader.h.
https://vtk.org/doc/nightly/html/classvtkArrayDataReader.html
CC-MAIN-2021-10
en
refinedweb
Scene nodekit class. More... #include <Inventor/nodekits/SoSceneKit.h> Scene nodekit class. This nodekit is used to organize camera, (SoCameraKit), light, (SoLightKit), and object, (SoShapeKit, SoSeparatorKit, and SoWrapperKit) nodekits into a scene. A scene is composed of a list of cameras, a list of lights, and a list of children. There are three parts created by this nodekit: cameraList , lightList , and childList . The cameraList part is a list part of SoCameraKit nodes. The list itself is an SoNodeKitListPart, and since only one camera can be active at a time, the container of the list part is an SoSwitch node. Use setCameraNumber(), and the scene kit will set the switch to make that camera active. The lightList part is a list of SoLightKit nodes. The lightList is used to illuminate the objects contained in the childList part. The childList part contains a set of SoSeparatorKit nodes. You can add any kind of SoSeparatorKit to this list, including SoShapeKit and SoWrapperKit. Since each SoSeparatorKit in turn contains a childList , this part is used to describe a hierarchical scene. (See the reference page for SoSeparatorKit). All members of childList are lit by the lights in lightList and rendered by the active camera in cameraList . NOTES: (SoNodeKitListPart) cameraList This part is an SoNodeKitListPart It has a container that is an SoSwitch node. The list may contain only SoCameraKit nodekits. The active child of the SoSwitch is the active camera. This part is NULL by default, but is automatically created whenever you add a camera, as with setPart("cameraList[0]", myNewCamera) . (SoNodeKitListPart) lightList This part is an SoNodeKitListPart that uses an defines an SoGroup as its container The list may contain only SoLightKit nodekits. Add SoLightKits to this list and they will light the members of the childList of this SoSceneKit. This part is NULL by default, but is automatically created when you add a light. (SoNodeKitListPart) childList This part is an SoNodeKitListPart that uses an SoGroup for its container. The list may contain only SoSeparatorKit nodekits or nodekits derived from SoSeparatorKit (e.g., SoShapeKit and SoWrapperKit). These children represent the objects in the scene. This part is NULL by default, but is automatically created whenever you add a child to the childList. Also, when asked to build a member of the childList, the scenekit will build an SoShapeKit by default. So if the childList part is NULL, and you call: getPart("childList[0]", TRUE) . the scene kit will create the childList and add an SoShapeKit as the new element in the list. Extra Information for List Parts from Above Table SoAppearanceKit, SoBaseKit, SoCameraKit, SoLightKit, SoNodeKit, SoNodeKitDetail, SoNodeKitListPart, SoNodeKitPath, SoNodekitCatalog, SoSeparatorKit, SoShapeKit, SoWrapperKit Constructor. Returns TRUE if a node has an effect on the state during traversal. The default method returns TRUE. Node classes (such as SoSeparator) that isolate their effects from the rest of the graph override this method to return FALSE. Reimplemented from SoNode. Returns the SoNodekitCatalog for this class. Reimplemented from SoBaseKit. Returns the SoNodekitCatalog for this instance. Reimplemented from SoBaseKit.
https://developer.openinventor.com/refmans/9.9/RefManCpp/class_so_scene_kit.html
CC-MAIN-2021-10
en
refinedweb
#include "config.h" #include <stddef.h> #include <stdbool.h> #include "mutt/lib.h" #include "config/lib.h" #include "core/lib.h" #include "debug/lib.h" #include "lib.h" #include "mutt_menu.h" Go to the source code of this file. Dialog dialog.c. Find the parent Dialog of a Window. Dialog Windows will be owned by a MuttWindow of type WT_ALL_DIALOGS. Definition at line 46 of file dialog.c. Display a Window to the user. The Dialog Windows are kept in a stack. The topmost is visible to the user, whilst the others are hidden. When a Window is pushed, the old Window is marked as not visible. Definition at line 66 of file dialog.c. Hide a Window from the user. The topmost (visible) Window is removed from the stack and the next Window is marked as visible. Definition at line 98 of file dialog.c. Listen for config changes affecting a Dialog - Implements observer_t. Definition at line 131 of file dialog.c. Create a simple index Dialog. Definition at line 165 of file dialog.c. Destroy a simple index Dialog. Definition at line 209 of file dialog.c.
https://neomutt.org/code/dialog_8c.html
CC-MAIN-2021-10
en
refinedweb
You've been invited into the Kudos (beta program) private group. Chat with others in the program, or give feedback to Atlassian.View group Join the community to find out what other Atlassian users are discussing, debating and creating. Hi, I have a specific requirement, where we want to do some processing (on the issue) behind the scene once a file is attached to the issue. I am trying to achieve this through scriptrunner plugin. I am able to catch the event once the file is attached. I am also able to fetch that particular attached file in my script but I am not able to find the issue key for the issue where that file is attached. Unless, i have the issue key, I can't change the issue fields in my script. Please help on how this can be achieved. Note that i am using Jira Service Desk Cloud version. Regards, Zeeshan Malik Please try the getKey() method. def myIssue = event.issue def myIssueKey = myIssue.getKey() Something like this should work if you are fetching from an event. If it is from a workflow, they a simple issue.key should normally return your issue key. Regards. Apparently for Jira cloud, you should be able to use issue.key as suggested here: Hi, Can we get the issue on "Attachment created" Event on script listener (script runner cloud version) as there is no "event" keyword defined! all i can find are following variables baseUrl logger attachment timestamp webhookEvent (type: String in our case the value is attachment_created) regards, Sheer.
https://community.atlassian.com/t5/Jira-Service-Management/How-to-fetch-issue-key-in-scriptrunner/qaq-p/1543016
CC-MAIN-2021-10
en
refinedweb
I'm new at this. But I'm taking a class and I need to turn this in by Friday. Please help me. I can only use strings and arrays. My hangman program needs five things: 1. user is prompted for a word that is ten letters or less, or else the program rejects it and asks for reentry. 2. if the word is too long or has a nonletter, the program rejects it and asks for reentry. 3. If the word has uppercase letters, it needs to be converted to lowercase 4. if there is more than 6 wrong guesses, the program ends. 5. If the user repeats a guess or enters a non letter, the program outputs a warning, but does not count it as a wrong guess. So including those five things, the game is won when all the letters of the word is guessed. Here's what I have so far: #include <iostream> #include <cctype> #include <cstring> using namespace std; int main() { bool guessed[26] = {false}; char guess; char solution[11]; //the word that is inputted int wrong_count(0); cout << "Enter a word no more than ten letters: "; cin >> solution; while (true) //I tried writing && wrong_count <6, but it doesn't work. { cout << "Please enter your guess: " << endl; cin >> guess; cout << "You've entered the letter " << char(tolower(guess)) << endl; //converts uppercase into lowercase letters if (bool (guessed [guess - 'a'] == true)) //I think I need a loop here, but I don't know where to start cout << "You've guessed " << char(tolower(guess)) << " already." << endl; else guessed[guess - 'a'] = true; for(int i = 0; i < strlen(solution); i++) { if (guess == solution[i]) cout << "correct" << endl; else wrong_count++; } } if (wrong_count ==6) cout << "You lose " << endl; return 0; }
https://www.daniweb.com/programming/software-development/threads/438083/i-have-to-write-a-hangman-program-i-have-2-days-left-i-need-some-help
CC-MAIN-2019-04
en
refinedweb
Ruby on Rails - Quick Guide Ruby. Ruby on Rails - Installation To develop a web application using Ruby on Rails Framework, you need to install the following software − - Ruby - The Rails Framework - A Web Server - A Database System. Rails Installation on Windows Follow the steps given below for installing Ruby on Rails. Step 1: Check Ruby Version. Step 2: Install. Step 3: Install Rails. Step 4: Check Rails Version Use the following command to check the rails version. C:\> rails -v Output Rails 4.2.4 Congratulations! You are now on Rails over Windows. Rails Installation on Linux. Step 1: Install Prerequisite Dependencies First of all, we have to install git - core and some ruby dependences that help to install Ruby on Rails. Use the following command for installing Rails dependencies using yum. tp> Step 2: Install rbenv Now we Step 3: Install Ruby Step 4: Install Rails. Step 5: Install JavaScript Runtime. Step 6: Install Database Keeping Rails Up-to-Date". Ruby on Rails - Framework A. Ruby on Rails MVC Framework The Model View Controller principle divides the work of an application into three separate but closely cooperative subsystems. Model (ActiveRecord ) It maintains the relationship between the objects and. View ( ActionView ) It is a presentation of data in a particular format, triggered by a controller's decision to present the data. They are script-based template. Controller ( ActionController )). Pictorial Representation of MVC Framework Given below is a pictorial representation of Ruby on Rails Framework − Directory Representation of MVC Framework Assuming a standard, default installation over Linux, you can find them like this − tp> cd /usr/local/lib/ruby/gems/2.2.0/gems tp> ls You will see subdirectories including (but not limited to) the following − - actionpack-x.y.z - activerecord-x.y.z - rails-x.y.z. Ruby_8<<. Ruby\> Rails server It generates the auto code to start the server as shown below − _10<< What is next? The next chapter explains how to create databases for your application and what is the configuration required to access these created databases. Further, we will see what Rails Migration is and how it is used to maintain database tables. Ruby on Rails - Database Setup Before starting with this chapter, make sure your database server is up and running. Ruby on Rails recommends to create three databases - a database each for development, testing, and production environment. According to convention, their names should be − - library_development - library_production - library_test You should initialize all three of them and create a user and password for them with full read and write privileges. We are using the root user ID for our application. Database Setup for MySQL. Configuring database.yml. When you finish, it should look something like − development: adapter: mysql database: library_development username: root password: [password] host: localhost test: adapter: mysql database: library_test username: root password: [password] host: localhost production: adapter: mysql database: library_production username: root password: [password] host: localhost Database Setup for PostgreSQL. Configuring database.yml> What is Next? The next two chapters explain how to model your database tables and how to manage those using Rails Migrations. Ruby\> ruby script/generate model Book library\> ruby script/generate model Subject Above rails generate model book commands generates the auto code as below − . Ruby on Rails - Migrations Rails. What Can Rails Migration Do? -. Create the Migrations.. Edit the Code. Run the Migration. Running Migrations for Production and Test Databases. What is Next? Now we have our database and the required tables available. In the two subsequent chapters, we will explore two important components called Controller (ActionController) and View (ActionView). Creating Controllers (Action Controller). Creating Views (Action View). Ruby on Rails - Controller The class BookController < ApplicationController end @book =. Implementing the new Method The new method lets Rails know that you will create a new object. So just add the following code in this method. def new @book = Book.new @subjects = Subject.all end The above method will be called when you will display a page to the user to take user input. Here second line grabs all the subjects from the database and puts them in an array called @subjects. Implementing the create Method Once you take user input using HTML. So, you can create a method inside book_controller.rb to display all the subjects. Assume. Ruby. Ruby_14<<_15<<_16<<_17<<_18<<_20<< What is Next? Hope now you are feeling comfortable with all the operations of Rails. The next chapter explains how to use Layouts to put your data in a better way. We will show you how to use CSS in your Rails applications. Ruby. Ruby on Rails - Scaffolding While you're developing Rails applications, especially those which are mainly providing you with a simple interface to data in a database, it can often be useful to use the scaffold method. Scaffolding provides more than cheap demo thrills. Here are some benefits − You can quickly get code in front of your users for feedback. You are motivated by faster success. You can learn how Rails works by looking at the generated code. You can use scaffolding as a foundation to jump start your development. Scaffolding Example To understand scaffolding, let's create a database called cookbook and a table called recipes. Creating an Empty Rails Web Application Open a command window and navigate to where you want to create this cookbook web application. So, run the following command to create a complete directory structure. tp> rails new cookbook Setting up the Database Here is the way to create a database − mysql> create database cookbook; Query OK, 1 row affected (0.01 sec) mysql> grant all privileges on cookbook.* to 'root'@'localhost' identified by 'password'; Query OK, 0 rows affected (0.00 sec) mysql> FLUSH PRIVILEGES; Query OK, 0 rows affected (0.00 sec) To instruct Rails how to find the database, edit the configuration file cookbook\config\database.yml and change the database name to cookbook. Leave the password empty. When you finish, it should look as follows − development: adapter: mysql database: cookbook username: root password: [password] host: localhost test: adapter: mysql database: cookbook username: root password: [password] host: localhost production: adapter: mysql database: cookbook username: root password: [password] host: localhost Rails lets you run in the development mode, test mode, or production mode, using different databases. This application uses the same database for each. The Generated Scaffold Code the scaffold helper script − cookbook> rails generate scaffold recipe It generates auto-files as shown below − The Controller Let's look at the code behind the controller. This code is generated by the scaffold generator. If you open app/controllers/recipes_controller.rb, then you will find something as follows − class RecipesController < ApplicationController before_action :set_recipe, only: [:show, :edit, :update, :destroy] # GET /recipes # GET /recipes.json def index @recipes = Recipe.all end # GET /recipes/1 # GET /recipes/1.json def show end # GET /recipes/new def new @recipe = Recipe.new end # GET /recipes/1/edit def edit end # POST /recipes # POST /recipes.json def create @recipe = Recipe.new(recipe_params) respond_to do |format| if @recipe.save format.html { redirect_to @recipe, notice: 'Recipe was successfully created.' } format.json { render :show, status: :created, location: @recipe } else format.html { render :new } format.json { render json: @recipe.errors, status: :unprocessable_entity } end end end # PATCH/PUT /recipes/1 # PATCH/PUT /recipes/1.json def update respond_to do |format| if @recipe.update(recipe_params) format.html { redirect_to @recipe, notice: 'Recipe was successfully updated.' } format.json { render :show, status: :ok, location: @recipe } else format.html { render :edit } format.json { render json: @recipe.errors, status: :unprocessable_entity } end end end # DELETE /recipes/1 # DELETE /recipes/1.json def destroy @recipe.destroy respond_to do |format| format.html { redirect_to recipes_url, notice: 'Recipe was successfully destroyed.' } format.json { head :no_content } end end private # Use callbacks to share common setup or constraints between actions. def set_recipe @recipe = Recipe.find(params[:id]) end # Never trust parameters from the scary internet, only allow the white list through. def recipe_params params.require(:recipe).permit(:tittle, :instructions) end end When the user of a Rails application selects an action, e.g. "Show" - the controller will execute any code in the appropriate section - "def show" - and then by default will render a template of the same name - "show.html.erb".. This single line of code will bring the database table to life. It will provide with a simple interface to your data, and ways of − - Creating new entries - Editing current entries - Viewing current entries - Destroying current entries When creating or editing an entry, scaffold will do all the hard work like form generation and handling for you, and will even provide clever form generation, supporting the following types of inputs − - Simple text strings - Text areas (or large blocks of text) - Date selectors - Date-time selectors You can use Rails Migrations to create and maintain tables. rake db:migrate RAILS_ENV=development Now, go to the cookbook directory and run the Web Server using the following command − cookbook> rails server Now, open a browser and navigate to. This will provide you a screen to create new entries in the recipes table. A screenshot is shown below − Once you press the Create button to create a new recipe, your record is added into the recipes table and it shows the following result − You can see the option to edit, show, and destroy the records. So, play around with these options. You can also list down all the recipes available in the recipes table using the URL. Enhancing the Model Rails gives you a lot of error handling for free. To understand this, add some validation rules to the empty recipe model − Modify app/models/recipe.rb as follows and then test your application − class Recipe < ActiveRecord::Base validates_length_of :title, :within => 1..20 validates_uniqueness_of :title, :message => "already exists" end These entries will give automatic checking. validates_length_of − the field is not blank and not too long. validates_uniqueness_of − duplicate values are trapped. Instead of the default Rails error message, we have given a custom message here. Alternative Way to Create Scaffolding Create an application as shown above and The Generated Scaffold Code as shown below rails g scaffold Recipe tittle:string instructions:text Above code generates the auto files with data base by using with sqlite3 with tittle and instruction column as shown below an image. we need to migrate the data base by using below syntax. $ rake db:migrate RAILS_ENV=development Finally run the application by using the following command line − rails server It will generate the result as shown above output images. The Views All the views and corresponding all the controller methods are created by scaffold command and they are available in the app/views/recipes directory. How Scaffolding is Different? If you have gone through the previous chapters, then you must have seen that we had created methods to list, show, delete and create data etc., but scaffolding does that job automatically. Ruby on Rails - AJAX Ajax enables you to retrieve data for a web page without having to refresh the contents of the entire page. In the basic web architecture, the user clicks a link or submits a form. The form is submitted to the server, which then sends back a response. The response is then displayed for the user on a new page. When you interact with an Ajax-powered web page, it loads an Ajax engine in the background. The engine is written in JavaScript and its responsibility is to both communicate with the web server and display the results to the user. When you submit data using an Ajax-powered form, the server returns an HTML fragment that contains the server's response and displays only the data that is new or changed as opposed to refreshing the entire page. For a complete detail on AJAX you can go through our AJAX Tutorial The client-side JavaScript, which Rails creates automatically, receives the HTML fragment and uses it to update a specified part of the current page's HTML, often the content of a <div>. AJAX Example This example works based on scaffold, Destroy concept works based on ajax. In this example, we will provide, list, show and create operations on ponies table. If you did not understand the scaffold technology then we would suggest you to go through the previous chapters first and then continue with AJAX on Rails. Creating An Application Let us start with the creation of an application It will be done as follows − rails new ponies The above command creates an application, now we need to call the app directory using with cd command. It will enter in to an application directory then we need to call a scaffold command. It will be done as follows − rails generate scaffold Pony name:string profession:string Above command generates the scaffold with name and profession column. We need to migrate the data base as follows command rake db:migrate Now Run the Rails application as follows command rails s Now open the web browser and call a url as, The output will be as follows Creating an Ajax Now open app/views/ponies/index.html.erb with suitable text editors. Update your destroy line with :remote => true, :class => 'delete_pony'.At finally, it looks like as follows. Create a file, destroy.js.erb, put it next to your other .erb files (under app/views/ponies). It should look like this − Now enter the code as shown below in destroy.js.erb $('.delete_pony').bind('ajax:success', function() { $(this).closest('tr').fadeOut(); }); Now Open your controller file which is placed at app/controllers/ponies_controller.rb and add the following code in destroy method as shown below − # DELETE /ponies/1 # DELETE /ponies/1.json def destroy @pony = Pony.find(params[:id]) @pony.destroy respond_to do |format| format.html { redirect_to ponies_url } format.json { head :no_content } format.js { render :layout => false } end end At finally controller page is as shown image. Now run an application, Output called from, it will looks like as following image Press on create pony button, it will generate the result as follows Now click on back button, it will show all pony created information as shown image Till now, we are working on scaffold, now click on destroy button, it will call a pop-up as shown below image, the pop-up works based on Ajax. If Click on ok button, it will delete the record from pony. Here I have clicked ok button. Final output will be as follows − Ruby_36<<_37<<_38<< For a complete detail on File object, you need to go through the Ruby Reference Manual. Ruby on Rails - Send Emails Action: '<username>', password: '\mail.
http://www.tutorialspoint.com/ruby-on-rails/rails-quick-guide.htm
CC-MAIN-2019-04
en
refinedweb
A in Python that I helped someone with recently. On reddit, someone was asking about Django’s length template filter, and specifically its behavior with an undefined variable. The bit of template in question was something like: {% if some_var|length > 1 %} It's more than 1 {% else %} It's not more than 1 {% endif %} When some_var was undefined in the template, the if condition was evaluating True. Huh? Version requirements If you want to puzzle this out yourself, then before reading on I’ll mention that I had difficulty reproducing this behavior. I was using Django 1.8 and Python 3.4, and consistently was getting the if evaluating to False as expected. So in order to figure this out, it’s important to know you’ll only see this behavior on a Django version earlier than 1.8, and only when running on Python 2. Using either of Django 1.8 or Python 3 will remove the “wat”. Aside on undefined variables And to avoid spoilers from seeing the answer further down the page, here I’ll talk a bit about why it’s not just automatically an error, since after all some_var is undefined. Shouldn’t this raise a TypeError or some other kind of exception? The answer to that is Django’s template language is a little bit forgiving of some types of errors. The philosophy here is that we shouldn’t necessarily take down your entire site with a hard HTTP 500 error any time you have a typo in a template variable name, or forgot to provide a variable you were expecting to be able to access, so Django’s template language will silence errors related to accessing/manipulating undefined variables. In older versions of Django, the setting TEMPLATE_STRING_IF_INVALID controlled what Django would output in that case, and it defaulted to the empty string. In current (1.8) Django, the behavior is a little bit more complex: - Template engines support the configuration parameter string_if_invalid. It still defaults to the empty string. string_if_invalidcan contain the %sformatting marker, and if so it will tell Django to interpolate in the name of the invalid variable (useful for debugging). - When you attempt to apply a filter to an invalid variable, the filter will only be applied if string_if_invalidis the empty string. Otherwise, the filter is not applied. - Except in the case of the if, forand regrouptemplate tags, which always apply filters to their variable argument. An unfiltered invalid variable is treated as Nonefor purposes of those tags. One other Django 1.8 change: previously, if the length filter was applied to an invalid variable (which, as noted above, only happens if it’s used in certain tags), it returned an empty string, consistent with the TEMPLATE_STRING_IF_INVALID behavior. Starting in Django 1.8, if length gets applied to an invalid variable, it returns 0. This is verging on “wat” territory all by itself (though, again, tracing back to a reasonable-seeming design decision in terms of when and how templates should raise exceptions), but isn’t the direct cause of the weird behavior shown above. For that, we need to go look at Python. Going deeper Now that we understand a bit about how Django handles invalid variables, we can start to pick apart what’s happening. When the Django template engine gets this: {% if some_var|length > 1 %} then, in Django versions prior to 1.8, it will get translated (through the invalid-variable behavior) into the following Python: if '' > 1: And here the behavior is different depending on Python version. Here’s the Python 3 behavior: $ python Python 3.5.0 (default, Sep 26 2015, 18:41:42) [GCC 4.2.1 Compatible Apple LLVM 6.1.0 (clang-602.0.53)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> '' > 1 Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: unorderable types: str() > int() That’s what we’d expect and hope should happen: strings and integers aren’t and should not be orderable with respect to each other. But in Python 2: $ python Python 2.7.10 (default, Jun 6 2015, 18:12:33) [GCC 4.2.1 Compatible Apple LLVM 6.1.0 (clang-602.0.49)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> '' > 1 True WAT. Just in case someone still wants to figure this out on their own, here are some hints as to what’s going on: >>> '' > () False >>> () > [] True >>> import sys >>> () > sys.maxint True Ah, a tuple than which no greater integer can be conceived. That makes perfect sense, right? The answer All right, enough dancing around the problem. To see why Python is doing this, we have to read a note in the Python 2 comparison documentation: CPython implementation detail: Objects of different types except numbers are ordered by their type names; objects of the same types that don’t support proper comparison are ordered by their address. Comparing two numbers of different types (say, an int and a float) is exempt from this because Python knows how to order them (except for complex numbers, which raise a TypeError on ordered comparison). But otherwise Python is falling back to the names of the types. So why is '' > 1? Because "str" > "int". Similarly, all tuples are “greater” than all integers because "tuple" > "int". To which the only appropriate response is… But not anymore Thankfully, Python 3 fixed this — as we saw above, Python 3 simply raises TypeError and tells you the types of the operands are not orderable relative to each other. It also got rid of the ordering by address, apparently, as that entire note is gone in the Python 3 documentation. And of course we saw how Django’s behavior with invalid variables has changed, and especially how the length filter has changed so that if, by some chance, it gets applied to an invalid variable it’ll sensibly return 0, which fixes this behavior and probably also some other “wat” moments on Python 2. But should you want an example of a Python “wat” that isn’t so easy to puzzle out and doesn’t come down to unexpected consequences of otherwise-reasonable design, well, this certainly is one. I like to pride myself on knowing the reasons behind weird things in Python, but I am utterly at a loss as to why CPython 2 had this ordering behavior, and honestly I’m not sure I want to know; such things may be best left decently in the past, where they can cause no madness from gazing upon their faces.
https://www.b-list.org/weblog/2015/nov/15/real-python-wat/
CC-MAIN-2019-04
en
refinedweb
C++ Nested classes forward declaration error I am trying to declare and use a class B inside of a class A and define B outside A. I know for a fact that this is possible because Bjarne Stroustrup uses this in his book "The C++ programming language" (page 293,for example the String and Srep classes). So this is my minimal piece of code that causes problems class A{ struct B; // forward declaration B* c; A() { c->i; } }; struct A::B { /* * we define struct B like this becuase it * was first declared in the namespace A */ int i; }; int main() { } This code gives the following compilation errors in g++ : tst.cpp: In constructor ‘A::A()’: tst.cpp:5: error: invalid use of undefined type ‘struct A::B’ tst.cpp:3: error: forward declaration of ‘struct A::B’ I tried to look at the C++ Faq and the closeset I got was here and here but those don't apply to my situation. I also read this from here but it's not solving my problem. Both gcc and MSVC 2005 give compiler errors on this Answers Define the constructor for A AFTER the definition of struct B. The expression c->i dereferences the pointer to struct A::B so a full definition must be visible at this point in the program. The simplest fix is to make the constructor of A non-inline and provide a body for it after the defintion of struct A::B. Need Your Help Could not get the ReportViewer dll's in GAC reporting-services reportviewer report-viewer2010I am trying to load ReportViewer dll's into the GAC. I downloaded the ReportViewer.exe from Microsoft's download website, Unblocked it and ran it as administrator. The installation of package was
http://unixresources.net/faq/310557.shtml
CC-MAIN-2019-04
en
refinedweb
Expressive. This section talks about the experience provided by the command line compiler, contrasting Clang output to GCC 4.9's output in some cases. Column Numbers and Caret Diagnostics First, all diagnostics produced by clang include full column number information. The clang command-line compiler driver uses this information to print "point diagnostics". (IDEs can use the information to display in-line error markup.) This is nice because it makes it very easy to understand exactly what is wrong in a particular piece of code. The point (the green "^" character) exactly shows where the problem is, even inside of a string. This makes it really easy to jump to the problem and helps when multiple instances of the same character occur on a line. (We'll revisit this more in following examples.) $ clang -fsyntax-only format-strings.c format-strings.c:91:13: warning: '.*' specified field precision is missing a matching 'int' argument printf("%.*d"); ^ Note that modern versions of GCC have followed Clang's lead, and are now able to give a column for a diagnostic, and include a snippet of source text in the result. However, Clang's column number is much more accurate, pointing at the problematic format specifier, rather than the ) character the parser had reached when the problem was detected. Also, Clang's diagnostic is colored by default, making it easier to distinguish from nearby text. Range Highlighting for Related Text Clang captures and accurately tracks range information for expressions, statements, and other constructs in your program and uses this to make diagnostics highlight related information. In the following somewhat nonsensical example you can see that you don't even need to see the original source code to understand what is wrong based on the Clang error. Because clang prints a point, you know exactly which plus it is complaining about. The range information highlights the left and right side of the plus which makes it immediately obvious what the compiler is talking about. Range information is very useful for cases involving precedence issues and many other cases. $ gcc-4.9 -fsyntax-only t.c t.c: In function 'int f(int, int)': t.c:7:39: error: invalid operands to binary + (have 'int' and 'struct A') return y + func(y ? ((SomeA.X + 40) + SomeA) / 42 + SomeA.X : SomeA.X); ^ $ clang -fsyntax-only t.c t.c:7:39: error: invalid operands to binary expression ('int' and 'struct A') return y + func(y ? ((SomeA.X + 40) + SomeA) / 42 + SomeA.X : SomeA.X); ~~~~~~~~~~~~~~ ^ ~~~~~ Precision in Wording A detail is that we have tried really hard to make the diagnostics that come out of clang contain exactly the pertinent information about what is wrong and why. In the example above, we tell you what the inferred types are for the left and right hand sides, and we don't repeat what is obvious from the point (e.g., that this is a "binary +"). Many other examples abound. In the following example, not only do we tell you that there is a problem with the * and point to it, we say exactly why and tell you what the type is (in case it is a complicated subexpression, such as a call to an overloaded function). This sort of attention to detail makes it much easier to understand and fix problems quickly. $ gcc-4.9 -fsyntax-only t.c t.c:5:11: error: invalid type argument of unary '*' (have 'int') return *SomeA.X; ^ $ clang -fsyntax-only t.c t.c:5:11: error: indirection requires pointer operand ('int' invalid) int y = *SomeA.X; ^~~~~~~~ Typedef Preservation and Selective Unwrapping Many programmers use high-level user defined types, typedefs, and other syntactic sugar to refer to types in their program. This is useful because they can abbreviate otherwise very long types and it is useful to preserve the typename in diagnostics. However, sometimes very simple typedefs can wrap trivial types and it is important to strip off the typedef to understand what is going on. Clang aims to handle both cases well. The following example shows where it is important to preserve a typedef in C. $ clang -fsyntax-only t.c t.c:15:11: error: can't convert between vector values of different size ('__m128' and 'int const *') myvec[1]/P; ~~~~~~~~^~ The following example shows where it is useful for the compiler to expose underlying details of a typedef. If the user was somehow confused about how the system "pid_t" typedef is defined, Clang helpfully displays it with "aka". $ clang -fsyntax-only t.c t.c:13:9: error: member reference base type 'pid_t' (aka 'int') is not a structure or union myvar = myvar.x; ~~~~~ ^ In C++, type preservation includes retaining any qualification written into type names. For example, if we take a small snippet of code such as: namespace services { struct WebService { }; } namespace myapp { namespace servers { struct Server { }; } } using namespace myapp; void addHTTPService(servers::Server const &server, ::services::WebService const *http) { server += http; } and then compile it, we see that Clang is both providing accurate information and is retaining the types as written by the user (e.g., "servers::Server", "::services::WebService"): $ clang -fsyntax-only t.cpp t.cpp:9:10: error: invalid operands to binary expression ('servers::Server const' and '::services::WebService const *') server += http; ~~~~~~ ^ ~~~~ Naturally, type preservation extends to uses of templates, and Clang retains information about how a particular template specialization (like std::vector<Real>) was spelled within the source code. For example: $ clang -fsyntax-only t.cpp t.cpp:12:7: error: incompatible type assigning 'vector<Real>', expected 'std::string' (aka 'class std::basic_string<char>') str = vec; ^ ~~~ Fix-it Hints "Fix-it" hints provide advice for fixing small, localized problems in source code. When Clang produces a diagnostic about a particular problem that it can work around (e.g., non-standard or redundant syntax, missing keywords, common mistakes, etc.), it may also provide specific guidance in the form of a code transformation to correct the problem. In the following example, Clang warns about the use of a GCC extension that has been considered obsolete since 1993. The underlined code should be removed, then replaced with the code below the point line (".x =" or ".y =", respectively). $ = "Fix-it" hints are most useful for working around common user errors and misconceptions. For example, C++ users commonly forget the syntax for explicit specialization of class templates, as in the error in the following example. Again, after describing the problem, Clang provides the fix--add template<>--as part of the diagnostic. $ clang t.cpp t.cpp:9:3: error: template specialization requires 'template<>' struct iterator_traits<file_iterator> { ^ template<> Template Type Diffing Templates types can be long and difficult to read. More so when part of an error message. Instead of just printing out the type name, Clang has enough information to remove the common elements and highlight the differences. To show the template structure more clearly, the templated type can also be printed as an indented text tree.Default: template diff with type elision t.cc:4:5: note: candidate function not viable: no known conversion from 'vector<map<[...], float>>' to 'vector<map<[...], double>>' for 1st argument;-fno-elide-type: template diff without elision t.cc:4:5: note: candidate function not viable: no known conversion from 'vector<map<int, float>>' to 'vector<map<int, double>>' for 1st argument;-fdiagnostics-show-template-tree: template tree printing with elision t.cc:4:5: note: candidate function not viable: no known conversion for 1st argument; vector< map< [...], [float != double]>>-fdiagnostics-show-template-tree -fno-elide-type: template tree printing with no elision t.cc:4:5: note: candidate function not viable: no known conversion for 1st argument; vector< map< int, [float != double]>> Automatic Macro Expansion Many errors happen in macros that are sometimes deeply nested. With traditional compilers, you need to dig deep into the definition of the macro to understand how you got into trouble. The following simple example shows how Clang helps you out by automatically printing instantiation information and nested range information for diagnostics as they are instantiated through macros and also shows how some of the other pieces work in a bigger example. $ clang -fsyntax-only t.c t.c:80:3: error: invalid operands to binary expression ('typeof(P)' (aka 'struct mystruct') and 'typeof(F)' (aka 'float')) X = MYMAX(P, F); ^~~~~~~~~~~ t.c:76:94: note: expanded from: #define MYMAX(A,B) __extension__ ({ __typeof__(A) __a = (A); __typeof__(B) __b = (B); __a < __b ? __b : __a; }) ~~~ ^ ~~~ Here's another real world warning that occurs in the "window" Unix package (which implements the "wwopen" class of APIs): $ clang -fsyntax-only t.c t.c:22:2: warning: type specifier missing, defaults to 'int' ILPAD(); ^ t.c:17:17: note: expanded from: #define ILPAD() PAD((NROW - tt.tt_row) * 10) /* 1 ms per char */ ^ t.c:14:2: note: expanded from: register i; \ ^ In practice, we've found that Clang's treatment of macros is actually more useful in multiply nested macros than in simple ones. Quality of Implementation and Attention to Detail Finally, we have put a lot of work polishing the little things, because little things add up over time and contribute to a great user experience. The following example shows that we recover from the simple case of forgetting a ; after a struct definition much better than GCC. $ cat t.cc template<class T> class a {}; struct b {} a<int> c; $ gcc-4.9 t.cc t.cc:4:8: error: invalid declarator before 'c' a<int> c; ^ $ clang t.cc t.cc:3:12: error: expected ';' after struct struct b {} ^ ; The following example shows that we diagnose and recover from a missing typename keyword well, even in complex circumstances where GCC cannot cope. $ cat t.cc template<class T> void f(T::type) { } struct A { }; void g() { A a; f<A>(a); } $ gcc-4.9 t.cc t.cc:1:33: error: variable or field 'f' declared void template<class T> void f(T::type) { } ^ t.cc: In function 'void g()': t.cc:6:5: error: 'f' was not declared in this scope f<A>(a); ^ t.cc:6:8: error: expected primary-expression before '>' token f<A>(a); ^ $ clang t.cc t.cc:1:26: error: missing 'typename' prior to dependent type name 'T::type' template<class T> void f(T::type) { } ^~~~~~~ typename t.cc:6:5: error: no matching function for call to 'f' f<A>(a); ^~~~ t.cc:1:24: note: candidate template ignored: substitution failure [with T = A]: no type named 'type' in 'A' template<class T> void f(T::type) { } ^ ~~~~ While each of these details is minor, we feel that they all add up to provide a much more polished experience.
https://clang.llvm.org/diagnostics.html
CC-MAIN-2019-04
en
refinedweb
In this particular section let us explore about various SalesForce Deployment and Assertions. First , let us see “What is Deployment?” The answer is very simple, Deployment is nothing but connection between sandbox and PRD environment (production organization) Several mechanisms of Deployment are: |→ Change set |→ Force.com eclipse |→ ANT builder |→ WEB SERVICE During the Hot Deployment , if we exclude an object from the sandbox then it will be omitted from the PRD environment as well . However, PRD environment can be modified by Development environment without stopping the apps at prod area. Let us evaluate between SOQL (Salesforce Object Query Language) and SOSL (Salesforce Object Search Language). SOQL Syntax:- SOSL Syntax:- Returning field name1, field name2, – – – – – 1.System.asserts equals ( val1, val2, ‘string’) ; 2.System.asserts not equals ( var1, var2, ‘string’) ; Winter ’12 for developers’ → App logic @ Is Test public class TestUtil { public static void create TestAccounts () { // create some test accounts } public static void create TestContacts () { // create some test contacts } } Native classes for serializing and deserializing JSON The following classes have been added for JSON support: Eg:List invoices = new List (); invoices.add (inv1); invoices.add (inv2); //Serialize the list of Invoice Statement objects. String JSON String = JSON.serialize (invoices); System.debug (‘serialized list of invoices into JSON format:’+ JSON String); // Deserializing the list of invoices from the JSON string List deserialized Invoices = (list) JSON.deserialize(JSON String, List.class); System.assertEquals(invoices.size(), deserializedInvoices.size()); Eg: Http httpProtocal = new Http(); // create an HTTP request to send. Http Request request = new Http request(); ….. Http Response response = http protocol.send (request); // Create JSON parser with the http response body // as an input string. JSON Parser parser= JSON.create Parser (response.get Body()); Integrate With Testing For Continuous Integration App’s: Two API objects are now available to enable starting asynchronous test runs as well as for checking test results: Apex test queue item: Represents a single Apex class in the Apex Job Queue Apex test result:Represents the result of an Apex test method execution You can use this functionality to better integrate with test execution in continuous integration applications. New methods are being used for determining the execution content in the system class: isBatch: Determines if the currently executing code is invoked by batch Apex Job isFuture: Determines if the currently executing code is invoked by code containing in a method annotated with @future isScheduled: Determines if the currently executing code is invoked by a scheduled Apex job. Enroll for Instructor Led Live SalesForce Training at Mindmajix Mindmajix offers different Salesforce certification training according to your desire with hands-on experience on Salesforce concepts Free Demo for Corporate & Online Trainings.
https://mindmajix.com/salesforce/salesforce-deployment-assertions
CC-MAIN-2019-04
en
refinedweb
US20110125804A1 - Modular distributed mobile data applications - Google PatentsModular distributed mobile data applications Download PDF Info - Publication number - US20110125804A1US20110125804A1 US13012653 US201113012653A US2011125804A1 US 20110125804 A1 US20110125804 A1 US 20110125804A1 US 13012653 US13012653 US 13012653 US 201113012653 A US201113012653 A US 201113012653A US 2011125804 A1 US2011125804 A1 US 2011125804A1 - Authority - US - Grant status - Application - Patent type - - Prior art keywords - context - servo - schema - tree - Computer-implemented system and methods for deploying a distributed client-server system, enabling developers to rapidly create and deploy read-write distributed data interfaces for host database systems. The disclosed system is especially advantageous for mobile clients as connection to the server environment can be intermittent without unduly impeding local (disconnected) operations. One important aspect of the invention is an incremental process for executing an application servo in a client device. This process includes building and incrementally maintaining a logical context tree in which each node expresses a complete current state of the execution process. Description - This application is a divisional of U.S. patent application Ser. No. 10/006,200, filed Dec. 4, 2001, which claims the benefit of U.S. Provisional Application No. 60/251,285 filed Dec. 4, 2000, both of which are incorporated herein by reference. - ©2001 ThinkShare Corporation.). - The present invention relates to software systems for distributed mobile computing and, more specifically, it includes a software platform—and application language—for deployment of portable, reusable, and interoperable data viewing and modification applications in a distributed, intermittently networked environment. - Increasing numbers of workers are mobile, meaning that they do their jobs outside of the conventional office. Laptop and palm-sized computers provide some communication and computing ability “on the road” and, with the decreasing size of data storage devices of all types, mobile workers can carry a lot of data with them. It is more difficult, however, to share data with others or access a centralized database. Specifically, mobile users (or developers of wireless data systems) face several challenges, such as: - Disconnected operation. Workers often lose connectivity in certain areas such as, subways, buildings, etc. but they should be able to continue doing their job and, at a minimum, not have their data corrupted (either locally or centrally) as a result of the disconnection. - Limited screen size and user input. It is difficult to display large amounts of information, and enable a user to meaningfully “browse” the information on a tiny screen. User input is limited as well, especially with respect to PDAs and cellular phones. - Latency. Another challenge facing developers is how to design applications that make best use of a range of bandwidths. - Manageability. Applications should be able to grow, adding features and functions, without rebuilding them from scratch and without losing backward compatibility with other programs or services. - The value of separation of data and format has become widely known, one example being the use of stylesheets as described in U.S. Pat. No. 5,860,073. Stylesheets alone, however, do not adequately address the mobile user problems outlined above. For example, a stylesheet or transformer can be written to translate data to HTML or WML for display on a small screen. But if the result is long, downloading may take awhile, and even if communication bandwidth is good, disconnection is a risk, and even after downloading, limited local processing power may result in long delay before the entire document can be randomly accessed by the user. Stylesheets enable some dynamic properties, but when applied in the context of a Web browser page viewing they do not provide a complete solution for disconnected operation. - XML (eXtensible Markup Language) is a known document processing standard (a simplified form of SGML). It allows a developer to create a custom vocabulary or “schema” defining a custom markup language. This can be done using a document type definition (DTD) or with the XML Schema Definition (XSD), among other schema languages. The schema specifies what elements or tags and attributes can be used in a document and how they relate to each other. Many industry-specific DTDs are evolving, for example MathML, PGML, etc. XML parsers are publicly available for checking “well-formedness” of a document (compliance with the XML syntax specifications) as well as “validity” meaning compliance with the applicable schema. - The Information and Context Exchange (ICE) protocol is designed to manage establishing “syndication” relationships and data transfer for content distribution. ICE is an application of XML. ICE enables establishing and managing syndicator—subscriber relationships for transferring content that is generally originated by the syndicator and consumed by the subscriber, such as news or weather reports. This system essentially supports one-way distribution of content, i.e., publication, rather than interactive, mobile applications implementing and synchronizing distributed databases. Still, it does suggest a data replication scheme mediated by XML messages. - It has also been suggested to use text-based descriptive attribute grammar, like XML, to specify object-oriented applications. U.S. Pat. No. 6,083,276 to Davidson et al. describes such a method. It is limited, however, to essentially translating the application description (in an XML-like language) to a set of software objects such as Java® component classes. Each element declaration in the source (markup language) file must be capable of being mapped to a corresponding application component class. Each element with children must result in a corresponding container component with analogous child components. Every attribute declaration must map to a corresponding property value in the corresponding component, and so on. Thus the application must be designed from the outset to align very closely with a selected object-oriented language and framework. Desired extensibility, reuse and similar goals are sacrificed, although the concept of initially describing an application using an XML-like syntax is interesting and has some obvious benefits. The '276 patent to Davidson, however, primarily teaches an automated method for translating that description into a specific object-oriented program at which point the benefits of the descriptive attribute grammar are lost. - U.S. Pat. No. 6,012,098 teaches a system for isolating the retrieval of data from the rendering of that data. A data retrieval “servlet” executes a query, and converts the results to an XML data stream, delivered to a downstream rendering servlet. The rendering servlet parses this XML data stream, using a stylesheet that may be written using XSL, and creates an HTML data stream as output for communication to a client computer. - XSLT, or Extensible Stylesheet Language—Transformation, is a useful tool for working with XML documents. XSLT enables one to extract and transform the source information in various ways, for example into another markup language like HTML as is commonly done. See FIG. 1. - Many software systems exist which implement or utilize the XML family of standards (XML, XSL, XPath, XML Schema Definitions, etc.). HTML and XML browsers are common. Many enable offline operation by accessing cached read-only data. HTML and XML editors are common as well, all of which form part of the general background of the invention. - See also U.S. Pat. No. 6,167,409 to DeRose et al. “Computer System and Method for Customizing Context Information Sent with Document Fragments Across a Computer Network.” - Application server—One of a class of commercially available software frameworks for integrating computation into a web server environment. - Load-balancing router—A router that takes into account the effective performance of a set of computational resources when deciding where to route a message. - N-tier architecture—A software architecture in which computing resources can be layered to any depth. In contrast to “3-tier architecture.” - Namespace name (Namespaces)—A URI uniquely identifying a namespace. See Namespaces in XML [Namespaces]. - Public identifier (SGML)—Globally unique string to identify a resource. Could be accompanied by a system identifier, which was often a filesystem path on a private filesystem. - Public identifier (XML)—Unique identity string. See XML 1.0 [XML]. - Self identity—Identity as self-assigned by an object, without the direct assistance of an issuing authority. The UUID is a standard created to support the implementation of self identity. Self identity is needed in situations where it is not necessarily possible or appropriate to request an identity from an authority. - Session—An object representing a cached state to optimize an end user's interaction with the system. - Synchronization point—The event during which a database incorporates information from another database, thus instigating replication or updating previously replicated information. - Unique identifier—Any identifier that defines a set of rules, that, if followed, guarantee uniqueness. - XML Extensible Markup Language—A simplified form of SGML, the Standard Generalized Markup Language, in international documentation standard. XML is a document processing standard recommended by the World Wide Web Consortium (W3C). - XSL Extensible Stylesheet Language—A part of the broader XML specification. An XSL document or stylesheet transforms an XML input document to an output document, essentially applying element formatting rules. - XSLT Extensible Stylesheet Language—Transformation—The transform portion of the broader XSL language specification. - XPATH—A syntax for describing a node set location in an XML document using expressions that consider the context in which they appear. - URI—Uniform Resource Identifier. See RFC2396 [URI]. - UUID—Universally Unique Identifier. A 128-bit unique identifier that can be allocated without reference to a central authority. From Universal Unique Identifier [DCE]. - The present invention includes computer-implemented methods for modular programming on distributed devices that brings together and extends industry standards in a unique way to create a platform for a class of portable, reusable, and interoperable data viewing and modification programs that can operate in a distributed, occasionally communicating environment, for example where client devices are loosely linked to hosts or servers via a wireless communications channel. The invention builds on XML and related standards, including, but not limited to, Namespaces in XML, XPath, XSL , XSLT, XPointer, XLink, XHTML and XML Schema Definitions [XSD]. - A complete distributed programming model is achieved by enabling declarations of shared or private data storage, declarations of schema translation, synchronization rules, editing actions, access rules, application packaging, label sets for default interfaces, and registration for distribution and reuse. The invention exploits schema to enable seamless integration at both data and display levels. An application defined for the invention can extend another one dynamically, even if neither developer was aware of the other application. - Thus, one important aspect of the invention is a computer-implemented, incremental process for executing an application servo in a client device. This process includes building a context tree in which each node expresses (by reference) a complete current state of the execution process. For example, the context node content includes a pointer to an element in the servo execution of which spawned the context node; a pointer that identifies a current data context by pointing into a source tree; a reference to a parent context; and an ordered, potentially sparse, list of pointers to zero or more child contexts. - Another feature of the incremental execution calls for the creation of child spacer nodes in the context tree representing unmaterialized child contexts. These can be used for certain operations, and estimates, without waiting for evaluation of the entire source tree. In a presently preferred embodiment, the context tree is implemented using a relative b-tree structure, and each spacer is reflected in an interior node entry in the relative b-tree structure to facilitate searching unmaterialized contexts. - Another aspect of the invention is a servo definition language for defining a distributed application that supports disconnected operation. The language typically includes the following types of rules: application data schema; transformation rules; transaction handling rules; and interface object specifications. Importantly, the servo definition language can also implement opportunity rules to realize automatic extension or integration of servos through opportunity-based linking of an interface component representing an instance of a schema fragment to a template. - The invention enables developers to rapidly create and deploy read-write distributed data interfaces for systems that employ relational databases or have XML interfaces. Even when wireless network service is available, efficient use of the network is important for the end-user and/or the service provider (depending on the pricing structure of the service contract). The invention addresses this by allowing many actions to be taken without immediate use of the network. - Wireless service is unreliable and often unavailable (e.g., in basements, rural areas, and commercial aircraft). While it is not possible to access new information when service is not available, the invention makes it possible to review recent actions and initiate new actions of common types. Further, the present invention enables the development of continuously evolving suites of applications by maintaining integration with legacy “servos” as new servos are created and deployed. Users need not be actively involved in the distribution process. Developers can focus their efforts on individual capabilities rather than frequent systemic overhauls. “Servos” as used herein, roughly analogous to applications, are more precisely defined later. - Additional aspects and advantages of this invention will be apparent from the following detailed description of preferred embodiments thereof, which proceeds with reference to the accompanying drawings. FIG. 1is a simplified data flow diagram illustrating a common batch XSLT transformation process as known in prior art. FIG. 2is a simplified data flow diagram illustrating interactive servo execution in accordance with the present invention. FIG. 3is a simplified block diagram of a distributed system illustrating a deployment of the present invention in a wireless context. FIG. 4Ais a code fragment from the asset management servo of Appendix B. FIG. 4Bis sample data in the schema of the asset management servo. FIG. 5Ais a multitree diagram illustrating execution of the servo template of FIG. 4Aover the sample data of FIG. 4B. FIG. 5Bis an illustration of a screen display generated by the servo template of FIG. 4A. FIG. 6is an illustration of a context node for use in incremental processing of a servo. FIG. 7is an illustration of an interior socket for use in forming a relative b-tree structure. FIG. 8is an illustration of an interior whorl for use in forming a relative b-tree structure. FIG. 9is a simplified illustration of a single b-tree structure for use in constructing a context tree. - Appendix A is a listing of a software description language (SDL) schema consistent with the present invention. - Appendix B is a sample asset management servo expressed in the SDL of Appendix A. - The following terms are used in the present description of the invention. Some terms are described in this glossary only, while others are explained in more detail below. It is provided here as a convenient reference for the reader. - Abstract interface object—An object that provides information to an end-user or other software component. An object that offers the ability for an end-user or other software component to provide information to the software system, such as enabling a choice or providing information in answer to a particular question. - Concrete interface object—An interface object representing a specific interface component that interacts with a human or another software component. For example, a graphical user interface widget such as a drop down list. - Context, interpreter—An object which encodes, through its own data and the objects it references, the internal state of the interpreter at a specific point of evaluation. This is explained in detail below. Computing server—A server used to run servos and reconcile with backing stores and other datasources. - Database name—A string that can be used within one specific database as a short handle to another specific database. - Datasource—An information resource that implements the datasource protocol. Datasources may include resident databases managed by the invention, specific computational resources (such as applications written using other technologies), or gateways to other Internet resources. - Datasource declaration—The means of specifying the existence and addressability of a datasource. Datasources are commonly registered with the registrar. - Datasource protocol—A specific protocol for accessing information resources. SOAP [SOAP] is an example of a datasource protocol. - Default servo—A servo associated by the registrar with a schema to be used in the event that a user encounters data in that schema and has not previously selected an application to use with that schema. Each schema has at least one default application. Different servos may be associated with different element and attribute declarations. - Directory services—A location broker that provides a navigable index of available services. - Front-end server—One or more functionally equivalent servers used to process incoming requests and route them to appropriate computational resources using knowledge about the specific request (the content of the packet). A front-end server(s) may be a component of a load-balancing router. - Incident reporting service—A service which tracks subscriptions to events associated with particular resources, and informs subscribers when those events occur. Used most commonly to track dependencies between source databases and databases that replicate parts of the source databases. - Label—One or more strings or other objects such as sound or images associated with an element, attribute, model group or other component of schema that serve to provide identifying information for use in a user interface. - Location broker—A component of the invention that can accept an identifier for a resource and provide means to contact that resource. Resources are not necessarily local to the broker. - Opportunity—An association between a unit of schema and a view or other action that can provide a means to interact with, modify, or apply an operation to, an instance of that same schema. - Public identifier—The public identifier used by the invention. An instance from a managed URL space. Compatible by subset with public identifier (SGML), public identifier (XML), URI, namespace name, unique identifier. - Public identity—Identity as known to a central authority that issues unique identifiers. (For the invention, the authority is the registrar and the identifier is the invention public identifier.) - Registrar—The system resource which manages the registration of schema, applications and datasources. The registrar enables reliable identification and data mappings as further described below. - Schema translator—A software component that converts data of one specific schema to data of another specific schema. - Servo—The unit of application packaging provided by the invention. - Servo definition language (SDL)—A language in which servos are expressed. - Servo transition—The action in which use of one application naturally leads to use of another by leveraging the common schema of the underlying data. - Short identifier—Another term for database name. - Storage declaration—Declaration of a persistent storage tree. See below. - Synchronizer—A component of the invention which uses transaction definitions, synchronization point identifiers, time stamps, datasource declarations and database instance-specific parameters to maintain correct database state across a set of related databases managed by the invention. Also called transaction engine. - System identifier—An instance of a UUID used as a form of self identity for resources such as databases. - Task allocator—A service that assigns interactive tasks to available computational resources based on suitability to purpose (server load, access to resources, availability of cached database or session state). - Task scheduler—A component of the invention that assigns computational tasks to available computational resources based on priority, aging, deadlines, load, etc. Typically the tasks handled in this manner are not in response to a current user request. - Transaction engine—Another term for the synchronizer. - View—A grouping of transformation rules and components that define all or part of the interface for a servo. - View declaration—Declaration of view—View declarations define how the application (servo) interacts with the user or another software component. - We refer to an application in accordance with this invention as a servo. More specifically, a servo is the unit of application packaging provided by the invention. A servo is created by “declaring” it in accordance with an appropriate servo declaration language. In accordance with a presently preferred embodiment of the invention, an XML-based Servo Declaration Language (SDL) is provided, portions of which are attached as Appendix “A.” Those skilled in the art will recognize that other SDLs along similar lines may be used within the scope of the present invention. Key aspects of the SDL will be described in detail below. In practice, a developer specifies (or references) an appropriate schema in the SDL and then specifies (declares) a desired application, or servo, consistent with that schema. The schema definition language may use a markup language-like syntax, such as XML. A sample servo, discussed in detail later, is shown in Appendix “B.” Another important aspect of the present invention is an incremental process for interpreting or executing an application servo in a client device (or a server). This process is described in detail later. - All other things being equal, the less information the author of a software application needs to provide, the less time the author will need to spend to create and maintain the application. The present invention provides a concise way to declare distributed data management applications. The experience provided to the end-users is created by selectively combining servos, which may be created by different authors, and interpreting them on differing platforms. This process is assisted by keeping the quantity of specification provided by the developer to a minimum. Platform independence is achieved through a minimal platform independent application definition, supplemented by platform specific transformations where the defaults are not adequate to address the goals of the application author. - As noted above, the present servo definition language (SDL) is an XML-based language in which servos are expressed. The XSL vocabulary is referenced by the SDL schema. This enables developers who are familiar with XSL to make use of their experience and allows some of the experience gained developing servos to be used in other contexts. - Information about a particular session environment can be made available (subject to security restrictions) for reference by SDL through a specially named input tree. Session environment information can include, but is not limited to, the physical location of the mobile user, identification of client hardware, quantification of available resources, and status of communication channel. - Application data is available (subject to security restrictions) for reference in SDL as individually named input trees, using the names from storage declarations further described below. These stores can be referenced as variables. Each characteristic of a servo preferably may be identified as overridable or not. This enables or disables user-specific customization. A simplified diagram of data flow is shown in FIG. 2. - Consistent with a presently preferred embodiment, servos, data, and protocol (including queries and query results) are all represented in XML. For any computational task, the inputs, the outputs, and the instructions preferably are all expressed using XML. Since XML is simple and compact to transport, this computational model makes it simple to deploy more hardware to handle loads. This includes the opportunity to utilize thousands of servers as well as the opportunity to move computation to “smart” (invention-enabled) clients when appropriate. It also provides the means to transparently shift the underlying database mechanisms to suit the different performance characteristics of different stores. - Further, the present invention can be said to implement “n-tier architecture”—a software architecture in which computing resources can be layered to any depth. The architecture allows independent information sources over a vast scale to be accessed through a single interface. Rich schema and interschema translation services enable information to flow across the layer boundaries without loss of context as further explained below. - Several specific features of a servo declaration are described in detail below. First, we provide some context by illustrating at a simplified high level an illustrative deployment of the invention, followed by a simple servo example. The sample office asset database application will then be used to illustrate incremental execution of servos and the execution context tree construct. - In FIG. 3, a server environment 300 is shown as currently supporting two mobile users, Alice and Juan. A synchronizer component 310 communicates with a corresponding mobile synchronizer 326 on Alice's mobile unit, such as a Palm Pilot or other PDA 322. Similarly, Juan's pocket PC 342 includes a synchronizer component 346. Alice's PDA 322 includes local storage or data cache 324 where schema, rules (servos) and data are stored. A replica 320 of Alice's storage is shown on the server 300. In operation on Alice's PDA device, an interpreter 328 executes selected servos, interacting with data and with the PDA user interface. Data in the local storage 324 is synchronized with the replica data 320 on the server via the synchronizer components, as explained in detail later. - The local data cache provides the local storage for the synchronization engine described below. It facilitates the sharing of data across applications and provides support for disconnected operation. For example, an application managing purchase order information can integrate seamlessly with an application managing customer information, creating a unified view. - Another aspect of the local data cache is the ability to provide a unified interface to shared as well as personal data sources. Applications provide views that can join individual data, shared data, and group data. The synchronization engines on the client and the server manage all this data. The local data cache in a presently preferred embodiment includes indexing so as to enable efficient random access to local data. - Juan's pocket PC 324 similarly includes Juan's local storage 344, including data, schema and rules (servos). An interpreter 348 on Juan's pocket PC executes selected servos, interacting with locally stored data and the Microsoft Windows user interface, supported by the Windows CER or similar operating system. As before, Juan's storage has a replica 340 on the server 300 with which the mobile unit is synchronized. The pocket PC is an example of a “rich client” that would typically include, in addition to the operating system software, a synchronization engine and a display engine. - The display engine's primary purpose is to translate interface object specifications onto the display of the wireless device, providing a layer of abstraction between the code and the device operating system. This component allows application developers to create applications that can run on multiple device-types in ways that optimize the user interface for each device. - Actions normally handled using multiple windows on a desktop computer, become possible using a single view on a small screen, enabling dissimilar applications to work together without modification. This important flexibility enables system developers to be more responsive than ever to the constantly changing needs of their clients. - The display engine features preferably include: - A unique class of concrete interface objects to assist users making selections from large databases. - The suite of standard device interface objects—tables, pick lists, buttons, etc. - A generic pop-up window supporting all UI and data types available on the platform described herein. - The ability to bind a UI element to a piece of data without needing to track changes backwards and forwards. - The transaction engine (or “synchronizer”) 346 addresses the challenges facing users of wireless solutions when disconnection occurs. Traditionally, productivity grinds to a halt. Here, the transaction engine provides seamless operation—regardless of connected state—to the user, enabling continuous productivity. Additionally, this feature enables the user to work asynchronously, in other words, every user action does not require communication with the server. This significantly enhances application performance. - The synchronization engine preferably includes the following features: - Transactional support-store, forward, and full rollback with supporting UI components for handling related events. - Event synchronization control—a full suite of controls enabling developers to determine when and how synchronization occurs, including priority for individual transactions and classes of data. - Background updating-allows the server to push information to the client transparently. - Referring again to FIG. 3, Juan also has a WAP phone 350 that can communicate with the server 300 via a WAP Gateway 352. Another interpreter instance 356 is deployed in the server 300 to execute servos selected by Juan via the WAP phone, and interact with him via the WAP user interface 354. This illustrates a “thin client” deployment where the interpreter is deployed on the server rather than the client side. Wireless handheld data devices divide into two major categories: rich clients and thin clients. Rich clients have local storage capabilities as well as processing power. Thin clients essentially only have the ability to display data. - The present invention works for both environments. On rich clients, the client-side code resides on the device itself. On thin clients, the client code runs in the server environment as a virtual client and communicates with the device using available protocols. Offline functionality is not available with thin devices, but they access the data in the same manner. Nonetheless, the present architecture provides a consistent development environment for both types of clients by running the XML—based interpreter, XML storage interface, and synchronization protocols in a virtual client module residing on the server. - Server environment 300 also includes a registrar service 330 where each user's rules and schema are registered, as explained in greater detail later. The registrar is a component storing the user applications or “servos” as well as managing the deployment and updating of new versions. It also is where data schema are registered, enabling the seamless migration from one application version to the next and enabling a seamless mechanism for dynamically rolling out new functionality to client devices. This enables the developer to create a persistent richness of applications over time, and able to manage the multiple versions within an organization. - On the backside, the server environment is coupled to a legacy relational database 390, by virtue of an integration toolkit 388. Another synchronizer element 370 synchronizes data between the replica storage 320, 340 and the external database 390. The synchronizer 370 can also utilize other XML services 380. The server environment 300 further includes a group storage element 392 including schema, servos and data. - Finally, the server includes a server side synchronizer 310. The server synchronizer is the server's counterpart to the client synchronizer. Its main function is to speak not only to the clients, but also to the backend databases. The server synchronizer includes the following features in a presently preferred embodiment: - Client synchronization—provides synchronization services with a myriad of devices and handles data collision, store, forward, and roll-back. - Backup and restore of local client data—allows users to use any handheld device and still retain both personal settings and local cache. - Interface with backend database—handles the transactions with the backend databases ensure data integrity throughout the system. - Transactional services—manages changes in data and ensures data integrity between clients and servers in the event connectivity is lost during the transmission of data. - In operation, a typical life cycle of a request from a mobile unit is handled as follows. An incoming request first sees a load-balancing router that parcels requests out to front end servers. This is part of the “server environment” of FIG. 3. Once received by a front-end server, the URL is analyzed by a task allocator to determine an appropriate computational resource. The allocator uses a location broker to determine what servers are involved (responses may be cached for performance) and a performance monitor to assess the availability of the resources. - Once received by a computing server, the request is sent to an interpreter instance for decoding. If there is already an active instance for the user's session, a reference to it is obtained from the session object and the request is sent to that instance. If not, a new instance is created and the request is sent to the new interpreter instance. These objects are not shown in FIG. 3as it is primarily a data flow diagram. At this point, further processing is dependent on the nature of the request. - We define a computing server as a server used to run servos and reconcile with backing stores and other datasources. An application server is one of a class of commercially available software frameworks for integrating computation into a web server environment. And a “session” as used herein refers to an object representing cached state so as to optimize an end user's interaction with the system. These objects are not shown in FIG. 3as it is primarily a data flow diagram. - At this point, further processing is dependent on the nature of the request. If the request involves accessing data that resides in another database managed by the present system, a messaging service is used to post the request. The messaging service will use the location broker to determine where to send the message that has been addressed using a platform public identifier. Public identifiers in this context are URLs for resources assigned through the authority of the registrar. - If the request involves accessing information from an external HTML source, the process is similar to above, except that the object that receives the message is not a database, but an adapter that knows how to post an appropriate request (or set of requests) to the web and convert the result into XML. (Portably written adapters that do not rely on cached state can be run anywhere, so the messaging system can choose to resolve the request in the current address space.) - If the request involves accessing data local to the user's database that has been replicated from elsewhere, a synchronization log is checked to see if the database is within the grace period allowed by the declaration of the datasource. (These services are further described later.) If so, the replicated data is used without further investigation. If the database is out of date, a request is sent to the datasource directly using the messaging service. The datasource responds with the synchronization packets required to update the local database. The update is performed and then the original request is processed. FIG. 4Ashows a simple example of a servo template which would typically appear as part of a servo declaration. FIG. 4Bis a database fragment associated with the servo template of FIG. 4A. Each line of code in the drawing is numbered at the left side (400-419 in FIG. 4A, 420-434 in FIG. 4B). These numbers are provided for reference in this description, they are not part of a sample code. Referring now to FIG. 4A, the xsl:template declaration defines the corresponding fragment of data schema, in other words the associated data type, by reference to the “wh:obj” element. Line 402 borrows the standard html tag for defining a division of the output resulting from this execution. In line 404, the html:span element employs the xsl “value-of” instruction, generally used to generate output, with the attribute select=‘@barId’. This is the XPATH formulation to obtain value of the barId element of the database object. Referring to FIG. 4B, the first object in line 422 has the barId value equal to ‘271’. Consequently, execution of this servo will output the number 271 as illustrated in FIG. 5B. - Referring again to FIG. 4A, line 406 asserts an edit instruction from the “ts” schema, with a matching criterion (“select”) of the description element (“desc”) of the same database. (“ts” alludes to ThinkShare, assignee of the present application.) The edit instruction is defined in the schema of Appendix A. It provides for display of an editable field in the user interface. In this case, it operates on the description element of the database, which appears in FIG. 4Bat line 424 having the string value “black metal cabinet.” Accordingly, the string “black metal cabinet” is displayed on the user interface as shown in FIG. 5B. - The <ts:expandable> and <ts:hideable> elements are described in detail in Appendix “A”. The remaining lines of code in FIG. 4Aare merely closing tags. - Returning to FIG. 4B, a first object is described in lines 422-426 and second object is described in lines 428-432. The second object is shown at line 428 as having a band attribute value of ‘259’ and last attribute value of ‘271’, the latter being a pointer to the previous object. The second element 259 has a description “top shelf” which is then displayed as shown in FIG. 5B, by virtue of the apply-templates instruction shown at line 412 of FIG. 4A. The xsl:apply-templates element selects a set of nodes in the input tree, and processes each of them individually by finding a matching template rule for that node. The set of nodes is determined the select attribute, as noted, which is an XPATH expression that returns a node set. The key used in this expression is a persistent one declared as part of the servo. FIG. 5Ais a portion of an interpretation multitree illustrating operation of the servo template of FIG. 5Aon the database fragment of FIG. 4B. This is an abstraction of the context records created during the incremental process of executing an application servo. FIG. 5Aincludes essentially two related tree structures—the servo execution tree and the output or geometry tree. The execution tree comprises the circles, for example, the root node circle 500, while the geometry tree comprises the squares in FIG. 5A. With respect to the execution tree, each circle corresponds to a “context” which in turn reflects a specific state of the execution process. The tree begins at the root node 500 which identifies the template corresponding to FIG. 4A. FIG. 6is a simplified illustration of a single context node (corresponding to one of the small circles in FIG. 5A). Each context node includes an interpretation tree parent pointer, an interpretation tree root, a geometry tree parent pointer, geometry tree root pointer, and additional context information as described below. Later, we describe the use of “spacers” as a proxy for a plurality of unmaterialized context nodes, and a relative b-tree structure for encoding the interpreter context tree. First, we further explain important elements of the servo language. - The principal types of declarations are: - Schema - Labels - Abstract interface objects - Storage declarations - Concrete interface objects - Abstract interface objects - The currently preferred embodiment of the invention uses XML Schema Definitions as the core vocabulary for defining schemas. In principle, other means of specifying schema can be used. In addition, the invention provides enhancements, discussed later, which enhance the usefulness of schema in the context of the invention. - One such schema enhancement is called a “label.” Here, a label is one or more strings or other objects such as sound or images associated with an element, attribute, model group or other component of schema that provides identifying information for use in a user interface. - Objects declarable as labels include, but are not limited to: - strings representing natural language - bitmap or vector graphics - voice recordings - other sounds, specified by any means - Textual labels of various lengths and languages can be provided by a servo author using the example syntax provided in Appendix A. Shorter strings can be used as abbreviated labels, longer strings or element subtrees as descriptions or documentation providing user assistance. Labels are accessible to servo authors through an additional XPath axis or an element vocabulary within the schema axis. Labels are particularly useful for creating servos that must deal with a large number of different languages and/or schemata. - Additionally, certain abstract interface objects provided by the invention make use of labels to generate user interface objects based on the language context without need for explicit reference to labels via XPath. For example, the <> element in the sample syntax provided in Appendix A specifies a labeled editable field, without need to explicitly reference the label. Other abstract interface objects can make implicit use of labels, including, but not limited to: - tables with headings - tool bars containing icons representing actions - voice prompts - A storage declaration is the means by which a servo author reserves named persistent storage for use by the application and any other servos that may be authorized to access the data. Storage declarations define a locally scoped name for a persistent storage tree, and specify the schema to which the storage tree must comply. Note this refers to a data schema, not a servo declaration schema. An example of a storage declaration is shown in Appendix “B.” - The lifetime of external data varies. Old stock quotes (complete with time stamps) have unlimited shelf life, but little utility to the individual who wants only the latest quote available. Weather forecasts may be updated several times a day, weather observations several times an hour. Personalized recommendations from a commerce site may not need to be updated more than a couple times a week. For these reasons, the author can declare the lifetime of data obtained from a given source. The lifetime may vary for different elements of the schema. - Collaborative datasources, such as a shared shopping list, may require a synchronization check each time they are accessed. The author declaring the storage can also declare the form of acceptable queries and the corresponding query results. An illustrative example of a storage declaration schema is shown within the schema of Appendix “A.” - A concrete interface object is an interface object representing a specific object which interacts with a human or another software component. Some representative concrete interface objects include, but are not limited to: - 1. an editable text field - 2. a voice prompt answerable with a voice response - 3. an XML message sent to another subsystem, answerable by an XML response message - 4. a one-of-n choice presented as radio buttons in a graphical user interface - 5. a one-of-n choice represented as a pull-down or pop-up menu in a graphical user interface - 6. a one-of-n choice presented as voice prompt offering choices that can be made by typing keys on a telephone or other device - 7. a paragraph element (“p”) in XHTML - 8. an XSL formatting object - A graphical user interface specified by the invention can use elements from a number of vocabularies, including XHTML, XSL. In particular, the common user interface elements, including, but not limited to forms, buttons, menus, editable fields and tables may be specified using any number of vocabularies. In addition to common graphical user interface objects, the invention provides for use of concrete interface elements that provide structural views of information. For example, the invention provides concrete interface objects that expand to display additional concrete interface objects in response to user commands. These devices are particularly useful when combined with opportunities, described below. The combination of these interface devices with the method of “view declaration” provides particular advantage to users of handheld devices with small displays. - Recall that an abstract interface object is an object that provides information to an end-user or other software component. An abstract interface object can be used to enable an end-user or other software component to provide information to the software system, such as enabling a choice or providing information in answer to a particular question. The invention enables developers to specify a user interface or software interface using abstract interface objects that are independent of the specific concrete interface object that may be used when the specification is interpreted, for example when a servo is executed, in the context of a particular device or system. - Example abstract interface objects include, but are not limited to: - choice of 1 of n options - choice of m of n options - ability to specify a value for a specific instance of an object defined by schema, providing the current value as an option - progressive disclosure of means to specify information - a group consisting of other abstract interface objects - Specification of an interface using abstract interface objects is preferred over concrete interface objects because it enhances the portability of the interface thus defined. - Recall from the glossary that a view is essentially a selected grouping of transformation rules and components that define all or part of the interface for a servo. It is implemented by a view declaration. View declarations define how the application interacts with the user or another software component. See Appendix “A”. - The declaration of views uses the XSLT vocabulary combined with extensions. The XSLT input trees correspond to declared data storage areas, and the output tree is a specification of the interactive user interface. XSLT provides a mostly declarative vocabulary for transformation. The invention allows the input tree to be modified as a consequence of transformation, as described in Specifying data modification. Use of the XSLT vocabulary does not imply a batch process for transformation. The invention uses the vocabulary to specify an interface that changes dynamically as the preconditions of transformation change. This process is described in the Interpreting servos section. - Example extensions include, but are not limited to the following elements, as illustrated in the schemas which accompany this document as Appendix “A”: - the <view> element - use of variables, available to XPath, which identify input trees corresponding to storage declarations - an additional XPath axis providing access to schema information corresponding to the addressed data - an additional XPath axis providing access to origin and change history of the addressed data - an additional XPath axis providing access to opportunities (as defined below) associated with the addressed data - Operation of such extension is described in the interpreting servos section. - Recall that an opportunity is an association between a unit of schema and a view that can provide a means to interact with, modify, or apply an operation to, an instance of that same schema. Opportunities thus allow servo authors to declare schema fragments for which the servo can provide alternative views or special actions. - For example, any number of applications can register the fact that they provide actions that can be performed on the address element of a particular schema. These actions might include indicating which side of the street the address is on, providing latitude/longitude coordinates for the address, providing driving directions from the current location, providing distance from current location, providing a map of the area, etc. These actions are available to users of applications that display data encoded in the target schema element. This is done both by allowing these templates or views to be called directly and by run-time discovery of options and display of those options to end-users of the servos. - Opportunities associated with objects are available to XPath expressions as functions and/or variables. This enables servo authors to discover opportunities at run time. - The invention can also be configured so that the declaration of an opportunity results in the opportunity being listed with certain abstract interface objects so that choices are presented to the user when the user is interacting with the object, regardless of what servo is being employed. - Operation of servos can be further illustrated with the following electronic commerce example. The kind of information that consumers may keep in databases managed by the invention can be extremely useful in predicting future purchases. For example, a list of “movies I want to see” can be interpreted as a list of “movies I want to buy.” Any number of commerce servos can be introduced into a pre-existing set of servos, and be readily accessed from views of those servos by making use of opportunities declared by those servos. - Take for example a servo “A” that manages a personal library, and reading list. The schema for the personal library contains various views of book metadata which utilize a schema that includes a <book> element that has an ISBN attribute. A servo can be added which declares one or more opportunities associated with the <book> element, and offers a means to submit and track orders to one or more book suppliers. Additional servos can be added to track all outstanding orders and to provide status information from various carriers. - A servo can be used with a datasource that maintains user preferences and makes recommendations, such as is found at moviecritic.com or amazon.com. Such a servo can maintain a client side list of movie ratings that are reconciled with ratings found by crawling the site using the user's authentication credentials. When used in combination with servos oriented at purchasing or viewing products, the end-user is able to use multiple rating engines with the same set of ratings, and any set of purchasing servos. Thus, the invention can be used to help create electronic markets. - Consistent with the presently preferred embodiment, operations that must be performed as a transaction are enclosed in transaction tags in the context in which they are defined. Operations not so grouped are processed individually. Transactions can be nested. Editing actions are specified using an XML vocabulary that covers operations that can be performed on an XML tree, such as the editing actions supported by the W3C Document Object Model. - The sample schema given in Appendix “A” provides an example of an access control model. The invention can be used with any number of access control models. Access control can be based on your role in a community, the specific data-types you are modifying, their context in the database and the control attributes on those objects or their closest container having such control attributes. - The preferred embodiment of the invention uses a native XML database with persistent, incrementally updated indices. Alternative database representations can include a simple XML file parsed and loaded at application startup, as well as a relational database. - The logical structure of the database is described here as if it were represented in XML, although other persistent representations are preferred, as explained above. Within each database are regions, which can contain, among other data: - identification information for the database - declarations identifying other databases - servos installed in the database - named storage regions associated with servos - a transaction log with all changes encoded in XML, including the operations required to back out each transaction. - Example schema for a database is given in Appendix “A.” This description applies to both local storage and server replica storage (see FIG. 3). - Servos are distributed on request or by pushing the servo definition into the offline store. The servos preferably are stored in parsed form. Alternatively, servos can be stored as unparsed XML. Alternatively, servos can be stored and distributed in a compiled form, such as would be used on a virtual machine, or actual machine code. Storing the servo as the data is stored enables the same tools to be used for storing, indexing, accessing, transporting, synchronizing and transforming the servos. However, compiled representations can be used to achieve better performance and/or smaller size. Byte code or machine code representations can also be translated on device from unparsed XML or parsed XML forms and cached locally to improve performance. The invention is described here in terms of the parsed representation option. The incremental interpretation process described below applies to XSLT transforms as well as servo definitions. The interpretation process requires first understanding the concept of “context” in greater detail. - Basically, an interpreter context is an object which encodes, through its own data and the objects it references, the internal state of the interpreter at a specific point of evaluation. - The component that performs the incremental transformation is called the transformer. The internal state of the transformer is organized into records, here called contexts. Each context specifies the evaluation state of the transformer at a specific point in the transformation process. Each context node can include at least the following content: - 1. A pointer to an element within the servo template, XSLT stylesheet, or a node within an abstract syntax tree of an XPath expression contained in the transform. This serves as a program counter. This representation can be XSLT source, a parsed XSLT transform or a further compiled representation such as instructions in an XSLT virtual machine or hardware machine code. - 2. A pointer that identifies the data context. It is a pointer into a source tree, or a result tree fragment. - 3. An ordered list of zero or more contexts that provide a symbol space for the context. Intermediate contexts in the tree that do not introduce symbols may be skipped. - 4. A reference to a parent context. - 5. An ordered, potentially sparse, list of pointers to zero or more child contexts. In the preferred embodiment, these child contexts are contained in a data structure that encodes the relative order of materialized contexts, including the ability to encode unmaterialized regions of unknown size. An example of such a data structure is a b-tree in which materialized contexts and objects representing regions of unmaterialized contexts are maintained in order. This b-tree structure is further described below. - 6. Definitions for symbols introduced by the context. Such symbols include variable declarations, parameter declarations, template declarations, view declarations, and data declarations. Contexts preferably encode the differences from the symbol context inherited from their parent(s). This includes declarations of variables and templates. Thus each context, through use of its parents, represents a complete XSLT processor state. - 7. In the preferred embodiment of the invention, the transformer contexts also encode the current synchronization state. - 8. Further, contexts can reference cache result trees, along with signatures of the types of changes that would invalidate the cached results. These result trees can represent components of a user interface. - Unmaterialized regions can be of known length or unknown length. Unmaterialized regions of known length encode the length of the region as a count of missing contexts. This is called a spacer in the context of the execution tree. Unmaterialized regions of unknown length are distinctly marked and include an estimate of the number of missing contexts. When there is no basis for a better estimate, a predefined constant is used. - As mentioned earlier, a spacer can be used as a proxy for a plurality of contexts. There is an obvious savings from not creating context nodes that may never be needed. We leverage the b-tree structure to implement spacers. In a presently preferred configuration, we make use of the interior b-tree nodes, which are called interior whorls in FIG. 8, to stand in for the unmaterialized context or contexts. That's done within the interior socket structure as described with reference to FIG. 7, as part of an interior whorl, setting selected bits and, instead of having a pointer down to a context, use that same storage to indicate that the context is unmaterialized (which is to say it doesn't actually exist) and to indicate the number of unmaterialized objects that correspond to that object, as well as estimates for metrics such as the sum of Y span element (b) in FIG. 7. - Also, at each level of the b-tree of FIG. 9, there are a set of flags which are used for a number of different purposes. Specifically, these can include flags that are searched for as part of the search criteria in the execution algorithm. Among them is a flag that indicates whether there is unmaterialized context (spacer) present. The search criteria of the algorithm can quickly investigate whether there is a context that needs processing, or it can search at the same time for unmaterialized contexts. (An unmaterialized context can be thought of as a context that needs a lot of processing. It doesn't even exist yet.) - When first processing an instruction that contains child instructions, an unmaterialized region (represented by a spacer) is created to correspond to all of the child instructions. Searching for unmaterialized regions is the primary mechanism that draws the process forward and causes new contexts to be created. Here the term “region” refers to a fragment of the geometry tree, corresponding to a region of user interface, for example a portion of a screen display. - Once an unmaterialized region is found that satisfies a given set of criteria, for example whether it is on-screen, the interpreter can “carve off” a piece, at one of the ends or in the middle, creating a new spacer or adjusting the existing spacer as required to accommodate a new context node. - In a presently preferred embodiment, the context tree is implemented using a relative b-tree configuration. FIG. 7illustrates a single interior relative b-tree node entry. Each entry in the interior nodes of the b-tree contains a value that is the sum of the lengths (including materialized contexts, unmaterialized regions of known length and unmaterialized regions of estimated length), and a flag indicating whether estimated lengths were included in the sum. These values are paired with a pointer to the corresponding child node in the b-tree. - Referring to FIG. 7, the interior socket 700 includes (a) the sum of number of context node children, transitive over interior whorls, but not beyond the first level of context nodes. Second, element (b) contains the sum of y span, transitive over all levels below. The next element (c) comprises flags, including the union of all flags below. And finally, element (d) is a pointer to the child (context node or interior whorl) or cached information about unmaterialized object. FIG. 8illustrates an interior whorl of the b-tree 800. In this interior node, each column 802, 804, 806 defines an interior socket as described with reference to FIG. 7. FIG. 9is an illustration of a portion of the interpretation tree. It includes a parent node 900 and plurality of child nodes, for example, 902 and 904. The relative b-tree implements interior sockets 910, 912 and 914. In this system, each Wt node can be parented separately by an interpretation parent and a geometric parent. In turn, each node can separately parent an interpretation tree and a geometry tree. - Additional fields can also be included in the b-tree node entries to track other linear values such as the y-axis of an image space that can be mapped to a graphical display. For example, this can correspond to the height of a PDA screen display. This data structure enables contexts to be materialized incrementally. Relative addressing can be performed in logarithmic time within spans of the tree that do not contain unmaterialized regions of unknown size by walking the tree structure in the usual manner of traversing a relative b-tree. Similarly, absolute addressing can be performed in logarithmic time up to the occurrence of the first unmaterialized region of unknown size. Estimates derived from this data structure can be used to satisfy scroll car sizing requirements of graphical user interfaces. In short, use of this data structure supports an incremental execution process that enables immediate output to the mobile (or local) user regardless of the actual database size. - Contexts are evaluated by performing the operation implied by the instruction addressed by the program counter and reflecting the result in a context. The details of the evaluation depend on the specific instruction. Evaluation actions include, but are not limited to the following: - Variable and parameter declarations introduce symbols - Literal elements copy themselves to the output tree - Concrete interface objects are passed on to the user interface subsystem - Abstract interface objects are mapped to concrete interface objects using platform-specific rules - Control constructs such as <xsl:if> determine which subtrees are processed further. - Creating a spacer to correspond to all child instructions. - Several techniques can be used to encode the types of change that could require that a context be reprocessed. Some optimizing space, some optimizing the amount of unnecessary computation that is performed due to false positives. - A context may express dependency on instances of a specific schema component. For example, an interpreter context which references an XSLT statement including an XPath query that depends on the value of “y” attributes on “x” elements could register interest in any changes to values of “y” attributes on “x” elements (including the addition or deletion of “x” elements having “y” attributes). - Dependency on source trees can be generalized to an XPath expression identifying nodes whose change would require reprocessing. Such an expression can be derived from XPath expressions in the stylesheet element addressed by the program counter. - An interpreter context whose processing result depends only on its ancestors and the value of a single element or attribute instance can register an interest in that element or attribute instance. - All contexts express dependency on XSLT stylesheet content through their program counter. - Contexts are marked “unverified” when state they depend on has changed. When a context is marked unverified its ancestor contexts are flagged as having unverified contexts below. This can be used to rapidly search the context tree(s) from the root(s) for contexts that need to be reprocessed. - When an unverified context is selected for processing several techniques can be used: - The context can be evaluated, as if for the first time, in its current context. The result of the evaluation is then compared to the previous value. - The change log can be examined to determine the precise nature of the differences. In some cases (false positives) these differences can be determined to not affect the result. - The change log can be examined to determine a corresponding change to the output tree. For example, consider a query returning a somewhat different set of nodes. Nodes in only the new set must have corresponding contexts created. Nodes in only the old set must have their corresponding contexts deleted. - In each case, if the context has an identical effect to its previous evaluation, no changes need to be made to the output tree, and no child contexts need to be marked as unverified as a result of the evaluation. If the result of reprocessing is different than the previous processing, any child contexts that could be affected by the difference are marked unverified. - The choice of what context to process is driven by specific objectives of the software component requesting the transformation. These objectives can include: - The need to fill the visible portions of a formatting object with content. - The need to satisfy a request for data that is the object of a schema translation. - Processing to prepare the system to respond quickly to likely user action. - In order to effectively manage limited resources the processor can release contexts and associated fragments of the result tree. Release objectives can be set to work within a specific memory quota. The invention chooses contexts for release by scoring on the following criteria: - Contexts nearer the interior of the tree are preserved over contexts near the leaves of the context trees. - Contexts which required substantial computation on previous evaluation are preserved over contexts which required little computation. - Contexts which are responsible for less memory are favored over contexts which use more memory. - Contexts which carry decorations of the source or result trees are maintained as long as possible. - The present system preferably validates data modification by rejecting editing transactions that violate data schema constraints. The entire effect of a transaction should be considered when validating schema constraints, although individual actions that make up a transaction need not yield valid intermediate results. In particular, the net result of a transaction must leave attribute values that match their declared type, and element structure that matches declared content models. The existence or absence of attributes should also be consistent with the corresponding element declarations. - The present system also rejects transactions that would violate access control constraints (described below) and transactions that would violate data mastering. And the present system ultimately rejects transactions that are rejected by the datasource that masters the data (described below). For maximum security, read and modification control are enforced by the mastering datasource. Modification control is enforced on the client and offline replicas as well as the server. By default, the database in which an object was created is the master for purposes of replication. This means that any modification that can be achieved through that creating database (subject to access control mechanisms) will be propagated to replicating databases as synchronization points with those databases occur. - The invention can be used in combination with encryption technology to provide private and authenticated access to remote datasources. For secure applications, the server preferably authenticates user identities when processing requests from client devices. User authentication is central to higher level security mechanisms such as access control and management of paid subscriptions. - Embodiments of the invention also can use digital signatures to ensure that servos delivered to a client device come from a trusted source. This model is similar to Microsoft's ActiveX control distribution model. A preferred implementation also provides robust access control mechanisms for users, content, and applications. Users can be given rights only to certain applications or certain data delivered to those applications. When a user registers for an application, his or her access rights are checked against the application's rights list that has been set up by the application's administrator. If the rights match, the user is allowed to install the application. Content can be given access control lists as well, enabling a single application to serve different levels of content to users with different access rights. For example, within a corporate environment, only the authorized accounting staff would have rights to see quarterly revenue figures before they were announced, but everyone could see historical figures that had been previously publicly announced. - Application Rights Arbitration: The client can be configured to provide a mechanism similar to the Java sandbox concept wherein the client enforces a set of rules governing what types of actions an application can take. Rules govern things such as writing data to the device's persistent storage, interacting with built-in applications, and requesting data from foreign servers. The user controls the settings for these rules. For instance, the user may allow any action to be taken by an authenticated application whereas tighter security might be applied to untrusted applications. - Content Authentication and Version Checking: In many business situations it is crucial to establish that information has not been tampered with and that all information components are of the right version. Consider, for example, the management of aircraft maintenance publications where it is required that all current service bulletins be displayed and that no un-approved documentation is displayed. The present system enables this to be accomplished through the synchronization processes by authenticating all messages through encryption, including messages which define configurations that are current as of a particular time. - Schema authentication and Version Checking: The invention can be deployed in an environment where servos come from a variety of sources. It is valuable in such cases to establish that new schema contributions come from authorized sources and that incompatible versions of schema components are not used together. Details of implementation will be apparent to those skilled in the art in view of this specification. - The server environment of the present invention also enables content and application providers to support subscription or pay-per-use distribution. Content providers might require a paid subscription to access their content. Thus, users' subscription status can be taken into account when checking access control. Content providers might also provide pay-per-use content. Embodiments of the invention can provide the infrastructure for collecting payment and ensuring that pay-per-use content is only delivered to paying consumers. Again, the particulars of implementation will be apparent to those skilled in the art in view of this specification and in any event will vary with the particular application. - User Management of Asynchronous Transactions: End-users can engage in asynchronous actions and can inspect the status of the task queue. The preferred embodiment includes user interface components that indicate when there are transactions in the following states: - Transactions that have not yet been communicated to the server or datasource. - Transactions that have been communicated to the server but for which no response has been received. - Transactions that have been confirmed by the server as having been successfully completed. - Transactions that have been rejected by the server but have not yet been viewed or acknowledged by the user. - Rejected transactions that have been viewed by the user but have not been acknowledged. - These interface components enable the user to quickly determine what tasks have been carried out or need to be attended to. In addition, the system can provide user interface components that enable users to initiate operations on transactions that are accessible through the interface described above. These actions include, but not limited to: - Viewing transactions in the context in which they were created - Canceling transactions - Modifying transactions that have not yet been communicated to the server and - Coping and modifying a rejected transaction to be resubmitted - Embodiments of the present invention employ two kinds of identifiers: public identifiers and system identifiers. Public identifiers preferably are URLs for resources assigned through the authority of the registrar (330 in FIG. 3). System identifiers are UUIDs assigned without the assistance of any central authority when resources such as databases are created. - The invention implements a form of self identity as well as public identity because not all resources can be known to a public authority at all times. Take for example an environment in which the client has been distributed to a group of individuals without current network access. These individuals must be able to begin using the application right away. Using the application, however, can involve sharing information with others. The identity of this information needs to be tracked as it travels, so a unique identifier must be assigned. Since a central authority cannot be contacted to assign a public identity, the system must fall back on using self identity to track the information. - For reconciliation accuracy, performance and automation, units of information managed by the invention can be uniquely identified at a fine degree of granularity. Indeed, the reader will no doubt appreciate by now that the granularity is limited only as defined by the applicable data schema. Unique identifiers must be scoped in such a way that no global mechanism is necessarily required to dispense the identifiers. (Note however that a central authority could be used when available.) - Local and Global Identification with Short Forms - Recall that a “database name” is a string that can be used within one specific database as a short handle to another specific database. A short identifier is another term for a database name. The databases that the invention manages are identified using UUIDs. - Fully qualified global identification of an object is achieved by combining the identity of the creating database with a UID scoped to that particular database. When within their originating database, objects can be identified using a locally unique identifier. Within a specific database, other databases can be referenced using short identifiers which are scoped within the referencing database. Short identifiers can always be normalized to long forms to provide location independent identifiers. - In the sample embodiment, individual elements are identified with the “ts:u” attribute, which is an attribute defined on a “base” archetype that facilitates management of data. All element types that require generalized unique identification are below “base” in the implied inheritance graph. The value of the “ts:u” attribute consists of a database name followed by a local id, with a “#” separator. For example, in a database where “b” is a declared database name, an element with identify “2” scoped to within database “b” will have a “ts:u” attribute value of “b#2.” Database names are declared with the <dbDecl> element in the sample schema found in Appendix A. - Elements which are not identified with the “ts:u” attribute are associated with the nearest ancestor that has a “ts:u” attribute. Such unidentified elements can be accessed with XML linking mechanisms. These pointers would be relative to the nearest ancestor with a “ts:u” attribute. UUIDs are encoded as a string of 32 hexadecimal digits with a dash delimiter (“-”) after the 8th, 12th, 16th and 20th digits. For a full specification see the Distributed Computing Environment [DCE] specification. For example: “2facl234-31f8-11b4-a222-08002b34c003” is a valid UUID. - In the sample embodiment, the UUID for a database is indicated by the “systemId” attribute on the first “ts:dbDecl” element. The UUIDs for other databases having elements replicated in this database are given as “ts:uu” attributes on the “ts:db” element under the “ts:imports” section. The local alias for each database is given as the “ts:localAlias” attribute of its “ts:db” tag. (See the schema in Appendix A.) - While it is possible to fully normalize all data represented in XML so that it is isomorphic to a normalized relational database representation, and while this is an appropriate encoding for many applications, the invention does not require that this be done with all data. This allows document-oriented characteristics to be provided by the invention. These characteristics include presentation order for elements and choice of primary location for multiply referenced objects. - It is also suggested that instances of the invention include an aliasing mechanism that allows the structure of cross-linked XML to be fairly thoroughly checked with existing XML tools and allows the structure of the database to be easily interpretable by humans. In the sample embodiment, the “base” archetype includes an “ts:aliasOf” attribute of type IDREF which can be used to reference an element of the same type whose attributes and content serve as the attributes and content for the referencing element. Most processing utilities operate above the level where knowledge of these aliases is necessary. Lower level utilities expand these aliases as an automatic service. - This same reference mechanism can be the basis for element instance inheritance. This provides a means to extend data replicated from others while minimizing resolution conflicts. Low-level utilities interpret inheritance automatically. Transformers to expand aliases can be used to simplify operations for downstream transformers. When aliases have been expanded it is possible to discover the original source of the information by examining special system maintained attributes or alternative XPath axes. - Since aliases can in turn reference other aliases, information can be added at each level. Any number of subtrees can contribute to the final value of an alias chain. Since multiple transformations can be composed together it would not be uncommon for multiple instances of the same record to turn up at some phase of the transformation process when only a single instance is required. The “ts:u” attributes described above are the primary mechanism for identifying copies and eliminating duplicates when that is appropriate. By default, when an alias is expanded, the resulting element acquires the “ts:u” attribute of the referenced element. This is the appropriate action when the alias exists to facilitate a tree-oriented view of non-normalized data. - In the sample embodiment, the “base” archetype includes a “refPath” attribute of type IDREFS. When an alias is expanded, the value of the “ts:u” attribute of the referencing element is appended to the list. In the sample embodiment the “base” archetype further includes an “inhPath” attribute of type IDREFS. When an ancestor is processed that contributes attributes (other than “refPath”) or content to the expanded value, the value of the “ts:u” attribute of the ancestor is appended to the list. The values of certain attributes can apply to an arbitrary amount of information which may be grouped together in a subtree. Attribute inheritance may or may not follow aliases. Whether or not it follows is a choice of extended schema and/or application design. When possible, personalization should occur through refinement (e.g., via inheritance) rather than through making individualized changes to replicated data. - Recall that changes to data are the subject of messages between client and server, as illustrated in FIG. 3. Editing actions are specified in terms of operations on an abstract XML model. For example, operations as described in DOM. The invention creates logs of changes as they occur. Edits are placed in the log prior to being performed. This is called an edit log or transaction log. - A sample syntax for communicating change information is provided as part of Appendix A. In particular, <syncRequest> messages are requests for updates to a particular piece (or pieces) of replicated data. Information in a syncRequest may include: - The “ts:u” attribute values (discussed above) of the root nodes of the subtrees requesting an update. This information is required for every Sync Request. - The Sync Packet Id of the last committed Sync Packet. This Id contains the sequence number or date/time value of the last committed Sync Packet. This is to request edits that have occurred since this synchronization. In the absence of this attribute, it is assumed that the first Sync Packet in the edit log is the starting point. - The Sync Packet Id of the last Sync Packet desired. This is an optional attribute. - The document identifier for the document we are requesting edits to. In the absence of this attribute, we assume that the request is for edits to the local document. - Whether the document identifier is a public Id or a UUID. This is basic typing information needed to resolve the document identifier attribute. - A collection of “ts:u” attributes of subtree elements we wish to exclude from the request. This is optional. - To support tracking changes in collaborative authoring contexts, the “ts:cre” attribute encodes the creation date/time of an instance of an element derived from the “base” archetype. The “ts:mod” attribute encodes the modification date/time. Modification is defined as a change to attributes (other than “ts:u”), the generic identifier, the order or existence of immediate children, and change to any children not enclosed in one or more elements derived from the “base” archetype. Changes to the “ts:mod” attribute are not themselves considered changes (this rule may have no consequences since modification does not propagate beyond the first ancestor element derived from the “base” archetype). - The “ts:cre” and “ts:mod” attribute values are based on system clocks and cannot be trusted as the basis for synchronization, which brings us to synchronization points, described next. - We define a “synchronization point” as an event during which a database incorporates information from another database, thus instigating replication or updating previously replicated information. Each database has its own linear address space of synchronization points, preferably represented as strings of decimal digits. Every database will maintain a record of at least the latest synchronization point with every database from which it imports replicated information. Every database will maintain a record of at least all synchronization points at which information may have been replicated out. - While it would be useful to keep specific bidirectional records of all synchronization points, this record keeping may be impractical for certain databases, for example, a database which serves as a template for a large number of other databases or a database on read-only media. Even if this information is always kept, the system must be constructed so as to recover transparently when such export information is lost. In a presently preferred implementation, databases are not required to keep a record of all databases which have received information replicated from them. Not keeping usage (reference) counts with transactional integrity implies a need for a distributed garbage collection solution. Otherwise, one may not know how long to keep synchronization packets. The server environment should provide a mechanism for distributed garbage collection. This must not interfere with the moment-to-moment operation of the system at any time. - In our sample schema, the “ts:sy” attribute carries the synchronization point identifier of the first sequence point following the element's creation or modification. Modification is defined as in the definition of the “ts:mod” attribute. The “ts:sy” attribute is found on the “base” archetype. The type of the value of the “ts:sy” attribute is a string representation of an integer. The “ts:db” elements under the “synchronization” element in the “context” section hold the synchronization point log for each database. See the schema of Appendix “A” for details. - Scope of replication can be controlled as well. A database can explicitly disallow specific data from being replicated outward. A database can explicitly disallow specific data from being replicated transitively (i.e., more than one level). Since it is easy to copy data, constraints on replication and transitive replication are not strong security measures. They are more a way of indicating, in an advisory manner, how the originator of the data wants the data to be handled. Control parameters apply to the transitive closure of a subtree through containment, but not through aliasing or other linking mechanisms. This is a case where the physical structure of the database dictates control semantics. Items referenced in the structure are not necessarily protected by protection of the structure. - Copies created using the services of the present invention preferably carry a copy attribute that indicates the identity of the originating element. The syntax of the copy attribute values is identical to that of the “ts:u” attribute value described above. When a subtree is copied, a copy attribute is applied to each element deriving from the “base” archetype, identifying its corresponding source. Changes from other datasources are captured using one of a variety of techniques. Examples are: - Tracking change during modification in the manner of the invention - Comparison of backing store - Query based on modification stamp. - Use of database logging facilities. - The present invention also very effectively addresses (data) schema incompatibilities. A “schema translator” is a software component that converts data of one specific schema to data of another specific schema. Each version of each schema component preferably has a public identifier. One or more schema translators can be associated with a directed pair of schema component public identifiers. A flag for each registered transformer indicates whether the transformation involves a loss of data. Transformer pairs that have been successfully reviewed and tested for lossless round trip are registered as such. “Registered” alludes to the registrar described above with reference to FIG. 3and further described below. Each version of each application registers the schema components it depends upon. - Consistent with the present invention, client-client, client-server and server-server interactions are all mediated by XML based messaging. (Alternative encodings can be used when both parties can handle them.) Standard encryption and authentication methods can and should be used to manage these messages. The invention relies on XML and transmission of other file types. This messaging can be accomplished using a wide variety of transmission techniques, including but not limited to the following protocols, technologies and standards, individually or in combination: - TCP/IP - Hypertext Transport Protocol (http) - Secure Hypertext Transport Protocol (https) - SOAP - Secure Socket Layer (SSL) - Short Message Service (SMS) - Cellular Digital Packet Data (CDPD) - GSM - GPRS - The server environment described above preferably also includes an incident reporting service. This is a service that tracks subscriptions to events associated with particular resources, and informs subscribers when those events occur. Services of this type are used most commonly to track dependencies between source databases and databases which replicate parts of the source databases. - The incident reporting service manages relationships among resources known to the system of the present invention. Database representations of these relationships are generally redundant, in that the receiving database knows about its relationship with the originating database. (The originating database may also know about the receiving database.) In other words, database relationships are peer to peer, but the incident reporting service facilitates communication between peers which may not be active at the same time. The purpose of the incident reporting service is to facilitate the timely update of databases, but it has no responsibility for actually scheduling tasks. Clients of the messaging service can use the incident reporting service as a sort of “group” mechanism: messages sent to the incident reporting service referencing a resource are can be forwarded to each of the subscribers to that resource. - Another component of the server environment in a preferred embodiment is a task allocator (not shown). This is a service that assigns interactive tasks to available computational resources based on suitability to purpose (server load, access to resources, availability of cached database or session state). The allocator depends upon the location broker to identify the location of resources and the performance monitor to determine the load on each of the relevant resources. - Another component of the server environment in a preferred embodiment is a task scheduler (not shown). This is a component that assigns computational tasks to available computational resources based on priority, aging, deadlines, load, etc. Typically the tasks handled in this manner are not in response to a current user request. In order to deliver optimal performance in response to user requests, the invention makes provision for accomplishing preparatory tasks in the background on a time available basis. Tasks likely managed in this way by the scheduler include, but are not limited to: - synchronizing databases - creating usage reports - optimizing databases - garbage collection of unneeded synchronization packets - crawling personalized web sites - The task scheduler depends upon the task allocator to actually allocate tasks. - The location broker is component of the invention that can accept an identifier for a resource and provide means to contact that resource. Resources are not necessarily local to the broker. - A directory of services is a location broker that provides a navigable index of available services. Location broker services are needed to find database resources in the computing environment of the invention, where resources are distributed and location is subject to change. - Given a public identifier or system identifier the location broker provides access to the resource. In the simplest case, the location broker will satisfy a request by posting the URL given in the public identifier. This a degenerate case above which the location broker can add value. For example, the location broker can cache responses. - The registrar, as mentioned above, manages metadata to enable the invention's distributed deployment. Data associations managed by the registrar include, but are not limited to: - schema public identifiers with schema representation - schema translators with directed pairs of schema public identifiers, including flag for whether translation loses information - datasource public identifier with datasource declaration - servo public identifier with servo declaration - each servo public identifier with the public identifiers of the schemas it uses - each servo public identifier with the public identifiers referencing schema referenced in the opportunity declarations of the servo - The registrar provides mappings across these associations use standard database techniques. In addition, the registrar calculates best translation path (if any) between any two schema public identifiers using standard graph analysis algorithms. - The synchronizer (310, 326 in FIG. 3) is a component that uses transaction records, synchronization point identifiers, time stamps, datasource declarations and database instance-specific parameters to maintain correct database state across a set of related databases managed by the invention. The synchronization point identifier is an identifier uniquely representing a synchronization point for a particular database. Recall a synchronization point is the event during which a database incorporates information from another database, thus instigating replication or updating previously replicated information. - The synchronizer is responsible for generating and processing the synchronization packets that mediate the propagation of change among components of the invention. Data source declarations influence the timing of synchronization by indicating when and how quickly data becomes obsolete. Time stamps are relevant when time-dependent data such as stock quotes need to be reconciled. - The invention's default approach for distributed lock management is optimistic but centralized (on a per-resource basis). Authority to attempt change is given by the datasource declaration. Change is implemented through one or more protocols published with the datasource declaration. These protocols can be classified as either low-level “data oriented” XML modification protocols, such as TSXUL, or higher-level “object oriented” actions which can be schema specific. - There are several levels at which a change can fail: - Data source does not accept changes: The datasource may choose not publish a protocol for accepting changes. Such a datasource is effectively read-only. - Individual requesting change does not have authority to make changes (at all) to the data. This is commonly called resource-level access control. - Individual requesting change does not have authority to make the specific changes (e.g., some fields are editable, some are not). In such a case, specification of what can be changed should be made available as part of the change protocol so that upstream clients don't lead individuals to attempt changes which will ultimately fail. - An individual's changes conflict with other changes made since the conflicting sources have a common baseline. For example, two people change the same text field. - The invention attempts to detect these issues as far upstream as possible, but the datasource itself must do it's own checking - There are a number of ways a client can deal with “rejected” changes. The last two in the list below are used by the preferred embodiment of the invention. - Transaction managed client aborts transaction, changes never committed. - Client does not apply actual change to its store before submitting the request. The requested modification continues to have unofficial status (e.g., modified data in a form). - Client has already implemented update but applies “undo” info saved as part of synchronization point. - Client receives update synchronization packets as response, updates, shows user conflicting information, provides opportunity to retry. - Replication of data must be carefully managed as well. In accordance with the invention, a transient database is used, i.e., a database that exists only for the purpose of messaging. - Databases that are created only for the purpose of transmitting data are transient: they need not have public IDs because it is not always required for them to be publicly addressable. They do have system IDs. Whenever data is replicated the root element(s) of the replicated data carry a master” attribute which contains the local alias of the master. Even when new elements are created in subtrees that are mastered elsewhere, they are give “ts:u” identifiers as mentioned above for the database in which they are created. In situations where a client that is originating edits is in direct communication with the master, the “ts:u” attributes can be normalized to new identifiers scoped to the master. This is useful in situations where it is important to protect the privacy of the originator of the data. It also reduces the number of database declarations that need to be created in context/imports. - Applications and their underlying schema are expected to change over time. New capabilities are added, ambiguities are fixed and sometimes applications are simplified. Since multiple applications may be depending on different versions of the same schema, the invention must employ the schema translation infrastructure to keep everything working through a transition. - The invention's ability to manage schema and schema translation means that new schema need never be forced on users unless they want capability enabled by the new schema, or enabled by applications that work only with the new schema. Each database automatically maintains lists of imported (used) schemas. (Automatic tracking for the dependencies that instance data has on schema). - The scheduling capabilities of the server environment are used to identify and applications requiring update and taking them through the needed transformations in a timely manner. - Managing Schema Registered with the Invention - To fully benefit from the integration made possible by the invention some human coordination is valuable. On their own, different application developers will extend the base schema in multiple incompatible ways. To see how this works in practice, consider the hobby of collecting. There are some shared characteristics for how collectors operate, but the objects collected are diverse, as are the sources of information. Everyone involved in stamp collecting needs a common way to describe the artifacts. Individuals and vendors need a way to document their holdings, offer items for sale and identify items available for purchase from others. Information about individual artifacts and issues can come from publications, news groups, clubs, commercial websites and websites of issuing governments. Many of these parties have reason to have an interest in having their information accessible in a public deployment of the invention, if the environment has already been well seeded and it is reasonably easy or them to make the necessary extensions. - All that is fine until different people begin extending the system in mutually incompatible ways. For example, two different parties might extend the schema in different ways to cover the same missing data, say the name of the designer of a stamp. Now there are some views that expect the designer of a stamp to be encoded one way, and some views that expect it to be encoded another way. Neither set of views will be able to show data that is in the encoding expected by the other. Reconciling these differences requires human intervention to establish a standard schema, map the variant schemas to the standard schema, and transform the views accordingly. The role of “schema editor” is a service that keeps the system running smoothly by arbitrating the preferred schema. Third party schema developers can create, using the invention, the transformers that translate instances between schemas. Throughout this process, each variant of a schema can registered and authenticated through the processes described above. -. Claims (19) - 1. (canceled) - 2. A computer-implemented, incremental process for executing an application servo on a mobile computing device in a distributed computing system for intermittently networked devices based on a specified set of matching criteria, the process comprising:selecting a servo to provide services, the servo stored in local storage on the mobile computing device, the servo comprisinga declaration of a datasource on the mobile computing device, the declaration including or referencing a schema, anda declaration of a view, the declaration including a plurality of transformation rules mapping schema components to abstract interface objects or concrete interface objects;identifying on the mobile computing device, the datasource defined by the declaration of the datasource in the selected servo;initializing on the mobile computing device an execution context tree structure by creating a root node of the context tree associated with an initial transformation rule of the servo;choosing a context of the context tree that satisfies the matching criteria;executing a transformation rule of the servo associated with the chosen context;responsive to said executing step, creating zero or more new child contexts in the context tree, each new child context including content defining a current internal evaluation state of the process;repeating said choosing, executing and creating steps over subsequent transformation rules of the servo until no context satisfies the matching criteria;responsive to changes to the datasource, marking dependent contexts as unverified;choosing a marked context of the context tree;performing a transformation rule of the servo associated with the chosen context;responsive to said executing step, creating zero or more new child contexts in the context tree and removing or modifying zero or more existing child contexts;unmarking the chosen context and marking zero or more dependant contexts as unverified; andrepeating said choosing, performing, creating, removing, modifying, unmarking and marking steps over subsequent transformation rules of the servo until no contexts are left marked. - 3. A process according to claim 2wherein the content of the child context includes:a pointer to an element within the selected servo; anda pointer that identifies a current data context by pointing into a source tree. - 4. A process according to claim 2wherein the content of the child context includes;a reference to a parent context;an ordered, potentially sparse, list of pointers to zero or more child contexts; and definitions for any symbols introduced by the context. - 5. A process according to claim 2further including, responsive to said executing and performing steps, creating zero or more child spacers in the context tree representing unmaterialized child contexts; and wherein said choosing a context includes choosing either a context or a spacer. - 6. A process according to claim 5wherein the context tree is implemented using a relative b-tree structure, and each spacer is reflected in an interior node entry in the relative b-tree structure to facilitate searching unmaterialized contexts. - 7. A process according to claim 2wherein the b-tree node entry includes a field to track a linear value associated with a graphical display output object. - 8. A process according to claim 2wherein the process creates and maintains both the context tree and a geometry tree, the geometry tree representing the spatial structure of a predetermined graphical user interface. - 9. A process according to claim 2wherein the servo is defined using a servo definition language that references XML schema definitions as its core vocabulary. - 10. A process according to claim 9wherein the servo definition language comprises:application data schema;transformation rules; andopportunity rules. - 11. An interpreter stored in a non-transitory computer-readable medium of a mobile computing device in a distributed computing system, the interpreter for interpreting a servo definition language for defining a distributed application that supports disconnected operation on the mobile computing device, the language comprising:application data schema;transformation rules;transaction rules;transaction handling rules;interface object specifications;opportunity rules to realize automatic extension or integration of servos through opportunity-based linking of an interface component representing an instance of a schema fragment to a template. - 12. An interpreter according to claim 11further comprising access rules. - 13. An interpreter according to claim 11wherein the template specifies at least one of a transformation rule, a transaction handling rule and an interface object specification. - 14. An interpreter according to claim 11and further comprising an abstract interface object definition. - 15. An interpreter according to claim 11wherein the application data schema comprises an XML-based schema. - 16. An interpreter according to claim 11defined using XML schema definitions XSD as the core vocabulary. - 17. An interpreter according to claim 11including a view element for selecting a group of the said transformation rules to define at least a part of an output interface. - 18. An interpreter according to claim 11including a storage declaration element that enables an author to reserve and name persistent storage on the mobile computing device for use by the servo and any other servos authorized to access the corresponding data, wherein data in the storage on the mobile computing device is synchronized with replica data on a server computing device in the distrubuted computing system. - 19. An interpreter according to claim 18wherein the storage declaration element includes a locally scoped name for a corresponding storage tree and identifies a schema to which the storage tree must conform.
https://patents.google.com/patent/US20110125804A1/en
CC-MAIN-2019-04
en
refinedweb
dart_tags 0.3.1 The library for work with music tags like ID3. Written on pure Dart. It can be used in flutter, web, and vm projects. Dart Tags # The library for parsing ID3 tags, written in pure Dart. You can found sample app written with flutter framework here. License # project under MIT license Changelogs # 0.3.1 # - implemented separate getting size of frame for id3 v2.3 and v2.4 - added test case and asset - fixed typos, thanx to @algoshipda and his PR - fixed APIC picture type error, thanx to @algoshipda and his PR 0.3.0+1 # - hotfix! missed exports for new tags was added 0.3.0 (BREAKING CHANGES) # - COMM, APIC, USLT, WXXX tags returns as a map - WXXX frame returns WURL object - various fixes - added USLT tag - added possibility to pass many COMM, APIC, USLT tags - APIC processing was refactored - hex encoder - unrecognized encoding falls to hex encoder (removed unsupported encoding error) - unsupported tags like PRIV will be printed just like raw binary data Instalation # add dependency in pubsec.yaml dependencies: dart_tags: ^0.3.1 Usage # A simple usage example: import 'dart:io'; import 'package:dart_tags/dart_tags.dart'; main(List<String> args) { TagProcessor tp = new TagProcessor(); File f = new File(args[0]); tp.getTagsFromByteArray(f.readAsBytes()).then((l) => l.forEach((f) => print(f))); } Code of conduct # Please refer our code of conduct. Features and bugs # Please feel free for feature requests and bugs at the issue tracker. In addition # Thanx for contributing @magodo, @frankdenouter Thanx for the Photo by Mink Mingle on Unsplash that we using in unit tests.
https://pub.dev/packages/dart_tags
CC-MAIN-2020-45
en
refinedweb
Extend VersionListControl with Delayed Publish Date If you are using the delayed publish function a lot or if you have multiple delayed page versions pending, this post is useful for you. Every page in EPiServer has the 'Version List' tab in the EditPanel. This tab usefully shows the number of page versions that are ‘Not Ready’ and ‘Ready to Publish’ in parentheses in the tab name. The Version List page itself displays all page versions of the current page and provides insight into the Status of those page versions. The Delayed Publish icon will be made visible for each page version that is not yet published. This is useful if you want to specify the date and time on which the specific page version should be published. So far so good. One thing was missing in my opinion. While the Version List page displays a fair amount of information on each page version, it does not however display any information on the Delayed Publish date and time which could well be set for a specific page version. The only clues as to see which page version is scheduled for delayed publish is the Status field (which displays 'Delayed publish') and the visible Delayed Publish icon. This would be enough information if you are not using Delayed Publish a lot. But what if you have multiple page versions with a scheduled publish date? It would be ideal if there could be a way to display the actual delayed publish date and time of a page version within the DataGrid on the Version List page itself without having to open the Manage Delayed Publish page which is displayed once you click on the Delayed Publish icon. I figured this would be easy enough to implement, so let’s see how to accomplish this. Take a look at the above image. The date and time is displayed next to the Delayed Publish icon. To implement this, we need to perform a few things: - Create a custom VersionListControl UserControl - Change the web.config file to set up the custom file mapping We start by creating a custom VersionListControl UserControl. In your EPiServer install directory, look for the default VersionListControl.ascx. This should be available at the following location, dependent on which version of EPiServer you're running: C:\Program Files (x86)\EPiServer\CMS\5.2.375.236\Application\UI\Edit\VersionListControl.ascx Copy the file to your project or solution. Also create a code behind for the UserControl. Hint: you could also create a new UserControl and copy-paste the code from the original VersionListControl.ascx into your newly created UserControl. Change the name of your newly created VersionListControl.ascx to for instance MyVersionListControl.ascx Open up the UserControl and make sure the '<%@ Control .. %>' declaration points to the correct CodeBehind and inherited namespace. Look for the <asp:DataGrid> element. Add the following OnItemDataBound event to the declaration: 1: 2: <asp:DataGrid [...] OnItemDataBound="VersionListItemDataBound" [...] > 3: Next, look for the TemplateColumn that handles the delayedpublish. Add a <asp:Literal> element with ID 'PublishDateLiteral' to the ItemTemplate. Your code should now look like the following: 1: <asp:TemplateColumn 2: <ItemTemplate> 3: <EPiServerUI:ToolButton 4: <asp:Literal 5: </ItemTemplate> 6: </asp:TemplateColumn> Open up the code behind file and add the method for the ItemDataBound event. The name of this method should be the same name as declared in the OnItemDataBound event of the <asp:DataGrid> declaration which we added earlier. In our case this method would look like the following: 1: public partial class MyVersionListControl : EPiServer.UI.Edit.VersionListControl 2: { 3: protected void VersionListItemDataBound(object sender, DataGridItemEventArgs e) 4: { 5: if (e.Item.ItemType == ListItemType.Item || 6: e.Item.ItemType == ListItemType.AlternatingItem) 7: { 8: PageVersion pv = (PageVersion)e.Item.DataItem; 9: PageData pd = DataFactory.Instance.GetPage(pv.ID, LanguageSelector.AutoDetect(true)); 10: 11: //Make sure the Status of the PageVersion is DelayedPublish 12: if (pv.Status == VersionStatus.DelayedPublish) 13: { 14: Literal publishDateLiteral = (Literal)e.Item.FindControl("PublishDateLiteral"); 15: publishDateLiteral.Text = pd.StartPublish.ToString("MM/dd/yyyy hh:mm:ss tt"); 16: } 17: } 18: } 19: } Save both the UserControl and the code behind file, and make sure you're not getting any compilation errors. Adjusting the web.config At this point we have our custom VersionListControl finished. We still need to make sure that EPiServer knows we actually have a custom VersionListControl UserControl, so we need to change the web.config to handle the custom file mapping: The following provider needs to be added to the virtualPath node within the episerver node in web.config. Make sure to add this mapping at the bottom of your other mappings. 1: <add showInFileManager="false" 2: virtualName="AdminMapping" 3: virtualPath="~/UI/edit/VersionListControl.ascx" 4: bypassAccessCheck="false" 5: name="AdminMapping" 6: Further, the custom MyVersionListControl.ascx needs to be added to the virtualPathMappings node just beneath the virtualPath node, still within the episerver node. Make sure the url attribute points to the correct original VersionListControl.ascx file and the mappedUrl attribute points to your custom MyVersionListControl.ascx file. 1: <virtualPathMappings> 2: <add url="~/UI/edit/VersionListControl.ascx" 3: 4: </virtualPathMappings> Save the web.config file and rebuild your project. Run your EPiServer website and have a look at the Version List in the EditPanel tab for any page that contains Delayed Publish page versions, or just create a delayed publish for a new page version. You should see the date and time just next to the Delayed Publish button, as shown in the beneath image. That's it and good luck!
https://world.episerver.com/blogs/Eric-Vanderfeesten/Dates/2011/9/Extend-VersionListControl-with-Delayed-Publish-Date/
CC-MAIN-2020-45
en
refinedweb
Extensions for Akka Stream Available items The developer of this repository has not created any items for sale yet. Need a bug fixed? Help with integration? A different license? Create a request here: We are proud to opensource Akka-Stream-Extensionsextending Typesafe Akka-Stream. The main purpose of this project is to: Develop generic Sources/ Flows/ Sinksnot provided out-of-the-box by Akka-Stream. Make those structures very well tested & production ready. Study/evaluate streaming concepts based on Akka-Stream & other technologies (AWS, Postgres, ElasticSearch, ...). We have been developing this library in the context of MFG Labs for our production projects after identifying a few primitive structures that were common to many use-cases, not provided by Akka-Stream out of the box and not so easy to implement in a robust way. Scaladoc is available there. build.sbt resolvers += Resolver.bintrayRepo("mfglabs", "maven") build.sbt Currently depends on akka-stream-2.4.18 libraryDependencies += "com.mfglabs" %% "akka-stream-extensions" % "0.11.2" import com.mfglabs.stream._ // Source from a paginated REST Api val pagesStream: Source[Page, ActorRef] = SourceExt .bulkPullerAsync(0L) { (currentPosition, downstreamDemand) => val futResult: Future[Seq[Page]] = WSService.get(offset = currentPosition, nbPages = downstreamDemand) futResult.map { case Nil => Nil -> true // stop the stream if the REST Api delivers no more results case p => p -> false } } someBinaryStream .via(FlowExt.rechunkByteStringBySeparator(ByteString("\n"), maximumChunkBytes = 5 * 1024)) .map(_.utf8String) .via( FlowExt.customStatefulProcessor(Vector.empty[String])( // grouping by 100 except when we encounter a "flush" line (acc, line) => { if (acc.length == 100) (None, acc) else if (line == "flush") (None, acc :+ line) else (Some(acc :+ line), Vector.empty) }, lastPushIfUpstreamEnds = acc => acc ) ) Many more helpers, check the Scaladoc! This extension provides tools to stream data from/to Postgres. libraryDependencies += "com.mfglabs" %% "akka-stream-extensions-postgres" % "0.11.2" Pull all docker images launched by the tests bash docker pull postgres:8.4 docker pull postgres:9.6 import com.mfglabs.stream._ import com.mfglabs.stream.extensions.postgres._ implicit val pgConnection = PgStream.sqlConnAsPgConnUnsafe(sqlConnection) implicit val blockingEc = ExecutionContextForBlockingOps(someEc) PgStream .getQueryResultAsStream( "select a, b, c from table", options = Map("FORMAT" -> "CSV") ) .via(FlowExt.rechunkByteStringBySeparator(ByteString("\n"), maximumChunkBytes = 5 * 1024)) someLineStream .via(PgStream.insertStreamToTable( "schema", "table", options = Map("FORMAT" -> "CSV") )) libraryDependencies += "com.mfglabs" %% "akka-stream-extensions-elasticsearch" % "0.11.2" import com.mfglabs.stream._ import com.mfglabs.stream.extensions.elasticsearch._ import org.elasticsearch.client.Client import org.elasticsearch.index.query.QueryBuilders implicit val blockingEc = ExecutionContextForBlockingOps(someEc) implicit val esClient: Client = // ... EsStream .queryAsStream( QueryBuilders.matchAllQuery(), index = "index", type= "type", scrollKeepAlive = 1 minutes, scrollSize = 1000 ) This extension allows to build at compile-time a fully typed-controlled flow that transforms a HList of Flows to a Flow from the Coproduct of inputs to the Coproduct of outputs. For more details on the history of this extension, read this article. libraryDependencies += "com.mfglabs" %% "akka-stream-extensions-shapeless" % "0.11.2" // 1 - Create a type alias for your coproduct type C = Int :+: String :+: Boolean :+: CNil // The sink to consume all output data val sink = Sink.foldSeq[C], C(_ :+ _) // 2 - a sample source wrapping incoming data in the Coproduct val f = GraphDSL.create(sink) { implicit builder => sink => import GraphDSL.Implicits._ val s = Source.fromIterator(() => Seq( CoproductC, CoproductC, CoproductC, CoproductC, CoproductC, CoproductC, CoproductC ).toIterator) // 3 - our typed flows val flowInt = Flow[Int].map{i => println("i:"+i); i} val flowString = Flow[String].map{s => println("s:"+s); s} val flowBool = Flow[Boolean].map{s => println("s:"+s); s} // >>>>>> THE IMPORTANT THING // 4 - build the coproductFlow in a 1-liner val fr = builder.add(ShapelessStream.coproductFlow(flowInt :: flowString :: flowBool :: HNil)) // <<<<<< THE IMPORTANT THING // 5 - plug everything together using akkastream DSL s ~> fr.in fr.out ~> sink ClosedShape } // 6 - run it RunnableGraph.fromGraph(f).run().futureValue.toSet should equal (Set( CoproductC, CoproductC, CoproductC, CoproductC, CoproductC, CoproductC, CoproductC )) Check our project MFG Labs/commons-aws also providing streaming extensions for Amazon S3 & SQS. To test postgres-extensions, you need to have Docker installed and running on your computer (the tests will automatically launch a docker container with a Postgres db). MFG Labs sponsored the development and the opensourcing of this library. We hope this library will be useful & interesting to a few ones and that some of you will help us debug & build more useful structures. This software is licensed under the Apache 2 license, quoted below..
https://xscode.com/MfgLabs/akka-stream-extensions
CC-MAIN-2020-45
en
refinedweb
#include <genesis/utils/io/file_input_source.hpp> Inherits BaseInputSource. Input source for reading byte data from a file. The input file name is provided via the constructor. It is also possible to provide a FILE pointer directly. In this case, the ownership of the file pointer is taken by this class. Thus, closing the file is done when destructing this class. Definition at line 59 of file file_input_source.hpp. Construct the input source from a file with the given file name. Definition at line 70 of file file_input_source.hpp. Construct the input source from a FILE pointer. The file_name is used for the source_name() function only. Definition at line 82 of file file_input_source.hpp.
http://doc.genesis-lib.org/classgenesis_1_1utils_1_1_file_input_source.html
CC-MAIN-2020-45
en
refinedweb
1.15 anton.14 anton 88: require search-order.fs.1 anton 107: \ the: 1.5 anton 114: slowvoc @ 115: slowvoc on \ we want a linked list for the vocabulary locals 1.1 anton 116: vocabulary locals \ this contains the local variables 1.3 anton 117: ' locals >body ' locals-list >body ! 1.5 anton 118: slowvoc ! 1.1 anton 1.3 anton 128: aligned dup adjust-locals-size ; 1.1 anton 129: 130: : alignlp-f ( n1 -- n2 ) 1.3 anton 131: faligned dup adjust-locals-size ; 1.1 anton -- ) 1.3 anton 156: -1 chars compile-lp+! 1.1 anton 157: locals-size @ swap ! 158: postpone lp@ postpone c! ; 159: 160: : create-local ( " name" -- a-addr ) 1.9 anton 161: \ defines the local "name"; the offset of the local shall be 162: \ stored in a-addr 1.1 anton 163: create 1.12 anton 164: immediate restrict 1.1 anton 165: here 0 , ( place for the offset ) ; 166: 1.3 anton 167: : lp-offset ( n1 -- n2 ) 168: \ converts the offset from the frame start to an offset from lp and 169: \ i.e., the address of the local is lp+locals_size-offset 170: locals-size @ swap - ; 171: 1.1 anton 172: : lp-offset, ( n -- ) 173: \ converts the offset from the frame start to an offset from lp and 174: \ adds it as inline argument to a preceding locals primitive 1.3 anton 175: lp-offset , ; 1.1 anton 176: 177: vocabulary locals-types \ this contains all the type specifyers, -- and } 178: locals-types definitions 179: 1.14 anton 180: : W: ( "name" -- a-addr xt ) \ gforth w-colon 181: create-local 1.1 anton 182: \ xt produces the appropriate locals pushing code when executed 183: ['] compile-pushlocal-w 184: does> ( Compilation: -- ) ( Run-time: -- w ) 185: \ compiles a local variable access 1.3 anton 186: @ lp-offset compile-@local ; 1.1 anton 187: 1.14 anton 188: : W^ ( "name" -- a-addr xt ) \ gforth w-caret 189: create-local 1.1 anton 190: ['] compile-pushlocal-w 191: does> ( Compilation: -- ) ( Run-time: -- w ) 192: postpone laddr# @ lp-offset, ; 193: 1.14 anton 194: : F: ( "name" -- a-addr xt ) \ gforth f-colon 195: create-local 1.1 anton 196: ['] compile-pushlocal-f 197: does> ( Compilation: -- ) ( Run-time: -- w ) 1.3 anton 198: @ lp-offset compile-f@local ; 1.1 anton 199: 1.14 anton 200: : F^ ( "name" -- a-addr xt ) \ gforth f-caret 201: create-local 1.1 anton 202: ['] compile-pushlocal-f 203: does> ( Compilation: -- ) ( Run-time: -- w ) 204: postpone laddr# @ lp-offset, ; 205: 1.14 anton 206: : D: ( "name" -- a-addr xt ) \ gforth d-colon 207: create-local 1.1 anton 208: ['] compile-pushlocal-d 209: does> ( Compilation: -- ) ( Run-time: -- w ) 210: postpone laddr# @ lp-offset, postpone 2@ ; 211: 1.14 anton 212: : D^ ( "name" -- a-addr xt ) \ gforth d-caret 213: create-local 1.1 anton 214: ['] compile-pushlocal-d 215: does> ( Compilation: -- ) ( Run-time: -- w ) 216: postpone laddr# @ lp-offset, ; 217: 1.14 anton 218: : C: ( "name" -- a-addr xt ) \ gforth c-colon 219: create-local 1.1 anton 220: ['] compile-pushlocal-c 221: does> ( Compilation: -- ) ( Run-time: -- w ) 222: postpone laddr# @ lp-offset, postpone c@ ; 223: 1.14 anton 224: : C^ ( "name" -- a-addr xt ) \ gforth c-caret 225: create-local 1.1 anton 1.3 anton 249: drop nextname 250: ['] W: >name ; 1.1 anton 1.14 anton 265: : { ( -- addr wid 0 ) \ gforth open-brace 1.1 anton 266: dp old-dpp ! 267: locals-dp dpp ! 268: also new-locals 269: also get-current locals definitions locals-types 270: 0 TO locals-wordlist 271: 0 postpone [ ; immediate 272: 273: locals-types definitions 274: 1.14 anton 275: : } ( addr wid 0 a-addr1 xt1 ... -- ) \ gforth close-brace 1.1 anton 276: \ ends locals definitions 277: ] old-dpp @ dpp ! 278: begin 279: dup 280: while 281: execute 282: repeat 283: drop 284: locals-size @ alignlp-f locals-size ! \ the strictest alignment 285: set-current 286: previous previous 287: locals-list TO locals-wordlist ; 288: 1.14 anton 289: : -- ( addr wid 0 ... -- ) \ gforth dash-dash 1.1 anton 290: } 1.9 anton 291: [char] } parse 2drop ; 1.1 anton: 1.3 anton 383: \ Implementation: migrated to kernal.fs 1.1 anton: 1.3 anton 398: \ explicit scoping 1.1 anton 399: 1.14 anton 400: : scope ( compilation -- scope ; run-time -- ) \ gforth 1.3 anton 401: cs-push-part scopestart ; immediate 402: 1.14 anton 403: : endscope ( compilation scope -- ; run-time -- ) \ gforth 1.3 anton 404: scope? 1.1 anton 405: drop 1.3 anton 406: locals-list @ common-list 407: dup list-size adjust-locals-size 408: locals-list ! ; immediate 1.1 anton 409: 1.3 anton 410: \ adapt the hooks 1.1 anton 411: 1.3 anton 412: : locals-:-hook ( sys -- sys addr xt n ) 413: \ addr is the nfa of the defined word, xt its xt 1.1 anton 414: DEFERS :-hook 415: last @ lastcfa @ 416: clear-leave-stack 417: 0 locals-size ! 418: locals-buffer locals-dp ! 1.3 anton 419: 0 locals-list ! 420: dead-code off 421: defstart ; 1.1 anton 422: 1.3 anton 423: : locals-;-hook ( sys addr xt sys -- sys ) 424: def? 1.1 anton 425: 0 TO locals-wordlist 1.3 anton 426: 0 adjust-locals-size ( not every def ends with an exit ) 1.1 anton: 1.14 anton 476: : (local) ( addr u -- ) \ local paren-local-paren 1.3 anton 477: \ a little space-inefficient, but well deserved ;-) 478: \ In exchange, there are no restrictions whatsoever on using (local) 1.4 anton 479: \ as long as you use it in a definition 1.3 anton 480: dup 481: if 482: nextname POSTPONE { [ also locals-types ] W: } [ previous ] 483: else 484: 2drop 485: endif ; 1.1 anton 486: 1.4 anton [ ' bits 1.13 anton 502: swap [ 1 invert ] literal and does-code! 1.4 anton 503: else 504: code-address! 505: then ; 506: 507: \ !! untested 1.14 anton 508: : TO ( c|w|d|r "name" -- ) \ core-ext,local 1.4 anton 509: \ !! state smart 510: 0 0 0. 0.0e0 { c: clocal w: wlocal d: dlocal f: flocal } 511: ' dup >definer 512: state @ 513: if 514: case 515: [ ' locals-wordlist >definer ] literal \ value 516: OF >body POSTPONE Aliteral POSTPONE ! ENDOF 517: [ ' clocal >definer ] literal 518: OF POSTPONE laddr# >body @ lp-offset, POSTPONE c! ENDOF 519: [ ' wlocal >definer ] literal 520: OF POSTPONE laddr# >body @ lp-offset, POSTPONE ! ENDOF 521: [ ' dlocal >definer ] literal 522: OF POSTPONE laddr# >body @ lp-offset, POSTPONE d! ENDOF 523: [ ' flocal >definer ] literal 524: OF POSTPONE laddr# >body @ lp-offset, POSTPONE f! ENDOF 1.11 anton 525: -&32 throw 1.4 anton 526: endcase 527: else 528: [ ' locals-wordlist >definer ] literal = 529: if 530: >body ! 531: else 1.11 anton 532: -&32 throw 1.4 anton 533: endif 534: endif ; immediate 1.1 anton 535: 1.6 pazsan 536: : locals| 1.14 anton 537: \ don't use 'locals|'! use '{'! A portable and free '{' 538: \ implementation is anslocals.fs 1.8 anton 539: BEGIN 540: name 2dup s" |" compare 0<> 541: WHILE 542: (local) 543: REPEAT 1.14 anton 544: drop 0 (local) ; immediate restrict
http://www.complang.tuwien.ac.at/cvsweb/cgi-bin/cvsweb/gforth/glocals.fs?annotate=1.16;sortby=rev;only_with_tag=MAIN
CC-MAIN-2020-45
en
refinedweb
,_multi and attr_multif functions provide a way to operate on multiple attributes of a filesystem object at once. Page 1 ATTR_MULTI(2) ATTR_MULTI(2) interpreted and can take on one of the following values: ATTR_OP_GET /* return the indicated attr's corresponding single-attribute function call. For example, the result code possibleoperation, respectively. Their use varies depending on the value of the am_opcode field. ATTR_OP_GET [Toc] namespace will be searched. ATTR_OP_SET [Toc] [Back] The am_attrvalue and am_length fields contain the new value for the given attribute name and its length. The ATTR_ROOT flag may be set Page 2 ATTR_MULTI(2) ATTR_MULTI(2)operation [Toc] [Back] symbolic links, flags should be set to ATTR_DONTFOLLOW to not follow symbolic:
https://nixdoc.net/man-pages/IRIX/man2/attr_multi.2.html
CC-MAIN-2020-45
en
refinedweb
News: Content of this SCN Doc will be maintained now in wiki page Purpose With the following hints you will be able to configure the use of Service Level Agreement SLA to make sure that messages are processed within the defined period of time. For configuring SLA you should get this document SLA Management from SAP SMP. Here I will try to give you some hints for SLA configuration. The screenshots are taken from a Solution Manager 7.1 SP05 with a Incident Management standard scenario configuration. Overview By setting up the SLA Escalation Management mechanism the system monitors when deadlines defined in the SLA parameters have been exceeded in the service process and which follow-up processes would be triggered. For example, email notifications will be sent to upper levels in the Service Desk organization like to responsible IT Service Managers to inform them immediately about expiration of deadlines and SLA infringements. Thereby, IT Service Managers are only involved in the ticketing process when it is really necessary. Definitions IRT on the created incident has to be performed at the latest. When the processor starts processing the incident then it is enriched with the timestamp “First Reaction” for actual first reaction by the processor. M. When the incident is closed by the reporter (in the case that a newly created incident is withdrawn or a proposed solution is confirmed) then the incident is enriched with the timestamp “Completed” for actual incident completion. Step 1. Copy transaction type SMIN -> ZMIN We are going to work with ZMIN transaction type. Insist here on the fact that you should copy all transaction types into your own namespace or <Z>, <Y> namespace before starting to use Incident Management, copy transaction types, copy status profiles, action profiles, etc…if not your modifications to the standard will be overwritten after the next support package import. This is really important in Solman 7.1!!! After each Support Patch applications you have the option to use report AI_CRM_CPY_PROCTYPE “Update option” to update already copied transaction types with new shipped SAP standard configuration. Step 2. Define Service Profile & Response Profile Transaction: CRMD_SERV_SLA (/nSPRO ->SAP Solution Manager IMG -> SAP Solution Manager -> Capabilities (optional) ->Application Incident Management (service Desk) -> SLA Escalations ->Edit Availability and Response Times) Example: Factory calendar must be a valid one, see transaction /nscal and notes 1529649 and 1426524. Note: The usage of Holiday calendar in availability time is not supported by SLA date calculation i.e. you MUST use the option “Factory Calendar” or “All Days Are Working Days”. Pay attention to the System time zone & user time zone. Check in STZAC (note: 1806990). Create a response profile: I would suggest to maintain the times always in MIN. Make the same for all priorities. 3. Define SLA Determination Procedure SM34 : CRMVC_SRQM_SDP (SPRO -> SAP Solution Manager IMG ->SAP Solution Manager -> Capabilities (optional) -> Application Incident Management (service Desk) -> SLA Escalations -> SLA Determination Procedure) Create your own SLA determination procedure: What is important here is to determine where are the response profiles and service profiles to check first, there are several alternatives: Possible access sequence: – Service Contracts Please note that currently Service Contract Determination is just recommended for upgrading purposes to SAP Solution Manager 7.1 SPS 05 to support already defined configurations. For enabling Service Contracts, the required customizing has to be performed (please keep in mind that usage of Service Contracts in SPS 05 are not supported at the moment by SAP – no adoptions and tests were performed for SPS 05). – Service Product Item A Service Profile as well as a Response Profile can be attached to a Service Product. The Service Product can also be assigned to specific master data like to the category of a defined Multilevel Categorization. In case of selecting this category during the incident creation process, the correct Service Product will be determined as well as its defined Service & Response Profiles. – Reference Objects (IBase Component) A Service Profile as well as a Response Profile can be attached to a specific IBase Component. This means, if this IBase Component is entered during the incident creation process, the related Service & Response Profile will be chosen. – Business Partners (Sold-To Party) A Service Profile as well as a Response Profile can be attached to a specific Sold-To Party (e.g. a Customer). This means, if this Sold-To Party is entered to the incident (manually by the Processor or automatically by a defined rule), the related Service & Response Profile will be assigned The most frequently used are the SLA determination via Service Product item and Business Partners (sold-to party). If you need your own SLA determination check BAdI Implementation CRM_SLADET_BADI (IMG: Customer Relationship Management -> Transactions -> Settings for Service Requests -> Business Add-Ins -> Business Add-In for SLA Determination). Now check that you linked this new SLA Determination procedure to ZMIN 4.Define Settings for Durations Specify the times to be recalculated when the status changes, under “Specify Duration Settings”. SM30: CRMV_SRQM_DATSTA (/nSPRO-> SAP Solution Manager IMG ->SAP Solution Manager -> Capabilities (optional) -> Application Incident Management (service Desk) -> SLA Escalations ->Define Settings for Durations) Note 1674375 is having two attached files indicated entries to be inserted and to be deleted. For solman 7.1 until SP04 included these should be the standard entries: For SP05 and above these are the default entries: Date profile is not the data profile of the ZMIN transaction type, this is the date profile of the item category used, usually SMIP, we will see details about this in Step 9. Note: For SMIV incidents in VAR scenarios status E0010 Sent to Support means that the incident is at solman side so the correct entries are: Status Profile Status Date Profile Duration type Date type ZMIV0001 E0010 Sent to Support SMIN_ITEM SRQ_TOT_DUR ZMIV0001 E0010 Sent to Support SMIN_ITEM SRQ_WORK_DUR As summary, if the status means that the incident is at processor side the correct entries are: Status Profile Status Date Profile Duration type Date type XMIX0001 E000X SMIX_ITEM SRQ_TOT_DUR XMIX0001 E000X SMIX_ITEM SRQ_WORK_DUR If the status means that in the incident is at key user side the correct entries are: Status Profile Status Date Profile Duration type Date type XMIX0001 E000X SMIX_ITEM SMIN_CUSTL XMIX0001 E000X SMIX_ITEM SMIN_CU_DURA XMIX0001 E000X SMIX_ITEM SRQ_TOT_DUR XMIX0001 E000X SMIX_ITEM SRV_RR_DURA See the meaning of the Duration fields: – Duration Until First Reaction: This period of time is defined within the Response Profile and represents the basis for IRT calculation. Based on the selected incident priority, you should see the same values as defined in the Response Profile (dependencies between Incident Priority Level and “Duration Until First Reaction”). – Duration Until Service End: This period of time is defined within the Response Profile and represents the basis for MPT calculation. Based on the selected incident priority, you should see the same values as defined in the Response Profile (dependencies between Incident Priority Level and “Duration Until Service End”). – Total Customer Duration: The time duration when an incident message is assigned to the reporter (incident status is set to “Customer Action”, “Proposed Solution” or “Sent to SAP”) is added and visible via the parameter “Total Customer Duration”. – Total Duration of Service Transaction: The time duration for the whole processing of the incident message is added and visible via the parameter “Total Duration of Service Transaction”. – Work Duration of Service Transaction: The time duration when an incident message is assigned to the processor is added and visible via the parameter “Work Duration of Service Transaction”. See the meaning of Date Types fields: – Notification Receipt: When an incident message is created by the reporter the system sets the timestamp “Notification Receipt” which represents the initialization of the service start. This timestamp is the basis for all future SLA time calculations. – First Response By: at the created incident has to be performed at the latest. – First Reaction: When the processor starts processing the incident then it is enriched with the timestamp “First Reaction” for actual first reaction by the processor. – ToDo. – Completed: When the incident is closed by the reporter (in the case that a newly created incident is withdrawn or a proposed solution is confirmed) then the incident is enriched with the timestamp “Completed” for actual incident completion. – Customer Status Changed: The timestamp “Customer Status Changed” is set every time when the processor changes the status of an incident message to a customer status like “Customer Action”, “Proposed Solution” or “Sent to SAP”. This information represents at what given point in time the incident was assigned to the reporter. It is also the basis for IRT & MPT recalculation because customer times do not affect the SLA calculation. Step 5. Specify Customer Time Status /nSPRO -> SAP Solution Manager IMG -> SAP Solution Manager -> Capabilities (optional) -> Application Incident Management (service Desk) -> SLA Escalations -> Specify Customer Time Status Identify non-relevant customer times in the step “Specify Customer Time Status”. That means the clock is stopped while time is spent in these statuses. Customer times are specified by the user status of an incident message. Defined Customer Times Statuses do not affect the SLA calculation (MPT calculation). This mechanism should prevent mainly for SLA escalations if an incident has to be processed by another person than the processor. The processor requires additional information from the reporter which is not included at the moment within the created message description. For an adequate processing, the incident will be commented with a request for providing additional information and assigned back to the reporter by the incident status change to “Customer Action”. The duration the reporter requires for enrichment of the incident should be excluded from calculation of SLA times because the processor is not able to take any influence on the time the reporter needs to provide the information (in the worst case the message is sent back to the processor and the MPT would be already exceeded). The period of time the message is on reporter’s side is added to the parameter “Total Customer Duration” and the MPT will be recalculated according to this value. Step 6. Create a product If you decide to use the SLA determination based on Service Product Item you need to create a product. Product INVESTAGATION will be created automatically when you perform in solman_setup for ITSM activity Create Hierarchy for Service products. Execute transaction COMMPR01, find product ID INVESTIGATION. Note: Use Unit of Measure MIN That avoids errors which could be caused be time rounds. Ensure that SRVP is entered in the Item Cat. Group: Enter your service and response profiles. Step 7. Check the Item Categories SM34: CRMV_ITEM_MA ( /nSPRO IMG -> CRM -> Transactions -> Basic Settings -> Define Item Categories) You can use SRVP: Step 8. Check the Item Category Determination SE16: CRMC_IT_ASSIGN (/nSPRO IMG -> CRM -> Transactions -> Basic Settings -> Define Item Category Determination) You should see the relation between ZMIN, SRVP and SMIP. Step 9. Check SMIP Item category /nSPRO IMG -> CRM -> Transactions -> Basic Settings -> Define Item Categories Pay attention to the Date Profile. With these settings the SLA times (IRT and MPT) will be calculated for any created incident message according to the parameters set within “Investigation”. Step 11. SLA Escalation The following clarifies how SLA Escalation is working including the configuration of the email notification service. The SLA Escalation mechanism is used to inform responsible staff like IT Service Managers immediately about expiration of deadlines and SLA infringements. In the case that an incident message reaches the calculated IRT or MPT timestamp, the systems sets the status automatically at first to “Warning”. If the timestamp is exceeded than the incident’s status is set to “Exceeded”. In both cases an email notification will be triggered to defined partner functions. Report AI_CRM_PROCESS_SLA is responsible for setting the warning/escalated status values once these thresholds are exceeded. So firstly ensure that your incidents are receiving the correct status values (IRT/MPT warning/escalated). Note that these are “additional” status values, which are not reflected in the main status of the incident. To view these status values, make the “Status” assignment block visible in the CRM UI, or view the Incident in the old CRMD_ORDER transaction. If your incidents are not receiving the correct status values, the e-mail actions will not function correctly. Then ZMIN_STD_SLA_IRT_ESC/ZMIN_STD_SLA_MPT_ES are intended to be scheduled based on the status of the incident, not directly on the evaluation of the respective durations. 11.1. Maintaining SLA E-mail Actions In the standard SMIN_STD profile delivered by SAP, the following actions (smartform based) are responsible for generating e-mails once escalation conditions have been reached since SP04: – ZMIN_STD_SLA_IRT_ESC – ZMIN_STD_SLA_MPT_ESC Please see the scheduling/starting conditions to ensure that they are appropriate for your customized transaction type ZMIN and ZMIN_STD action profile. If you need to send also emails at warning times you will need to create actions: – ZMIN_STD_SLA_IRT_WRN – ZMIN_STD_SLA_MPT_WRN Use the same settings than for the shown actions *ESC, the only difference is in the Start condition that you need to use IRT_WRN and MPT_WRN that do not exists by default, for fixing this: 1. Open BAdI implementation AI_SDK_SLA_COND in t-code SE19. 2. Change to Edit mode and deactivate this BAdI implementation. 3. Add Filter Values “IRT_WRN” and “MPT_WRN”. 4. Save and activate the BAdI implementation. Then, you will be able to select IRT_WRN / MPT_WRN from the start condition list. 11.2. Schedule SLA Escalation Background Job for triggering Email Notifications Since SAP Solman 7.1 SP04: /n SPRO -> SAP Solution Manager IMG -> SAP Solution Manager -> Capabilities (optional) -> Application Incident Management (service Desk) -> SLA Escalations -> Schedule Escalation Background Job Schedule a job for report AI_CRM_PROCESS_SLA running every 10 minutes for example. This job is updating the SLA data for the incidents setting the additional user statuses (IRT Exceeded/IRT Warning/MPT Exceeded/MPT Warning). Note: It could happen that in sm_crm->Incident search the search result shows for example IRT warning at IRT Status text for an incident, however in the incident itself this additional status is not set. The search is making his own calculation. But the emails are only triggered when the status is really set by this report in the incident document. Before SAP Solman 7.1 SP04 you need to schedule report RSPPFPROCESS. 11.3. Email Notification In case that all previous described configuration activities were performed probably, email notifications will be sent automatically by following IRT and MPT status conditions: – Warning – Exceeded A default email will be sent with following parameter: – In case that IRT is impacted (incident status “Warning” or “Exceeded”): - Subject: “Transaction: <Incident ID> First Response Exceeded” - PDF attachment with the same file name like the subject – In case that MPT is impacted (incident status “Warning” or “Exceeded”) - Subject: “Transaction: <Incident ID> Completion Exceeded” - PDF attachment with the same file name like the subject Step12. Activate SLA Escalations /nSPRO -> SAP Solution Manager IMG -> SAP Solution Manager -> Capabilities (optional) -> Application Incident Management (service Desk) -> SLA Escalations -> Activate SLA Escalations In transaction DNO_CUST04 set the attribute SLA_ESCAL_ACTIVE to ‘X’ Related Content Related Documentation SLA Management guide Related Notes Always check the SLA notes relevant for your patch level and ensure that you have implemented the latest version of the notes. Dolores, Excellent work here. Many man(woman) hours saved for the SCN community. Cheers! Espectacular Dolores, muchas gracias por el documento ¡ Saludos, Luis Thanks for sharing Dolores. One of my customers is testing it right now, we’ll go live this week (I hope). Everything seems to be ok. We are using the Factory Calendar and Service Product Investigation. I had to do the additional steps that you explained to include the 2 new e-mail notifications: one for IRT Warning (IRT_WRN) and other for MPT Warning (MPT_WRN). I don’t understand why this is not included in the standard, if the standard configuration is prepared for WRN threshold. That was not included in the standard documentation (SLA guide) so it took me sometime to figure it out. I only miss in the blog the step where we configure the thresholds: table AIC_CLOCKNAME. I needed to change the standard limits that are: – Warning: 60% – Exceeded: 100% They need different values. Best regards, Raquel HI Raqual in the sm30 …enter table AIC_CLOCKNAME…you will standard entries like below IRT_ESC IRT Escalation SRV_RFIRST 100 IRT_WRN IRT Warning SRV_RFIRST 60 MPT_ESC MPT Escalation SRV_RREADY 100 MPT_WRN MPT Warning SRV_RREADY 60 just update these entries as per your reqiurements Hope this resolves Regards Prakhar Hi, Thank you for info but I was not asking, I was informing that this configuration was missing in the blog. I have already done that. Regards, Raquel Thank you, Dolores. This is a good short-guide for SLA configuration. Thank You for the concise guide on SLA Management. I recently configured SLA Management for one of our clients and we were looking for IRT/MPT Warning notifications. This helped us to understand the mechanish involved and configured the same. I agree with Raquel Pereira da Cunha ; IRT/MPT Warning notifications should have been part of standard package. Best Regards, Vivek Hi Dolores, Excellent SLA technical document. Hi Dolores, It’s a very good guide. I have a requirement for different service time on different priorities. For example, a “Very High” priority message need 7X24 working hours and other priorities need only 5X8 working hours. Can it be realized? Best regards, Wang Lue Hi Wang, I am sorry to say that not in the standard. Best regards, Dolores Hi Dolores, Thanks for your reply. We are trying to do some enhancements to realize it. Best regards, Wang Lue Hello Lue I am trying to setup same scenario as you. ¿Could you advise how you are trying to achive this? Thanks in advance. Best regards. Jorge Hi Dolores You mentioned that we can not configure escenario with a “Very High” priority message that needs 7X24 working hours and other priorities need only 5X8 working hours. Do you know any way to achive this? Thanks Best Regards. Jorge Luis Marquez Hi Jorge, If you need your own SLA determination check BAdI Implementation CRM_SLADET_BADI (IMG: Customer Relationship Management -> Transactions -> Settings for Service Requests -> Business Add-Ins -> Business Add-In for SLA Determination). Best regards, Dolores Hello Dolores Gracias for reply. I am working in SM 7.1. Have you any example about coding this BADI. Should I work with ABAP colleagues? Best regards Jorge Luis Marquez Hi Dolores, Thanks for such a nice blog on SLA configuration. I have configured SLA for one of a client and struggling with a requirement in which client want to have it. We have two support group created in the system. 1st one is L1 PP / MM/ SD module and another one L2 PP/MM / SD etc.. for every module. Similarly multiple ticket statuses are already created. The SLA is applicable for New and In process status. – Issue – Every ticket is initially assigned to L1 support group with New ticket status. L1 is handled by Client Team and if they want our (Service Provider) help then the support group will change to L2 but SLA clock will not reset as it is being measured based on status and Service Profiles. I have created separate priorities for Service Provider as the SLA’s reset in priority change, but i see the IRT and MPT are the same. Request you to please suggest what configuration i have to do for reset the clock when Business Partner changed to L2 support in the Support team field. Reagrds, Sri Kanth Hi, Can any one please provide solution for this situation. Regards, Sri Kanth Hello Dolores I am newly configuring the solman.I can’t understand how the escalation will work.Is it needed to change the status of the incident to “IRT excedded” or “MPT excedded” before the escalation should work.I am confused in that part.I have already scheduled the background job but still can’t understand how the data will be picked by that job.I have found Item is also having action profile and all the actions are inactive.Please guide. Regards Rashmi ranjan behera Hello Dolores, Thanks for your blog! We have configured SLA Management within SAP Solution Manager ITSM. Everything is working fine except the calculation and population of the SLA dates is not performed automatically when the message is created from for example a managed system. Only when I change the priority or the service profile manually in the message within the CRM_UI the date calculation is performed and the dates and durations are populated. We are on support pack 10. Do you know why the date calculation is not performed when a message is created from a managed system and the priotity is already assigned? Thanks, Guido Jacobs Thanks for this great post. I have an issue, all my config is as you mentioned. The only thing is that the icon status and the percentage at the incidente are not changing, but the date calculations is working fine. In SM30-AIC_CLOCKNAME the entries are as suggested too. Any suggestions?. Thanks and best regards. Hi Dolores, thanks for your blog about SLA Escalation! I have 1 additionally question about email notification.. Is it possible to send SLA Escalation email notification to Message processor and also to some special SLA Manager? — regards, Yessen Sypatayev Though the question is directed to Dolores, but I would like to reply to it with a yes, it is possible to do that. Just that you have to have an action profile for the specific partner function + have the action condition based on the item escalation status. Best Regards. Thanks Jacob. Now as workaround I solved my problem by this way: — best regards, Yessen Hi Yessen, If you are still looking forward to notifying other parties, what you can do is link those partner functions to the support team (which is already determined in the transaction), that way you will be able to notify them based on the same conditions you maintained before. Action profiles will be partner function dependent and there you go! Thanks for your post. We have the following requirement: If the notification receipt of the incident message is before 2 pm, set MPT (ToDo By) as Notification Receipt + 2 hrs. Else the notification receipt of the incident message is after 2 pm, set MPT (ToDo By) as Notification Receipt + 1 day 2 hrs.Please advise how we can achieve. Please also clarify the following: In this Post, 1) A date profile “ZMIN_HEADER” is assigned to the transaction type ZMIN and it has its own date rule logic for IRT and MPT. 2) Status profile and responsible profile have been assigned and it has its own duration logic as per the priority. 3) Date Profile SMIN_ITEM has been assigned for status “New-E0001” under “Define settings for Duration” and it has its duration logic. 4) SLA Determination Procedure has been defined . 5) Status profile and responsible profile have been assigned to the item category. In what sequence the IRT and MPT will be calculated and If the SLA Determination procedure is good enough then why we need to assign date profile under transaction type and in “Define settings for Duration” . Thanks Dear Dolores… This material is very detailed and helpful, Congrat! Patricia Dear Dolores, Interesting blog. For my requirement i’m missing something. I want to put a SLA on a status from the status profile. This means the transaction type can have a status like ‘On hold’ for only 30 days. After 30 days the user has to be warned about the breach. Do you have any suggestions? Thank you! Stefan Melgert Hola Dolores, He seguido todos los pasos pero al momento de la primera reacción no se detiene el medidor de tiempo. Cada vez que el mensaje está en manos nuestras continúa sumando tiempo al indicador IRT. ¿qué puede estar pasando? Espero me puedas ayudar, saludos.
https://blogs.sap.com/2013/09/16/incident-management-sla-configuration-hints-for-sap-solution-manager-71/
CC-MAIN-2020-45
en
refinedweb
Hello, I'm trying to get this package compiled on my machine which states this requirement "RTI Connext DDS >= 5.1.0 (Source install to /opt)". I've managed to get RTI installed on my computer but then I ran into this bug It seems like the problem is this package:. I'm not too sure about the RTI library but I've been attempting to use the idl file to recreate the .cxx and .h files in the package by running `rtiddsgen DDSImage.idl -replace` and then moving the files to the correct folders. After doing that though, the px-ros-pkg will build but the vmav-ros-pkg will still not build and the output looks strange on git diff. For example the namespace was removed and there is a "#using <new>". Since I'm not familiar with this library, I am not sure how to update the package so that it can still be used. Would anyone be able to take a look at the output of the rtiddsgen command to see if there is anything obvious that I'm missing? Thanks in advance Just a guess here, but -- Can you determine what the original command-line options for rtiddsgen were, when this was built for v5.1.0? I suspect that one or more options should be included when using rtiddsgen to generate the code, such as: "-namespace", but there may be others needed. Try regenerating the code with this option, and see if it gets you closer to the original configuration. The command-line options for the 5.3.1 release are in the current rtiddsgen user's manual, and the ones for 5.1.0 are in the 5.1.0 Core Libs & Utils user's manual.
https://community.rti.com/forum-topic/updating-510-531
CC-MAIN-2020-45
en
refinedweb
import "cmd/internal/objfile" Package objfile implements portable access to OS-specific executable files. disasm.go elf.go goobj.go macho.go objfile.go pe.go plan9obj.go xcoff.go CachedFile contains the content of a file split into lines. Disasm is a disassembler for a given File. func (d *Disasm) Decode(start, end uint64, relocs []Reloc, f func(pc, size uint64, file string, line int, text string)) Decode disassembles the text segment range [start, end), calling f for each instruction. Print prints a disassembly of the file to w. If filter is non-nil, the disassembly only includes functions with names matching filter. If printCode is true, the disassembly includs corresponding source lines. The disassembly only includes functions that overlap the range [start, end). DWARF returns DWARF debug data for the file, if any. This is for cmd/pprof to locate cgo functions. Disasm returns a disassembler for the file f. LoadAddress returns the expected load address of the file. This differs from the actual load address for a position-independent executable. A File is an opened executable file. Open opens the named file. The caller must call f.Close when the file is no longer needed. FileCache is a simple LRU cache of file contents. NewFileCache returns a FileCache which can contain up to maxLen cached file contents. Line returns the source code line for the given file and line number. If the file is not already cached, reads it, inserts it into the cache, and removes the least recently used file if necessary. If the file is in cache, it is moved to the front of the list. type Liner interface { // Given a pc, returns the corresponding file, line, and function data. // If unknown, returns "",0,nil. PCToLine(uint64) (string, int, *gosym.Func) } type Reloc struct { Addr uint64 // Address of first byte that reloc applies to. Size uint64 // Number of bytes Stringer RelocStringer } type RelocStringer interface { // insnOffset is the offset of the instruction containing the relocation // from the start of the symbol containing the relocation. String(insnOffset uint64) string } type Sym struct { Name string // symbol name Addr uint64 // virtual address of symbol Size int64 // size in bytes Code rune // nm code (T for text, D for data, and so on) Type string // XXX? Relocs []Reloc // in increasing Addr order } A Sym is a symbol defined in an executable file. Package objfile imports 30 packages (graph) and is imported by 11 packages. Updated 2020-06-01. Refresh now. Tools for package owners.
https://godoc.org/cmd/internal/objfile
CC-MAIN-2020-29
en
refinedweb
Recently I’ve been pondering the idea of cloud-like method of consumption of traditional (physical) networks. My main premise for this was that users of a network don’t have to wait hours or days for their services to be provisioned when all that’s required is a simple change of an access port. Let me reinforce it by an example. In a typical data center network, the configuration of the core (fabric) is fairly static, while the config at the edge can change constantly as servers get added, moved or reconfigured. Things get even worse when using infrastructure-as-code with CI/CD pipelines to generate and test the configuration since it’s hard to expose only a subset of it all to the end users and it certainly wouldn’t make sense to trigger a pipeline every time a vlan is changed on an edge port. This is where Network-as-a-Service (NaaS) platform fits in. The idea is that it would expose the required subset of configuration to the end user and will take care of applying it to the devices in a fast and safe way. In this series of blogposts I will describe and demonstrate a prototype of such a platform, implemented on top of Kubernetes, using Napalm as southbound API towards the devices. Frameworkless automation One thing I’ve decided NOT to do is build NaaS around a single automation framework. The tendency to use a single framework to solve all sorts of automation problems can lead to a lot of unnecessary hacking and additional complexity. When you’re finding yourself constantly writing custom libraries to perform some logic that can not be done natively within the framework, perhaps it’s time to step back and reassess your tools. The benefit of having a single tool, may not be worth the time and effort spent customising it. A much better approach is to split the functionality into multiple services and standardise what information is supposed to be passed between them. Exactly what microservices architecture is all about. You can still use frameworks within each service if it makes sense, but these can be easily swapped when a newer and better alternative comes along without causing a platform-wide impact. One problem that needs to be solved, however, is where to run all these microservices. The choice of Kubernetes here may seem like a bit of a stretch to some since it can get quite complicated to troubleshoot and manage. However, in return, I get a number of constructs (e.g. authentication, deployments, ingress) that are an integral part of any platform “for free”. After all, as Kelsey Hightower said: Kubernetes is a platform for building platforms. It's a better place to start; not the endgame.— Kelsey Hightower (@kelseyhightower) November 27, 2017 So here is a list of reasons why I’ve decided to build NaaS on top of Kubernetes: - I can define arbitrary APIs (via custom resources) with whatever structure I like. - These resources are stored, versioned and can be exposed externally. - With openAPI schema, I can define the structure and values of my APIs (similar to YANG but much easier to write). - I get built-in multitenancy through namespaces. - I get AAA with Role-based Access Control, and not just a simple passwords-in-a-text file kind of AAA, but proper TLS-based authentication with oAuth integration. - I get a client-side code with libraries in python, js and go. - I get admission controls that allow me to mutate (e.g. expand interface ranges) and validate (e.g. enforce per-tenant separation) requests before they get accepted. - I get secret management to store sensitive information (e.g. device inventory) - All data is stored in etcd, which can be easily backed up/restored. - All variables, scripts, templates and data models are stored as k8s configmap resources and can be retrieved, updated and versioned. - Operator pattern allows me to write a very simple code to “watch” the incoming requests and do some arbitrary logic described in any language or framework of my choice. Not to mention all of the more standard capabilities like container orchestration, lifecycle management and auto-healing. The foundation of NaaS Before I get to the end-user API part, I need to make sure I have the mechanism to modify the configuration of my network devices. Below is the high-level diagram that depicts how this can be implemented using two services: - Scheduler - a web server that accepts requests with the list of devices to be provisioned and schedules the enforcers to push it. This service is built on top of a K8s deployment which controls the expected number and health of scheduler pods and recreates them if any one of them fails. - Enforcer - one or more job runners created by the scheduler, combining the data models and templates and using the result to replace the running configuration of the devices. This service is ephemeral as jobs will run to completion and stop, however, logs can still be viewed for some time after the completion. Scheduler architecture Scheduler, just like all the other services in NaaS, is written in Python. The web server component has a single webhook that handles incoming HTTP POST requests with JSON payload containing the list of devices. @app.route("/configure", methods=["POST"]) def webhook(): log.info(f"Got incoming request from {request.remote_addr}") payload = request.get_json(force=True) devices = payload.get("devices") The next thing it does is read the device inventory mounted as a local volume from the Kubernetes secret store and decide how many devices to schedule on a single runner. This gives the flexibility to change the number of devices processed by a single runner (scale-up vs scale-out). sliced_inventory = [x for x in inv_slicer(devices_inventory, step)] schedule(sliced_inventory) Finally, for each slice of the inventory, scheduler creates a Kubernetes job based on a pre-defined template, with base64-encoded inventory slice as an environment variable. t = Template(job_template) job_manifest = t.render( job={"name": job_name, "inventory": encode(inventory_slice)} ) return api.create_namespaced_job( get_current_namespace(), yaml.safe_load(job_manifest), pretty=True ) In order for the scheduler to function, it needs to have several supporting Kubernetes resources: - Deployment to perform the lifecycle management of the app - Service to expose the deployed application internally - Ingress to expose the above service to the outside world - Configmap to store the actual python script - Secret to store the device inventory - RBAC rules to allow scheduler to read configmaps and create jobs Most of these resources (with the exception of configmaps) are defined in a single manifest file. Enforcer architecture The current implementation of the enforcer uses Nornir together with Jinja and Napalm plugins. The choice of the framework here is arbitrary and Nornir can easily be replaced with Ansible or any other framework or script. The only coupling between the enforcer and the scheduler is the format of the inventory file, which can be changed quite easily if necessary. The enforcer runner is built out of two containers. The first one to run is an init container that decodes the base64-encoded inventory and saves it into a file that is later used by the main container. encoded_inv = os.getenv("INVENTORY", "") decoded_inv = base64.b64decode(encoded_inv) inv_yaml = yaml.safe_load(decoded_inv.decode()) The second container is the one that runs the device configuration logic. Firstly, it retrieves the list of all device data models and templates and passes them to the push_config task. models = get_configmaps(labels={"app": "naas", "type": "model"}) templates = get_configmaps(labels={"app": "naas", "type": "template"}) result = nr.run(task=push_config, models=models, templates=templates) Inside that task, a list of sorted data models get combined with jinja templates to build the full device configuration: for ordered_model in sorted(my_models): model = yaml.safe_load(ordered_model.data.get("structured-config")) template_name = ordered_model.metadata.annotations.get("template") for template in templates: if template.metadata.name == template_name: r = task.run( name=f"Building {template_name}", task=template_string, template=template.data.get("template"), model=model, ) cli_config += r.result cli_config += "\n" Finally, we push the resulting config to all the devices in the local inventory: result = task.run( task=networking.napalm_configure, replace=True, configuration=task.host["config"], ) Demo Before we begin the demonstration, I wanted to mention a few notes about my code and test environments: - All code for this blogpost series will be stored in NaaS Github repository, separated in different tagged branches (part-1, part-2, etc.) - For this and subsequent demos I’ll be using a couple of Arista EOS devices connected back-to-back with 20 interfaces. - All bash commands, their dependencies and variables are stored in a number of makefiles in the .mkdirectory. I’ll provide the actual bash commands only when it’s needed for clarity, but all commands can be looked up in makefiles. The code for this post can be downloaded here. Build the test topology Any two EOS devices can be used as a testbed, as long as they can be accessed over eAPI. I build my testbed with docker-topo and c(vEOS) image. This step will build a local topology with two containerised vEOS-lab devices: make topo Build the local Kubernetes cluster The following step will build a docker-based kind cluster with a single control plane and a single worker node. make kubernetes Check that the cluster is functional The following step will build a base docker image and push it to dockerhub. It is assumed that the user has done docker login and has his username saved in DOCKERHUB_USER environment variable. export KUBECONFIG="$(kind get kubeconfig-path --name="naas")" make warmup kubectl get pod test This is a 100MB image, so it may take a few minutes for test pod to transition from ContainerCreating to Running Deploy the services This next command will perform the following steps: - Upload the enforcer and scheduler scripts as configmaps. - Create Traefik (HTTP proxy) daemonset to be used as ingress. - Upload generic device data model along with its template and label them accordingly. - Create a deployment, service and ingress resources for the scheduler service. make scheduler-build If running as non-root, the user may be prompted for a sudo password. Test In order to demonstrate how it works, I will do two things. First, I’ll issue a POST request from my localhost to the address registered on ingress () with payload requesting the provisioning of all devices. wget -O- --post-data='{"devices":["all"]}' --header='Content-Type:application/json' A few seconds later, we can view the logs of the scheduler to confirm that it received the request: kubectl logs deploy/scheduler 2019-06-13 10:29:22 INFO scheduler - webhook: Got incoming request from 10.32.0.3 2019-06-13 10:29:22 INFO scheduler - webhook: Request JSON payload {'devices': ['all']} 2019-06-13 10:29:22 INFO scheduler - get_inventory: Reading the inventory file 2019-06-13 10:29:22 INFO scheduler - webhook: Scheduling 2 devices on a single runner 2019-06-13 10:29:22 INFO scheduler - create_job: Creating job job-eiw829 We can also view the logs of the scheduled Nornir runner: kubectl logs jobs/job-eiw829 2019-06-13 10:29:27 INFO enforcer - push_configs: Found models: ['generic-cm'] 2019-06-13 10:29:27 INFO enforcer - push_configs: Downloading Template configmaps 2019-06-13 10:29:27 INFO enforcer - get_configmaps: Retrieving the list of ConfigMaps matching labels {'app': 'naas', 'type': 'template'} Finally, when logged into one of the devices, we should see the new configuration changes applied, including the new alias: devicea#show run | include alias alias FOO BAR Another piece of configuration that has been added is a special event-handler that issues an API call to the scheduler every time its startup configuration is overwritten. This may potentially be used as an enforcement mechanism to prevent anyone from saving the changes done manually, but included here mainly to demonstrate the scheduler API: devicea#show run | i alias alias FOO BAR devicea#conf t devicea(config)#no alias FOO devicea(config)#end devicea#write Copy completed successfully. devicea#show run | i alias devicea#show run | i alias alias FOO BAR Coming up Now that we have the mechanism to push the network changes based on models and templates, we can start building the user-facing part of the NaaS platform. In the next post, I’ll demonstrate the architecture and implementation of a watcher - a service that listens to custom resources and builds a device interface data model to be used by the scheduler.
https://networkop.co.uk/post/2019-06-naas-p1/?__s=yendj9cs848srfgswwfc
CC-MAIN-2020-29
en
refinedweb
Igor Fle - Total activity 30 - Last activity - Member since - Following 0 users - Followed by 0 users - Votes 0 - Subscriptions 8 Igor Fle created a post, Creating QuickFix for Xml fileHi,I try to create the QuickFix for the xml file.I use with ElementProblemAnalyzer class for code analyzer. But Resharper does not call my analyzer. What is wrong?This is example of my simple analy... Igor Fle created a post, The target dispatcher “JetDispatcher(Runner thread:5)” does not support asynchronous execution or cross-thread marshalling.When I run unit tests I have following error.It is strange that the tests did wokred yesterday.SetUp : JetBrains.TestFramework.Utils.TestLoggerListener+TestWrapperException : 2 exceptions were thro... Igor Fle commented, Igor Fle commented, Igor Fle created a post, QuickFix testingI created nunit test file, sorce test file and gold file. But my quickfix does not run. I did not find any information how to test the quickfixTestFixture] public class WindowOnloadQuickFixTest : Q... Igor Fle commented, Igor Fle commented, Igor Fle commented,
https://resharper-support.jetbrains.com/hc/en-us/profiles/2100704069-Igor-Fle
CC-MAIN-2020-29
en
refinedweb
inotify_init, inotify_init1 — initialize an inotify instance Synopsis #include <sys/inotify.h> int inotify_init(void); int inotify_init1(int flags); Description For..04 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at. Referenced By inotify(7), inotify_add_watch(2), inotify_rm_watch(2), proc(5), syscalls(2). The man page inotify_init1(2) is an alias of inotify_init(2).
https://dashdash.io/2/inotify_init1
CC-MAIN-2020-29
en
refinedweb
This forum post will help you troubleshoot issues related to Web Services(ws-consumer/ws-provider) Debugging Steps - In order to enable the client authentication for Web services we need to set the "NeedClientAuth" property in the jetty.xml as "true". This file can be found inside ".\AdeptiaServer\ServerKernel\etc\jetty". - To publish the process as a web service, you need to design a process from perspective that it is similar to a function that takes input, processes it, and generates output. - While migrating the web service from one environment to another, you'll have to allow the activity recognize the new environment and update the target namespace and address.() - In order to avoid the error while updating the web service publishing parameters. Please verify that all files in "..\AdeptiaServer\ServerKernel\wsdl" directory should have extensions and Verify the parameters entered in the WS provider page are valid. - When uploading the wsdl to create a web service consumer,the web service is being created as DOCUMENT style instead of SOAP,Change the parser type for the web service to Easy WSDL.(-) - In order to avoid the connection problems in WS consumer,download the valid server certificate and place it inside the truststore folder: \AdeptiaServer\ServerKernel\etc\truststore. - If you are getting error while making connection with wsdl file location. Please create a zip file of the referenced XSD's and create File Reference activity under Develop>Services>Miscellaneous. - Try to connect and hit the wsdl with third party clients like SoapUI, Fiddler etc. Related Articles Information to be Provided to Support for Assistance - Screen shot of Error that you are receiving. - Application log files(). - In case of referential XSD please provide the wsdl files and the zip used to create File Reference. - Share result along with errors after testing with third party client like soapUI, Fiddler etc. PS: Providing the above information to support will ensure quicker resolution as Support will have the necessary information to investigate the issue.
https://support.adeptia.com/hc/en-us/articles/207879663-Debug-Web-Services-Issue-WS-Consumer-WS-Provider-
CC-MAIN-2020-29
en
refinedweb
Changelog: swc v1.1.20 ImprovementsImprovements #631)Smaller runtime dependency ( swc now imports regenerator-runtime instead of @babel/runtime/regenerator. This is not a breaking change because @babel/runtime depends on regenerator-runtime. From now on, you can remove @babel/runtime from dependencies. #650)Better error message ( Instead of showing require failed, swc emits proper error message ( swc: You have to install browserslist to use env) to stderr. #642)TypeScript privatge fields ( TypeScript 3.8 added the concept of private fields. As swc had private field support for ecmascript, adding support for typescript was easy. class Person { #name: string constructor(name: string) { this.#name = name; } greet() { console.log(`Hello, my name is ${this.#name}!`); } } let jeremy = new Person("Jeremy Bearimy"); jeremy.#name // ~~~~~ // Property '#name' is not accessible outside class 'Person' // because it has a private identifier. #647)TypeScript: export namespace from ( TypeScript 3.8 allows code like export * as utilities from "./utilities.js"; As it's official typescript syntax, export * as namespace from 'foo' does not require changing any config. BugfixesBugfixes #652)Escpaes in template literals ( Previously, codes like `\x1b[33m Yellow \x1b[0m`; were broken by swc. swc handles escapes in template literals correctly. #641)TypeScript imports ( swc strips out type-only imports correctly.
https://swc-project.github.io/blog/2020/02/07/swc-1.1.20
CC-MAIN-2020-29
en
refinedweb
A recent article in Forbes stated that unstructured data accounts for about 90% of the data being generated daily. A large part of unstructured data consists of text in the form of emails, news reports, social media postings, phone transcripts, product reviews etc. Analyzing such data for pattern discovery requires converting text to numeric representation in the form of a vector using words as features. Such a representation is known as vector space model in information retrieval; in machine learning it is known as bag-of-words (BoW) model. In this post, I will describe different text vectorizers from sklearn library. I will do this using a small corpus of four documents, shown below. corpus = ['The sky is blue and beautiful', 'The king is old and the queen is beautiful', 'Love this beautiful blue sky', 'The beautiful queen and the old king'] CountVectorizer The CountVectorizer is the simplest way of converting text to vector. It tokenizes the documents to build a vocabulary of the words present in the corpus and counts how often each word from the vocabulary is present in each and every document in the corpus. Thus, every document is represented by a vector whose size equals the vocabulary size and entries in the vector for a particular document show the count for words in that document. When the document vectors are arranged as rows, the resulting matrix is called document-term matrix; it is a convenient way of representing a small corpus. For our example corpus, the CountVectorizer produces the following representation. from sklearn.feature_extraction.text import CountVectorizer import pandas as pd vectorizer = CountVectorizer() X = vectorizer.fit_transform(corpus) print(vectorizer.get_feature_names()) Doc_Term_Matrix = pd.DataFrame(X.toarray(),columns= vectorizer.get_feature_names()) Doc_Term_Matrix ['and', 'beautiful', 'blue', 'is', 'king', 'love', 'old', 'queen', 'sky', 'the', 'this'] The column headings are the word features arranged in alphabetical order and row indices refer to documents in the corpus. In the present example, the size of the resulting document-term matrix is 4×11, as there are 4 documents in the example corpus and there are 11 distinct words in the corpus. Since common words such as “is”, “the”, “this” etc do not provide any indication about the document content, we can safely remove such words by telling CountVectorizer to perform stop word filtering as shown below. vectorizer = CountVectorizer(stop_words='english') X = vectorizer.fit_transform(corpus) print(vectorizer.get_feature_names()) Doc_Term_Matrix = pd.DataFrame(X.toarray(),columns= vectorizer.get_feature_names()) Doc_Term_Matrix ['beautiful', 'blue', 'king', 'love', 'old', 'queen', 'sky'] Although the document-term matrix for our small corpus example doesn’t have too many zeros, it is easy to conceive that for any large corpus the resulting matrix will be a sparse matrix. Thus, internally the sparse matrix representation is used to store document vectors. N-gram Word Features One issue with the bag of words representation is the loss of context. The BoW representation just focuses on words presence in isolation; it doesn’t use the neighboring words to build a more meaningful representation. The CountVectorizer provides a way to overcome this issue by allowing a vector representation using N-grams of words. In such a model, N successive words are used as features. Thus, in a bi-gram model, N = 2, two successive words will be used as features in the vector representations of documents. The result of such a vectorization for our small corpus example is shown below. Here the parameter ngram_range = (1,2) tells the vectorizer to use two successive words along with each single word as features for the resulting vector representation. vectorizer = CountVectorizer(ngram_range = (1,2),stop_words='english') X = vectorizer.fit_transform(corpus) print(vectorizer.get_feature_names()) Doc_Term_Matrix = pd.DataFrame(X.toarray(),columns= vectorizer.get_feature_names()) Doc_Term_Matrix ['beautiful', 'beautiful blue', 'beautiful queen', 'blue', 'blue beautiful', 'blue sky', 'king', 'king old', 'love', 'love beautiful', 'old', 'old king', 'old queen', 'queen', 'queen beautiful', 'queen old', 'sky', 'sky blue'] It is obvious that while N-gram features provide context and consequently better results in pattern discovery, it comes at the cost of increased vector size. TfidfVectorizer Simply using the word count as a feature value of a word really doesn’t reflect the importance of that word in a document. For example, if a word is present frequently in all documents in a corpus, then its count value in different documents is not helpful in discriminating between different documents. On other hand, if a word is present only in a few of documents, then its count value in those documents can help discriminating them from the rest of the documents. Thus, the importance of a word, i.e. its feature value, for a document not only depends upon how often it is present in that document but also how is its overall presence in the corpus. This notion of importance of a word in a document is captured by a scheme, known as the term frequency-inverse document frequency (tf-idf ) weighting scheme. The term frequency is a ratio of the count of a word’s occurrence in a document and the number of words in the document. Thus, it is a normalized measure that takes into consideration the document length. Let us show the count of word i in document j by . The document frequency of word i represents the number of documents in the corpus with word i in them. Let us represent document frequency for word i by . With N as the number of documents in the corpus, the tf-idf weight for word i in document j is computed by the following formula: The sklearn library offers two ways to generate the tf-idf representations of documents. The TfidfTransformer transforms the count values produced by the CountVectorizer to tf-idf weights. from sklearn.feature_extraction.text import TfidfTransformer transformer = TfidfTransformer() tfidf = transformer.fit_transform(X) Doc_Term_Matrix = pd.DataFrame(tfidf.toarray(),columns= vectorizer.get_feature_names()) pd.set_option("display.precision", 2) Doc_Term_Matrix Another way is to use the TfidfVectorizer which combines both counting and term weighting in a single class as shown below. from sklearn.feature_extraction.text import TfidfVectorizer vectorizer = TfidfVectorizer(ngram_range = (1,2),stop_words='english') tfidf = vectorizer.fit_transform(corpus) Doc_Term_Matrix = pd.DataFrame(tfidf.toarray(),columns= vectorizer.get_feature_names()) Doc_Term_Matrix One thing to note is that the tf-idf weights are normalized so that the resulting document vector is of unit length. You can easily check this by squaring and adding the weight values along each row of the document-term matrix; the resulting sum should be one. This sum represents the squared length of the document vector. HashingVectorizer There are two main issues with the CountVectorizer and TdidfVectorizer. First, the vocabulary size can grow so much so as not to fit in the available memory for large corpus. In such a case, we need two passes over data. If we were to distribute the vectorization task to several computers, then we will need to synchronize vocabulary building across computing nodes. The other issue arises in the context of an online text classifier built using the count vectorizer, for example spam classifier which needs to decide whether an incoming email is spam or not. When such a classifier encounters words not in its vocabulary, it ignores them. A spammer can take advantage of this by deliberately misspelling words in its message which when ignored by the spam filter will cause the spam message appear normal. The HashingVectorizer overcomes these limitations. The HashingVectorizer is based on feature hashing, also known as the hashing trick. Unlike the CountVectorizer where the index assigned to a word in the document vector is determined by the alphabetical order of the word in the vocabulary, the HashingVectorizer maintains no vocabulary and determines the index of a word in an array of fixed size via hashing. Since no vocabulary is maintained, the presence of new or misspelled words doesn’t create any problem. Also the hashing is done on the fly and memory need is diminshed. You may recall that hashing is a way of converting a key into an address of a table, known as the hash table. As an example, consider the following hash function for a string s of length n: The quantities p and m are chosen in practice to minimize collision and set the hash table size. Letting p = 31 and m = 1063, both prime numbers, the above hash function will map the word “blue” to location 493 in an array of size 1064 as per the calculations: where letter b is replaced by 2, l by 12, and so on based on their positions in the alphabet sequence. Similarly, the word “king” will be hashed to location 114 although “king” comes later than “blue” in alphabetical order. The HashingVectorizer implemented in sklearn uses the Murmur3 hashing function which returns both positive and negative values. The sign of the hashed value is used as the sign of the value stored in the document-term matrix. By default, the size of the hash table is set to ; however, you can specify the size if the corpus is not exceedingly large. The result of applying HashingVectorizer to our example corpus is shown below. You will note that the parameter n_features, which determines the hash table size, is set to 6. This has been done to show collisions since our corpus has 7 distinct words after filtering stop words. from from sklearn.feature_extraction.text import HashingVectorizer vectorizer = HashingVectorizer(n_features=6,norm = None,stop_words='english') X = vectorizer.fit_transform(corpus) Doc_Term_Matrix = pd.DataFrame(X.toarray()) Doc_Term_Matrix You will note that column headings are integer numbers referring to hash table locations. Also that hash table location indexed 5 shows the presence of collisions. There are three words that are being hashed to this location. These collisions disappear when the hash table size is set to 8 which is more than the vocabulary size of 7. In this case, we get the following document-term matrix. The HashingVectorizer has a norm parameter that determines whether any normalization of the resulting vectors will be done or not. When norm is set to None as done in the above, the resulting vectors are not normalized and the vector entries, i.e. feature values, are all positive or negative integers. When norm parameter is set to l1, the feature values are normalized so as the sum of all feature values for any document sums to positive/negative 1. In the case of our example corpus, the result of using l1 norm will be as follows. With norm set to l2, the HashingVectorizer normalizes each document vector to unit length. With this setting, we will get the following document-term matrix for our example corpus. The HashingVectorizer is not without its drawbacks. First of all, you cannot recover feature words from the hashed values and thus tf-idf weighting cannot be applied. However, the inverse-document frequency part of the tf-idf weighting can be still applied to the resulting hashed vectors, if needed. The second issue is that of collision. To avoid collisions, hash table size should be selected carefully. For very large corpora, the hash table size of or more seems to give good performance. While this size might appear large, some comparative numbers illuminate the advantage of feature hashing. For example, an email classifier with hash table of size 4 million locations has been shown to perform well on a well-known spam filtering dataset having 40 million unique words extracted from 3.2 million emails. That is a ten times reduction in the document vectors size. To summarize different vectorizers, the TfidfVectorizer appears a good choice and possibly the most popular choice for working with a static corpus or even with a slowing changing corpus provided periodic updating of the vocabulary and the classification model is not problematic. On the other hand, the HashingVectorizer is the best choice when working with a dynamic corpus or in an online setting.
https://iksinc.online/tag/spam-filtering/
CC-MAIN-2020-29
en
refinedweb
Insert space after a certain character in C++ In this tutorial, we will learn how to insert a space after a certain character in a string in C++. Before start writing our program, It is good to make an algorithm to determine how to achieve the objective of that program. The algorithm for this program is as given below: - Take an input string containing a certain character. - Take an empty string. - Use the for loop to access each of its characters. If the character is not that certain character, concatenate it to the empty string else concatenate it with additional space. To implement this, we should iterate in a loop throughout the string length to find that certain character. Then, add a space after a certain character to achieve the given objective. For this, we will use the concatenation operator. The sample code illustrates it: C++ program to insert a space after a certain character in a string #include <iostream> using namespace std; string replace(string str, char c) { string s1=""; for(int i=0;i<str.length();i++) { if(str[i]!=c) s1=s1+str[i]; else s1=s1+str[i]+" "; } return s1; } int main() { string s="Hi:Bye:Hello:Start:End"; char c=':'; cout<<"Input string:"<<s<<endl; s=replace(s,c); cout<<"Updated string:"<<s<<endl; return 0; } Output: Input string:Hi:Bye:Hello:Start:End Updated string:Hi: Bye: Hello: Start: End Program explanation: Consider an input string ‘s’ and a certain character (say ‘:’). Define a function replace with two arguments: an input string and a character and with the return type string. Now, take another empty string ‘s1’. Then, iterate throughout the input string using a for loop checking for each of its characters. If the character is not a ‘:’, just concatenate it with the string ‘s1’. If the character is a ‘:’, then concatenate it with an additional space which is the objective of the program. Call the replace function with values input string and ‘:’ and store the returned value in the string ‘s’ and display the result on the screen. I hope this post was helpful and it helped you clear your doubts! Thanks for reading. Happy coding! Recommended Posts: Adding comments in C++ Adding A Character To A String in C++
https://www.codespeedy.com/insert-space-after-a-certain-character-in-cpp/
CC-MAIN-2020-29
en
refinedweb
13711/how-does-downed-peer-connects-back-the-network-fabric-network I recently deployed the fabric network using Docker-compose, I was trying to simulate a downed peer. Essentially this is what happens: Why isn't the 4th peer synchronizing the blockchain, once its up.Is there a step to be taken to ensure it does? Or is it discarded as a rogue peer. This might be due to the expected behavior of PBFT (assuming you are using it). As explained on issue 933, When you start with Geth you need ...READ MORE Whenever a smart contract receives ether via ...READ MORE There are two ways you can do ...READ MORE When you run the command: peer channel create ...READ MORE Summary: Both should provide similar reliability of ...READ MORE This will solve your problem import org.apache.commons.codec.binary.Hex; Transaction txn ...READ MORE To read and add data you can ...READ MORE The peers communicate among them through the ...READ MORE You can use Composer with the Bluemix ...READ MORE OR Already have an account? Sign in.
https://www.edureka.co/community/13711/how-does-downed-peer-connects-back-the-network-fabric-network?show=13714
CC-MAIN-2020-29
en
refinedweb
Introduction: Serial Communication and Firebase Data Sending With DragonBoard 410c Using Python serial library, we read the sensor values and send to Firebase using Pyrebase third party library. Step 1: Setting Serial Parameters To read the sensors values with serial communication we need to import a Python library called "serial": import serial ser = serial.Serial('/dev/ttyUSB') is the port that serial is connected; ser.baudrate = 115200 is the serial baudrate of serial; Step 2: Setting Firebase Database Parameters To make communication with Firebase we need to install a third party library called "Pyrebase". To do this we will execute the follow terminal command: pip install pyrebase (before this, you need to install the pip "python get-pip.py"). After this import the library (import Pyrebase). Now the parameters will be set: config = { "apiKey": "AIzaSyBlHBEbRzYL5IOHc9Yqkn-2XlxHe1R947Q", "authDomain": "imaca2-36859.firebaseapp.com", "databaseURL": "", "storageBucket": "imaca2-36859.appspot.com" } These parameters you will obtain in your Firebase database. Next you create firebase variables: firebase = pyrebase.initialize_app(config) db = firebase.database() Step 3: Reading Values From Serial and Sending to Firebase The following code reads a serial line: var = serial.readline() To format and convert to float the next two lines are executed: var = var.replace('\0', '') var = float(var) Now we send the value to our database: db.update({"key": var}) After all we will finish the serial communication: ser.close() Be the First to Share Recommendations Discussions 3 years ago Welcome to Instructables. Thanks for sharing with the community.
https://www.instructables.com/id/Serial-Communication-and-Firebase-Data-Sending-Wit/
CC-MAIN-2020-29
en
refinedweb
Calling a function of a module by using its name (a string) Assuming module foo with method bar: import foomethod_to_call = getattr(foo, 'bar')result = method_to_call() You could shorten lines 2 and 3 to: result = getattr(foo, 'bar')() if that makes more sense for your use case. You can use getattr in this fashion on class instance bound methods, module-level methods, class methods... the list goes on.
https://codehunter.cc/a/python/calling-a-function-of-a-module-by-using-its-name-a-string
CC-MAIN-2022-21
en
refinedweb
generator in transformers from transformers import pipeline fill_mask = pipeline( "fill-mask", model="google/electra-large-generator", tokenizer="google/electra-large-generator" ) print( fill_mask(f"HuggingFace is creating a {nlp.tokenizer.mask_token} that the community uses to solve NLP tasks.") ) - Downloads last month - 8,898
https://huggingface.co/google/electra-large-generator
CC-MAIN-2022-21
en
refinedweb
RationalWiki:Saloon bar/Archive354 Contents - 1 You won't be seeing much of me for the time being - 2 Cosmos New Worlds - 3 Small Men on the Wrong side of History - 4 Duolingo should give me a job - 5 Media Bias Chart - 6 Bernie Sanders dropping out - 7 UV wands and Covid 19 - 8 COVID-19 denialists are officially the new HIV/AIDS denialists - 9 Prostitution should be legal - 10 Forgive the youtube link - 11 I have to wonder: how many people are fed up with the Stay In Place order? - 12 bobvids on The Quartering - 13 101 pseudo-evidences - 14 ContraPoints video from January - 15 Newest addition to RationalWiki (That I kinda need a little help with) - 16 Illusions - 17 thank you... - 18 WHO vs Trump - 19 The probability of 0 isnt impossibility - 20 About Sanders - 21 The Lansing, Michigan Protests aka "Operation: Gridlock" is in full swing - 22 Don't Be Like Belarus, Folks - 23 COVID-19 and Eugenics, let's take this one slow - 24 Anyone Familiar with "Swiss Propaganda Research"? - 25 vitamin d and lockdowns - 26 Fuck fox news - 27 Some depressing news... - 28 Would pseudo-academic accrediting agencies be missional? - 29 Hungary - 30 HSUS You won't be seeing much of me for the time being[edit] The laptop I've been using since 2011 has finally died. Until I get a new one, I'm not going to be very active here at all. I'm writing this at work, now that I've finished teaching for the day.. My hottible boss is around and making me feel uncomfortable. I want to sod off as soon as I can. See you all later. Spud (talk) 11:32, 10 April 2020 (UTC) - Take it easy duderino. (p.s. in case you really need the equipment...my previous several laptops were all castaways from my friends installing Lubuntu (light ubuntu though I'm sure you knew that) and for the really crappy oldest one I installed Porteus (super ultra light weight....fastest system I'd ever used). I never ever had a problem finding someone wanting to get rid of an older laptop. Don't be a stranger to long-eroo. ShabiDOO 12:41, 10 April 2020 (UTC) - I'd suggest vanilla Slackware over Porteus and especially Lubuntu, since Slackware has better native WM support than Porteus and is much lighter than Lubuntu. Though Spud might have unfixable hardware problems, so it might not be possible to fix it with a mere Linux install.) @ 15:35, 10 April 2020 (UTC) is this really about a broken laptop, or are you really going to prison? 19:13, 11 April 2020 (UTC) - I've never done anything that would warrant sending me to prison. The problem really is a broken laptop. But it looks like I'll get given a second hand one soon. So fear not! Spud (talk) 11:06, 13 April 2020 (UTC) - And you call that living? ikanreed 🐐Bleat at me 13:16, 13 April 2020 (UTC) - I realize now that that could be taken wrong. I meant the not doing crimes is not living. Having a second hand laptop is pretty reasonable. ikanreed 🐐Bleat at me 15:17, 13 April 2020 (UTC) Cosmos New Worlds[edit] I found the 2014 Cosmos reboot (A spacetime oddesy) okay, beautiful cinematography and animation and some nice humanist stories though I'd have to say I didn't really learn much except a few unknown remarkable lives. I've been watching Cosmos: Possible Worlds (2020) and have to say I'm really impressed so far. Has anyone seen the first few episodes. Any comments? ShabiDOO 12:41, 10 April 2020 (UTC) - Didn't even know they were doing more. ikanreed 🐐Bleat at me 16:48, 10 April 2020 (UTC) Small Men on the Wrong side of History[edit] The above subject is the title of a book by conservative journalist, Ed West. The "small men" refers to conservatives. This is a channel where right-wing or conservative writers are interviewed. The thumb-nail description is "The Decline and Fall, and unlikely Return of conservatism." The title of the video is, "Ed West: Why Conservatives have lost almost every Political Argument since 1945." I think it is refreshingly detached from a polarized sort of POV. The video is only 32 minutes.Ariel31459 (talk) 17:11, 10 April 2020 (UTC) Duolingo should give me a job[edit] I can come up with strange sentences with what little Esperanto vocabulary I know. Example: "La bebo manĝas lia porko kaj katoĵn". Me being crazy would make me an excellent candidate for a job --Rationalzombie94 (talk) 18:48, 10 April 2020 (UTC) - I assume you need an accusative in both places: La bebo manĝas lian porkon kaj katoĵn. Smerdis of Tlön, wekʷōm teḱsos. 13:44, 11 April 2020 (UTC) Media Bias Chart[edit] I think it's okay, but I really don't think Fox News should be on the same level as MSNBC, it's ridiculously rotten. Buzzfeed is also too high. Wonder where rawstory is, should be pretty low on this list. БaбyЛuigiOнФire🚓(T|C) 19:59, 10 April 2020 (UTC) - What's Daily Mail doing in that spot? --It's-a me, LeftyGreenMario!(Mod) 20:05, 10 April 2020 (UTC) - MSNBC is pretty damn bad, like all 24 hours news services are, filled to the brim with constant opinion-as-entertainment. ikanreed 🐐Bleat at me 03:26, 11 April 2020 (UTC) - Fox News is primarily problematic because of the talking heads who spread disinformation. If the chart is based on just the news part of Fox News, its probably technically accurate. The problem is that the Fox business model is to swamp the news with hours and hours of the far-more-lucrative disinformation. Bongolian (talk) 04:04, 11 April 2020 (UTC) - You can find links to the reviews of individual sources here and you are correct. Assuming "quality" and "reliability" are the same metric, Fox News mostly gets dinged by their talking heads (on a 0-64 scale, one analysis of Tucker Carlson gets 9.75, one of Hannity gets 14.25) where anchors like Chris Wallace score in the 40s and most of the articles are okayish (if biased). For MSNBC, it's the same, though not all of their talking heads do badly on the quality department (Rachel Maddow and Morning Joe do decently), Al Sharpton (especially), Chris Matthews, and Lawrence O'Donnell get dinged. It still seems that individually there's more crap at Fox News from the individual tables so I'm not sure why they are rated equally. MSNBC though isn't that great overall, in my opinion no American cable news network is these days. 72.184.174.199 (talk) 17:17, 11 April 2020 (UTC) - I have come to learn in my life experience that no news source is free from bias. Honestly every major news source is just as bad as every other one. Whenever I read news on a given topic, whether it be the coronavirus or a presidential campaign, honestly the best thing to do is to read as many sources as you can and just decide what you think is true. There is no one news source, and chances are there never will be, that succeeds in putting all possible bias aside and running like a perfect logical/fact machine. Chances are everything I just said has already been said by someone else (even on this website probably), but that's honestly what I've learned from experience. Aaronmichael5 15:52 11 April 2020 (UTC) - You were with me until the "every major news source is JUST as bad as every other one". I'm sorry to say that this only works in the most narrow sense. I'd take the CBC over any American network when covering the American election. They're from a different country, their coverage will not affect their own bottom line, they are government protected and funded with a mandate to at least attempt to tone down the bias. And even within America I'd take NPR over fox news or MSNBC, both heavily biased in America's two party system. I'm with you when it comes to stories reported all by news sources within the same country which affect national narratives and the universal interests of its citizenry (that mostly dealing with international stories where power struggles are hypersimplified). But thats a pretty small portion of news in general. It's one thing to say that all major news sources do a fairly bad job at educating their readers, its another thing to say they are equally bad when it comes to bias in general. That's utterly false. Bias can and has been demonstrated to be more present or pernicious in some sources than others. There are no truly reliable news sources. But some are awfully unreliable on subjects spanning politics, the economy, society, crime, culture etc. ShabiDOO 16:21, 11 April 2020 (UTC) - "honestly the best thing to do is to read as many sources as you can and just decide what you think is true" - Ideally you should be consuming all news sources as many as you can, BUT there's a whole "subscribe to us to unlock the article" begging from all those sites that's completely counterproductive to this. I guess this is on a tangent, but I think it's worth pointing out that this system makes it more annoying than it should to try to figure things out. --It's-a me, LeftyGreenMario!(Mod) 23:02, 11 April 2020 (UTC) Bernie Sanders dropping out[edit] UV wands and Covid 19[edit] UV wands are claimed to kill all sorts of nasty things like this. But I'm a little sceptical. Anybody got any ideas? — Unsigned, by: Bob_M / talk / contribs - Short wave ultraviolet is ionizing radiation that really messes up nucleic acids and anything that relies on them, and viruses in particular don’t have room for radiation shielding. It’s super effective against microbes on otherwise clean nonporous surfaces or dispersed in air or water with a clear line of sight to the UV lamp. But it’s not effective against anything shielded by opaque material like dirt or the outer layers of a porous material. Germicidal UV is also dangerous to multicellular life, particularly to eyes, and it can break down many polymers and damage items being cleaned. As such, it can be useful, but only situationally. Great for sanitizing air, clear water, and nonporous inorganic surfaces, but incapable of sanitizing porous or dirty things - A proper UVC lamp like the kind used in HVAC or water systems is very good at what it does, but it will cause radiation burns very quickly (nearly instantly to unshielded eyes), and they’re not in a form factor as convenient as the one you linked. If the product details there are accurate, that would be marginally effective. If all the output light is focused evenly, it could deliver a germicidal dose of ultraviolet to about one and a half square inches per second. A few caveats: the listed wavelength is on the border of UVB and UVC, so it may be less effective than the somewhat shorter wavelengths preferred for germicidal use, though with viruses being more fragile than bacteria, that may not matter. Also, it just recommends against looking directly into the light. A germicidal light would be dangerous to eyes even in reflection, and should only be used openly like that while wearing something like polycarbonate goggles with tight, opaque seals at the edges, and clothing that doesn’t leave skin exposed. - TL;DR: Bleach, soap, and heat are probably more practical for most home use applications where you might use a UV light. 192․168․1․42 (talk) 11:47, 11 April 2020 (UTC) - OK. Thanks. That's really interesting. From the confident tone of your response would I be right in assuming that you have some special competence in this area?Bob"Life is short and (insert adjective)" 12:33, 11 April 2020 (UTC) - Bob M - 'various scientists' have said soap and water etc are effective: can you give a more authoritative source for your UV lamp than 'item appearing on a well-known selling platform'? Anna Livia (talk) 16:24, 11 April 2020 (UTC) - I was not suggesting for one moment it was an authoritative source. I gave it as an example, said was skeptical and I asked for information.Bob"Life is short and (insert adjective)" 17:55, 11 April 2020 (UTC) As a 'bad syllogism' - This is a UV wand - Wands are used in magic - Therefore this is magical thinking. Is this totally wrong? Anna Livia (talk) 16:27, 11 April 2020 (UTC) - We got a UV sterilizer a couple months before the outbreak to help sterilize baby bottle stuff. As such I've looked into the research on this question somewhat already. UV light of this sort requires like 40 minutes of direct exposure to deactivate common viruses to conventionally sterile levels. And this is a plugged in AC device that approximates the UV intensity of sunlight from all 6 sides of a cube. I would suspect waving a wand over something for the duration of at most a minute or two would do next to nothing. Don't still have the papers I read about it on hand though. So... that syllogism might be questionable, but so is using UV wands to decontaminate. ikanreed 🐐Bleat at me 19:12, 11 April 2020 (UTC) - Is it better as a summary of 'flawed logic' than as a syllogism? And is this the direct ancestor of the UV wand? Anna Livia (talk) 22:29, 11 April 2020 (UTC) - @Bob_M: I investigated the matter a few months ago while considering what measures to take in the upcoming pandemic, and I was already familiar with the risks of UV light from experience welding (germicidal lamps and welding arcs have comparable UV output). I eventually got an HVAC light and made a heated internally-reflective enclosure for sanitizing packaged items and other things that I don’t want to chemically treat but where I’m not concerned about polymer degradation. - @Anna Livia: Sometimes the form factor of a wand is useful for an application. Metal detectors and Geiger counters are often wands, for example. Violet rays produce their light via corona discharge. This can produce some UVC, but it’s generally not enough to matter. Low-end UV lights like the one Bob_M linked use ultraviolet diodes. Better UV lights like this use mercury vapor discharge tubes. Corona discharge can efficiently generate ozone, though, so it has its own use in sanitary applications. - @ikanreed: Germicidal effects from UVC (or other ionizing radiation) depend on the applied dose. In addition to time, this also depends on the power of the light and how it is applied to the surface to be sanitized. 2 millijoules per square centimeter (which is what I used to calculate the 1.5 square inches per second figure) is a standard figure for what it takes, though the mechanism is more a logarithmic reduction as dosage gets higher than outright complete sterilization as in an autoclave. Even a dinky wand like the one Bob_M linked applies UV light far more effectively than sunlight when used correctly (virtually no UVC makes it through the atmosphere, and UVB is much less effective at killing microbes), and the bulb I linked has about a thousand times the UVC output of the wand. Unshielded close-proximity exposure to a proper HVAC lamp will kill nearly all microbes (or scorch corneas) in less than a second. 192․168․1․42 (talk) 00:12, 12 April 2020 (UTC) - All very interesting. When you look on the net (or at least when I look on the net) I find: sites selling professional equipment with complex specifications which isn't of much interest to the home user; sites selling home user stuff which claim they are fantastic; and non-experts expressing mild skepticism. - Which all suggests to me that this would be a fine subject for an RW article which might turn out to be quite popular as I'm sure that many people are asking themselves the same question. - I would be happy to create the stub - but I lack the knowledge required to do more it.Bob"Life is short and (insert adjective)" 06:48, 12 April 2020 (UTC) I was referring to the use of wands in this particular context - rather than the 'technical and practical' uses. If a UV wand article is created - mention should be made of UV banknote checkers (probably cheaper than the woo-wands), being 'actually existing technology' and the Victorian 'violet-ray and electric shock machines' (see example on [1]). Anna Livia (talk) 16:46, 12 April 2020 (UTC) COVID-19 denialists are officially the new HIV/AIDS denialists[edit] With the way they are insisting that the number of deaths is overinflated, I think it's safe to say that COVID-19 denialists are just as bad as HIV/AIDS denialists. To bear this out, I've seen too many posts on social media where they claim that people who die from other "underlying causes" (e.g. heart attacks) who were infected with the novel coronavirus are ordered to be counted as deaths by coronavirus. G Man (talk) 19:13, 11 April 2020 (UTC) - Those people deserve to punch themselves in the face repeatedly until the pandemic blows over. БaбyЛuigiOнФire🚓(T|C) 08:01, 12 April 2020 (UTC) - I do not feel it is as yet a fair comparison. there is so much about covid we just wont know for sure about till over, or at east being going on for a hell of a lot longer than mere weeks. death counts, death rates, infection rates, numbers of infected - these are things that we have to view for the time being as being provisional. there are many variables that that will effect the accuracy, and there is likely much variance between locales. testing for example is necessary for an accurate infection rate and we know there have been huge problems in getting that done. I read somewhere daily death counts are almost certainly higher than reported on the day when they are still rising, and almost certainly lower than reported when it is falling. something to do with when and how deaths are reported and collated. the claim of underlying causes and not covid being the cause of some of the deaths attributed may well have some truth to it. but we cannot know for certain if it is, or how significant an effect it might be having. and of course its likely be different in different regions. - when it comes to denialism of covid, much that could be labelled as such is simply down to a mixture of factors. uncertainty would a be one such factor. uncertainty of not yet knowing about so many things about the virus itself - the science stuff, uncertainty about our differing responses, their effectiveness, how long they will be necessary, and the impact on our lives, our jobs, our finances. and of course the uncertainty of how we would react to infection - wold it be mild? severe? will you die? will some of your loved ones die? there is nothing really to hold on to. nothing concrete to ground us and build our coping strategies. it all just feeds a sense of fear, of dread and anxiety about our health, our finances, our futures and ways of life. misinformation reigns with rampant speculation. claims are made then disproven later, some have a plausible logic but we just are not sure yet. some are simply wishful thinking. they are not lies as such, just we don't have all the facts. we want something certain some thing concrete. that would be progress, something to build on. even better if it means this upheaval can end. that's what most of the denialism has been. its been grasping at straws. hoping against hope this can all go away. even governments can fall prey to this, as in early on with the slow responses or possibly wrong tactics, and then as it drags on with consequences ever more catastrophic. most of us wont have the agency of governments, our wrong actions wont doom millions to bankruptcy or death. we likely have little agency at all. in lockdowns, with nothing to do or to distract us. nothing to do but watch it all play out and brood. - some denialism has been genuinely despicable. where it is not just from making wrong choices early on, or a reluctance or difficulty comprehending the significance of events or the severity or scale and where risks lay. this has been unprecendented and things moved so quickly. no time to see how things go, while a wrong or rash decision would hurt us and set us back. where its been dispicible is when as the tragedy escalated, some continued ignore what was happening. where misinformation was not mistaken assumptions but bare faced lies. where its purpose has been to cover backs and deflect blame rather than inform or correct early errors. to take credit for things not done, to sow dissent, to attack, to blame, to scapegoat, to sideline and discredit, not just perceived enemies but anyone and anything for doing their jobs or giving accurate and useful information. we know who these people are. its hampered any sense of a coordinated message or plan. it has us competing with each for vital supplies instead of pulling together. its always been bad since before the virus, now its costing lives and does not bode well for futures that require consensus, compromise, shared values, and alliances made for a common good. its a world that was already under assault, but now seems will spiral ever faster away to a world of petty squabbles and all against each other. - hiv/aids denialism is a little different. I cant really speak for the early days of the disease. it may well have had similarities. but in the developed world, has not been a disease that has impacted us all equally, in a way that covid has or is doing right now. there have no lockdowns. no sense of all being in this together, or all needing to do our bit. certainly in the developing world, Africa especially might have been or is like that, but in the developing world its only really impacted certain communities and continues to do so. communities that have and continue to face prejudice. and the responses and effects are different in different communities. in gay communities for example its been apocalyptic. things like testing, when we were eventually able to have since become commonplace. done regularly. its what you do. in black communities, I still see periodic drives to try and get people to test. there are issues that some communities didn't or don't have, that need to be addressed in other ways. im not equipped to go into much of that, I know though that in America hiv infection is a hell of a lot higher in black communities than any where else, in white gay communities its like we can start to believe it could almost be irradicated entirely. in gay black communities though its a far more bleak outlook. no one in the developed need die from aids any more. medication and prep have been nothing short of a miracle. but if you are gay and black in the us, infection rates and death speak to a disgusting injustice. hiv also is a feature amongst drug users, though i'm not sure how much we can speak of communities in that group. the commonality of prejudice amongst these groups, of varying degrees, means that for many it could be ignored. it was ignored. and even today for its an abstract threat where the actual likelihood of encountering is low enough its not a concern for most people. so denialism that arose has character of victim blaming. that the people who get it are dirty, living immoral lives, they brought it on themselves. some denialism sees it not as a disease at all, but the result of poor lifestyle choices specific acts like anal sex, associated with some at risk groups, as inherently deadly in itself that has been labelled hiv. other denialism is centred around effective treatment. from early on when what are proven treatments now, were in there infancy had problems with dosage that lead to peoples death. your friends were dying, your lovers were dying, you were dying, and too many people had died. drs just killed you quicker. you'd consider anything and you cant trust anymore the drs who you would normally listen to so don't heed their warnings on dangerous or ineffective treatment. and the usual quack cures, divine retribution and dastardly cia programs are all present. - the real difference with hiv/aids denialism and the covid variety is time. hiv has been going for years. much progress has been made, in all areas social and scientific. medication has gone along way to remove stigmas surrounding it, infection isn't, or least shouldn't be, a death sentence. its not life altering, barely an inconvenience. its been going long enough that the uncertainty surrounding everything covid, that we more easily spot the charlatans and the bigots and we don't quite have the existential dread hanging over us that might make us make rash judgements or bury our heads in the sand. and we can more clearly see what kind of denials stuck, where they came from and motivations. its just easier all round to deal with and it feels like we got it licked. covid is all uncertainty and theres only a few short weeks to base assessments on. we wont know for sure about any of it till the fog has lifted, its run its course and we can do the post mortem on our whole experience. until then we all grasping aqt straws and condemning people for actions that may seem indefensible to us but are simply bewildered and frightened people desperately seeking something to hold onto. AMassiveGay (talk) 09:43, 12 April 2020 (UTC) - jesus Christ, I seem to be incapable of short and to point answers anymore. they start out that way then I just go on and on. I cannot even tell if I am even remotely coherent. sorry.AMassiveGay (talk) 09:49, 12 April 2020 (UTC) - From what I remember of the late 1980s scene (can't remember before that very well), HIV denialism in America was strongly aligned with the conservative / evangelical nutjob sect, and was (like it always is) often used to sell "alternative" treatments. Kind of like COVID-19 denialism. So in that sense, nothing has changed. However, I remember the pace of progress on HIV being a *lot* slower than what's happening with COVID-19 today (reading up on some HIV history seems to confirm this, it took a decade for a lot of the information to come together, where we know a heck of a lot about this virus in roughly 4 months). I also remember a *lot* more hem-hawing and holier-than-thou bullshittery since in America as you say this first affected marginalized groups (first homosexuals, than IV drug users as you mentioned) -- what that meant is that, even for those who knew it was a virus, way too many (right up to the top of the Reagan administration in the early 1980s, according to what I'm reading) dismissed it as a "gay plague" and wanted nothing to do with it other than preach how you will burn in hell or something. Even in the early 1990s -- ten years after the disease became first known to the American public, roughly -- I remember hysterics on both how the disease could be transmitted ("you can get AIDS from toilet seats!" type bullshittery), and "gay plague" type demonization. There are a few American idiots linking COVID-19 to "homosexual 'sexual' events" but such is even stupider than usual since it's obvious to anyone with an IQ over a doorknob that anyone can get this disease. Rather, a lot of the denialism is more of the "oh this is just the flu / lockdowns are for pansies" type of bullshittery, which, if it weren't for the collateral damage, I'd just say "let 'em crash!" Airplane style and be done with it. 72.184.174.199 (talk) 16:56, 12 April 2020 (UTC) - there also far right groups trying link muslims to covid. this isn't denialism its outright prejudice. AMassiveGay (talk) 13:21, 14 April 2020 (UTC) - Those very frequently go hand in hand. ikanreed 🐐Bleat at me 13:28, 14 April 2020 (UTC) Prostitution should be legal[edit] If two (or more) consenting adults are allowed to have a one night stand it is legal as long as money is not exchanged.......yet if money is exchanged sleeping with someone is wrong? If adults give consent to the exchange of money and sex, so freaking what? Why send people to jail over screwing around as long as it is consensual? --Rationalzombie94 (talk) 22:53, 11 April 2020 (UTC) I agree but it's called 'sex work', cis scum. 2600:1004:B093:3F34:695C:FE20:86E4:FA96 (talk) 23:23, 11 April 2020 (UTC) - @2600:1004:B093:3F34:695C:FE20:86E4:FA96 Go fuck yourself IP address. VerminWiki (talk) 23:27, 11 April 2020 (UTC) - Why the random flame toward cis people? --It's-a me, LeftyGreenMario!(Mod) 01:03, 12 April 2020 (UTC) - It's a meme referencing the loonier parts of social media politics. 2600:1004:B093:3F34:695C:FE20:86E4:FA96's statement there is tongue in cheek. As for the topic, a further consideration is that it becomes legal again if it's filmed. 192․168․1․42 (talk) 01:35, 12 April 2020 (UTC) - Prostitution should be legalized so it grants protections to sex workers. БaбyЛuigiOнФire🚓(T|C) 03:06, 12 April 2020 (UTC) May I ask how prostitution would be legalized in practical effect?-Flandres (talk) 03:38, 12 April 2020 (UTC) - Clarify what you mean by that. — Oxyaena Harass 07:57, 12 April 2020 (UTC) - I was referring to getting support amidst congresspeople and the white house, and eventually the court system. Of course this would mater a lot less if this became a "let the states decide thing" but while that avoids an ugly fight on the federal level you might lose it does mean you are allowing some states to not do it, which on a issue of extending rights to sex workers might be seen as a unsatisfactory middle ground half measure that allows abuse to continue.-Flandres (talk) 08:14, 12 April 2020 (UTC) - I don't know, how did Nevada manage to do it, out of all states in the union? БaбyЛuigiOнФire🚓(T|C) 08:25, 12 April 2020 (UTC) - Technically sex work is only legal in individual counties of Nevada rather than the entire state. In those cases it has been legal since the late 19th and early 20th centuries mostly because it boosts the local economy, so it suggests any movement to legalize sex work would have to focus on taxing it at least as much as the human rights of the people involved.-Flandres (talk) 08:34, 12 April 2020 (UTC) - Only the counties with populations under 700,000 can allow it, which makes it illegal in Clark County (where Las Vegas is), which is where virtually the whole state lives. Of course as with the rest of the country it's in reality de facto legal for well-off people who just go to "escorts"; law enforcement only bothers going after "streetwalking" and the low-budget brothel operations ("massage parlors" and the like). --47.146.63.87 (talk) 17:55, 12 April 2020 (UTC) - It's legal in several European and South American countries (as well as a few other scattered countries across the world), the laws vary considerably. Even in America (which is more prohibitionist) it's possible for a wink-wink-nudge-nudge brothel to operate even in more conservative times of old (like the notorious "Chicken Ranch" in Texas if you are "connected with the right people". 72.184.174.199 (talk) 21:10, 12 April 2020 (UTC) - It's not the sex that's the problem in prostitution. It's the power dynamic. "Consent" gets real fuzzy when "or starve to death" is part of the question. Albeit a rational person will recognize other fucked up sexual dynamics exist and are not illegal. ikanreed 🐐Bleat at me 03:49, 12 April 2020 (UTC) - Like, don't go around busting everyone who does engage in it for starters. Legalization would also grant legal protections for sex workers so people can't just fuck them up and abuse them without legal consequences. БaбyЛuigiOнФire🚓(T|C) 03:49, 12 April 2020 (UTC) - The absolute priority and focus should be 1) vigorously fighting human trafficking and forced sex work 2) protection of voluntary sex workers (and non-criminalization obviously). Some crime agencies take these a little more seriously than others and are better funded and given man hours and expertise. Few are. The root of all problems lies in inadequate policing of forced sex work. ShabiDOO 06:40, 12 April 2020 (UTC) - While I generally agree with the premise, a short drive through the Bronx at night (if you're leaving a Yankee game you do not want to be stuck on some of the highway traffic getting out, it's either that or the delightful neighborhoods around it) sure does make it hard to justify. Though it really should be legal, there's no way as a man I could bring myself to get involved in that "trade" or encourage anyone to do the same; I'll stay single my whole life before I have any part of that world, thank you. Though it'll never happen, I'd be perfectly happy in a world without this; there's no one I have a better relationship with because he/she is regularly fucking someone, but there sure are a whole lot of people I have a worse relationship because of the same (though I'm straight I generally soured on the idea young, sometimes wish I was religious because I'd probably make a good Jesuit). But since it won't, like pornography it seems to be a rather effective way for people to ease tension that they'd otherwise express via violence, so if it's a choice between them I'll take the former. The Blade of the Northern Lights (話して下さい) 07:34, 12 April 2020 (UTC) - Sometimes I think I'm lucky to simply not care if I ever get laid, as an asexual myself. *shrug* БaбyЛuigiOнФire🚓(T|C) 07:36, 12 April 2020 (UTC) - I can admit to taking after one of the not-so-subtle messages of The Heart is a Lonely Hunter, that almost no one who plays the relationship game (be it romance or even friendship) learns a goddamn thing. No matter what scenario, I don't think so highly of myself as to imagine I'd emerge as the one out of 5. The Blade of the Northern Lights (話して下さい) 07:41, 12 April 2020 (UTC) - I hve visions of the job centre with prostitution all above board. 'so madam, what have you been doing this week in your efforts to find work? you might need to start broadening your work search in these adverse conditions or it might affect your benefits. have you considered going on the game? you can work from home and would be considered self employed, so thered be some tax breaks. we are running a short on course on hand jobs and fellatio, i'll sign you up. will look good on the cv. what are your views on getting pissed on? its just a thought. you have to specialise these days. AMassiveGay (talk) 10:08, 12 April 2020 (UTC) we have a page on prostitution that covers a lot the ground covered here. that page though I would consider dogshit and I would not recommend it. AMassiveGay (talk) 09:55, 12 April 2020 (UTC) As an adjoiner to the original post, prostitution is not merely consenting adults + money. there is the question how the additional of money changes the power dynamics of such an encounter, and issues surrounding poverty, substance abuse and desperation, issues of physical and mental health, and issues concerning all this effects issue of coercion and consent for what is often a profoundly invasive act. such a profession can takes psychological toll, especially if circumstance, financial or otherwise, has pushed someone into it. it is not the same as consenting adults nor as is sometimes argued 'just another job' (our own page once compared it flipping burgers - a frankly disgusting notion). there are also moral arguments and what it means for a society as whole as a whole that condones payment for possession of anothers body. what does it say it about the people selling and the people buying? simply saying prostitution should be legal is not enough without knowing what that means. these issues absolutely have to be addressed and in a way that addresses what it means for those working as prostitutes, not for the supposed benefits of those seeking their services. I believe it should be legal in some form, merely to better provide sex workers with rights and legal protections and allow for better access to physical and mental healthcare and benefits, and the grey areas of escorting must be clarified in any legalisation. legislation is not a magic bullet. it will not fix all the issues surrounding prostitution. any rationale for making it legal that talks of empowerment or ignores or downplays what is an inherently risky profession that can be and often is to an overwhelming degree something forced upon people by others and/or circumstance is stillborn and fundamentally sick. AMassiveGay (talk) 14:40, 14 April 2020 (UTC) - Circumstances are important and nuance is required. Am also pro-legalisation on account of prostitutes in Australia/Europe having better protection and healthcare. 86.14.252.201 (talk) 18:23, 15 April 2020 (UTC) - The demand for street prostitution is not the demand for sex. The demand for street prostitution is the demand for CHEAP, FREQUENT sex. A john that pays $20 for a blowjob from a drug addict 4 times a week isn't going to suddenly pay out $200 each time or reduce his frequency simply because the sex worker is now certified by the State of Illinois that she's organic fair-trade gluten free. Nor are the women (and men) in prostitution going to charge $20 if they are able to pass inspection. The only way to serve this clientele is with drug addictions, human trafficking, and worse. Legalization, regulation, inspections, etc, aren't going to be able to do squat to help the people at the bottom. CoryUsar (talk) 20:57, 15 April 2020 (UTC) Forgive the youtube link[edit] Yes... I know it's unforgiveable but considering the huge amount of time people have available it's a video worth watching. It's about 95% correct, leaves a lot out (how much can you really cover in this short time) and has a couple flaws, most especially in rule 0 at the end but I'd say, for a brief representation of how power structures work (how the world always has and does work) it does a pretty great job at explaining things. Watch this video. Think about it for a day before kneejerk reactions, watch it again a few days later if you have the time and give your thoughts. Again, forgive this rare unforgiveable posting of a youtube video. ShabiDOO 10:01, 12 April 2020 (UTC) - Eh. It's fine. I don't like the authoritative factual statement style for explaining a singular theory of power, though. Social dynamics as fact makes my skin crawl. ikanreed 🐐Bleat at me 13:07, 13 April 2020 (UTC) - I think that those "rulers" would be better off playing with a Rubik's Cube in their office than fighting wars and supressing freedom. I know a thing or two about using Rubik's Cubes. After all...— Jeh2ow Damn son! 16:40, 13 April 2020 (UTC) - Yeah indeed he does talk with a lot of authority for a video that doesn't come with any references or sources. What I like about the video is how quickly and concisely it summarizes what a lot of political science, political philosophy etc has said for a long time and what a lot of people are very unaware of. Where power/political decision making actually comes from and that the "court" never disappeared even when democracy came along. I wish he had used a different metaphor than "key" cause i think its more confusing than having any explanatory power. What you'd replace that with...I don't know. Maybe it would have been better if it was narrated by Machiavelli. ShabiDOO 08:47, 16 April 2020 (UTC) I have to wonder: how many people are fed up with the Stay In Place order?[edit] There is actually a group on Facebook called "Michiganders Against Excessive Quarantine". They are fed up with business closures, park closures and highly restricted shopping. Honestly I cannot blame them. Even with the order the virus is still spreading like wildfire. Seems that the Stay In Place order is irrelevant to the spread unless I am missing something. --Rationalzombie94 (talk) 16:48, 13 April 2020 (UTC) - The group is a collection of self-centred assholes who care more about their boredom and the illusion of freedom over the lives of others. I would call it collective scum-bucket thinking. Read up on "flattening the curve". If you go out and meet people or do unnecessary activity outside enough...you will be responsible for people dying. Third degree murder? Is your personal inconvenience, paranoia of government overreach and the illusion of freedom worth murdering people? Zheesh we are entering week 6 of a strict lock down in Spain, the daily body count is high and yet it still clearly helps, its widely supported and few pity the assholes who get 600€ fines for minor infractions. Its better than going to jail for manslaughter (as they are implementing in Russia). If we had done this earlier it wouldn't have been as horrid as it is now. Consider yourselves lucky enough that you have time to do it. Be grateful you have a Govenor who gives shit. ShabiDOO 17:18, 13 April 2020 (UTC) - Yeah, this thing would be much, much worse if we weren't social distancing. The people complaining are selfish. Chef Moosolini’s Ristorante ItalianoMake a Reservation 17:44, 13 April 2020 (UTC) - no one is happy with stay at home orders or lockdowns or whatever you want to call them. it doesn't trump necessity though. - I don't think it is fair to characterise everyone who are not on board with lockdowns as necessarily selfish arseholes. granted there are a lot of such arseholes but the financial repercussions of this thing are going to be huge for most of us, if not feeling them already. that said more should probably be done (if it isn't already) to help with the financial burden of people not being able to work. im guessing some places are better than others. - it is also the case that people would likely be more accepting of them if, like in the us, the official response from your government wasnt still even now absolute dogshit, and at local level responses have varied wildly. if trump was leading my country's response to this thing, I'd either shitting my pants while boarding up the front door less any germs break in, or just be running round the streets shouting 'its flu you pussies'. - lockdowns are an extreme response for an extreme situation. the us government needs to, and needed to far earlier to explain the severity and the need for what is necessarily. all the shit that boris has gotten, most of it deserved, I just look at the news to see how it goes across the pond. its quite calmly to see how much worse it could have been. - anyhow, ive been on the dole for bit. its not really been whole lot different to usual for me. AMassiveGay (talk) 18:17, 13 April 2020 (UTC) - The virus has been a pretty big deal for my family, so not being in a job any more is a pretty big deal for the aftereffects of the stay in home order. That being said, the Facebook group isn't concerned about the loss of the availability of jobs, the loss of demand for the job to function, and the amount of people who are filed for unemployment but rather inconveniences, and to me, it just sounds like they reek of privilege when they complain about highly restrictive shopping. Yes, I do get it that humans are social animals and can get cabin fever as a result of being cooped up indoors all the time, but the way of coping with the loss of social activity is compounded by the internet nowadays. I would have been sympathetic to the group's cause if the stuff they were complaining about were actual, serious consequences as a result of the best solution against the virus rather than "derp I can't buy whatever garbage I like anymore". БaбyЛuigiOнФire🚓(T|C) 18:44, 13 April 2020 (UTC) - If you Google "Michigan covid 19 cases" it brings up a handy graph. Seems like there's a downward trend in new cases that's repeated in a few other states like New York. It might be a "spike" in the data of course but one can cautiously hope that this is a sign that the lockdown is working. 72.184.174.199 (talk) 18:47, 13 April 2020 (UTC) - I looked up my state, California. We're still getting a lot of new cases but I think we peaked, hopefully. БaбyЛuigiOнФire🚓(T|C) 18:53, 13 April 2020 (UTC) - Kansas cases are still going up pretty quickly. Chef Moosolini’s Ristorante ItalianoMake a Reservation 19:29, 13 April 2020 (UTC) - Also, I can't emphasize this enough, but thank goodness Kansas elected a Democratic governor. I mean, holy shit. Chef Moosolini’s Ristorante ItalianoMake a Reservation 19:31, 13 April 2020 (UTC) - From my understanding the main reason people are angry is how highly inconsistent the Stay At Home order is. You are able to buy lottery tickets yet you cannot buy gardening supplies. You can buy recreational weed yet you cannot buy entertainment such as video games nor DVD's. Pretty long list of inconsistencies that is frustrating. Because of it there is a protest planned in Lansing, Michigan soon (Lansing is the State capital). Have a feeling that this protest will not end well. --Rationalzombie94 (talk) 19:38, 13 April 2020 (UTC) - @ being unable to buy video games and DVDs: Thank god for piracy. Yeah, but it's not something I'd still protest over, even though it IS stupid that you're able to buy a fucking gamble thing while you can't garden. БaбyЛuigiOнФire🚓(T|C) 19:41, 13 April 2020 (UTC) - Yeah, I hate the lottery in general. Chef Moosolini’s Ristorante ItalianoMake a Reservation 19:44, 13 April 2020 (UTC) - At least the protests have not reached the scale of what is going on in Brazil. The quarantine along with the horrible mismanagement of the situation by a President worse than Trump has drew large protest crowds- Some sort of compromise must be made to prevent the spread of infection while lessening the economic damage. Once the situation blows over there will probably be a spike in the number of homeless people and or people needing government assistance. --Rationalzombie94 (talk) 20:33, 13 April 2020 (UTC) - Well, that opportunity was lost when Dearest Leader completely botched our response to this. Side note to the fact everything we're doing seems totally ineffective, that because you don't see the results quickly. There's a serious lag time on restriction being put in place verse restriction actually appearing to work biased off testing, due to how pisspoor the testing is in the USA. It's going to look totally ineffective for at least a week or two, longer if the measures were limited or poorly enforced. Further the fairly targeted nature of our testing means we'll see a decent number of positive cases against negatives, since anyone who is tested is generally tested for a reason, and not randomly. Additionally, someone ignoring the restrictions could completely fuck the thing up. To see that in practice, look at South Korea's Patient 31 (This is what worries me with states like Arkansas doing nothing and Florida half-assing it.) It sucks, I won't fight you there, but it's the reality of the situation now. There are no good options for us at this point, aside from just having to wait.--NavigatorBR (Talk) - 02:07, 14 April 2020 (UTC) - The governor of my state is really dropping the ball with inconsistencies and excessive overreach. Can you honestly blame people for being pissed? --Rationalzombie94 (talk) 16:07, 14 April 2020 (UTC) - No...they are being pissy "freedom" babies. People who don't have the slightest clue what actual inconvenience means. I can understand their frustration (the world is) and wishing it would be over. But I cannot for a second commend their selfish bitch-fest. Grow up, be an adult. Stay home until its safe. Don't visit people. Do only the most necessary shopping in as few trips as possible. Exercise at home. Something a small number (though notable number) of Americans don't seem to understand is that we live in a society and that requires making sacrifices (sometimes notable ones including big chunks of your paycheck and dealing with inconveniences) to keep others out of misery and society and its streets safe. It is pretty unfair for the majority of Americans who would be happy to contribute to things like medicare for all and say...avoid thousands of pointless coronavirus deaths. Get over yourself and respect the rules like everyone else does. ShabiDOO 16:17, 14 April 2020 (UTC) - Think about it this way. This thing will be over a lot faster if you just stay the fuck home like the authorities are begging you to do. Chef Moosolini’s Ristorante ItalianoMake a Reservation 16:39, 14 April 2020 (UTC) - are michigans rules really that inconsistent and is it really excessive over reach? your state has the 3rd highest cases in the us. that would suggest to me its all pretty necessary. I haven't been able see much that is inconsistent in michigans approach personally either. the us as a whole's approach has been inconsistent, but that's largely down to the federal government and whose running the show there. ive no doubt any anger over the lockdown stems from or is fed by that. inconsistent would be putting far to positive a spin on it. i'm trying very hard to not bang on about it all the time, but it really difficult not at the moment - everything happening in the us right now, is a direct result of trump - inconsistency, lies, contradictions, misinformation all majorly contributed by trump, exasperated by trump, with attacks on media so constant that by now, there cant be any source for info available to the us public that hasn't been labelled as hideous bias. anger at the states government for doing what is necessary is misplaced. it should be directed at the man whose ego and caprice is killing thousands of americans. every appearance he makes is more horrifying than the last, its incomprehensible to me that he has any kind of support left at all. hes killing people AMassiveGay (talk) 17:21, 14 April 2020 (UTC) - His base is so awful that anything positive that does occur during the pandemic, his base will believe it. Or that they will say that it's not as bad as it could have been or it would have been worse under Clinton. Or that none of the things he did was really his fault or that it's not to be blamed for the pandemic, just blame illegal immigration and China instead. I'll repeat what everyone else has been repeating for eternity, this man has a cult of personality, and if he were to propose to nuke California, his base would be fine with it. БaбyЛuigiOнФire🚓(T|C) 19:10, 14 April 2020 (UTC) - As for the protest whether you support it or not- it will be an interesting situation. There are also protests due to lack of medical supplies and detention of immigrants while other prisoners are being let free. You already know my position but this will be an interesting turn of events. I am sure many of you would agree that this whole thing will be of interesting. This will certainly have an effect on the Presidential election. --Rationalzombie94 (talk) 20:19, 14 April 2020 (UTC) - Zombie could you please give me the formula? The acceptable thousands of people dying of covid (not just now but in the future) for each day the economy isn't inconvenienced? Are 1,000 grandmothers, grandfathers, babies, young people with lung problems and kidney problems, a couple people as young and healthy as you ... worth it for kick-starting the economy. 1,000 per day? Is that a fair compromise? ShabiDOO 08:54, 16 April 2020 (UTC) bobvids on The Quartering[edit] Might be useful for whomever is creating the page on The Quartering? Gunther8787 (talk) 17:38, 13 April 2020 (UTC) - May take a look but I think in order to get an accurate page, someone has to sit down and watch his videos. I haven't seen a second of this guy yet and the thumbnails and video titles tell me I'll be having a splendid time. 😒 БaбyЛuigiOнФire🚓(T|C) 18:49, 13 April 2020 (UTC) - I have watched some of his videos. He is into games, and YouTube drama. Right now some stuff about a person named Lucy Lu for using copyright material. Complains about anti-orangeman content, is " anti-woke", likes Tim Pool. Doesn't do a lot I would call significant. Mostly non political content (I think). Why bother?Ariel31459 (talk) 02:53, 14 April 2020 (UTC) - I mean you also have slanderous pages on both Styxhexenhammer666 and Razorfist. So I don't see the issue.108.208.14.123 (talk) 15:24, 18 April 2020 (UTC) 101 pseudo-evidences[edit] I wan wondering whether the classic response to '101 evidences for a young earth and Universe' be edited? Teerthaloke101 (talk) 09:35, 14 April 2020 (UTC) - I would say no for two reasons. First of all, the things being refuted are literally the alleged 101 evidences for a young age of the Earth and the universe. Secondly, some YEC looking for the proposed "evidences" might accidentally end up here and learn something.Bob"Life is short and (insert adjective)" 12:11, 14 April 2020 (UTC) - Correct, Bob. If you're talking about changing what the actual 101 evidences are, Teerthaloke101, we can't do that because this is a refutation of an actual document. Changing what we're quoting puts words in their mouths and therefore also degrades the rationale for our fair-use quoting of their possibly-copyrighted document. Bongolian (talk) 18:56, 14 April 2020 (UTC) No, I was talking about modifying our response to some particular 'evidence'. Specifically. the evidence about dinosaur soft tissue. There is continued doubt about whether the soft tissues found with dinosaur fossils belonged to the fossilized dinosaurs or had a recent separate origin. Teerthaloke101 (talk) 08:39, 15 April 2020 (UTC) - Ah. Sure, if you are confident you have found something wrong then go ahead and fix it.Bob"Life is short and (insert adjective)" 16:31, 15 April 2020 (UTC) ContraPoints video from January[edit] This is apparently a video she made where she's explains the whole twitter shitshow that happened a few months ago. Yes, it's 1h40m. The first 20 min. is about this 18 year old that was being accused of being a rapist. Gunther8787 (talk) 12:09, 14 April 2020 (UTC) - a 1h40m video on a twitter shit show is 1h40m too long. summarise if yo want commentAMassiveGay (talk) 13:16, 14 April 2020 (UTC) - "Breadtube" becoming increasingly centered on drama was an easy prediction to make. And I'll predict that will continue. I still like some of the content. ikanreed 🐐Bleat at me 13:30, 14 April 2020 (UTC) - Footnotes version of the Contra video. Contra passive-aggressively responds to criticism of her making enby-phobic comments and generally having bad takes as well associating with and giving the impression of legitimacy to Buck Angel (who is a crappy person). Most of it is waffle and largely irrelevant crap. I can link transcripts to a at least decent series of response videos if anyone wants them. ☭Comrade GC☭Ministry of Praise 13:42, 14 April 2020 (UTC) - Context is important. Who is this "she" who has made a video?Bob"Life is short and (insert adjective)" 14:38, 14 April 2020 (UTC) - ContraPoints, AKA Natalie Wynn. A progressive trans youtuber of moderate fame. ikanreed 🐐Bleat at me 15:20, 14 April 2020 (UTC) - Videos? I thought that only one person made a video response (based on her page)? Also, what's "enby-phobic"? Gunther8787 (talk) 10:36, 15 April 2020 (UTC) - @Gunther8787 Bigotry and/or discrimination towards non-binary people based around their gender expression. ☭Comrade GC☭Ministry of Praise 13:02, 15 April 2020 (UTC) - Yes grammarcommie please link videos with well thought out responses that deal with the argument. I'd like to see them ShabiDOO 13:41, 16 April 2020 (UTC) - @Shabidoo Part 1. (Transript.) Part 2. (Transcript.) Part 3. (Transcript.) Apologies for the delay. ☭Comrade GC☭Ministry of Praise 15:32, 18 April 2020 (UTC) - @GrammarCommie Peter's first video has more downvotes (1001) than upvotes (881) for some reason, unless Contra's fanboys jumped on that video... Plus, the Tati part makes you wonder why she'd (Contra) clip that out of her video... - Oh, and is Essence Of Thought (30,4K subs) on mission? Based on his videos, I think Peter needs a page. Gunther8787 (talk) 14:34, 20 April 2020 (UTC) Newest addition to RationalWiki (That I kinda need a little help with)[edit] Louisiana Baptist University and Seminary!! Took me an hour to create. Keep in mind that this is a mere start. --Rationalzombie94 (talk) 23:16, 14 April 2020 (UTC) - Can you format the sources? It's not hard, just time consuming. --It's-a me, LeftyGreenMario!(Mod) 23:20, 14 April 2020 (UTC) - You should write this to the Draft namespace first and then build up on it there. I can't help at the moment because I'm busy writing up a draft myself and it's time-consuming. БaбyЛuigiOнФire🚓(T|C) 23:21, 14 April 2020 (UTC) - How do I format links? Never figured it out. Please and thank you. --Rationalzombie94 (talk) 23:28, 14 April 2020 (UTC) - I've formatted one reference for you. Typically, the coding goes like this for web pages <ref>Last Name, First Name. (Date of Publication). [htmldotcom Title of the webpage]. ''Publication Name''. Retrieval date.</ref> БaбyЛuigiOнФire🚓(T|C) 23:46, 14 April 2020 (UTC) - Do I add the web link to [htmldotcom Title of the webpage]. --Rationalzombie94 (talk) 23:47, 14 April 2020 (UTC) - (edit conflict) Well, you have to know the citation format for Wikis. There's a documentation on cite_web, but just follow what the examples are. Last name, first name of author; publish date in parentheses, title of article, publisher in italics, date of source accessed. That's the general pattern I go with, but for other sources, you should check documentation. --It's-a me, LeftyGreenMario!(Mod) 23:50, 14 April 2020 (UTC) - Check the coding for this page and how it's linked. You can get a good understanding of how titling your external links work. БaбyЛuigiOнФire🚓(T|C) 23:54, 14 April 2020 (UTC) - Oh, by the way, if you want to make your article use multiple refs from the same source, use <ref name="WhateverYouLike">Last Name, First Name. (Date of Publication). [htmldotcom Title of the webpage]. ''Publication Name''. Retrieval date.</ref> and then use <ref name="WhateverYouLike"/> for any subsequent uses of the same ref. БaбyЛuigiOнФire🚓(T|C) 23:54, 14 April 2020 (UTC) Your questions about wiki markup and the structure of a RationalWiki article are summarized nicely in the style manual. You should read it, the community put an awful lot of effort into it. Cosmikdebris (talk) 01:20, 15 April 2020 (UTC) - You can also use reference #9 as a template for the others. Bongolian (talk) 04:25, 15 April 2020 (UTC) Illusions[edit] So I tried this out and very briefly I saw my vision distorted when I looked away, what is this an example of? Also are there any other things that have the same effect. I know there was one with a flag that you stare at and then look at a blank paper and you'll see it in different colors.Machina (talk) 00:57, 15 April 2020 (UTC) - The vision network in your brain does not directly see images like a computer camera does with pixels. It would be far too much data for your brain to handle. So your eye and optical nerves simplify the image somewhat, then the optical center's of your brain divide it into "metadata" that is a vastly over simplified version of the image that gets the job done. The thing is, this simplification procedure doesnt work on all images, especially images with many little highly contrasting lines or squares. Your brain basically glitches out and sees them as moving or imprints colors over top of things. MirrorIrorriM (talk) 12:56, 15 April 2020 (UTC) - Might be tangentally connected, turn your volume down, there's little done to the audio here it peaks and has echo, But the color brown is not a point on the light spectrum - a video on how Brown is contextual. I'm a big fan of brown also but now I know where it comes from. It's kind of a tiresome video because it's made by an engineer, and the audio is awful and the jokes are bad, but the ideas are systematic and really good, that brown is contextual in vision and interpretation and not a point on the light spectrum, making brown a shade of orange with a broad context. In some sense, an illusion. Just really tough to listen to unless it's low volume, the audio problems are earnest, bad gear in the wrong room, can't be fixed in post, can't believe this guy also does videos on the engineering of audio equipment. Gol Sarnitt (talk) 01:54, 16 April 2020 (UTC) - if only there was an online encyclopedia where you could look up lots of stuff --47.146.63.87 (talk) 03:05, 16 April 2020 (UTC) - Very good, BoN, what was your favorite? Gol Sarnitt (talk) 02:31, 17 April 2020 (UTC) - I like the lilac chaser. Also I find it pretty remarkable to see how easy it is to demonstrate the blind spot. --47.146.63.87 (talk) 04:50, 18 April 2020 (UTC) - I like that one too, thanks for sharing. I didn't notice I was seeing it until I knew what to look for, and then it was like "yeah, that's the motherfucker right there!" very funny to me. I can't pick one, but I do like exploring the limits of our tools every now and again. Enough adds up to a pretty loose grasp on reality, and I think that's important. I've experienced things that can only be explained as "Ghosts did it or I'm broken, because I cannot recreate what I just saw." It's why I'd make the most boring ghostbuster you'd ever see. In a haunted house "I saw something out of the corner of my eye! But that happened yesterday too, soooo, probably nothing." Gol Sarnitt (talk) 01:04, 21 April 2020 (UTC) thank you...[edit] Thank you so much you rats for being a place I can come on the onlines and know I can pretty much trust yall, no matter how much you might bicker amongst yourselves. It is going to be a gnarly road forward for us all and I just want to express some admiration. Give yourselves an elbow bump. Whoever has been on point with the WIGO lately double elbow bump. I appreciate yall. 206.53.88.85 (talk) 03:44, 15 April 2020 (UTC) WHO vs Trump[edit] So I've heard anti-china corruption allegations for a while now, something along the lines of "China is buying out the top WHO officials rather than contributing fully to the budget and that's why they won't mention Taiwan etc.". I was always under the impression the WHO had been able to maintain a degree of political independence. Is Trump cutting the budget a stopped clock (because the protest over corruption is justified) or just opportunism and dangling the budget like a carrot to influence the organisation? Were the awkward interviews just Aylward being inoffensive to an easily antagonized China? Anyone have any insight here? McUrist (talk) 09:07, 15 April 2020 (UTC) - As far as I can tell he just did it because they kept telling him that his "cures" were stupid. ☭Comrade GC☭Ministry of Praise 13:46, 15 April 2020 (UTC) - I've gotta disagree about the WHO maintaining a degree of political independence. They officially endorsed traditional Chinese medicine because of political pressures just 2 years ago. The US leans on them equally as hard, and so do many other countries. It's dangerous to discount the sum total of public health expertise they've gathered because of things like that, but it's also not right to think of them as insulated from political pressures. ikanreed 🐐Bleat at me 13:52, 15 April 2020 (UTC) - Yes, that Traditional Chinese Medicine endorsement is a real black mark against them and does raise some questions about their impartiality.Bob"Life is short and (insert adjective)" 16:26, 15 April 2020 (UTC) - I always saw the endorsement of TCM as a problem not unique or limited to this year to WHO as other organizations we've held as science-based became susceptible to it. The WHO has been promoting supplements for a while, as far as in 2015. Nature caught promoting TCM back in 2011. Science also caught promoting TCM. Establishment of alt med agencies in government (by Democrats), such as the NCCIH. Universities having alt-med promotions anf treatments including hospitals. NatGeo touting benefits of TCM (ok debatable if NatGeo is science-based). Naturopaths fighting for licensure in states. What the WHO is doing, I see less of political pressure as it is marketing and people blind-sided by exotic mysticism and how dubious TCM is. --It's-a me, LeftyGreenMario!(Mod) 19:17, 15 April 2020 (UTC) - It's notable that different parts of WHO have held seemingly contradictory viewpoints at the same time. The International Agency for Research on Cancer (IARC), part of WHO, concluded in 2002 that Aristolochia species were carcinogenic to humans,[2] and later strengthened their opinion in 2012.[3] Yet in 2010, WHO released a publication recommending Ayurvedic medicine, including specifically Aristolochia indica.[4] The latter document was produced based on a WHO conference on phytotherapy that exclusively included homeopaths, Ayurvedic medicine specialist and other alt-medicine proponents. As far as I'm aware, IARC is strictly science-based and includes advice from top scientific experts. Bongolian (talk) 20:28, 15 April 2020 (UTC) - Which is consistent with a scientific organization being under political pressure. Science says X, politics says Y, publish both and assure all the Yists that you understand their concerns with X. It's only when you're under direct political control that you start suppressing X in favor of Y. ikanreed 🐐Bleat at me 21:38, 15 April 2020 (UTC) Although the WHO has many faults including trying to make murderous dictator Robert Mugabe a goodwill ambassador it is, unfortunately, all we really have as a global health coordinator. Trump is probably not even aware of the TCM and Mugabe disasters. So I think he is wrong for him to cut funding. At some point when the local political feuding has died down and the petty nationalistic fights have finished we are going to be looking for someone to coordinate the global response - define standardized testing, diagnosis and treatment protocols and manage the anticiupated immunization program. And the WHO is the only organisation in a position to do this. I just hope they don't recommend treatment with tiger gall bladder and immunization via acupuncture. (That last sentence isn't serious)Bob"Life is short and (insert adjective)" 08:47, 16 April 2020 (UTC) - It's hard to say though that it's entirely nationalistic fighting. It's maybe more focused on legitimizing TCM, with the WHO just being politically correct in order to maintain their position (eg. give some ground to stop some of the CCP's hostility to oversight). I'm consistently surprised at how how internally focused the Chinese government is, maybe because we're used to seeing the US trying to be world police. TCM "research" is such a big deal to the Chinese government and a point of national pride. I think the CCP's interest in the influence side is, again, internal: They don't want someone in a position to make them look bad or talk about Taiwan. All speculation though. McUrist (talk) 10:05, 16 April 2020 (UTC) - Oh that wasn't a defense of trump's barbaric actions, just a warning against thinking that the WHO is truly independent. ikanreed 🐐Bleat at me 12:30, 16 April 2020 (UTC) - I understand that. I just wanted to remind people that, for all its obvious faults, the WHO is what we have.Bob"Life is short and (insert adjective)" 13:23, 16 April 2020 (UTC) - as I understand it TCM is 90% boner pills AMassiveGay (talk) 22:15, 19 April 2020 (UTC) The probability of 0 isnt impossibility[edit] Uuuuuummmmm eeeeee okayyyy, any probability nerds out there: is this true??? I'M BACK Blaze_Zero85.58.203.69 (talk) 09:39, 15 April 2020 (UTC) - I’m probably not the nerd your looking for. But I think it’s attempting to explain how sometimes a mathematical proof involves disproving the assertion, reaching a contradiction. E.g. “Thus and such must be evenly divisible by 3” and then when you throw a bunch of equations around you find a paradox where the end result cannot possibly be divisible by three. Then when you marry calculus with sloped curves and things “approaching” infinity or approaching 0 there’s still that tiny bit of non absolutes involved that you cannot say 3=3, end of story. Antigem (talk) 10:27, 15 April 2020 (UTC) - Oh don´t worry, I´m just a guy with basic studies; pls bear with me xD. It´s just Youtube decided to recommend me this (for some reason) and I got a little curious. - So I got from this is that if you wanna ask the chances of getting (example) tails: it´s more correct to ask the probability of the probabilty or just the probability of it? - Or am I just making a mess of myself? - Blaze_Zero85.58.203.69 (talk) 12:14, 15 April 2020 (UTC) - The terminology used in the video is terrible. It should be that an "Observed Sample Proportion of 0" does not mean a "Population Probability of 0 with 100% confidence". Basically in the coin flip example, you would have to flip an infinite number of coins to be *truly certain* the probability of an event is 0. This is why statisticians always talk about confidence levels in numbers. After 30 flips if none came back heads, we could be 99% confident (roughly) that heads is an impossibility. More flips would eventually lead to a confidence of say 99.9999% that heads is impossible. But every flip only makes that 99% *approach* 100%, but it never reaches it. MirrorIrorriM (talk) 13:24, 15 April 2020 (UTC) - God, he really used bad terminology didn´t he? Thank you, now I think I understand a little better! Blaze_Zero85.58.203.69 (talk) 19:25, 15 April 2020 (UTC) - A probability known to be zero based on a valid deduction in mathematics is in impossibility. A probability measured by way of sampling to be zero isn't. This is really obvious stuff. ikanreed 🐐Bleat at me 13:43, 15 April 2020 (UTC) - In mathematics, a probability of 0 means impossible. That is a certainty beyond dispute, a mathematical identity. In real life however (e.g., biology, sociology, physics, chemistry), probabilities are measured and never really absolutes. Hence, scientific papers will report for example, p<0.01 or p<0.00000001, the later case indicates that probability is remote but not impossible. Joe YouTube must be confusing "0" with "practically 0" or something like that. I can't be bothered to watch YouTube in general, but the probability that I will watch it is not zero. Bongolian (talk) 18:27, 15 April 2020 (UTC) - Even in the world of pure mathematics, this logic breaks down when the possibility space is continuous. For example, if I pick a random real number between 0 and 1, what probability does it have to land on exactly 0.5? Well, there are an infinite number of real numbers available to choose, the probability is 1 out of infinity. If that has any well-defined meaning at all, it's got to be 0. This works for any arbitrary real number you might pick. And the probability of picking an arbitrary real number cannot be greater than zero, because it can trivially be proved that if the probability was greater than zero, the total probability of choosing any real number between 0 and 1 is infinity, which is obviously absurd. But then that means it is possible for a random event to land on some outcome that had probability 0; that this will happen is certain, in fact. This is the contradiction the video aims to explain, and this is why mathematicians say an event with probability 1 or 0 happens "almost surely" or "almost never" respectively. Also, the video is part 2 in a series about how to solve problems where the probability of an event is itself unknown, which is complicated by the fact that (as MirrorIrroriM correctly noticed) you cannot gain a sure answer to the state of that probability just by observing outcomes, and so you must instead produce a list of probabilities that the probability of the original event is any given real number between 0 and 1, thus producing the original paradox. I would recommend the first video in that series (and the Wikipedia page I linked earlier, plus its notes and references) if you want context on that, especially if you don't trust me, the guy who just created an account 20 minutes ago to disagree in a talk thread, to deliver that context unharmed. thecnoNSMB (talk) 08:35, 16 April 2020 (UTC) - Nah, if you structure a question as "what's the probability of drawing 5 aces in a row from a standard deck without replacement mathematically, it's quite easy to prove the answer is absolutely zero. ikanreed 🐐Bleat at me 12:31, 16 April 2020 (UTC) - This reminds me of Zeno's stuff. @Bongolian: mathematically, zero probability of a set of outcomes does not necessarily mean impossible. Zero probability in mathematics means something like: the Lebesgue measure of a subset (desired outcomes) of the unit interval (all possible outcomes) is equal to zero.Ariel31459 (talk) 17:40, 16 April 2020 (UTC) - @ikanreed Yeah, because a standard deck of cards is a discrete possibility space, which is fundamentally different than a continuous or infinite one. I'm not saying "probability 0" never means "impossible," I'm just saying it doesn't automatically mean "impossible." It seems to me like you might be making a category mistake re: probability theory and its subfields. Ariel seems to be on the right track. thecnoNSMB (talk) 18:34, 16 April 2020 (UTC) About Sanders[edit] So, my father told me that he read somewhere (he's not telling me where he read this) that Sanders used to be a communist/radicalist. Now I know that he is subscribed to a click-bait newspaper, so I thought that those douchebags were at it again. Except I can't find anything on their site. So, I decided to google "communism Sanders" to see who wrote this tripe, and bumped against an article from a dutch clickbait newspaper called "De Volkskrant", were, according to them, Sanders defended Cuba. Just what did he actually say about Cuba? Gunther8787 (talk) 12:11, 15 April 2020 (UTC) - @Gunther8787 He said their healthcare and education system were good. Which is true. We even have a similar statement in one of our articles. ☭Comrade GC☭Ministry of Praise 12:30, 15 April 2020 (UTC) - I don't follow US politics that much, but I'm never impressed by any argument based on "Somebody used to be X", or "used to believe in X". I used to be a Christian. So what? It's what someone believes now that's important.Bob"Life is short and (insert adjective)" 16:21, 15 April 2020 (UTC) - Meh, if someone changed their mind, that's fine, but I want to hear from them why. What new information made you reconsider your belief? If someone said "HIV was created by Muslims as a bioweapon", but now says it wasn't, I'd be interested in knowing why! But most of this stuff in politics is not about learning more about a person's beliefs. It's "X said A, which is Bad, and that means X is a Bad Person and you shouldn't support them". --47.146.63.87 (talk) 23:37, 15 April 2020 (UTC) - Did you try a search for "sanders cuba"? It was all over the U.S. media a couple months ago. He made a statement that was strictly true but politically tone-deaf. His obvious point was "Castro's regime improved Cuba in some ways even as it did other things that are reprehensible, and this shouldn't impugn the good things", but it was easy to misconstrue him as "SANDERS LOVES CASTRO AND IS A COMMIE", which is what the whole right-wing media complex promptly did, followed of course by "establishment" media "reporting the controversy": "Some say Sanders idolizes Castro and will start executing landlords in Central Park if elected. Others disagree." --47.146.63.87 (talk) 23:37, 15 April 2020 (UTC) - "...Sanders used to be a communist/radicalist..." Excuse me, what planet is this? nobsFree Roger Stone! 18:22, 16 April 2020 (UTC) - Well he never said it wasn't true! As I'm sure you know everyone to the left of Romney is a crypto-commie, and even he looks a bit pinko! Can't trust those heathen Mormons! ("Radical" is such a useful word because since your opinions are of course correct, anyone who disagrees is "radical". Like those "radical abolitionists" and their "agitation"!) --47.146.63.87 (talk) 20:19, 16 April 2020 (UTC) The Lansing, Michigan Protests aka "Operation: Gridlock" is in full swing[edit] Holy crap the protest is massive- When I first heard of it I honestly did not expect that many people. I was wrong. Guess that many people are pissed. --Rationalzombie94 (talk) 21:23, 15 April 2020 (UTC) - These people are fucking morons and the cops should have arrested them. One of them even said, "I'd rather die from the coronavirus than see a generational company be gone." being so blatantly selfish of the damage he's instigating. БaбyЛuigiOнФire🚓(T|C) 21:32, 15 April 2020 (UTC) - What do you say to people who need to work to get by and have no alternative though? Throw them in jail for being between a rock and a hard place? ikanreed 🐐Bleat at me 21:34, 15 April 2020 (UTC) - It's a real issue, but it's not something worth clogging the streets over, nor encouraging a mass congregation of people; there are people still outside on the sidewalks. Yes, the shut down stinks. Yes, it affects wages. However, this is the best response to curbing a pandemic, and the alternative is to let the virus run rampant and kill people, which I think would have worse long term economic ramifications than the shut does. БaбyЛuigiOнФire🚓(T|C) 21:57, 15 April 2020 (UTC) - One model predicted that without social distancing, 40 million people would be killed worldwide this year. That would be roughly on par with the last big pandemic, the Spanish flu, which seems about right. One study of the economic impact of the Spanish flu suggested that cities that implemented "non-pharmaceutical interventions" such as social distancing, in the medium term, bounced back. Those that didn't suffered a greater spike in mortality, which created far worse economic problems in the long term. Right wing populists aren't known for looking at scientific data or thinking long term in general, but that's the data I see. 72.184.174.199 (talk) 02:22, 16 April 2020 (UTC) - It's amazing how crises underpinned by right wing ideology(no social safety net or other way to survive without constant work) create such effective populist right wing outrage. ikanreed 🐐Bleat at me 21:34, 15 April 2020 (UTC) - I was about to mention this to, how a right wing response to this pandemic created this shitstorm in the first place. But of course, conservatives are too short-sighted to see it. БaбyЛuigiOнФire🚓(T|C) 21:58, 15 April 2020 (UTC) - A social safety net can only do so much. People lose their businesses and even when this thing blows over, their income is gone. --Rationalzombie94 (talk) 22:11, 15 April 2020 (UTC) - not just a right wing response making thinks worse. right wing populism AMassiveGay (talk) 23:11, 15 April 2020 (UTC) - @Rationalzombie94Nominally safety nets should help any business not over-leveraged to hell weather the storm and come back. They wouldn't have to worry about the biggest recurring bill at most businesses(payroll) and owners could survive. If they're deeply deeply in debt with every inch of collateral dumped on loans for constant expansion, yeah, sadly that will bring you under. And if we had a social safety net I'd think debt holidays for shuttered business was good policy to resolve that particular question. ikanreed 🐐Bleat at me 23:34, 15 April 2020 (UTC) - It's almost as if high density low income populations and low density high income populations are living in entirely different worlds. It's garbage thinking to say "I should be able to go to my lake house unhindered" and it's garbage practice to say "I can't afford to live in my apartment literally tomorrow if I can't go to work" and this fucking beautiful convergence of complaints is falling squarely on Michigan's governance and their inability to work on it from a locale to locale basis. However, somehow the Federal government is off the hook for not wanting to try, and defunding the WHO (which I always pronounced as "whoa," saying "who" is like not even understanding the words World Health Organization or how they're pronounced) From the same people who say it's all made up and the people who are most disproportionately affected do we see this complaint that government is fucking up the response. It's not a fucking plot to destroy the economy for political gain, and not just a fucking difficulty in governing. It's like people think the people they voted in are geniuses and it's not up to them at all to take part in their own societies. I dunno, I've been pretty soaked in a bunch of people who have good arguments about avoiding a shutdown, a bunch of people who have good means of social distancing, and a bunch of people who can't make rent without a full paycheck. And instead of cracking open the nut of "Poorly managed exconomic structure" this is either seen as "the people in charge don't care" or "the people in charge are trying to destroy the economy for political gain." I would say the people in charge don't have a clue what they are doing, they are maintaining a system that they don't even care about. Gol Sarnitt (talk) 02:56, 16 April 2020 (UTC) - Why do you assume the protestors' preferred policy is not "let the virus spread unhindered"? Are you still in denial that a good number of Muricans are totally in favor of killing Others if it makes them richer? (Or sometimes even if it doesn't.) Of course most of them find it distasteful or too much work to personally do the killing, which is why diseases are a good method. As is climate change. --47.146.63.87 (talk) 03:13, 16 April 2020 (UTC) - No, that's pretty much how I feel about it, except for the assumed part. And I was full on Operation Ivy back in high school. Gol Sarnitt (talk) 03:58, 16 April 2020 (UTC) - What I find most demoralizing abut all this is that we have laid bare so much of what is wrong with "the system" during this pandemic, and there is a legitimate mass movement of popular anger and there is no shortage of articles on line saying this represents a historic turning point, a pendulum swing to the left where this crisis will make us all better people on issues like healthcare or neoliberalism or climate change, but when I look at the people in power, or the presidential choices, or the actions taken by various elites who ultimately shape policy across the world, I just don't see the necessary reforms being made. At best most people will have to settle for half measures and some countries are already using this to reinforce power like the aftermath of 9/11. This is like climate change in microcosm.Flandres (talk) 04:33, 16 April 2020 (UTC) - Remember that for every left-wing person who sees this and thinks that it represents an opportunity to remake society along their desired lines, there's a right-wing person thinking the same thing. --47.146.63.87 (talk) 04:48, 18 April 2020 (UTC) - Leftwing totalitarianism will be overthrown, beginning in Michigan. And yah, like the cops are going to arrest 100,000 protesters, some carrying arms, without using 100,000 masks and PPE, and throw the protesters in jails recently emptied of leftwing criminals to riot, loot, and burn because of coronavirus. nobsFree Roger Stone! 18:27, 16 April 2020 (UTC) - LOL, 81% agree on the need to keep on socially distancing. Only 10% think we should stop social distancing. 10%! How often do 80% of Americans agree on anything? Seriously, for the most part, the relatively uncontroversial notion of "staying safe" has bipartisan support. Certainly the rules could be tweaked in Michigan, but these idiots are advocating well beyond tweaking, to something most reasonable people know is stupid. As a result, they can yap all they want (and increase the risk of getting COVID-19, good on ya) but they can be safely ignored. 72.184.174.199 (talk) 18:37, 16 April 2020 (UTC) - Is this SUPPOSED to be sarcastic, nobs? БaбyЛuigiOнФire🚓(T|C) 18:44, 16 April 2020 (UTC) - My position is that the rules should be tweaked in a way that protects both health of people and the economy. Stemming the spread of the virus is needed but having a functional economy is needed. We could always go the way of Zimbabwe where currency is useless and epidemics of preventable disease are common. --Rationalzombie94 (talk) 01:12, 17 April 2020 (UTC) - nobs cannot be defeated with numbers or percentages, I'm guessing he read a Chick Tract about the dangers of Communist propaganda or something and whiffed on all the context. Gol Sarnitt (talk) 02:49, 17 April 2020 (UTC) - Sorry Zombie. Nobody here is capable of letting you have you eat a whole cake and yet still have left-overs. The trade-off is a higher corpse-count. If your state honestly gave the slightest shit about unemployed people or homeless people then it would have even the most basic social security net for them...which is doesn't. The kind of people who magically suddenly care about raising homeless numbers (cause these strict covid rules are going to make more homeless people who I am for the first time in years actually worried about) are the same people who would vote against a political party that would slightly raise taxes to help build more homeless shelters. Michigan has next to no safety net for this and it stems from the kind of mentality that human beings should be expendable so that the economy could get going again [read so I don't have to waste my time cooped inside and/or have a lower paycheck because of other people's problems]. ShabiDOO 07:54, 17 April 2020 (UTC) - It's also about minorities getting "handouts". Michigan is very rural, white, and bigoted outside of the urban centers (not unlike a lot of states). I saw something posted elsewhere recently that a lot of white Americans would gladly vote for someone who promised to put them and their family in a cardboard box with nothing but rats to eat by cooking over a fire as long as they also promised to make sure the black family next to them didn't have a rat to cook. Sadly pretty true. --47.146.63.87 (talk) 05:03, 18 April 2020 (UTC) Don't Be Like Belarus, Folks[edit] They still refuse to quarantine anyone. The lesson learned is that Belarus is a horrible role model for other nations.— Jeh2ow Damn son! 22:31, 15 April 2020 (UTC) - No offense to the fine people of Belarus, but since when has Belarus been a role model for, well, anyone? RoninMacbeth (talk) 22:35, 15 April 2020 (UTC) - Belarus is not a role model for anything, I mean anything. Not sure what areas they would be role models unless countries want to a adopt a terrible human rights record? --Rationalzombie94 (talk) 22:42, 15 April 2020 (UTC) - burgeoning dictators may be AMassiveGay (talk) 23:09, 15 April 2020 (UTC) - Lukashenko is the ur-Russia ass kisser, with a terrible porno mustache to boot. I was writing my senior thesis on the Belarusian independence movement when the 2011 Minsk bombing happened, too bad it missed him. The Blade of the Northern Lights (話して下さい) 01:05, 16 April 2020 (UTC) - Belarus is a role model for tankies.[5] Bongolian (talk) 01:49, 16 April 2020 (UTC) - Why do you assume his death would make things better and not worse? A dictator's death frequently leads to chaos if there are no broadly-supported succession plans. In Belarus's case Putin would either install another puppet or gin up an invasion. --47.146.63.87 (talk) 03:08, 16 April 2020 (UTC) - In 2011 I could've seen it working out better; this was right around the beginning of the Arab Spring, which looked promising and, if nothing else, worked out well for Tunisia. While assassinations generally aren't ideal (though it couldn't happen to a nicer guy), and there's definitely the devil you know versus the devil you don't, it would've been plausible that the rest of Europe could exert enough pressure on a country in their own back yard to get it to reform. The Blade of the Northern Lights (話して下さい) 13:28, 16 April 2020 (UTC) COVID-19 and Eugenics, let's take this one slow[edit] I have heard plenty of arguments that, "well, everyone is going to get it, some people are going to die, but we can't sacrifice the economic structure over this." But I would ask, why is an economic structure that fails at protecting its society especially more valuable than the society that makes it possible? Meanwhile, due to the limitations of our own optimized supply chain in our current economic system, the dairy industry is pouring out milk, the pork industry is discussing euthanizing baby hogs, and hospitals operate at { small margins] because we, what, all are supposed to pay an insurance company some cost so that they can pay for the minimum expected number of people needing care, so that if we get sick, everybody else that pays for insurance shares the load for our treatment but we don't get a benefit higher than what we paid in for? Sounds like Communism with buy-in tiers of priority to me. This disruption is supposed to affect everyone and capitalism is supposed to thrive on disruption. Why are the markets crashing while hardcore capitalist cheerleaders are just pissed off? But there are people who say "The economy must move on, and those too sick or poor or old to survive can suck the big one, they got left out and that proves that they were unfit to take part in this economy." Now, I don't want to Godwin's Law here, but I do want to point out that there is a certain attitude about protecting the economy and status quo that does not quite understand how anyone could find themselves in a disadvantageous position against some massive force like our current pandemic without it being an inherent personal and/or sub-societal fault that they have to own up to, and would rather blame those people than question whether any uses of structures as a means to their own position might be unfair. Therefore, if someone will die from lack of ability to conform to the norms of who should physically or economically survive COVID-19, no big deal, it doesn't matter who is most affected, and the disproportionate negative effects the virus is having on communities of lower income an higher density is natural order and not an indication of systemic classism? Gol Sarnitt (talk) 03:43, 16 April 2020 (UTC) - Hypothetical question for you. What would happen to life expectancy if we were to cut all school budgets by 50%? It's not "nothing", because without extra curricular activities, without adequate teachers and supplies, fewer children would graduate and the ones that do would have fewer options, people would make less money, crime would increase, etc etc. Basically, Very Bad things would happen, from something that isn't directly a health issue. Really, cutting most major functions of government would negatively affect life expectancy. Cut the budget for the highways, and you have fewer safety measures and thus more deaths. Cut the budget for emergency services, and that has a more direct effect on life expectancy. Cutting the military, well, that depends on whom you ask. - Regardless, every government has to weigh the cost and benefits of every action they take, and that entirely depends on what resources they have available. Spend too much on one thing, and you don't have another. Life is precious, but not priceless. What is that price? In the US, the average US citizen is worth around 8 million dollars. That's average, by the way, actual values are going to vary, and I will be getting a little bit into that in a second. What does this 8 million dollar price actually mean? Well, if a project is expected to save one life per 8 million spent, it's a good use of money. Perhaps it costs 80 million and is expected to save at least 10, perhaps it costs $80,000 and only has a 1 in 100 chance of saving a life. If it costs more, tough shit. It's terrifying, but bear in mind that every dollar spent on one project is one less dollar available for other projects. Spending $400m on upgrading a new bridge means you can't spend that $400m on schools. That 8 million is basically the amount we've been able to come to in order to save the most lives possible. Poor countries don't have that kind of money to save one life, they are forced to choose between saving lives with building a water filtration plant or ensuring enough food for everyone, things which save lives for relatively cheap. - Why can't we just tax people to save more lives? If it comes out of taxes, bear in mind that poverty is indeed deadly. A poor person in the US has a much, much lower life expectancy than a middle class person, and every dollar in tax reduces a person's health in some small way. Maybe it's an extra shift worked, maybe it's one less home cooked meal that's replaced with fast food, something insignificant to a single person but when dealing with a population of 325 million, otherwise insignificant numbers add up. How many people would die if every person replaced just one meal with a greasy burger? It's more than 0. - In other words, you set the value of the human life too high, you tax people into poverty and you kill more people than you save. You set the value too low, and you fail to save more than you kill. We may not have the right value, but there needs to be that value. - Now, what does this have to do with Covid? You can probably guess. You shut the economy down, people have less money, their life expectancy falls. You issue a stimulus to counteract this, that has to be paid back at some point with a combination of lower government budgets or increased taxes on the middle class (no, the rich aren't going to pay), and people die. The question isn't "should we be more concerned with our money than our lives" but rather "how many lives will be lost through the shutdown versus doing nothing at all". - And even that question itself is wrong. Most people would agree that it's worse to let one baby die than two senior citizens, because that baby could live another 80 years while the seniors may have only 10 years between them. Ignoring which years are worth what and assuming all years are equally valuable, we have to ask the question "how many life-years will be lost through the shutdown versus doing nothing at all", especially considering that the people dying from the virus tend to be the elderly or sick who have the fewest life years left while it's the young who lose the most lifespan from the shutdown. - And I will admit, I don't know the answer. But I can hope that the people at the CDC, WHO and so on, have asked this question and do have an answer. Which is why we are doing what we are doing. CoryUsar (talk) 05:49, 16 April 2020 (UTC) - The harshness is palpable, but I do agree on a couple points. One, it will have to be paid back, and I fully expect my rent to rise as I see some people just calling off their lease and moving out of my apartments, and I don't know how many of them are just defaulting to "sure, if my eviction is reported, my credit will die, but I don't want to be there for the 4 month cumulative bill." But evictions aren't always reported, as that runs the risk of having to go legal as opposed to just being ok with the loss of one or two month's rent. Most apartment complexes do kind of run with that capacity, I have no reason to ask if premature ending of a lease is available, I haven't saved up enough to buy a house yet, unless maybe I have because people are going to lose their mortgages, but I bet mortgages are going to be nasty shit over this, and prioritizing the economy doesn't stop people who pay rent or mortgage or are legally, contractual, actual factual facing massive consequences based on the current economic system from ever getting credit again. It's not that I don't think the economy is all-connected, it's that I don't think it's all-important and that, more importantly, the economy doesn't survive disruption like we say it does. I don't want to go full [assertion of Mammon] here, but there is a practical sense to life that is cost vs gain, sure. So the second point, in natural predation it's the old and the young that get it the hardest. I mean, there's a reason the pork industry wants to head off oversupply by euthanizing baby hogs, not young hogs. A population can survive the death of the oldest and the newborns. But epidemics, be they fungal, bacterial, viral, or parasitic, populations struggle and dwindle. Also, I live in the Midwest, it was 65 degrees on Christmas and it snowed 4 inches today, viruses are not the only threat to a population, how hearty are all of our crops? We are far too secure in "big business, tiny margins, big winning owners, and just enough to keep the workforce working" kind of explanations of why the economy "must press on" in a crisis. Because nobody has any of this "credit" built up enough to stay home? I'm not saying people don't need to go to work, I'm definitely not saying children don't need school as an important resource, I'm definitely not saying it would be solved if I was in charge, I'm saying we were not just unprepared for THIS crisis, the current economic system is wildly unprepared for the next crisis. Water company forced to turn water back on In American cities for households that did not pay their water bills!?! And the justification is that water is a tool in combating viral spread. Well, yes, and thank goodness it is somehow that, too, great job America at noticing something vital. Gol Sarnitt (talk) 01:40, 17 April 2020 (UTC) Anyone Familiar with "Swiss Propaganda Research"?[edit] Recently, came across this website and didn't know what to think of it. Seems to be some sort of 'anti-propaganda' site. How should one approach this? dogman_1234 04:17, 16 April 2020 (UTC)dogman_1234 - According to them Wikipedia is some kind of western disinformation campaign. Wow! I wonder what they would make of us?Bob"Life is short and (insert adjective)" 08:57, 16 April 2020 (UTC) - Their COVID-19 page is full of statements ranging from complete bullshit (many of the unsourced statements can easily be countered with a simple Google search) to cherry-picking to "shit source reading" in order to promote the viewpoint popular among certain sides of conservatives (including the conspiracy side this person seems to be on) that COVID-19 is nothing and the economy should be re-started or something. Initial read is to treat this as a "fake news" site in the direction of, say, the better known Zero Hedge or Wikispooks -- way over the top conspiracy. It is probably one guy's personal yapping outlet. The analysis that negative media reporting on Donald Trump is because the Council of Foreign Relations rules the media and the world and he's not on it or something is a real hoot, that's a new one, he should add that to the QAnon conspiracy of everything graph. 72.184.174.199 (talk) 14:17, 16 April 2020 (UTC) vitamin d and lockdowns[edit] are we all going to get rickets if we not going outside much? AMassiveGay (talk) 13:10, 16 April 2020 (UTC) - Lockdowns perfect to go run in circles in a field, provided you can. Realistically not going to infect someone. May as well use the extra time for something healthy. Personally going for 20km a week.McUrist (talk) 13:32, 16 April 2020 (UTC) - Indeed - if you can go outside. Not all of us have that option.Bob"Life is short and (insert adjective)" 14:03, 16 April 2020 (UTC) - Much as we shit on supplements here, and for good reason, they're an appropriate way to deal with an actual nutrient deficit. ikanreed 🐐Bleat at me 14:26, 16 April 2020 (UTC) - Fortunately, I have a balcony and it has become something of a daily ritual for me to sit there for those roughly three hours when it’s in direct sunlight each day (work permitting). Having a cocktail and a cigar while doing so doesn’t detract from the experience, either. ScepticWombat (talk) 09:22, 18 April 2020 (UTC) - im still able to go for a stroll so not really a problem this end. plus I'm pale enough to burn in moonlight so im confident I could generate vit d in a pitch black cave a 1000 thousand feet underground. AMassiveGay (talk) 22:13, 19 April 2020 (UTC) Fuck fox news[edit] They're now on national television, attributing to "sources" the conspiracy theory that coronavirus was made in a Chinese lab. Bomb shelter time? ikanreed 🐐Bleat at me 13:43, 16 April 2020 (UTC) - I'm reading the FOX news article and it's odd. It says "the virus sprang from Wuhan facility" but later says that the original infection was bat to human - and that the human worked at the lab. But those are not the same things. The initial implication is that the lab created it, but then they row back on that in the body of the article. - Who would have expected doublethink from Fox?!Bob"Life is short and (insert adjective)" 14:01, 16 April 2020 (UTC) - We're brewing towards cold war 2. The doublethink is way less important than the consequences of the propaganda. ikanreed 🐐Bleat at me 14:22, 16 April 2020 (UTC) - Bob, let's go slow on this. There's a difference between "made in a lab" and "came from a lab"; there a difference between "accidentally came from a lab" and "deliberately came from a lab"., Have we covered all subsets and contingent possibilities? Two things are certain, Coronavirus did not originate in the Wuhan wet market where no bats were sold, and did not originate from eating bats in the Wuhan wet market where no bats were sold. You are the victim of officially controlled CPP racist propaganda aimed at denigrating the cultural eating habits of the Chinese people when you repeat the those conspiracy theories. nobsFree Roger Stone! 18:35, 16 April 2020 (UTC) - I'm quoting Fox news on the "came from a bat" thing. But as Fox is the most unreliable mainstream news source I am aware of there is a lot of doubt on that statement. Nevertheless you seem to have some additional certain information and perhaps you should get in contact with the CDC who I am sure would be happy to hear from you.Bob"Life is short and (insert adjective)" 20:16, 16 April 2020 (UTC) - First of all, instantly attributing this notion that "lol Chinese eat everything" is something you've brought up that no one else did and something only the racist morons bring up. Get your cultural facts straight: eating stuff like dogs and cats and bats isn't widely practiced in China, and people who claim that China needs to stop eating bats needs to get their facts straight before they act like racist buffoons. Spread of the virus is not as simple as consumption of a weird and strange meat. It's more of a problem with the practice of the wet markets in general than it is Chinese culture. БaбyЛuigiOнФire🚓(T|C) 19:03, 16 April 2020 (UTC) - False dilemma: - P1: A place could only have been the origin of the disease if bats were sold there. - P2: No bats are sold in the Wuhan wet market. - C: The Wuhan wet market is not the origin of the disease. - Not to mention I've seen some reporting from reliable sources (i.e. not Murdoch's propaganda outlets) saying some experts think it may not literally have jumped to humans in the exact location of the wet market, but might have jumped to humans once or more in the Wuhan area and then circulated for a bit before starting to disperse to other regions. This is pretty much what happened with HIV. --47.146.63.87 (talk) 19:37, 16 April 2020 (UTC) - Right, though that probably won't happen as long as Trump is in charge. "China is terrible and they're killing us but it's so great they're led by the tremendous President Xi who's a great guy, believe me." But under President Tom Cotton... --47.146.63.87 (talk) 19:37, 16 April 2020 (UTC) - China's track record of bad biosecurity measures could be a logical explanation. If it came from a lab (possible) the likely explanation would be cross contamination or animals being packed tightly together allowed zoonosis to happen. If it was a biological weapon it would be a pretty shitty way to kill. As for a bioweapon, why would China economically cripple its trading partners? That would be stupid. The bad biosecurity measures be it mundane, would be more logical than a bioweapon. --Rationalzombie94 (talk) 23:08, 16 April 2020 (UTC) - SARS is known to have escaped Chinese labs on multiple occasions, and lab animals have been sold at local markets in the past, so an escape from a lab is entirely plausible. And the creation of potentially dangerous chimeric viruses has been going on for a while as a standard method of examining the infectious potential of nonhuman viruses. 192․168․1․42 (talk) 03:10, 18 April 2020 (UTC) - Let us accept for a moment your three assertions. There are a vast number of facts in this world which can be combined in many ways to support any particular hypothesis (or for that matter conspiracy theory). What exactly are you claiming happened in the case of COVID 19 and, more importantly, how would you test that claim? Remember the assertion that there is no evidence your claim is false does not make it true.Bob"Life is short and (insert adjective)" 10:25, 21 April 2020 (UTC) With the bat thing, covid is not thought to have jumped from bat to human but rather to have jumped from bat to a third animal, a pangolin is thought likely, and then to have jumped to human from there. As for speculation about mistakes in labs, I care not. while the virus still rages its an irrelevant distraction and in no way exonerates early mistakes nor the continued appalling handling of certain governments. what ever china's role in this ultimately in all this we have known of a threat like this for years. ours repsonses are entirely on us and our governments' responses are entirely on them. it seems that we will be unlikely to learn the lessons we are currently being taught in favour of who we can be blame. we don't need to ok to china for that when we can and should look closer to home AMassiveGay (talk) 12:41, 21 April 2020 (UTC) Some depressing news...[edit] A high school friend that I had has just died from COVID-19. You can't deny how powerful a single virus is. And I'm pretty sure that a lot of climate change denialists could actually point to the disease as evidence that the earth is not getting hotter, but colder. Sadly, my friend will never cringe at those idiots ever again.— Jeh2ow Damn son! 16:14, 16 April 2020 (UTC) - Keats once said "Nothing is real until it is experienced". I'm sorry COVID-19 is real for you. ikanreed 🐐Bleat at me 16:25, 16 April 2020 (UTC) - Yesterday it snowed in mid-April: maybe because the planet is shut down and lower greenhouse gases are being admitted, global warming is taking a holiday and colder whether is in the wind. Conversely, the no reusable plastic bags order was rescinded and we're backing to destroying the planet to save ourselves from a loathsome disease. Oh. the dilemmas are becoming to much for a simple mind to bear. nobsFree Roger Stone! 18:42, 16 April 2020 (UTC) - Yesterday it snowed That's not how global warming works you idiot. БaбyЛuigiOнФire🚓(T|C) 18:53, 16 April 2020 (UTC) - I stopped using bags years before it was popular. Like the overwhelming majority of people, I have these amazing things attached to the end of my arms that enable me to pick up and carry things. If you're unfamiliar, they're called hands. The Blade of the Northern Lights (話して下さい) 19:05, 16 April 2020 (UTC) - Pain in the ass to move thirty-forty items from conveyor (if the store doesn't bag for you) to cart to trunk to final destination, and hopefully you don't drop anything fragile. Harder for some people as well. Of course Costco and some other places have had this solved for decades: give the shipping boxes everything comes to the store in to shoppers. Other stores discard these instead and then give out bags because they would have to organize the boxes, and it's psychologically viewed as "cheap". Something something externalities. --47.146.63.87 (talk) 19:20, 16 April 2020 (UTC) - For sure. But in my case, I'm a tall, healthy, relatively young man, so I don't mind the extra trips. Not like I have anything so pressing that an extra 2 minutes a week will kill me. The Blade of the Northern Lights (話して下さい) 22:01, 16 April 2020 (UTC) - At least you're introspective enough to realize you have a simple mind. --47.146.63.87 (talk) 19:20, 16 April 2020 (UTC) - (trying to keep this discussion about Jeh2ow's serious loss) Oh god, how close is this friend to you? That's devastating. --It's-a me, LeftyGreenMario!(Mod) 20:16, 16 April 2020 (UTC) - That’s awful. I’m so sorry that happened. Chef Moosolini’s Ristorante ItalianoMake a Reservation 21:43, 16 April 2020 (UTC) - Yeah. I'm very sorry to hear of this incident. We're all doing our best right now to curb the spread of the disease so that no further tragic losses can occur. БaбyЛuigiOнФire🚓(T|C) 21:49, 16 April 2020 (UTC) Would pseudo-academic accrediting agencies be missional?[edit] Kinda curious on that one considering that there are some fake accrediting agencies that support creationism, woo and conspiracy theories. --Rationalzombie94 (talk) 22:18, 16 April 2020 (UTC) - Sure, the existing academic accreditation article already a section that deals with accreditation mills and could use expansion. Cosmikdebris (talk) - I agree with it being missional. These institutions are a key part of giving some of these cranks their 'credibility'. Feel free to continue full speed ahead @Rationalzombie94 --NavigatorBR (Talk) - 23:13, 17 April 2020 (UTC) - Yeah, definitely missional as part of the sliding scale pseudo academia that begins with more or less legit institutions with faith statements and the like, shading into woo and nutty religious academies/universities whose only connection to academia is their branding, and finally end up with out and out diploma mills, usually of no fixed abode (i.e. online scams). ScepticWombat (talk) 09:16, 18 April 2020 (UTC) Hungary[edit] So Hungary's "president" recently banned trans folk from transitioning. This is after Hungary's legislature gave him emergency powers. What do y'all think will come of this? RationalHindu (talk) 23:21, 16 April 2020 (UTC) - At this rate I will not be surprised if Hungary faces a credible attempt to kick it out of the EU by the end of this decade. Given its worsening human rights record and the fact it is now essentially an authoritarian state, if it were not already in the EU people would treat it like Belarus if it attempted to join. Granted, the EU is now in dire straits so they can bluster about universal rights but they can't really afford to kick it out just yet.-Flandres (talk) 23:34, 16 April 2020 (UTC) - More like the EU in its present form won't make it to the midpoint of the decade. Kicking out a member requires a unanimous vote of other members; Poland's ruling party is going down Hungary's path and will veto it. But Germany blocking any stimulus and enforcing crushing austerity for the second economic crisis in a row will tear it apart first unless they relent. --47.146.63.87 (talk) 01:11, 17 April 2020 (UTC) - @RationalHindu Shit. Hope Covid-19 takes him. Speaking of which, the guy who said SARS-CoV-2 is a divine punishment for Pride parades has got it.--Delibirda the Annoying Grammar Nazi (talk) 06:27, 17 April 2020 (UTC) - Hungary and to a lesser extant Poland are the total fucking embarrassments of the European Union (now that the UK is out). The only thing they listen to is having their EU funding cut (which the EU commission is slowly doing as much as the rules allow) and the EU court striking down laws like this. It will eventually lead to a show down where the EU can no longer allow them to operate with autonomy. If it were up to me they'd be out of the union at the snap of my fingers. But in the mean time the only tools that are available is cutting the funding they need because of their tanking economy because of the stupid laws they are enacting. anti-trans laws will never hold up in the EU-court. But in the mean time it certainly will make life miserable for trans in Hungary. AT least they can freely move to 27 other countries, many of which will be more than happy to support them. Though that certainly means leaving and losing the little support systems they have in the only home they know. Horrific really. ShabiDOO 09:45, 17 April 2020 (UTC) - And of course, they may also face a language barrier. The majority of Hungarians only speak Hungarian, which is virtually impossible to use outside the country.) @ 17:35, 17 April 2020 (UTC) - For now. Unfortunately as Britbongs should know a bunch of "foreigners" moving in tends to energize the bigots against the EU. "Get them out!" --47.146.63.87 (talk) 04:39, 18 April 2020 (UTC) - So, do we know if the Hungarian legislator are a bunch of fucking idiots, or a bunch of fucking stooges for this fuckin' prick? (Also backing Delibirda the Annoying Grammar Nazi's hope. Scum who make the situation worse deserve to be cursed with this, as punishment for the harm they've inflicted. Since in reality, it's liable to be the only punishment these scum face for their actions. And no, I'm not apologizing for this statement.)--NavigatorBR (Talk) - 23:19, 17 April 2020 (UTC) - Hungary's ruling party is neo-fascist and just passed their Enabling Act last month. This is where I remind people that Hungary was a Nazi German ally. Obviously this is not to say that Hungarians are all Nazis, but that far-right politics has a history there and unlike in Germany was never really publicly discredited. Sebastian Gorka is ethnically Hungarian, and a number of other ethnic Hungarians have been paling around with the English-speaking far-right. --47.146.63.87 (talk) 04:39, 18 April 2020 (UTC) HSUS[edit] The article about Humane Society of the United States really needs to be edited. If they are not PETA-tier bad, then the article should be deleted.Deleted it.--Delibirda the Annoying Grammar Nazi (talk) 08:03, 17 April 2020 (UTC)
https://rationalwiki.org/wiki/RationalWiki:Saloon_bar/Archive354
CC-MAIN-2022-21
en
refinedweb
Spheres are not as common as planar faceted objects in the architectural domain. In spite of that, this week has been the week of the spheres, after looking at how to generate spherical solids, display them using AVF, and use them for geometrical proximity filtering. We'll round this off now by displaying lots of spheres. A convenient and interesting way to generate a large number of them is to use Kean Walmsley's Apollonian gasket and sphere packing web service to fill a sphere with solid spheres (project overview). To give you a quick first impression of what it is all about, here are some views of different levels of Apollonian sphere packing using Revit transient solids generated by the GeometryCreationUtilities class and its CreateRevolvedGeometry method and displayed in the Revit graphics area using the Analysis Visualization Framework AVF. Apollonian packing with three levels: Apollonian packing with five levels: Apollonian packing with seven levels: Retrieval and display of this data in Revit requires the following steps and functionality, listed here in the order of their implementation in the source code: - String formatting and parsing - Web service request and JSON deserialisation - AVF Functionality - Spherical solid creation and mainline putting it all together I'll simply present these sections of code with some comments on each. String Formatting and Parsing The string formatting and parsing is required both to obtain input data from the user and to present some timing and statistical results at the end. The input to the sphere packing algorithm web service consists of an outer sphere radius and the number of steps to execute, which varies from 2 upwards, where 10 is a pretty high number. I implemented a little .NET form to request this input from the user together with a centre point defining the location in the Revit model to display the packing: The following trivial methods are used to parse this input data and format the timing results: /// <summary> /// Return a string for a real number /// formatted to two decimal places. /// </summary> public static string RealString( double a ) { return a.ToString( "0.##" ); } /// <summary> /// Return an integer parsed /// from the given string. /// </summary> static int StringToInt( string s ) { return int.Parse( s ); } /// <summary> /// Return a real number parsed /// from the given string. /// </summary> static double StringToReal( string s ) { return double.Parse( s ); } /// <summary> /// Return an XYZ point or vector /// parsed from the given string. /// </summary> static XYZ StringToPoint( string s ) { s.TrimStart( new char[] { '(', ' ' } ); s.TrimEnd( new char[] { ')', ' ' } ); string[] a = s.Split( new char[] { ',', ' ' } ); return ( 3 == a.Length ) ? new XYZ( StringToReal( a[0] ), StringToReal( a[1] ), StringToReal( a[2] ) ) : null; } All trivial stuff, and still it helps to have it nicely sorted. Web service request and JSON deserialisation The use of Kean's sphere packing web service is straightforward. You send it an HTTP request, it returns results in JSON format, you unpack them, and Bob's your uncle. Kean describes it in detail and presents the source code implementing it in his discussion of consuming data from a restful web service. It was originally designed for use in AutoCAD.NET. I simply grabbed his code and reuse it completely unchanged: static dynamic ApollonianPackingWs( double r, int numSteps, bool circles ) { string json = null; // Call our web-service synchronously (this // isn't ideal, as it blocks the UI thread) HttpWebRequest request = WebRequest.Create( //" " + ( circles ? "circles" : "spheres" ) + "/" + r.ToString() + "/" + numSteps.ToString() ) as HttpWebRequest; // Get the response using( HttpWebResponse response = request.GetResponse() as HttpWebResponse ) { // Get the response stream StreamReader reader = new StreamReader( response.GetResponseStream() ); // Extract our JSON results json = reader.ReadToEnd(); } if( !String.IsNullOrEmpty( json ) ) { // Use our dynamic JSON converter to // populate/return our list of results var serializer = new JavaScriptSerializer(); serializer.RegisterConverters( new[] { new DynamicJsonConverter() } ); // We need to make sure we have enough space // for our JSON, as the default limit may well // get exceeded serializer.MaxJsonLength = 50000000; return serializer.Deserialize( json, typeof( List<object> ) ); } return null; } AVF Functionality The code implementing AVF functionality has also already been presented and discussed elsewhere. The last use I made of it was for the initial sphere display using AVF. All I did here was to replace the PaintSolid method used there to handle multiple solids. I implemented this trivial helper class to associate a solid with a level in order to represent the levels in different colours in the visualisation: class SolidAndLevel { public Solid Solid { get; set; } public int Level { get; set; } } Here is the code to create an AVF display style, get or create a SpatialFieldManager, set up an analysis result schema, and display the spherical solids using PaintSolids, taking a list of SolidAndLevel instances as input: void CreateAvfDisplayStyle( Document doc, View view ) { using( Transaction t = new Transaction( doc ) ) { t.Start( "Create AVF Style" ); AnalysisDisplayColoredSurfaceSettings coloredSurfaceSettings = new AnalysisDisplayColoredSurfaceSettings(); coloredSurfaceSettings.ShowGridLines = false;(); } } static int _schemaId = -1; void PaintSolids( Document doc, IList<SolidAndLevel> solids ) {( _schemaId != -1 ) { IList<int> results = sfm.GetRegisteredResults(); if( !results.Contains( _schemaId ) ) { _schemaId = -1; } } if( _schemaId == -1 ) { AnalysisResultSchema resultSchema = new AnalysisResultSchema( "PaintedSolids", "Description" ); _schemaId = sfm.RegisterResult( resultSchema ); } foreach( SolidAndLevel sl in solids ) { FaceArray faces = sl.Solid.Faces; Transform trf = Transform.Identity; foreach( Face face in faces ) { int idx = sfm.AddSpatialFieldPrimitive( face, trf ); IList<UV> uvPts = new List<UV>( 1 ); uvPts.Add( face.GetBoundingBox().Min ); FieldDomainPointsByUV pnts = new FieldDomainPointsByUV( uvPts ); List<double> doubleList = new List<double>( 1 ); doubleList.Add( sl.Level ); IList<ValueAtPoint> valList = new List<ValueAtPoint>( 1 ); valList.Add( new ValueAtPoint( doubleList ) ); FieldValues vals = new FieldValues( valList ); sfm.UpdateSpatialFieldPrimitive( idx, pnts, vals, _schemaId ); } } } Spherical Solid Creation and Mainline The spherical solid creation is absolutely unchanged from the discussion on Monday, so I can get right down to the mainline Execute method putting it all together. It - Checks that we are in a 3D view. - Prompts the user for the input data. - Calls the web service and extracts the results. - Creates the spherical solids. - Paints the solids. Each of the steps is timed, the spheres are counted, and the final results are presented, looking like this for three levels: For five levels: For seven levels: Revit actually required some additional time on my system to complete and return from the command after these messages were displayed, and mostly that took longer than the entire command execution itself. Here is the Execute mainline implementing these steps:; if( !(doc.ActiveView is View3D) ) { message = "Please run this commahnd in a 3D view."; return Result.Failed; } XYZ centre = XYZ.Zero; double radius = 100.0; int steps = 3; using( Form1 f = new Form1() ) { if( DialogResult.OK != f.ShowDialog() ) { return Result.Cancelled; } centre = StringToPoint( f.GetCentre() ); radius = StringToReal( f.GetRadius() ); steps = StringToInt( f.GetLevel() ); } // Time the web service operation Stopwatch sw = Stopwatch.StartNew(); dynamic res = ApollonianPackingWs( radius, steps, false ); sw.Stop(); double timeWs = sw.Elapsed.TotalSeconds; // Create solids, going through our "dynamic" // list, accessing each property dynamically sw = Stopwatch.StartNew(); List<SolidAndLevel> solids = new List<SolidAndLevel>(); Dictionary<int, int> counters = new Dictionary<int, int>(); foreach( dynamic tup in res ) { double rad = System.Math.Abs( (double) tup.R ); Debug.Assert( 0 < rad, "expected positive sphere radius" ); XYZ cen = new XYZ( (double) tup.X, (double) tup.Y, (double) tup.Z ); int lev = tup.L; Solid s = CreateSphereAt( creapp, cen, rad ); solids.Add( new SolidAndLevel { Solid = s, Level = lev } ); if( !counters.ContainsKey( lev ) ) { counters[lev] = 0; } ++counters[lev]; } sw.Stop(); double timeSpheres = sw.Elapsed.TotalSeconds; // Set up AVF and paint solids sw = Stopwatch.StartNew(); PaintSolids( doc, solids ); sw.Stop(); double timeAvf = sw.Elapsed.TotalSeconds; int total = 0; string counts = string.Empty; List<int> keys = new List<int>( counters.Keys ); keys.Sort(); foreach( int key in keys ) { if( 0 < counts.Length ) { counts += ","; } int n = counters[key]; counts += n.ToString(); total += n; } string report = string.Format( "{0} levels retrieved with following sphere " + "counts: {1} = total {2}; times in seconds " + "for web service {3}, sphere creation {4} " + "and AVF {5}.", counters.Count, counts, total, RealString( timeWs ), RealString( timeSpheres ), RealString( timeAvf ) ); TaskDialog.Show( "Apollonian Packing", report ); return Result.Succeeded; } Since Kean did all the web service implementation and JSON extraction work for me, it all boiled down to just putting together a few ready-made components. Thank you, Kean! Please refer to Kean's project overview for all the nitty-gritty background details. Here is Apollonian.zip containing the complete source code, Visual Studio solution and add-in manifest for this external command. Anyway, this should provide enough spheres for this week and keep us all happy and occupied over the weekend.
https://thebuildingcoder.typepad.com/blog/2012/09/apollonian-packing-of-spheres-via-web-service-and-avf.html
CC-MAIN-2022-21
en
refinedweb
Hi everyone, I have MacOS 10.14. I’m trying to install opencv in the python interactor by this code: from slicer.util import pip_install pip_install(“opencv”) I get this error: ERROR: Could not find a version that satisfies the requirement opencv (from versions: none) ERROR: No matching distribution found for opencv WARNING: You are using pip version 20.1.1; however, version 20.3.3 is available. You should consider upgrading via the ‘/Applications/Slicer.app/Contents/bin/./python-real -m pip install --upgrade pip’ command. Traceback (most recent call last): File “”, line 1, in File “/Applications/Slicer.app/Contents/bin/Python/slicer/util.py”, line 2569, in pip_install _executePythonModule(‘pip’, args) File “/Applications/Slicer.app/Contents/bin/Python/slicer/util.py”, line 2545, in _executePythonModule logProcessOutput(proc) File “/Applications/Slicer.app/Contents/bin/Python/slicer/util.py”, line 2517, in logProcessOutput raise CalledProcessError(retcode, proc.args, output=proc.stdout, stderr=proc.stderr) subprocess.CalledProcessError: Command ‘[’/Applications/Slicer.app/Contents/bin/…/bin/PythonSlicer’, ‘-m’, ‘pip’, ‘install’, ‘opencv’]’ returned non-zero exit status 1. Also regarding the Warning that I get for the update for pip when I open the python real, it is not editable to be able to put this code: /Applications/Slicer.app/Contents/bin/./python-real -m pip install --upgrade pip Thanks for your help.
https://discourse.slicer.org/t/error-when-installing-opencv-with-python-interactor-on-mac/15544
CC-MAIN-2022-21
en
refinedweb
Set the POSIX flags and the QNX Neutrino extended flags in a spawn attributes object #include <spawn.h> int posix_spawnattr_setxflags( posix_spawnattr_t *attrp, uint32_t flags); POSIX defines the following flags: QNX Neutrino defines the following extended flags: libc Use the -l c option to qcc to link against this library. This library is usually included automatically. The posix_spawnattr_setxflags() function stores the POSIX flags and QNX Neutrino extended flags in the spawn attributes object pointed to by attrp, overwriting any previously saved().
https://www.qnx.com/developers/docs/6.6.0.update/com.qnx.doc.neutrino.lib_ref/topic/p/posix_spawnattr_setxflags.html
CC-MAIN-2022-21
en
refinedweb
By default the bootstrapper will automatically add all pipelines by type in the entry assembly via reflection, but you can also use it to add or define new pipelines. It contains several methods to add additional pipelines by either type or as an instance: AddPipeline<TPipeline>()will add a pipeline by type. AddPipeline(IPipeline)and various overloads will add a pipeline instance. Finding Pipelines By Reflection The bootstrapper also includes the ability to use reflection to find and add pipeline types: AddPipelines()will add all pipelines defined in the entry assembly. AddPipelines(Assembly)will add all pipelines defined in the specified assembly. AddPipelines(Type)will add all pipelines defined as nested classes in the specified parent type. AddPipelines<TParent>()will add all pipelines defined as nested classes in the specified parent type. Typically, the name of the added pipeline is inferred from the class name though its also possible to specify an alternate name. Adding Directly You can add pipelines to the IPipelineCollection directly using the following extensions: AddPipelines(Action<IPipelineCollection>) AddPipelines(Action<IReadOnlyConfigurationSettings, IPipelineCollection>) Defining Pipelines In addition to adding pipelines defined as classes, the bootstrapper also has a robust set of extensions for directly defining pipelines: AddPipeline(...)overloads will directly define a pipeline. AddSerialPipeline(...)overloads will directly define a pipeline that has dependencies on all other currently defined pipelines. AddIsolatedPipeline(...)overloads will directly define an isolated pipeline. AddDeploymentPipeline(...)overloads will directly define a deployment pipeline. Each of these methods have overloads that allow you to: - Specify a collection of modules to be executed during the process phase or other phases. - Specify files to read during the input phase. - Specify how to write files during the output phase. - Specify pipeline dependencies to other pipelines. Pipeline Builder The bootstrapper also provides a fluent API specifically for defining pipelines using a "builder" style: BuildPipeline(string, Action<PipelineBuilder>)specifies a pipeline name and a delegate that uses a new PipelineBuilder. The PipelineBuilder includes a number of fluent methods for defining different parts of your pipeline: WithInputReadFiles()overloads define files to read during the input phase. WithOutputWriteFiles()overloads define how to write files during the output phase. AsSerial()indicates that the pipeline should have dependencies on all other currently defined pipelines. AsIsolated()indicates that the pipeline is an isolated pipeline. AsDeployment()indicates that the pipeline is a deployment pipeline. WithInputModules()adds modules to the input phase. WithProcessModules()adds modules to the process phase. WithPostProcessModules()adds modules to the post-process phase. WithOutputModules()adds modules to the output phase. WithInputConfig()uses a configuration delegate to define the output of the input phase. WithProcessConfig()uses a configuration delegate to define the output of the process phase. WithPostProcessConfig()uses a configuration delegate to define the output of the post-process phase. WithOutputConfig()uses a configuration delegate to define the output of the output phase. WithDependencies()defines the dependencies for the pipeline. WithExecutionPolicy()indicates which execution policy the pipeline should use. ManuallyExecute()indicates the pipeline should use the manual execution policy. AlwaysExecute()indicates the pipeline should use the always execution policy. To complete building the pipeline call Build(): using System; using System.Threading.Tasks; using Statiq.App; using Statiq.Markdown; namespace MyGenerator { public class Program { public static async Task<int> Main(string[] args) => await Bootstrapper .Factory .CreateDefault(args) .BuildPipeline("Render Markdown", builder => builder .WithInputReadFiles("*.md") .WithProcessModules(new RenderMarkdown()) .WithOutputWriteFiles(".html")) .RunAsync(); } }
https://www.statiq.dev/guide/configuration/bootstrapper/adding-pipelines
CC-MAIN-2022-21
en
refinedweb
Woks from Atom RUN but not when uploaded Hi forum, I'm new and I'm testing for a research project a monitoring solution based on LoPy. So fa I have added two DS18B20 to the LoPy + PySense. The strage things is that when i run the code from the Pymkr in Atom it correctly read the DS18B20 temperatures, but when I upload the code on the LoPy the reading is None. Doe anyone have any idea or similar experience? Where should I look to start debugging? import time from lib.onewire import DS18X20 from lib.onewire import OneWire ow1 = OneWire(Pin('P4')) ow2 = OneWire(Pin('P8')) time.sleep(1) temp1 = DS18X20(ow1) temp2 = DS18X20(ow2) while True: print("T1:",temp1.read_temp_async(temp1.roms[0])) time.sleep(1) temp1.start_conversion() time.sleep(1) print("T2:", temp2.read_temp_async()) time.sleep(1) temp2.start_conversion() time.sleep(1) It looks like it needs a double scan to find the rom. After inserting ow1 = OneWire(Pin('P4')) print('found devices:', ow1.scan()) Even if the print is an empty array ('found devices:[]') then the temperature reading is working with appropriate rom... Anyone can get why? Maxi
https://forum.pycom.io/topic/2528/woks-from-atom-run-but-not-when-uploaded
CC-MAIN-2022-21
en
refinedweb
Overview This is an example appliation and self-paced tutorial for implementing readiness checks properly in Kubernetes and Go. This Go application does not drop connections when terminating due to proper shutdown signal trapping and readiness checks. If you do not use readiness checks properly today, your service probably drops connections for users when your pods are re-deployed or removed. This follows the best practices for Kubernetes web services by implementing a readiness probe that removes the pod from the Kubernetes endpoints list once the check shows as ‘failed’. When an endpoint is removed, the Kubernetes cluster will reconfigure to remove it from all load balancing. Only after that process completes can your pod be removed gracefully. go code here This should be combined with a pod disruption budget to restrict how many pods can be unavailable at one time as well as a pod anti-affinity policy to stripe your pods across nodes for best production resiliency. Order of operations This is how graceful removal of pods from load balancers should look: - The Kubernetes API is sent a delete command for a pod and changes the pod to the Terminatingstate - The kubelet responsible for this pod instructs the CRI to stop the containers in this pod - The CRI sends a shutdown signal to the containerized processes - The containerized process catches this signal gracefully - The containerized process begins failing its readiness checks for enough time to have the pod removed from the endpoints list (default 30s) - The containerized application continues serving requests that find their way to it* - The containerized process waits for the time it takes for readiness probes to fail, plus the time it takes for your Kubernetes cluster to reconfigure (should be less than 10 seconds) - This formula is: readinessProbe.periodSeconds * readinessProbe.failureThreshold + 10s - This must be less than the terminationGracePeriodSeconds kubectl get podsshows the READYcolum as 0/1indicating readiness probes are down - The pod exits gracefully Example Spec This spec is in this repo as kubernetes.yaml. apiVersion: apps/v1 kind: Deployment metadata: name: graceful-shutdown-app spec: replicas: 1 selector: matchLabels: app: graceful-shutdown-app template: metadata: labels: app: graceful-shutdown-app spec: terminationGracePeriodSeconds: 60 containers: - name: graceful-shutdown-app image: integrii/go-k8s-graceful-termination:latest livenessProbe: path: /alive port: 8080 readinessProbe: periodSeconds: 2 failureThreshold: 3 path: /ready port: 8080 ports: - containerPort: 8080 resources: requests: memory: 128Mi cpu: 500m limits: cpu: 1 memory: 1Gi --- apiVersion: v1 kind: Service metadata: name: graceful-shutdown-app spec: ports: - name: "8080" port: 8080 protocol: TCP targetPort: 8080 selector: app: graceful-shutdown-app type: NodePort Try it yourself You can test this graceful shutdown yourself. Clone this repo and try the following: kubectl create ns graceful-termination kubectl -n graceful-termination apply -f <wait for service to come online> kubectl -n graceful-termination port-forward service/graceful-shutdown-app 8080 (in another terminal) kubectl -n graceful-termination logs -f -l app=graceful-shutdown-app (in another terminal) for i in `seq 1 100000`; do curl -v done (in another terminal) kubectl -n graceful-termination set env deployment/graceful-shutdown-app TEST=`date +%s` (this will cause a rolling update to the deployment) watch kubectl -n graceful-termination get pods <observe terminal doing curl tests> kubectl delete namespace graceful-termination (when you're done with everything) You should not see dropped connections during the rolling update, even though there is only one pod! Some Closing Notes It is too common that I have seen applications not take care when being removed from the flow of traffic, resulting in connection failures. Hopefully this clears things up. This process has always existed, even with traditional load balncers, and in those situations it still is regular procedure to remove backends from the load balancer before bringing down those applications. You also could alternatively do a graceful shutdown integration using preStop hooks, which can be configured to send a web request to your application before it is sent a termination signal – but that approach wasn’t covered here.
https://golangexample.com/kubernetes-application-example-does-not-drop-connections-when-terminating-implemented-in-go/
CC-MAIN-2022-21
en
refinedweb
The Common Agricultural Policy How the CAP operates, the key commodities, competitors and markets for the European Union UK FOOD GROUP Background Briefing 1 Introduction In order to support its advocacy work on agricultural trade and policy, the UK Food Group and Sustain: the alliance for better food and farming, commissioned the Institute for European Environmental Policy (IEEP) to document the way in which Europe’s Common Agricultural Policy (CAP) operates, the agricultural sectors that benefit most from subsidies or protectionist measures, the key produce, markets and competitors for the European Union (EU) and in particular UK agriculture. The briefing also outlines the main impact of the CAP on world trade, developing countries, consumers, farmers, processors and exporters and the environment. The paper is intended as a background briefing to enhance knowledge, and promote informed discussion on the reform of the CAP and the agricultural trade negotiations at the World Trade Organisations. IEEP were also commissioned to produce a second paper outlining possible reform scenarios for the CAP and their impact on key stakeholders. This will be the subject of a second background briefing to be published later this year. UK Food Group and Sustain members will be undertaking further work on the impact of the CAP and reform proposals that would lead to a more sustainable and equitable agricultural policy for the European Union. UK Food Group The UK Food Group is a network of non-governmental organisations from a broad range of development, farming, consumer and environment organisations, who share a common concern for global food security. Through raising awareness of the impact of globalisation in food and agriculture the UK Food Group seeks to promote sustainable and equitable food security policies. The priority areas of action are trade policies, sustainable agriculture and the regulation of food and agriculture transnational corporations. Jagdish Patel, Coordinator, UK Food Group, PO Box 100, London SE1 7RT, UK Tel: 44. (0)20 7 523 2369, Web: Sustain: The alliance for better food and farming Sustain represents over 100 national public interest organisations working at international, national, regional and local level. Sustain's aim is to advocate food and agriculture policies and practices that enhance the health and welfare of people and animals, improve the working and living environment, promote equity and enrich society and culture. Vicki Hird, Policy Director, Sustain: the alliance for better food and farming, 94 White Lion Street, London N1 9PF, Tel: 020 7837 1228, Web: Research: The Institute for European Environmental Policy. This background briefing has been funded by ActionAid, Christian Aid, CAFOD, Methodist Relief and Development Fund, Oxfam (GB) RSPB, RSPCA and the European Commission under its programme to raise public awareness of development issues. February 2002 Key Facts Contents • Agriculture contributes less than 2 per cent of Gross Domestic Product (GDP) in the EU as a whole, and accounts for around 4.5 per cent of employment. • More than three-quarters of land in the EU is dominated by agriculture or woodland. • The EU is the largest single import market for agricultural produce in the world. • The current cost of the CAP to the world economy is estimated at US$ 75 billion a year, two-thirds of which is born by the EU. • The CAP consumes about 45 per cent of the total EU budget – around 43 billion Euro each year. •Two-thirds of the CAP budget is spent on crops rather than livestock. 'Direct payments' to farmers account for around 65 per cent of the CAP budget. Other support includes fixing guaranteed minimum prices maintained by intervention buying and production quotas and this therefore is mostly funded through higher consumer prices. • At least one-quarter of the CAP budget is paid to processors, exporters and other organisations rather than the producer. • France, Denmark and certain other countries emerge as 'winners' in the CAP, having a greater share of expenditure under the CAP than their contribution to the EU budget overall. •Direct payments are biased in favour of larger farmers, being based on the scale of production. EU Agriculture and World Trade 20 Agricultural production in Europe 20 EU imports and exports 40 Supply balance and markets for key commodities 60 The Common Agricultural Policy 13 CAP in brief 14 Distributional impact of the CAP 15 CAP budget 16 CAP export refunds administration system 20 CAP intervention expenditure 20 Who does the CAP benefit? 21 Focus on UK agriculture 25 Products which benefit most from the CAP's WTO 26 compliant tariffs and quota restrictions Impact of the CAP 30 World prices 30 Developing country markets 31 The environment 33 Consumers 34 References 36 The Common Agricultural Policy 2 1. EU agriculture and world trade 1.1 Agricultural production in Europe Today, agriculture is no longer a major economic sector in the European Union. The agricultural sector contributes a limited share of Gross Domestic Product (GDP) in most Member States. In the EU as a whole it is about 1.8 per cent of GDP (Directorate-General for Agriculture, 2001). In the United Kingdom (UK), the figure is particularly low, at around 1 per cent, although the total agro-industrial complex has a much bigger share in national income. Including merchants, wholesalers, food and drink manufacturers, retailers and caterers, the contribution is around 9 per cent or £55 billion to UK GDP (MAFF, 2000a). The agri-food sector (primary production, processing and deliveries to these sectors) has a share of around 6 per cent of total gross value added in the EU as a whole. Agriculture also accounts for only a small and declining proportion of EU employment – currently 4.5 per cent, although the figure varies considerably between regions (European Commission, 2001b). However, agriculture is a dominant user of land in most European countries. More than three-quarters of the territory of the EU is agricultural or wooded land, and farming is a significant feature of Europe’s rural areas. For that reason, and because of its much greater economic, political and social significance in the immediate post-war period during the establishment of the European Community (EC), agriculture has a prominent place in EU policies and the CAP still absorbs about 45 per cent of the total EU budget. The EU is a key producer of food in the world market and it also represents the largest single import market in the world. While a relatively small proportion of total temperate foodstuffs (of the kind mainly produced in the EU) is traded internationally, the EU is a major exporter of several commodities, as well as a powerful importer, giving it considerable leverage in world trade. The gross production value of EU agriculture in 2000 is estimated at around 265 billion Euro, including crop products (150 billion Euro) and livestock products (115 billion Euro). The major commodities include: •Milk, which had a share of 17.6 per cent of total production value in 1997, and is the source of a wide diversity of traded products, including butter, cheese and milk powder. About 25 per cent of the total export of dairy products by the EU is to non-EU countries, including the Russian Federation, the US and Saudi Arabia; • Cereals, including wheat, barley, rye and maize, with a share of slightly less than 10 per cent by value and representing an important source of domestic livestock feed as well as a significant commodity for export; • Beef and veal, with a share of about 10 per cent of total agricultural production. The incidence of BSE in the UK and other EU Member States in recent years has resulted in fluctuations in output levels and instability in the domestic market. EU beef production was reduced between 1995 and 1997. Total beef production fell by around 30 per cent in the UK and all exports ceased from 1996, although many have since been reinstated. EU exports of beef to third countries, which are also significant, remained fairly stable until 1997 but have fallen more recently; • Pigmeat, with a share of 12.2 per cent of agricultural production. The sector has faced some severe disease outbreaks in recent years, including classical swine fever in the Netherlands in 1997 and the foot and mouth disease crisis in several countries in 2001. Intra-EU trade of pigmeat covers around 80 per cent of total EU trade in pigmeat but export to third countries is also significant. More than half of the exports of pigmeat to non-EU countries are to Japan (these are dominated by Denmark); • Poultry, accounting for about 5.5 per cent of output receives relatively little support under the CAP but production has been growing rapidly in recent years. Exports exceed imports with a self-sufficing ratio of around 110 per cent in 1997; •Oilseeds and protein crops, which have a relatively minor share of total production value but which are more important in relation to trade, mainly for livestock feed. Historically, the EU has been seen as a significant market for the US and other exporting countries, while domestic production has been heavily supported through the CAP; •Vegetables, with a share of 9 per cent of total agricultural production but a much less important role in international markets. The movement of vegetables is mainly intra-EU trade. The export of tomatoes to third countries accounts for less than 20 per cent of their total export value, and this is mainly to the USA and the Russian Federation; • Other products benefiting from CAP support with a considerable share in production value include fruit The Common Agricultural Policy 3 (4.1 per cent) and wine (6 per cent). Wine plays a very important role in export markets. The EU accounts for about 60 per cent of world production of wine, and is the leading exporter on the global market. Recently, there has been a growth in imports to the EU from ‘newer’ exporting countries like Chile, South Africa, the US and Australia; • Sugar beet has a share of about 2.6 per cent of agricultural production with a self-sufficiency ratio of about 113 per cent for sugar in the EU. Exports exceed imports, primarily of cane from tropical suppliers. Most sugar is used for human consumption but there is a small industrial market as well; • The EU is also a minor producer and a major importer of certain products of particular trade significance, such as rice, cotton, tobacco, bananas and sugar cane. Fruit and vegetable production is concentrated in the Mediterranean part of Europe while cereals, beef, dairy and oilseeds production are more dispersed around the Member States. The greatest intensities of production are in France, Germany and the UK. Pigmeat production is particularly important in Denmark, the Netherlands, northern Germany, Spain, Brittany and the UK. The main production areas for citrus fruit are Spain, Italy and Greece. Certain products, including potatoes and some kinds of fruit and vegetables lie outside the CAP and are not discussed further here. They account for about 13 per cent of all farm output in the EU. The Common Agricultural Policy 4 1.2 EU imports and exports There are important variations from year to year and in the course of production and market cycles but the levels of exports of some key commodities – by value and by volume – are shown in the tables below. Animal products are especially significant. Milk product exports include large volumes of butter and milk powder, lower value commodities which depend heavily on the availability of export subsidies under the CAP to enter external markets (fig.1.0-1.1). To appreciate the relative importance of the EU as a player in global trade markets, we need to look at its relative share of trade (imports and exports) in the principal commodities, as shown below (fig.1.2). Among the main commodities, EU dairy products dominate export markets, but cereals, meat and wine are also important, as well as particular kinds of fruit and vegetable products including tomatoes, citrus fruits and olive oil. However, it should also be remembered that processed and higher value products, such as spirits, biscuits and confectionery, canned and frozen foods, also make up a significant share of EU exports. In relation to EU imports, cereals and oilseeds are relatively more important than dairy products, but meat is also significant. To provide more detail on key commodities, the following tables and text look in particular at common wheat, oilseeds, wine, sugar and the key livestock products of dairy, beef, pigmeat and sheepmeat. The Common Agricultural Policy 5 (1) EU-15, including Canary Islands and the French overseas departments from 1997. (2) Figures are for 1996 and 1997 respectively. Source: Commission of the European Communities (2001c). fig 1.0 Exports of agricultural products by the EU (value in million US Dollars) Commodities EU-15 1997 (1) 1998 Cereals 2,355 1,866 Live animals 750 729 Meat and edible meat offal 4,128 3,674 Dairy produce; eggs; natural honey 5,423 5,002 Edible vegetables, plants, roots and tubers 1,355 1,445 Edible fruit and nuts, peel of citrus fruit or melons 1,681 1,509 Alcoholic beverages (2) 10,160 10,595 (1) EU-15, including Canary Islands and the French overseas departments from 1997. Source: Commission of the European Communities (2001c). fig 1.1 EU exports by product and aggregate (volume in 1,000 tonnes) Commodities EU-15 1997 (1) 1998 Wheat and wheat flour 14,784 13,324 Fruit and vegetable preparations 1,106 1,141 Cheese 283 175 Milk and other milk products 512 448 Wine (1000 hl) 12,226 12,855 Beef and veal 1,055 773 Pigmeat 1,105 1,267 Sheepmeat and goatmeat 4.1 4.2 (1) Exports (excluding intra-EU trade) and excluding processed products. (2) Cereals as grain; processed products excluded. (3) Including salted meat. (4) Excluding salted meat for trade. Source: Commission of the European Communities (2001c). fig 1.2 EU-15 and world production and trade in the principal agricultural products (1997) per cent of world trade World World Imported Exported production trade by EU by EU 1000t (1) 1000t Total cereals (except rice) (2) 1,523,167 191,483 3.2 10.2 – of which wheat 609,566 101,163 3.5 13.0 Wine 26,423 2,325 27.9 60.6 Total milk 471,794 599 3.0 28.0 Butter 6,607 830 11.1 20.2 Cheese 15,084 1,097 11.6 40.8 Milk powder (skimmed & whole) 6,035 2,522 2.9 30.3 Total meat (except offal) 221,025 (3) 11,456 (4) 6.6 19.1 – of which beef and veal 56,948 (3) 3,931 (4) 4.5 13.3 – pigmeat 87,873 (3) 1,243 (4) 3.0 51.2 The Common Agricultural Policy 6 1.3 Supply balance and markets for key commodities The following tables are based on EU statistics. They show the position in the late 1990s, including the most recent year available from published sources. However, the full effects of BSE, foot and mouth disease and the Agenda 2000 changes to the CAP will not be evident until data from the beginning of the new century becomes available. Even though the following data is from the same EU documentary source there appears to be some inconsistency between tables covering commodities such as milk and oilseeds (fig 1.3). Common wheat In 1999 the production of common wheat was up by 7.8 per cent (94.4 million tonnes) in spite of a total rate of set-aside of 10.4 per cent. Little common wheat is imported whereas 15.1 per cent of the usable production is exported (1997/98). Human consumption of common wheat in the EU increased during 1997/98 to reach its highest level over the previous four year period, at 63.5 kg/head. Major export markets for the EU include the Middle East and the Russian Federation. The EU competes with other major exporters, notably the US, Canada, Australia and sells a range of cereals, including malting barley, an important export for the UK. Imports also take place; for example in July 2001, Spain and Portugal were importing bread wheat from the US, while Italy and Austria were purchasing Hungarian wheat. Sugar In 1998 the area grown with sugar beet reduced by 2.4 per cent compared to the year before corresponding to 1,993,000 hectares. The average yield in 1998 reached 8.07 tonnes per hectare which was a decrease compared to the previous year but still 8.03 per cent above the average level from 1994 to 1998. The production of sugar (white sugar equivalent) in 1998 totalled 16,382 million tonnes of which 16,076 million tonnes derived from sugar beet, 257,000 tonnes came from cane and 49,000 tonnes from molasses. The chemical industrial use of sugar in 1998 increased by 20 per cent to 312,000 tonnes; human consumption has been stable over the last five year period (fig 1.5). Wine Wine production in the European Union in the 1998/99 wine year totalled 159 million hl. The European Union accounted for around 62 per cent of world wine production in the 1997/98 wine year, and was the world’s largest wine exporter with 12.8 million hl in 1998. For 1998, the main buyers of EU wine were the US (around 3 million hl), Japan (1.9 million hl), Switzerland (1.6 million hl) and Canada (1.1 million hl). Significant wine imports into the EU in 1998 came from Australia (1 million hl), the US (815,626 hl), Chile (780,906 hl), South Africa (760,439 hl) and Bulgaria (609,501 hl). The EU Member States importing most wine in 1998 were the UK with 44 per cent of total imports and Germany with 25 per cent of total imports. Human wine consumption in the more traditional wine drinking countries such as France and Italy is declining but there is growth in others such as the UK (fig 1.4). The Common Agricultural Policy 7 (1) Calculated on intra-import basis. Source: Commission of the European Communities (2001c). fig 1.3 Supply balance - common wheat (1,000t) Common wheat EU-15 1994/95 1995/96 1996/97 1997/98 Usable production 77,081 80,080 91,144 87,558 Imports 1,571 1,467 1,090 1,971 Exports 15,990 12,136 13,229 13,252 Intra-EU trade (1) 16,210 16,617 21,947 17,038 Internal use 67,483 70,364 72,647 75,825 Human consumption (after processing) 22,551 22,931 23,518 23,739 Human consumption (kg/head) 60.8 61.7 63.1 63.5 Self-sufficiency (per cent) 114.2 113.8 120.4 115.5 (1) EU-12. Source: Commission of the European Communities (2001c). fig 1.4 Supply balance – wine (1,000hl) Total wine EU-15 1994/95 (1) 1995/96 1996/97 1997/98 Usable production 155,423 154,696 169,323 156,671 Imports 3,862 6,676 5,725 6,169 Exports 12,498 9,663 13,720 14,187 Intra-EU trade 31,346 29,996 29,296 33,543 Human consumption 124,588 129,781 128,147 126,041 Human consumption (l/head) 35.9 35.2 34.7 33.6 Self-sufficiency (per cent) 112.0 108.0 122.0 116.0 (1) Excl. C sugar. (2) Excl. sugar traded for processing. (3) Ratio of human consumption to resident population at 1 January. Source: Commission of the European Communities (2001c). fig 1.5 Supply balance - sugar (year October/September) Sugar EU-15 1,000t white sugar 1995/96 1996/97 1997/98 1998/99 Total production – of which: 15,859 16,767 17,764 16,382 C sugar production for export 1 581 2,369 3,148 2,021 Usable production (1) 14,278 14,398 14,616 16,361 Imports (2) 2,200 2,272 2,181 2,316 Exports (1)(2) 3,600 3,313 3,720 3,700 Intra-EU trade (1,684) (1,871) (1,679) (1,700) Internal use 12,559 12,727 12,708 12,700 – of which animal feed 5 2 2 2 – industrial use 246 250 260 312 – human consumption 12,308 12,475 12,446 12,386 Human consumption (kg/head) (3) 33.2 33.5 33.3 33.1 Self-sufficiency (per cent) (1) 113.7 113.1 115.0 113.1 The Common Agricultural Policy 8 Oilseeds In 1998 overall oilseed production in the EU was 15.9 million tonnes (including 1.2 million tonnes of non-food production) which was an increase of 44 per cent and 11 per cent compared to 1996 and 1997, respectively. The European Union is a significant net importer of oilseeds. Soya beans account for most imports and between 1996 and 1998 the proportion of total imports varied between 81 and 86 per cent. Two product categories derive from oilseeds: oil for human consumption, and cake for animal feed. The latter is responsible for the main European imports, as soya beans (mostly from the US) are used as a important protein source in the EU livestock feed sector (fig 1.6). Milk production is the most important segment of EU agriculture in economic terms, particularly in northern Europe. The four biggest Member States and the Netherlands are responsible for three-quarters of output. The EU is the world’s largest exporter of milk. In 1998 EU dairy exports were about 15 million tonnes milk equivalent while imports were in excess of 3.6 million tonnes. Major export markets for EU dairy products include the Russian Federation, Asia and Latin America, Japan and North Africa. Exports rely heavily on subsidies, with volumes bound under GATT agreements. Imports into the EU include butter and cheese, particularly for the UK market, with New Zealand a major supplier. Dairy products Because of the variety of products derived from milk, it is more difficult to obtain a supply balance of the kind available for other commodities. The table shows output of the main products and trade levels in three key commodities (fig 1.7). The Common Agricultural Policy 9 (1) Rapeseed, sunflower seed and soya beans. (2) Based on quantities entering. (3) Soya beans are not included in the 1996 EU production figure. Source: Commission of the European Communities (2001c). fig 1.6 Oilseed internal and external trade (1,000t) Oilseed (1) EU-15 1996 1997 1998 EU production 11,021 (3) 14,336 15,902 Intra-EU trade (2) 3,428 4,094 4,072 Imports 17,143 16,057 17,574 – of which rapeseed 568 279 620 – sunflower seed 2,700 1,957 2,193 – soya beans 13,875 13,821 14,761 Exports 507 569 801 (1) Figures are for 1997. Source: Commission of the European Communities (2001c). fig 1.7 Milk and milk products - EU market (1998) Number of dairy cows (1,000 head) 21,506 Production (1,000t) Cow’s milk 120,837 Cow’s milk delivered to dairies 113,403 Fresh milk and fresh milk 38,793 products 1,833 Butter 6,341 Cheese 1,081 Skimmed-milk powder 1,015 Other milk powder 1,242 Concentrated milk 141 Casein Imports (1,000t) Butter 71 Cheese 100 Skimmed milk powder (1) 64 Exports (1,000t) Butter 222 Cheese 453 Skimmed milk powder (1) 279 of which – exports at world market prices 275 – food aid 4 The Common Agricultural Policy 10 Beef and veal There has been an excess of supply over domestic consumption for many years and strong reliance on export subsidies eg for exports to the Russian Federation and the Middle East. In 1998 total EU beef and veal production was down by 3.4 per cent but still accounted for about 14 per cent of world production. The per capita consumption of beef and veal in the EU fell in 1996 with the BSE outbreak and subsequently exports have been substantially affected (fig 1.8). Pigmeat In 1998 the world’s leading producer of pigmeat was China with output totaling 36.9 million tonnes, followed by the EU with 17.6 million tonnes, which was an 8.2 per cent increase on 1997. Production has been increasing in recent years and consumption has not declined due to food scares, as it has for beef. There is some production surplus but several EU countries, including Denmark are competitive exporters. The level of CAP subsidy in the sector is very low. The most important destinations for EU export in 1998 were Russia (335,000 tonnes), Japan (175,000 tonnes) and Hong Kong/China (145,000 tonnes). In 1998 35 per cent of exports qualified for export refunds due to depressed world prices relative to the EU but this varies – in 1997 for example the figure was only 18 per cent (fig 1.9). Sheepmeat and goatmeat Production of sheepmeat in the EU is heavily concentrated in a few Member States, notably the UK, Ireland, Spain, Greece and France. EU production has been steady or slightly declining through the 1990s mainly due to declines in flocks in certain Member States, particularly France. Most trade in sheepmeat is between the EU countries (including exports from the UK and Ireland) but there are significant imports from outside the EU, mainly from New Zealand. These imports traditionally complement the seasonality of lamb production in filling a gap in the market when EU lamb is less readily available. Exports are negligible (fig 2.0). After China the EU is the world’s second largest producer of sheepmeat and goatmeat. The EU is also the second largest consumer after China. EU imports are carried out under WTO tariff-free or reduced-tariff quotas together with additional quantities provided in specific trade agreements. New Zealand is the world’s main exporter and is generally close to its EU tariff-free import quota of 226,700 tonnes. Australia is the second largest exporter to the EU but at a much smaller level of around 19,000 tonnes. There are major imports of several other commodities, including: 7 •Proteins and animal feed •Tropical produce •Fruit and vegetables. The Common Agricultural Policy 11 (1) Total trade, with the exception of live animals. (2) All trade, including live animals (figures are based on imports). (3) Carcass weight. Source: Commission of the European Communities (2001c). fig 1.8 Supply balance – Beef/veal (1,000t (3) ) Beef/veal EU-15 1995 1996 1997 1998 Net production 7,964 7,950 7,889 7,624 Imports (1) 377 364 392 353 Exports (1) 1,006 965 971 694 Intra-EU trade (2) 1,974 1,671 1,811 1,832 Internal use (total) 7,480 6,934 7,114 7,398 Gross consumption (kg/head/year) 20.1 18.6 19.0 19.7 Self-sufficiency (per cent) 108.5 116.2 111.5 103.6 (1) carcass weight. Source: Commission of the European Communities (2001c). fig 1.9 Supply balance – pigmeat (1,000t (1) ) Pigmeat EU-15 1995 1996 1997 1998 Net production 16,088 16,384 16,279 17,584 Imports 83 95 62 44 Exports 772 861 949 1,034 Intra-EU trade 3,324 3,376 3,574 4,068 Internal use (total) 15,191 15,484 15,175 16,501 Gross consumption (kg/head/year) 41.0 41.7 40.8 44.0 Self-sufficiency (per cent) 106.0 105.7 107.3 106.6 (1) carcass weight - All trade with the exception of live animals. (2) All trade in carcass weight, with the exception of live animals (figures based on imports). Source: Commission of the European Communities (2001c). fig 2.0 Supply balance – sheepmeat and goatmeat (1,000t) Sheepmeat and goatmeat EU-15 1995 1996 1997 1998 Net production 1,180 1,172 1,130 1,153 Imports (1) 238 255 257 256 Exports (1) 6 8 3 3 Intra-EU trade (2) 225 244 214 214 Internal use (total) 1,412 1,419 1,383 1,406 Gross consumption (kg/head/year) 3.8 3.8 3.7 3.8 Self-sufficiency (per cent) 82.4 81.7 80.8 81.2 The Common Agricultural Policy 12 The major export markets for EU agricultural products are shown in the table below. The importance of Organisation for Economic Cooperation and Development (OECD) member countries is clear, reflecting the high proportion of processed products in this category. Nearly half of US imports from the EU for example consist of ‘Beverages’, notably wines and spirits. Russia is an important market for EU agricultural commodities, including grains, meat and dairy products and is the recipient of a large volume of food qualifying for export subsidies. Developing countries also appear on this list and it should be recalled that poorer countries importing relatively small quantities from the EU may nonetheless be affected significantly by the resulting impact on their national markets (fig 2.1). Source: European Commission 2001c fig 2.1 Principal export markets for EU Agricultural Products in 1998 – ranked by value Country Exports – Millions ECU US 8,034 Russia 4,038 Japan 3,627 Switzerland 3,131 Poland 1,767 Hong Kong 1,321 Saudi Arabia 1,210 Canada 1,172 Norway 1,171 Algeria 1,080 Czech Republic 951 Turkey 889 Egypt 742 Brazil 737 Australia 714 Taiwan 605 United Arab Emirates 602 China 585 Israel 561 Libya 544 Singapore 544 Hungary 524 South Korea 465 Morocco 443 Lebanon 434 Total of 25 countries (A) 35,892 Total of third countries (B) 51,424 per cent A/B 69.8 The Common Agricultural Policy 13 1.4. Conclusions This section has illustrated the relative importance of the EU as a player in world markets for agricultural produce, both as a significant exporter of some key commodities (eg dairy products, cereals, meat and wine) and as a marketplace in its own right. The EU market is supplied mainly by its own domestic producers but also offers major import opportunities to producers of certain commodities from outside the EU (notably cereals, oilseeds and beef, as well as rice, wine, sheepmeat, sugar cane and tropical produce). 2. Overview of the Common Agricultural Policy The EU’s Common Agricultural Policy is one of the longest established elements of common policy in Europe. Its overall aims were enshrined in the original Treaty of Rome and include protection of farm incomes, market stabilisation and ensuring security of supplies to consumers. These aims were pursued through a mix of mechanisms applied to the (then) principal commodities of the Community’s producers, notably dairy products, beef and veal, and arable crops. As the Community enlarged, new ‘regimes’ were added to cover a wider range of outputs (eg sheep and goats, triggered by the accession of the UK and Ireland, and a number of ‘southern’ products including olive oil and fruit and vegetables, when southern countries joined). At the same time, the CAP has developed a range of ‘structural support’ policies over the past 30 years, offering farmers help to restructure, modernise or otherwise adjust their enterprises. Thus, it is important to remember that the CAP is not a single comprehensive or uniform policy, but a sizeable collection of separate regimes or packages of policy instruments applied to different commodities, sectors or issues of concern. It is also a dynamic policy which has evolved significantly in recent years. Box 1 gives a brief list of the main components of the current CAP, as established following the Agenda 2000 reforms last year. Together these instruments cost the EU budget a total of around 43 billion Euro per year. In summary, the CAP divides into two kinds of support: a) Commodity support regimes each targeting specific agricultural outputs (c.90 per cent of the budget); b)Broader kinds of support for structural adjustment, diversification and environmental management (c.10 per cent of the budget). In category a), one finds a variety of regimes which include those offering a high degree of market support as well as those offering only minimal support. Some regimes rely heavily on classic market intervention mechanisms, fixing guaranteed minimum prices within the EU and maintaining these by intervention buying when markets get oversupplied or by applying quotas on production, and applying import tariffs and export subsidies to maintain differentials with world market prices. The classic examples here are dairy products and sugar, although a similar regime also applies to olive oil. However, other regimes now include alternative, less ‘trade distorting’ policy measures such as direct payments to farmers, paid per head of livestock held or per acre of crops grown in the past. These payments may be the principal form of producer support (as with the oilseed, sheep and goatmeat regime) or they may be part of a ‘compensation package’ that has resulted from the partial dismantling of former classic market intervention mechanisms (as with beef and veal, and cereal regimes). Arable and beef and veal regimes are now a complex mix of some market intervention and some direct payment, as guaranteed minimum prices have been gradually cut since 1992 and increased levels of compensation have been introduced. The balance of support between market intervention and direct payments also differs between individual commodities. So, for example, EU wheat production is currently relatively unsupported by market intervention and EU wheat prices are little different from world prices, whereas some other grains, oilseeds, protein crops and beef have all remained more heavily supported by these mechanisms. Obviously, the degree to which market intervention mechanisms are used reflects the dynamics of world markets as well as domestic considerations. With the ‘lightweight’ regimes, EU funds may be offered simply to promote more effective market organisations to supply goods in a co-ordinated way, rather than being used directly to subsidise or support production itself. This is generally the case with EU fruit and vegetables as a result of reforms to the regime in 1996, which significantly reduced the role of price support in these commodities. In the pigmeat regime, there are provisions for market intervention mechanisms to be applied in extreme circumstances, such that buying pigmeat into storage and offering export subsidies to maintain EU prices can sometimes apply. The Common Agricultural Policy Box 1: The Common Agricultural Policy in Brief A.‘First Pillar’ (generally commodity-related) Measures (wholly EU-funded) 1. The establishment and maintenance of a single internal market for agricultural products, involving the removal of barriers to trade between Member States. 2. Major support regimes using supported (guaranteed minimum) market prices, often with intervention buying or private storage mechanisms, for major agricultural outputs, for main commodities - • beef and veal, dairy products, • arable crops – wheat, barley, oats, maize, oilseeds, protein crops, rice • sugar beet, sheep and goats, olives, wine, cotton, starch potatoes, tobacco and the establishment of common import tariffs/export refunds in relation to trade in each of these commodities outside the EU, so as to maintain prices inside the EU. 3. Modifications to these regimes in recent decades – a) supply controls to limit output (eg production quotas on milk since 1984, also in sugar beet and starch potatoes), and compulsory ‘set-aside’ of a fixed proportion of producers’ arable land which must not be used to grow food crops (introduced in 1992, currently fixed at 10 per cent of all cropland) b) direct payments per head or per hectare mainly to compensate producers for cuts in guaranteed prices (eg in beef and arable sectors, introduced in 1992) or simply to support producers (sheep and goats) c) quotas and/or area ceilings to limit overall expenditure on direct payments (eg in sheep, beef and arable sectors, introduced in 1992, as well as in wine, introduced in 1998) d) maximum stocking density limits on producers’ eligibility for livestock direct payments, as well as a separate headage payment for more extensive production under the beef regime (introduced in 1992), to encourage more extensive (ie less productive) farming, thus also to control supply. 4. More ‘lightweight regimes’ involving emergency buying into storage and some other market support, including support for producer groups, etc, for certain other products (eg pigs, poultry, fruit and vegetables). NB Pigmeat can attract export refunds when world prices are low. 5. Regime adjustment mechanisms: ‘outgoers’ schemes (eg dairy) or aids for ‘grubbing up’ for different commodities in surplus (eg olives, wine, apples) – some introduced only for short periods, others more continuous. B. ‘Second Pillar’ – Structural and Rural Development Measures (part-funded by EU, part by MS) A second and increasingly significant aspect of the CAP is focused on broader structural, environmental and rural development aspects of agriculture and the countryside. This has included farm structures policies, the 1992 accompanying measures under the CAP, and most recently, the newly christened ‘second pillar’ of the CAP: the Rural Development Regulation 1257/1999. These policies include: a) aids for farming in marginal areas (paid per hectare of land farmed); b) agri-environment schemes to promote environmental land management (paid per hectare) c) aid for farm investment/modernisation and farm diversification, marketing and processing (generally capital grants, as are most of items d-i) d) assistance for farm forestry – both afforestation and certain forms of management e) early retirement aids, aid for young farmers f) vocational training for farmers and foresters g) aids for improved water management, land reparceling and land improvement h) support for farm-related tourism and craft activities i) a range of other rural development provisions. C. Horizontal Measures Introduced in 2000, the ‘Common Rules’ Regulation 1259/1999 applies horizontally across both pillars of the CAP. It enables Member States to use ‘modulation’ (capping direct payments) to switch funding from commodity support to certain elements of the ‘second pillar’; and it requires Member States to meet ‘environmental protection requirements’ in relation to commodity regimes, including the option of introducing environmental cross-compliance (environmental conditions) on direct payments. To date, the UK and France have applied modulation and Germany and Portugal plan to apply it from 2002/3, while a number of Member States have applied cross-compliance but mainly to reinforce the existing provisions of environmental legislation rather than to set new standards. D. Indirect Measures The CAP also has direct influence on national policies for agriculture. For example, there are EU rules to control ‘state aids’ to agriculture – support offered by national governments to particular groups of producers, usually due to ‘special circumstances’ affecting a sector (eg disease, particular hardship, etc.). Some Member States make extensive use of state aid, which amounts to billions of Euro per annum. The Common Agricultural Policy 15 3. Distributional impact of the CAP The distributional effects of the CAP are many and various. Some significant effects are as follows. • The redistribution of resources from society at large directly to the agriculture sector, notably because of the 43 billion Euro of EU budgetary resources devoted to the CAP each year as well as national contributions to the ‘second pillar’. Some of these resources underpin the supply of ‘public goods’ (eg environmental and social goods and services) by farmers to society but opinions differ greatly as to what proportion this represents. • The costs attributable to consumers (often initially to food processors and retailers) arising from the CAP market intervention regimes – including higher market prices than would prevail without them, and restricted access to lower priced imports. Impacts on consumers are discussed briefly in section 11 below. • The redistribution of resources between Member States – this arises because some countries receive a larger share of expenditure under the CAP budget than they contribute to the EU budget as a whole. Certain countries with a large share of output of more heavily subsidised commodities emerge as winners, including France and Denmark. Because many ‘southern’ commodities, such as fruit and vegetables are relatively lightly subsidised, there is generally held to be a ‘northern’ bias in the CAP. Nonetheless, certain southern products including olive oil, tobacco, rice and cotton receive a very high level of support per unit of output. •Differential impacts on different types of farm – this arises mainly from the uneven level of support between different commodities, as shown in the budget below. For example the traditionally high cereal prices in the EU have benefited arable producers but have raised the cost of this feed source to livestock farmers. However, the related arable import tariffs have probably encouraged EU livestock farmers to use a higher proportion of homeproduced feed than they would have if there had been free access for lower cost feedgrain producers from elsewhere – most notably the US. • The distribution of direct support (direct payments) under the CAP is skewed heavily in favour of larger farmers because it is based to a large degree on the scale of production – either farms’ present capacity or output (in land area or livestock numbers) or their output in the relatively recent past. Thus the more is produced, the more aid is received. Evidence of this is discussed below. • The CAP also influences the distribution of resources within the food chain. Some forms of support (eg olive oil subsidy, export subsidies) are paid directly to processors rather than individual farmers, and the heavy use of export subsidies in particular creates economic opportunities for large commercial exporting companies, wherever internal prices are high, for example, for butter. However, reliable information on the scale and impact of CAP benefits for the agribusiness sector is difficult to find. • The CAP and associated EU trade policy also create distributional effects between EU Member States and trading partners, including a large number of developing countries. Some of these are discussed briefly in section 9 below. The next section discusses the impacts of the CAP budget in more detail. The Common Agricultural Policy 16 4. The CAP budget The overall 2001 EU budget totals 96 billion Euro in appropriations from the Member States and 93 billion Euro earmarked for payments. The 'appropriations for agricultural expenditure' total is 43 billion Euro. Of this, 38 billion Euro is allocated for ‘first pillar’ measures, with the largest share of this (42 per cent) for particular arable crops. ‘Second pillar’ measures account for only about 4.5 billion Euro, just over 10 per cent of the total (fig 2.2). Source: European Commission (2001a). fig 2.2 Common Agricultural Policy - Budget 2001 EAGGF Guarantee Section Amount (million Euro) per cent Arable (cereals, oilseeds, protein crops) 18,026.0 41.6 Sugar 1,726.0 4.0 Olive oil 2,473.0 5.7 Dried fodder and grain legumes 384.0 0.9 Fibre plants and silkworms 855.0 2.0 Fruit and vegetables 1,654.0 3.9 Vine products 1,153.0 2.6 Tobacco 1,000.0 2.3 Other plant products 324.0 0.7 Plant products – Total 27,595.0 63.7 Milk and milk products 2,345.0 5.4 Beef/veal 6,007.0 13.9 Sheep and goats 1,620.0 3.7 Pigmeat, eggs and poultrymeat 170.0 0.4 Other 16.7 0.0 Animal products – Total 10,158.7 23.5 Ancillary expenditure 1,049.0 2.4 First Pillar - Total 38,802.7 89.6 Rural development 4,495.0 10.4 Second Pillar - Total 4,495.0 10.4 CAP – Total 43,297.7 100.0 The Common Agricultural Policy 17 The Common Agricultural Policy 18 This is not a complete account of CAP related expenditure since it excludes the contributions required from Member States to co-finance measures under the ‘second pillar’, which vary from 25 per cent to 75 per cent of the total expenditure on each measure, according to regional circumstances. In those Member States adopting ‘modulation’, further national funds will be used to match the sums generated by capping ‘first pillar’ aids and redirecting these monies into ‘second pillar’ measures. It also under-represents total spending within the EU on agricultural support because it does not include the ‘state aids’ provided by Member State governments on top of CAP support. In some countries, such as France and Italy, these have been offered on a substantial scale to particular regions or sectors. processors, exporters and other organisations rather than direct to the producer (fig 2.3). Nevertheless, the single largest component of the budget is now ‘direct payments’ to farmers, around 65 per cent of the total in 1999. These include headage payments for beef cattle, sheep and goats and area payments for cereals, oilseeds, protein crops and set-aside. As these payments reflect production levels either now, or in the recent past, the pattern of their distribution reflects the relative productivity of farms throughout the EU as well as the greater level of support offered to arable producers as opposed to livestock producers. Two-thirds of the CAP budget is spent on crops, rather than the livestock sectors, although this reflects the method of support applied to different sectors. For example, direct payments are fundamental to the cereals support system but dairy prices are maintained by a variety of methods, including import tariffs, intervention purchase quotas and export refunds. Direct payments and export refunds give rise to budgetary costs, but import tariffs and quotas do not. CAP expenditure on the ‘second pillar’ is projected to rise slightly but is planned to reach no more than 10.5 per cent of the CAP budget by 2006. A proportion of the CAP budget doesn’t take the form of direct payments to farmers, but includes refunds to exporters (14 per cent) and payments to government agencies and private sector companies which buy commodities from the market and store them, disposing of then later in order to maintain prices in weak periods (4 per cent). These and related payments account for: •16 per cent of arable support • nearly 40 per cent of beef support • 100 per cent of dairy support (ie the only items of direct spending, under this regime) • 100 per cent of olive oil support (as with dairy). Intervention payments in the dairy sector include sizeable ‘consumption aid’, around 1.2 billion Euro in 1998. This provides subsidies for farmers using milk powder to feed veal calves and for food manufacturers using EU butter instead of other fats in products such as biscuits. In total, at least one-quarter of the CAP budget is paid to The Common Agricultural Policy 19 (1) Grants are offered for grubbing up vineyards in areas of excess production. In addition aid is paid to wine manufacturers to encourage them to use grape must, in place of sucrose, to increase the alcoholic strength of certain wine products. Aid is also offered for the use of grape must for other purposes than winemaking. (2) Various distillation aids are paid to producers and manufacturers. These include aid for distillation of by-products, which is compulsory for every producer in order to eliminate the least valuable portion of production. Distillation aid generally funds the conversion of wine into alcohol for industrial use. Source: Commission of the European Communities (2000b). fig 2.3 Breakdown of the CAP budget (million Euro) 1997 1998 1999 Milk and milk products 2,984.9 2,596.7 2,510.1 - export refunds 1,753.3 1,426.7 1,439.4 - intervention, including storage 1,231.6 1,170.0 1.080.5 Arable crops 17,414.1 17,945.2 17,865.9 - export refunds 532.3 478.9 883.1 - storage 71.5 1,083.9 712.7 - direct aid per hectare 14,617.6 15,134.2 14,623.9 - other unspecified market support 300.8 280.0 362.4 Products of the vine-growing sector 1,030.1 700.0 614.6 - export refunds 59.7 41.2 27.4 - aid for grubbing up and grape must (1) 699.6 65.9 360.5 - private storage 49.1 54.9 41.2 - aid for distillation of wine (2) 221.7 247.0 187.1 Beef/veal 6,580.4 5,160.6 4,578.6 - export refunds 1,498.9 774.5 594.9 - direct payments and storage 5,081.5 4,386.1 4,008.6 Sheepmeat and goatmeat 1,424.9 1,534.6 1,894.3 - export refunds - 0.1 - - - direct payments 1,425.0 1,534.6 1,915.5 Pigmeat, eggs and poultrymeat 557.5 327.9 432.8 - export refunds for pigmeat 72.2 74.5 275.0 - private storage for pigmeat 0.2 - 45.9 - exceptional market-support measures 407.0 163.8 6.0 (storage, export refunds) Export refunds on certain goods obtained 565.9 553.1 573.4 by processing agricultural products Food aid 328.7 333.7 390.5 The Common Agricultural Policy 20 The export refunds administration system Export refunds are payable on the basic agricultural commodities as well as the ingredients (cereals, milk, sugar, rice and eggs) contained in processed products (eg chocolate, biscuits and alcoholic drinks), for all the regimes which support EU prices at levels above world prices. However, each regime has its own regulations fixed by the European Commission. The following is a brief overview of how the export refund administration system works for four different regimes, namely milk, sugar, beef and wheat. Initially, traders have to register and obtain an export licence from the Intervention Board in order to benefit from export refunds. At the moment, there is a general 60 Euro limit (de minimis) for all agricultural commodities in the EU, below which traders do not need an export licence before exporting products eligible for refunds. This limit is not set annually but for each application. The export licence gives a trader the right and the obligation to sell the commodity outside the EU. Furthermore, vertical limits (de minimis rules), relating to quantity rather than value, for different agricultural commodities exist and are listed as CN codes in the annexes to EC Regulation 1291/2000. For milk and beef products the limits are 150 kg and 250 kg, respectively (fig 2.4). The system works as follows: the trader fills in an application form which goes through a customs procedure. The trader is now allowed to export the products while the application is handed over to the Intervention Board. This body ensures that the application is correct and calculates the exact export refund. (The process is illustrated in fig 2.4). The trader will not get paid until documentation for the current export is produced. For each agricultural commodity there is a fixed price set by the EU Commission and the prices for the different agricultural commodities are exactly the same in all EU member countries. In the dairy sector, the main companies applying for export refunds include a broad range of businesses, from big companies like Unilever and Nestlé to small dairy companies. There is no minimum limit for which export refunds can be granted. However, some EU member countries have introduced administration fees due to an overload of the system, for instance in Denmark a minimum of DDK 200 (£17) has been introduced because 50 per cent of the applications were on or under this amount, and this resulted in administration overload, followed by delays in payment. In the UK there is no such administration fee. The sugar export refunds system works the same way as described above. However, the export of sugar in its ‘natural state’ (white sugar) within the permitted quota (A and B) takes place mainly under a weekly tendering procedure, where traders can apply for export refunds. For a specific amount of sugar, traders bid for refunds that are adjudicated by the EU Commission under the Sugar Management Committee – which decides who will get the refunds. As with the milk system there are minimum levels for licences, currently 2,000 kg for white sugar. Additional, other limits exist for processed sugar of 250 kg (isoglucose), 150 kg (invert sugar) and 150 kg (artificial honey). Any C quota sugar has to be exported without refunds. A trader trying to sell C sugar inside the EU will get fined. Agreements with certain Less Developed Countries (LDCs) enable limited quantities of raw sugar, typically sugar cane, to be imported without tariffs, refined and thereafter exported with refunds as white sugar. The application procedure for cereal exports is similar to that described previously, with a tendering system similar to that for sugar. Limits for cereals vary depending on whether the export is grain or a processed product. The whole grain limit is 5,000 kg, whereas that for cereal products derived from the milling industry in general is 500 kg. A refund is paid for cereals in free circulation within the EU which are distilled for the production of Scotch whisky or Irish whiskey. This refund reflects the quantities expected to be exported to non-EU countries. Intervention expenditure Intervention expenditure on storage covers both private and public storage costs ie the two systems work in parallel. Under private storage the EU pays traders or other companies to buy commodities and store them for a period, to maintain market prices. This is not used as heavily as it was in the 1980s but is still significant for several commodities. Under (public) intervention, producers and traders can sell their products to EU intervention buyers at the guaranteed minimum price provided they meet certain quality standards. Technically, the commodities are usually stored at private places but the system is defined as public storage. There are examples from the 1980s where large farmers and others took advantage of the private storage arrangement, building storage facilities which were paid off after short time from private storage receipts, following which they could use the buildings as machinery sheds or for other business uses. The Common Agricultural Policy 21 5. Who does the CAP benefit? The immediate recipients of expenditure from the CAP budget do not necessarily equate to those who benefit financially from the CAP. Most notably, for those regimes which still rely mainly upon guaranteed minimum market prices (eg dairy and sugar), the budget involves no direct payments to farmers, yet farmers benefit significantly from the increased market prices that result from application of the policy. In the absence of price supports, the farm gate prices for these products are likely to have been significantly lower, over a period of years. For example it has recently been estimated that liberalisation in the EU dairy sector would result in a 25 per cent drop in the EU market prices for milk. Thus in relation to production sectors which benefit from the CAP it is necessary to consider the relative strength of support offered by the different regimes, including market supports, direct payments and other aids. Generally speaking, the dairy, sugar and olive sectors are heavily supported, as well as the arable sectors (cereals and oilseeds), beef and veal and sheepmeat. Pig production and wine are supported to a lesser extent, as are poultry, eggs and fruit and vegetables. Of the minor products, some are quite heavily supported, largely on cultural/socioeconomic grounds (eg tobacco, bananas, cotton). A widely accepted measure of support levels is the OECD Producer Support Estimate (PSE). This is shown for a range of different commodities in fig 2.5. (fig 2.4) Exporting company 1. Customs procedure 4. Payment of export subsidies Customs 3. Application for export subsidies Intervention Board 2. Export Destination The Common Agricultural Policy 22 Source: OECD 2000 fig 2.5 OECD estimates of EU Producer Support Estimate by commodity 1997-99 Wheat Euro mn 11,893 Percentage PSE 53 Maize Euro mn 2,539 Percentage PSE 40 Other grains Euro mn 8,936 Percentage PSE 65 Rice Euro mn 174 Percentage PSE 23 Oilseeds Euro mn 2,927 Percentage PSE 47 Sugar (refined equivalent) Euro mn 2,629 Percentage PSE 51 Milk Euro mn 20,162 Percentage PSE 54 Beef and Veal Euro mn 18,688 Percentage PSE 58 Sheepmeat Euro mn 3,376 Percentage PSE 53 Pigmeat Euro mn 1,828 Percentage PSE 11 Poultry Euro mn 1,625 Percentage PSE 23 Eggs Euro mn 349 Percentage PSE 9 Other commodities Euro mn 30,837 Percentage PSE 38 All commodities Euro mn 105,467 Percentage PSE 44 The OECD (2000) estimated that the Producer Support Equivalent in the EU totalled 107 billion Euro in 1999, equivalent to 49 per cent of the total value of the EU’s agricultural output. In their 2001 report the OECD state that farmers’ gross receipts were estimated to be 62 per cent higher than if valued at world market prices and without support, and prices received by agricultural producers in the EU were on average 37 per cent higher than border prices in 2000. When products are supported by market intervention (guaranteed prices), farmers and exporters benefit and ‘consumers’ (in this case, often food processors, distributors and retailers in the first instance) pay through higher food prices. When products or specific actions are supported by direct payments, farmers benefit and taxpayers pay, through general taxation. Thus as various mainstream CAP regimes have been reformed over the past 15 years, there has been a significant shift away from the shouldering of support costs by consumers, towards more payment by taxpayers. This has particularly affected the arable and beef sectors as well as the development of the suite of ‘second pillar’ measures. However, it has not affected dairy and sugar, which remain supported by market measures in addition to tariffs, and it The Common Agricultural Policy 23 has not affected the lesser (market) regimes applied to pigs and poultry. There is some analysis available of the distribution of CAP direct payments to holdings of different sizes. As would be expected for payments linked largely to areas farmed (arable), volumes produced (olives), or number of stock kept (beef, sheep) payments tend to go mainly to larger farms. Recent work by the Australian Bureau of Agriculture and Resource Economics (ABARE) is particularly relevant. The ABARE analysis has been carried out focusing on the size group of farms that receive support. Size classification is based on standard gross margins per farm. There are five groups: Extra small 400,000 Euro. It appears that only 17 per cent of all farms, ie those in the two largest categories received 50 per cent of the agricultural support provided by CAP payments, illustrated in the figures below. On a full time equivalent basis the two largest groups of farms earned a higher average income in 1996 than the average worker in the EU. Large farms account for a very substantial proportion of total output in several sectors. According to Eurostat (2001) there were 6,989,100 farm holdings in the EU15 (1997 figures). Slightly over 3 per cent or around 226,300 of these farm holdings were of 100 hectares or larger. The farm holdings in the 100 hectare or more category control 53.2 million hectares out of the total 128.7 million hectares. Their overall share of total agricultural production is estimated at about 50-70 per cent (Consumers in Europe Group, 2000) (fig 2.9-3.0). Based on figures from 1996, EU farms with the highest gross margins earn the highest income. Not surprisingly, there is a strong link between farm size and farm income in the EU. On average, the EU farms with the highest gross margins have the largest farm area and receive the greatest CAP support. This is illustrated in the figures below (fig 2.6-2.8). Source: European Commission (2001a). fig 2.6 EU 1996, average farm income (1,000 Euro per farm) 200 150 100 50 0 Extra small Small Medium small Medium large Large Extra large The Common Agricultural Policy 24 Source: European Commission (2001a). fig 2.7 EU 1996, average farm size (ha) 150 120 90 60 30 0 Extra small Small Medium small Medium large Large Extra large Source: European Commission (2001a). fig 2.8 EU 1996, average support (1,000 Euro per farm) 40 30 20 10 0 Extra small Small Medium small Medium large Large Extra large Source: ABARE (2000). fig 2.9 Number of farms 4% Extra large 13% Large 19% Small 20% Extra small 21% Medium small 23% Medium large The Common Agricultural Policy 25 Source:ABARE (2000) fig 3.0 Share of support 3% Extra small 5% Small 14% Medium small 21% Extra large 28% Medium large 29% Large 6. Focus on UK agriculture – main products and competitors in domestic, EU and world markets Agriculture in the UK has traditionally been one of the more productive and efficient farm sectors within the EU, with average farm sizes generally significantly higher and agricultural employment significantly lower than elsewhere. However this varies between commodities. East Anglian grain production is generally regarded as particularly competitive relative to the rest of Europe, while UK dairy, beef and pigmeat production are today somewhat overshadowed by other Member States including the Netherlands, Germany, France and Denmark. The UK is 81 per cent self-sufficient in temperate foodstuffs (MAFF, 2000b) and the value of UK agricultural production in 1999 was around £13.7 billion. In overview, the UK’s main outputs include cereals, dairy products, sheepmeat and beef, as well as pigmeat. Thus to examine markets and competitors in more detail, we focus on the following key commodities: pigs, sheep, dairy products, wheat and beef: • For pigs, the UK produced just over a million tonnes of pigmeat in 1999, worth £782 million. It imported 209,000 tonnes from the rest of the EU and only 3,000 tonnes from outside the EU. It exported 193,000 tonnes to other EU countries and 32,000 tonnes further afield. The main competitors for UK producers both at home and in export markets are thus those elsewhere in the EU – particularly Denmark and the Netherlands, both for domestic markets and abroad (although pig exports are much less important for the UK than these countries). UK slaughter weights for pigs tend to be lower than those of the main global exporters (eg including US hogs), so they often fill different market niches. • For sheep, the UK produced 401,000 tonnes of sheepmeat in 1999, worth £1,007 million. The UK consumes 6.6 kg/head on average, each year. It imported 12,000 tonnes from the rest of the EU and 119,000 tonnes from the rest of the world, predominantly New Zealand. Thus the main competitor on UK markets is New Zealand and to a much lesser extent, the Republic of Ireland. While some southern Member States (Spain and Greece) are also important sheep producers, these are reared mainly for domestic and external markets and/or for milk production. There is a seasonality issue – New Zealand would claim it is not largely in direct competition with UK producers since its exports are available at a different time of year so it may help to safeguard the year-round UK market. However, this relationship is probably being eroded by changes in supply and demand, over time. Sheep production in the UK is largely for consumption in the UK and the rest of northern Europe, particularly France. In 1999 the UK exported 143,000 tonnes of sheepmeat to the rest of the EU and only 1,000 tonnes to the rest of the world. In recent years the UK established an important and growing export market of light lambs to southern Europe (Italy, Spain), where their products are in competition with The Common Agricultural Policy 26 Irish, Spanish and Greek producers. Exports to the Middle East have also become more important. • For dairy products, the UK is a major producer of butter, milk powder and basic (commodity) cheeses both for domestic markets and for export, as well as liquid milk for domestic consumption and industrial use (particularly the confectionery industry). In 1999, UK farms produced 14.3 billion litres of milk worth around £2.7 billion. Of this, around 6.8 billion litres is consumed as fresh milk while 7.1 billion litres is processed into a variety of products of which cheese and whole milk powder are the main uses. The UK produced 144,000 tonnes of butter, 378,000 tonnes of cheese, 277,000 tonnes of cream, 178,000 tonnes of condensed milk and 103,000 tonnes of whole milk powder as well as 102,000 tonnes of skimmed milk powder, in 1999. Looking at imports and exports, for selected products: • the UK imported 221,000 tonnes of cheese from the rest of the EU but only 41,000 tonnes from further afield, while it exported 46,000 tonnes to the EU and 13,000 tonnes to the rest of the world; • the UK imported 64,000 tonnes of butter from the EU and 48,000 tonnes from further afield while exporting 51,000 tonnes to the EU and only 5,000 tonnes elsewhere; • the UK imported 18,000 tonnes of milk powder (full and skimmed) only from the EU, but it exported 53,000 tonnes to the EU and 100,000 tonnes to the rest of the world. From this we can see that the main competitors for dairy produce markets in the UK are other EU countries for higher value products like cheese but they include non- EU producers for butter and cheddar cheese, while the UK’s competitors in export markets will include both other EU countries such as France, Eire and the Netherlands as well as Australia, New Zealand, eastern Europe and the USA. • For wheat, the UK produced 15.1 million tonnes, in 1999 worth approximately £1.54 billion, in the form of £1.057 billion in sales and £422 million in subsidies. Most production remains of feed wheat (soft wheat), which is in competition with a range of the global soft wheat producers eg US, France, Germany, Canada. The main markets for UK producers include both domestic and export, usually through the main grain trading organisations such as Dalgety and Cargill. A small but growing proportion of UK produced wheat is hard wheat suitable for bread making and this again is sold both at home and abroad, although international hard wheat markets are dominated by Canada and the US. In 1999, the UK imported 573,000 tonnes of wheat from other EU countries and 570,000 tonnes from the rest of the world. It exported 2,750,000 tonnes to the rest of the EU and 250,000 tonnes to the rest of the world. The main domestic uses for wheat were flour milling (37 per cent) and animal feed (41 per cent). • UK production of beef in 1999 was around 680,000 tonnes, worth £1,996 million. UK imports totalled 113,000 tonnes from the rest of the EU, mainly from Ireland, then the Netherlands, while imports from outside the EU, mainly Brazil, totalled 61,000 tonnes. Exports were negligible in 1999 due to the impact of BSE – only 9,000 tonnes, and only to other EU countries. About 917,000 tonnes were consumed domestically (MLC 2000). 7. Products which benefit most from the CAP’s WTO compliant tariffs and quota restrictions During the Uruguay Round Agreement on Agriculture (URAA) efforts were made to make agricultural support mechanisms less trade distorting and more transparent by removal of support mechanisms and their conversion into import tariffs with Tariff Rate Quotas (TRQs) or equivalent value. However various factors have prevented increased transparency and market access from being achieved. These factors include lack of standardisation in tariff expression (some are expressed in specific terms and some in ad valorem terms); Tariff Rate Quotas within which imports are subject to lower tariffs; and under-filling for most TRQs. When trying to compare tariff levels for different products one of the biggest problems is presented by the lack of standardisation. In the EU approximately 44 per cent of agricultural tariffs lines have specific tariffs (eg. Euro/tonne) rather than ad valorem tariffs (a percentage of total value), making levels of protection hard to compare across products. Comparison is also made more difficult by the use of TRQs, as in-quota tariff rates and quota size in relation to import size have a dramatic effect on the true level of protection offered by the tariffs. In the EU 28 per cent of tariff lines have TRQ allocations, so this is obviously a significant influential variable. (Gibson et al. 2001) Some aggregated data is given in the table to illustrate the importance of TRQs on calculating levels of protection. The obscure nature of information and administration in the areas of tariffs and tariff quotas leads to another The Common Agricultural Policy 27 phenomenon that obscures true levels of protection. This phenomenon is known as ‘quota under-filling’, and is when available TRQs remain unused. The average global (TRQ) quota-fill rate in 1995 was 66 per cent and in 1998 had fallen to 62 per cent. This shortfall arises partly from the lack of transparency in TRQ administration. Since the URAA, developing countries in particular have had little success at accessing the TRQs opened up during the agreement. Traders from developing countries surveyed by the Food and Agriculture Organisation (FAO) (1999) reported a lack of information about export opportunities under this market access measure. Confusion was widespread regarding TRQs allocation and administration. This failure to fill TRQs is of crucial importance to any attempt to assess levels of protection, as the difference between potential market access and actual achieved market access may be very significant. Detailed figures of quota fill rates for specific products were difficult to find. Any true measure of protection must take into account these important quota-related variables. However, simple aggregate figures are given below (fig 3.1). Given the complexity of trying to assess specific protection levels from import tariffs and TRQs, and the lack of detailed standardised data, we feel it would be misleading to try to give categorical statements about which products are most protected by this specific support mechanism. A spokesperson from the World Trade Organisation (WTO) has informed the researchers of this report that they are currently compiling a more detailed comprehensive analysis of more recent figures. These figures will give a more accurate idea of true levels of protection, taking into account the different influential variables mentioned above. Until such time as these figures are available this report gives more detailed information concerning general levels of protection as compiled in PSE and Consumer Support Estimate (CSE) indexes and a general overview of the use of tariffication in the EU. Source: WTO 2000 fig 3.1 In-quota and out-of-quota tariff rates and estimated maximum TRQ rents for selected agricultural produce within the EU, 1996 In-quota Out-of-quota Maximum Quota fill Quota as a ad-valorem ad-valorem quota rents ration per per cent tariff tariff (US$billion) cent of total per cent per cent imports Wheat 0 87 0.0 21 2 Grains 35 162 0.4 74 26 Sugar 0 147 2.4 100 87 Dairy 24 91 1.1 99 80 Meats 19 128 2.3 100 73 Fruit & Vegetables 11 51 0.0 78 20 The Common Agricultural Policy 28 Products typically regarded as having high levels of protection from import tariffs in the EU include dairy products, rice, tobacco, feed grains and beef. Some arable sectors remain disproportionately supported at present (eg protein crops) but Agenda 2000 set the process in train by which arable supports would be progressively harmonised over time, to remove this effect. For the past few years, wheat has been traded within the EU at prices not much different to world market prices. Most Favoured Nation, reduced or zero tariffs are currently granted to (among others) exports of lamb and dairy produce from New Zealand; to sugar cane exports from LDCs; and to bananas from former colonies in the West Indies. The EBA agreement concluded recently by the EU claims to remove tariffs for all imported produce from the world’s poorest countries (fig 3.2). A clearer picture of relative protection from tariffs and TRQs can be gained by calculating from these figures an average level of tariff protection for each sector, by taking into account the proportion of each commodity imported at lower ‘in-quota’ rates under special trading agreements. These give results as follows: •Wheat 85.3 per cent •Grains 129 per cent • Sugar 19.1 per cent • Dairy 28.3 per cent •Meats 48.4 per cent •Fruit and Vegetables 43 per cent According to these figures the highest levels of protection are given to grains, wheat, meats and fruit and vegetables (in that order). But the picture of protection that these figures gives us is a complicated one as while the combined tariff on, say, sugar might be quite low, the outof-quota tariff is so high that the TRQ effectively acts as an absolute quota and therefore acts to seriously restrict potential market access to any new importers. Another tariff mechanism that is particularly damaging for developing countries, is tariff escalation, whereby tariffs increase in relation to the extent to which raw materials are processed. If a higher tariff is applied to a product at each stage of processing this limits how much exporting countries can gain by doing the processing themselves. Under the URAA, tariff escalation was reduced and tariff escalation in the EU is now lower in the agricultural sector than other sectors, with the average tariff rate for finished goods being lower than that for semi-processed goods. This is potentially beneficial for developing countries as it gives them greater market access for finished goods, but tariffs on semi-processed and finished goods are often two or three times more than those for raw materials. (IMF and World Bank 2001). Again detailed, standardised, and thus comparable, information on tariff escalation for specific product lines has proved difficult to obtain. The Common Agricultural Policy 29 From Gibson et al, 2001, pp 25. *Megatariffs (as defined by ERS): Extremely high tariffs that effectively cut off imports other than the minimum access amounts under TRQs. fig 3.2 Mean and Median tariffs and number of Megatariffs* for agricultural products in EU expressed as per cent of product value Mean Median Megatariff All commodities 30 13 141 Grains 53 63 2 Grain products 48 45 2 Feed 47 11 9 Starches 24 20 - Oilseeds 0 0 - Vegetable oils 13 6 1 Fats & oils 10 3 1 Live animals 30 22 - Meat: fresh or frozen other meat 70 74 29 Meat: fresh beef, pork or poultry 41 27 6 Meat: frozen beef, pork or poultry 66 38 24 Meat: prepared 43 26 7 Dairy 87 70 41 Eggs 22 24 - Fruit: fresh 21 12 1 Fruit: frozen 20 21 - Fruit: preparations 21 21 - Fruit juice 37 22 3 Vegetables: fresh 16 10 2 Vegetables: frozen 14 15 - Vegetables: frozen or prepared (other) 18 12 1 Vegetables: dried & fresh roots and tubers 38 16 - Vegetables: dried 2 0 - Vegetables: preparations 21 14 2 Vegetable: juice 16 16 - Nuts 5 4 - Nuts & fruit: dried, fresh and prepared 16 17 - Sugar beet 349 349 2 Sugar cane 56 56 - Sweeteners 59 57 8 Tobacco: unmanufactured 14 11 - Tobacco: products 38 34 - Coffee 6 8 - Coffee: other 10 12 - Tea and tea extracts 2 0 - Cocoa beans & products 17 15 - Spices 2 0 - The Common Agricultural Policy 30 8. How does the CAP impact upon world prices? These impacts are variable, depending on the regimes examined and the scale of the EU’s presence in the world market. Heavily protected regimes which generate surpluses for export will tend to depress world market prices through the export subsidy system both in the short term and through their effects upon price expectations. The table below identifies those major temperate commodities where the EU has a share of more than 10 per cent of world trade, either in imports or exports (fig 3.3). The table shows the importance of the EU in a number of export markets, particularly for wine and livestock products. Generally the overall effect of the CAP will be to depress world price levels in these sectors because of the domestic support in place, reinforced by import tariffs and, in some sectors, by export subsidies as well. In the case of butter for example where EU internal prices are around double those on the world market, dumping by the EU will be a major factor in keeping world price levels low. However, where price differentials are much smaller, as they have been for wheat in recent years, the impact on world prices will be smaller. Even the less trade distorting elements of the CAP can have impacts on world prices. For example, the direct payments on cereals, beef, oilseeds, olives and sheepmeat will help to bolster the competitiveness of EU farmers and allow them to adapt to lower market prices than otherwise would be possible. The scales of such effects are difficult to measure. Various attempts have been made to determine the effects of the CAP on world prices. Some of these attempts have used economic modelling to predict what effect removal of market-distorting CAP measures would have and use these results to show how the CAP is currently influencing world economy. One such economic modelling study was undertaken by Borrell and Hubbard (Economic Affairs, June 2000). They used the Global Trade Analysis Project (GTAP) database and standard economic model. In the model all EU barriers to trade and direct subsidies are eliminated, thus removing all CAP support to farmers and lowering the Source: European Commission 2001c fig 3.3 Major Commodities for which the EU has more than 10 per cent of world trade (1997) EU Share of Trade Commodity (1) Imported (2) Exported (3) EU net by EU by EU share of world trade Cereals (except rice) (3.2) 10.2 (7.0) – of which wheat (3.5) 13.2 (9.5) Oilseeds 39.6 (1.8) -37.8 – of which soya 39.4 (0.7) -39.3 Wine 27.9 60.6 32.7 Sugar (5.3) 18.8 13.5 Total Milk (3.0) 28.0 25.0 – of which butter 11.1 20.2 (9.1) – Cheese 11.6 40.8 29.2 – Milk powder (2.9) 30.3 27.4 Beef and veal (6.6) 19.1 12.5 Pigmeat (3.0) 51.2 48.2 Poultrymeat (3.7) 20.4 16.7 Eggs (2.7) 29.5 26.8 The Common Agricultural Policy 31 prices they receive and EU consumers pay. Their model predicts that as a result consumption would rise, production fall, imports and exports would be affected, and it considers how producers and consumers in other countries would react. They conclude that CAP has had profound effects on not only agriculture but also other industries of the EU and other countries. They estimate that the current cost of CAP to the world economy, through resource misallocations and missed opportunities for trade, is US$75 billion a year, two-thirds of which they estimate is born by the EU. The model used by Borrell and Hubbard is a simplified one and the results and conclusions they give are very generalised; they do however give a slightly more in depth study of the effects of CAP on the world sugar market. Borrell and Hubbard’s model predicts that if the CAP mechanisms to protect sugar prices were removed world prices could rise by 18-22 per cent; which suggests that the CAP is currently depressing world sugar price levels by this amount. Some sectors comprise a variety of highly differentiated markets. Wine is a case in point. Whereas CAP subsidies may be a significant element in the price of some of the cheapest low quality wines they will be largely irrelevant to the price of fine wines where other factors are far more important. Overall, the CAP should have less impact on world prices now than it used to because of the shift from ‘amber’ to ‘blue box’ support mechanisms for some key commodities and the reduction of surpluses, but there is serious debate as to just how trade neutral ‘blue box’ mechanisms really are. The use of ‘blue box’ mechanisms such as direct aids has been referred to by some critics as ‘pick-pocketing instead of mugging’, because it is largely transparency that is decreased under such a system, as opposed to ‘amber’ box supports, not trade distortion. There are even some (eg Jacques Berthelot) who believe that green and blue box support measures are more trade distorting than those of amber and red boxes that are usually considered so. Their argument is that a global agreement to use green and blue box mechanisms would be even more trade distorting as developing countries cannot afford to use domestic support payments and so would be at a disadvantage. What is certain however is that the CAP does exert a downward pressure on the prices of several major commodities. 9. Impact of the CAP on developing country markets OECD economists believe that agricultural protection still harms developing countries. The farm policies of OECD countries – even after the reforms under the URAA have been taken into account – have been estimated to cause annual welfare losses of $19.8 billion for developing countries. This is more than three times the losses that developing countries incur due to OECD countries’ import restrictions on textiles and clothing. However, at a more detailed level the extent to which the CAP affects developing markets depends on the type of economy of the developing country. The CAP has various effects on developing country markets, which are summarised in the table below (fig 3.4). The general opinion of many developing countries as well as the free trade oriented international organisations, including the OECD, is that the CAP has a negative effect on developing country markets. This is partly because of the general destabilising effect the CAP has on world markets generally, because a significant proportion of producers are protected from world price fluctuations; this means that the effect of any fluctuation is effectively magnified for those producers that do not benefit from protection. Magnified fluctuations of world prices are of particular concern to developing countries with low food security and no social/economic safety nets for producers. Another source of negative influence on developing countries is the highly restricted access to the EU for certain temperate products which are grown in Europe and the combined impact of domestic support and export subsidies which increase the availability of low priced EU products on the world market. This can depress prices for importing countries. There are a number of very low income developing countries with limited agricultural capacity at present which benefit from these low cost imports, and there are many others which are concerned about the competitive pressure on their own farmers. On the other hand, the export oriented developing countries seeking to maintain or expand their own sales are affected by unfair competition from subsidised European products. In general developing countries do not use export subsidies as a policy tool. The variations between developing countries and fluctuations in their markets and economic circumstances make it difficult to generalise about this group as a whole. The interests of Brazil are very different from those in many The Common Agricultural Policy 32 Source: The Catholic Institute for International Relations (1998). fig 3.4 Types of CAP effect on developing countries Type of effect Positive features Negative features Implications for development policy Increased world supply Lowers import costs for importers (and may increase supply of food aid) Lowers export prices for exporters Disincentive to agricultural development of importers and exporters May undermine agricultural development policies, but also reduces food costs Artificially high EU prices Artificially high prices for developing countries able to export (eg because of Lomé Protocols) Exports may be viable only if high prices continue May support export diversification, but new exports may be unsustainable Over-subsidised prices of exports Lowers import costs for importers May undermine domestic agriculture and disrupt legitimate trade May undermine agricultural development policies Increased world price instability Increases food insecurity and complicates agricultural development planning Disrupts long-term agricultural development sub-Saharan African countries for example. The relative impact of the CAP upon developing countries depends critically upon their current and future strategies for economic development. For countries that have decided to build up their export markets and particularly to supply commodities that are currently produced within the EU, the policy will act against their interests. It will undercut their exporting sectors and increase the vulnerability of world markets in which they trade via the effects of restricted supplies (relative to those which would exist without the CAP) and sporadic ‘export dumping’. However, among those countries following this development strategy there will be some who currently benefit from preferential trading arrangements with the EU and who are therefore more positively affected by the policy, since it gives them some of the benefits of a guaranteed market opportunity and supported market prices that are given to domestic producers in the EU. For countries who are seeking primarily to build up their self-sufficiency in food production, the CAP may be neutral or negative in its effects upon their development. By denying easy access to significant export markets the policy may act as a disincentive for domestic producers to focus on ‘cash crops for export’ and thus give greater impetus to production to meet domestic demands. On the other hand, where EU export dumping or food aid policies involve supplying such countries with cheap food imports The Common Agricultural Policy 33 that can act as substitutes and therefore competitors to domestic production, the CAP effect is potentially seriously damaging. Further detail of impacts of this kind can be infered from the variety of papers submitted to the WTO in advance of the latest round of trade talks, which outline the views of different DC groupings on trade issues. Most of these are available on the WTO website. There is a growing body of Non-Governmental Organisations (NGOs) both in developing countries and within the EU which are critical of the implications of increased trade in agricultural products and wary of the ambitions of many developing countries to increase their exports. Such exports may distort national development and generate social and environmental problems even if they contribute positively to the balance of trade. 10. Impact of the CAP on the environment A great deal has been analysed and written about the impacts of the CAP upon the environment which will not be repeated here. The sources of this information include environmental and farming organisations, academics, government agencies and departments, and the EC itself. In practice much of this literature points to the undeniable scale of environmental change associated with contemporary agriculture, particularly intensive production, without necessarily analysing the specific role of CAP policies. Many changes would have occurred without the CAP. In overview, the findings of recent studies and reports indicate the following main points. •A widespread view among both NGOs and academics is that by supporting market prices, CAP measures have accelerated existing trends in technological development and adoption on farms, leading to enlargement, specialisation, intensification, where these changes have been economically favourable to producers. Arable crop production and intensive cattle production (both beef and dairy) are the most frequently cited examples, but sugar, wine, olives, cotton and fruit and vegetables have also been mentioned in this context. Alongside this, some have also highlighted the role of measures under the CAP and related EU structural funds in promoting capitalisation on farms as well as enabling farm enlargement and modernisation to the detriment of the environment, to a greater degree than would have happened without this grant aid. The effects of such aid appear particularly marked in the cohesion countries. •Views about the role of the CAP in relation to marginal farming are more varied. Certain commentators from all three interest groups, as well as some research literature, tend to the view that support has slowed the decline of farming in marginal areas, and that this has been further helped by specific Less Favoured Area (LFA) support, to the general benefit of the environment. Others hold that because most aid was production-linked rather than socially targeted this has not prevented a continuing decline in the numbers of farmers and farm workers although they have maintained and sometimes increased production in these areas. From this perspective these policies have had some detrimental effects on the environment. •In some areas, differential changes have been strongly linked to the effects of particular CAP instruments. Examples include arable set-aside and headage based sheep premia fueling expansion in the number of sheep during the 1980s leading to overgrazing in the UK, Ireland and Greece on a large scale (and some more local scale effects in Italy). Irrigated crop premia have promoted the replacement of dryland or traditional extensive cropping/ olive/dehesa systems in Spain. Support for forage maize has increased the area of this crop, at the expense of grass, accelerating an existing trend in many countries. •In other areas, particularly where CAP instruments are widely acknowledged to have little impact, negative environmental effects have been linked to the opening up of competition within the EU and the effect of EU policies in relation to international trade. Examples include recent horticultural intensification in Spain and an increased tendency to rear cattle indoors on imported soya/oilseeds rather than grazing on grass, in the Netherlands. However, other countries report opposite trends in relation to these sectors in some respects, so the picture is somewhat unclear. The sectors which have exhibited perhaps the greatest intensification and concentration in recent years – namely pigs, poultry and horticulture – have been those which receive relatively little market support from the CAP. • On the other hand there are agreed to be distinctive environmental benefits associated with the CAP although the policy’s precise role in supporting the more sustainable forms of agriculture is subject to debate. By The Common Agricultural Policy 34 supporting production in certain sectors at a much higher level than on the world market the CAP has helped to maintain a range of cultural landscapes and land management practices which otherwise might have been heavily modified or have disappeared altogether. These are particularly associated with pastoral agriculture – beef and dairy production, sheep and goats – all sectors where PSE rates are high. Environmentally deleterious changes have occurred in these sectors, including large-scale farm amalgamation and pervasive intensification, especially on dairy farms, but in the absence of support, many of the negative trends could have been more pronounced. • Another benefit of the CAP has been the recent incorporation of support for agri-environment measures, including schemes to maintain existing lower input systems and encourage organic conversion. This policy absorbs a small proportion of the CAP budget but is nevertheless on a much larger scale than in the US for example. Agri-environment schemes are open to criticism on several grounds including a mismatch between the distribution of payments and the incidence of environmental priorities (see Court of Auditors 2000 for a critique, which also extends to the CAP as a whole). Nonetheless, some environmental benefits have been demonstrated already and there is potential to extend these in future. • In the reforms made of the CAP in 2000, new steps have been taken to ‘green’ the policy. Some of these have the potential to address several of the negative impacts cited above, but it is too early to make a judgement about their likely effectiveness. The most important elements are as follows: • The introduction of an obligation to Member States to ensure ‘environmental protection’ in respect of all those regimes which offer direct payments to producers. Member States must report progress on this to the Commission by April 2002. In response a number of countries are known to be applying new environmental conditions to the payments, mainly to reinforce the effectiveness of existing environmental legislation. •A definition of ‘Usual Good Farming Practice’ within the Rural Development Programmes under Reg. 1257/1999 which becomes a condition for LFA support in future as well as reference level for agri-environmental aids. •A shift from headage to area payments for LFA aids to reduce any incentive for overstocking. 11. Impact of the CAP on the consumer Very little empirical analysis is available concerning the detailed effects of the CAP on consumers. In general, much of what has been written on this theme asserts from the basic principles of economic market theory that the CAP involves a significant cost to consumers. As a protectionist policy, the CAP raises the prices of supported commodities above the levels that they would achieve without such support and limits access to certain imported commodities. The OECD has developed a methodology for measuring the scale of theoretical Consumer Support to agriculture. This is known as Consumer Support Estimate, or CSE. In recent years this has been calculated to be around 60 billion Euro per annum for the EU – 61.7 billion in 1999 (OECD 2000). Closer examination of the CSE for the years 1997-1999 shows that two commodities in particular dominate the calculation. Of a total PSE of Euro 57.5 billion in 1997-99 (equivalent to 31 per cent), 16.7 billion was accounted for by milk products (53 per cent) and 11.9 billion by beef and veal (a 46 per cent CSE). Sugar also achieves a high percentage CSE because of the high price on the EU market. Calculations of this kind about costs to consumers generally involve the assumption that if CAP market supports were removed, the prices of the relevant products to EU consumers would fall to levels similar to the prices of goods on world markets. Hence it is implied that the majority of this saving would accrue to consumers in the EU. In reality, there are a number of reasons why these estimates may overstate the apparent cost of the policy to consumers, as the end-users of agricultural products: •Without EU support and export dumping, world prices would be expected to rise, slightly reducing the differential between EU supported prices and unsupported prices; • Food is increasingly sold to consumers in a highly processed form, and it is often food processors and manufacturers, a group dominated by large multinational companies, who actually buy raw agricultural commodities and who would therefore be the immediate beneficiaries of falling agricultural prices. The extent to which these price cuts were passed on to end consumers could vary greatly between different commodities and is likely to be less for the products subject to the greatest degree of processing (eg sugar The Common Agricultural Policy 35 used in confectionery, milk products used in ready meals, biscuits, etc). These commodities are also some of those which are currently most protected by the CAP. • Retailers also have a role in modifying the impact of commodity prices on those imposed in their own shops. A detailed study by the National Consumer Council (NCC) in 1988 attempted to set out the reasoning behind five stated impacts of the CAP on consumers, namely that it: • Overcharges consumers for food • Reduces consumer choice • Has an adverse effect upon food quality • Has an impact upon nutrition • Harms consumers indirectly by contributing to environmental damage. The first and last points have already been discussed. above and in an earlier section of this paper. Thus the remaining text briefly considers the evidence on the remaining points about choice, quality and nutrition. On choice, the CAP may indirectly influence consumer choice in the EU because it changes the relative prices of different products and raw materials and thus influences what processors and retailers choose to offer on their shelves. Subsidising EU products relative to those available elsewhere encourages higher levels of EU production and thus EU consumers are more likely to buy domestically produced products. The structure of import tariffs also tends to limit access to a wider variety of imported produce. However, the impact of this factor needs to be considered alongside other, possibly more significant effects upon choice. These include greater international sourcing of food by supermarkets competing for higher value market niches and all-year continuity of supplies, and marked downturns in consumer confidence in certain EU products as a result of food scares. increased importance of supermarket specification in determining product quality for the majority of sales within the UK and an increasing share in the EU, may have decreased this impact over time. On nutrition, the argument is more complex. It is widely accepted that low income families tend to have poorer nutrition than affluent families and that the former group is much more price-sensitive in relation to food choices. For those CAP regimes which are responsible for generating significant surpluses, the practice of subsidised surplus disposal often targets particularly low income groups or groups in need (eg hospital patients, those on benefit, etc). Thus it is argued that offering low-cost butter supplies to low income consumers is a practice which is bad for their nutrition, since butter is a high fat food. Against this case can now be set the following considerations: • the scale of provision of these kinds of subsidised foods has reduced significantly under CAP reforms since 1992; • as with the prices argument, consumers buy an increasing proportion of processed foods in which ingredient choices are likely to be influenced by many more factors than this one. In conclusion, therefore, there is reason to believe that CAP effects on food choice, quality and nutrition are likely to be relatively weak today, by comparison with non-CAP effects. However, more detailed empirical research would be required to address these issues properly. On quality, the argument is that by setting intervention standards for EU products, the Community has established low common standards for food bought into intervention and producers tend to look to this rather than higher standards, in their production activities. The NCC study reports the view of processors and traders that this had indeed occurred in several sectors including cereals, fruit and vegetables, in 1988. However, it could be argued that changes in supply chains since 1988, including the much The Common Agricultural Policy 36 References 1ABARE (2000) US and EU Agricultural Support: Who Does it Benefit? ABARE Current Issues 2000, No 2. Australian Bureau of Agricultural and Resource Economics. Canberra, October 2000. 2Berthelot, J. (2001) Some theoretical and factual clarifications in order to get a fair Agreement on Agriculture at the WTO Solidarité, Geneva, 7 August 2001. 3Borrel, B and Hubbard, L (2000) ‘Global economic effects of the EU Common Agricultural Policy’ Economic Affairs Vol. 20, No. 2, pp.18-26. 4Consumers in Europe Group (2000) The CAP doesn’t fit. How failure to reform the Common Agricultural Policy threatens world trade liberalisation and EU enlargement. London, September 2000. 5Court of Auditors 2000. Special Report No 14/2000 Greening the Cap together with the Commission’s replies. OJ C353. 8/12/2000 6Directorate-General for Agriculture (2001) Agriculture in the European Union. Statistical and Economic Information 2000. January 2001. 7European Commission (2000): 29th Financial Report on European Agricultural Guidance and Guarantee Fund (EAGGF) Guarantee Section, 1999 Financial year. Brussels, December 2000. 8European Commission (2001a) The General Budget of the European Union for the financial year 2001.The Figures. Brussels, Luxembourg, January 2001. 9European Commission (2001b) Employment in Europe 2001. Recent Trends and Prospects. Directorate-General for Employment and Social Affairs. Belgium, Luxembourg, July 2001. 10European Commission (2001c) The Agricultural Situation in the European Union: 1999 report. Brussels and Luxembourg, 2001. 11Eurostat (2001) Agriculture – Statistical yearbook 2000. Data 1990-1999. Luxembourg, 2001. 12FAO (1999) Experience with the implementation of the Uruguay Round Agreement on Agriculture – developing country experiences (based on case studies). FAO Symposium on Agriculture Trade and Food Security: Issues and Options in the Forthcoming WTO Negotiations from the Perspectives of Developing Countries, 13-24 September. 13FAO (2001) Experience with the implementation of the Uruguay Round Agreement on Agriculture, Sixty-third session of the Uruguay Round Agreement on Agriculture, Rome 6-9 March. 14Gibson,P, Wainio, J, Whitley, D and Bohman, M (2001) Profiles of Tariffs in Global Agricultural Markets Agricultural Economic Report No. 796. Economic Research Service, US Department of Agriculture. January 2001 15IMF and World Bank (2001) Market Access for Developing Countries’ Exports. World Bank, April 27, 2001. 16MAFF (2000a) Strategy for Agriculture. Current and prospective economic situation. London, 2000. 17MAFF (2000b) Agriculture in the United Kingdom 1999. The Stationery Office, London. 18Meat and Livestock Commission (2000). Beef Yearbook 2000. Milton Keynes, UK. 19OECD (2000) Agricultural Policies in OECD Countries – Monitoring and Evaluation 2000. Paris, 2000. 20Performance and Innovation Unit, Cabinet Office (1999) Report on Rural Economies. London, 1999. 21Potter, C.; Lobley, M.; Bull, R. (1999) Agricultural liberalisation and its environmental effects. Environment Department, Wye College, University of London. June, 1999. 22The Catholic Institute for International Relations (1998) Levelling the Field. Will CAP reform provide a fair deal for developing countries? London, 1998. 23WTO (2000) Market Access. Submission by Cuba, Dominican Republic, El Salvador, Honduras, Kenya, India, Nigeria, Pakistan, Sri Lanka, Uganda and Zimbabwe. Committee on Agriculture Special Session. September 28. CAP contacts from Sustain and UK Food Group member organisations and observers ActionAid rtripathi@actionaid.org.uk Agricultural Christian Fellowship pmd@uccf.org.uk Banana Link blink@gn.apc.org CAFOD dgreen@cafod.org.uk Centre for Food Policy david.barling@tvu.ac.uk Christian Aid kbundell@christian-aid.org Consumers Association mona.patel@which.co.uk Consumers International npallai@consint.org Compassion in World Farming peter@ciwf.co.uk Family Farmers Association p.woods. Tel: 0154 885 2 794 Farmers' Link flink@gn.apc.org Farmers' World Network adrian@fwn.org.uk Friends of the Earth sandrabe@foe.co.uk International Institute for Environment and Development bill.vorley@iied.org Institute for European Environmental Policy central@ieeplondon.org.uk National Consumer Council r.simpson@ncc.org.uk National Federation of Women’s Institutes b.savill@nfwi.org.uk National Farmers Union nfu@nfu.org.uk Oxfam pfowler@oxfam.org.uk Panos Institute kittyw@panoslondon.org.uk Pesticides Action Network (UK) barbaradinham@pan-uk.org Royal Society for the Protection of Birds pete.hardstaff@rspb.org.uk matthew.rayment@rspb.org.uk Royal Society for the Protection of Aiminals dbowles@rspca.org.uk Small and Family Farmers Alliance michael@mhart.fsbusiness.co.uk Soil Association gazeez@SoilAssociation.org Sustain vh@sustainweb.org UK Food Group jagdish@ukfg.org.uk Wildlife and Countryside Link debbie@wcl.org.uk WWF-UK rperkins@wwfnet.org This background briefing documents the way in which Europe’s Common Agricultural Policy operates, the agricultural sectors that benefit most from subsidies or protectionist measures, the key produce, markets and competitors for European Union and in particular UK agriculture. The briefing also outlines the main impact of the CAP on world trade, developing countries, consumers, farmers, processors and exporters and the environment. Background Briefing 1 February 2002
https://www.yumpu.com/en/document/view/27492719/common-agricultural-policy-uk-food-group
CC-MAIN-2019-43
en
refinedweb