| text | url | dump | lang | source |
|---|---|---|---|---|
Why does world frame get TF prefix?
Hello, I am working with two UR5s. I launch them using
<group ns="robot1"> and
<group ns="robot2">. I also gave each robot a unique tf_prefix inside its group namespace, i.e.
<param name="tf_prefix" value="robot1_tf" /> and
<param name="tf_prefix" value="robot2_tf" />. After running the launch file, what I notice in
rqt_tf_tree is that the world frame also gets a prefix, i.e.
robot1_tf/world followed by
robot1_tf/base_link and all the other frames of robot1, and then
robot2_tf/world followed by
robot2_tf/base_link and all the other frames of robot2.
What I want instead is a single world frame (without any prefix) connecting to
robot1_tf/base_link and all the frames of robot1, and also to
robot2_tf/base_link and all the frames of robot2.
How do I prevent the world frame from getting a prefix?
Have you solved the problem? I am in the same predicament. Can you help me? Please tell me the solution
Yes, I did. Basically, I created a tf broadcaster for each of the prefixed world frames, i.e. for robot1_tf/world and robot2_tf/world, and then mapped both of them to the world frame. Have a look below.
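The launch snippet from the original answer is not reproduced here, but a minimal sketch of the idea looks like the following, assuming identity transforms are acceptable for your setup (frame names are taken from the question; the 100 ms publish period is an arbitrary choice):

<launch>
  <!-- Publish an identity transform from the plain world frame to each
       robot's prefixed world frame so both TF trees share a single root. -->
  <node pkg="tf" type="static_transform_publisher" name="world_to_robot1_world"
        args="0 0 0 0 0 0 world robot1_tf/world 100" />
  <node pkg="tf" type="static_transform_publisher" name="world_to_robot2_world"
        args="0 0 0 0 0 0 world robot2_tf/world 100" />
</launch>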
|
https://answers.ros.org/question/334644/why-does-world-frame-get-tf-prefix/?answer=334694
|
CC-MAIN-2019-47
|
en
|
refinedweb
|
I use version managers for setting up separate programming environments, due to each project having different needs; such as different versions of an interpreter or a package. For Python, I use pyenv, and for Node, I use NVM (Node Version Manager), which I'll cover in this article.
It's been a while since I've used Node, and for a project that I've been working on, I needed to set up Node on OS X so that I could assemble a front-end built with React and Material-UI. I have documented the entirety of my steps, including key web pages that I read.
When I started writing this article, I realized that I'll also want to port some of this work to run as an application on a server (Linux and Unix: -nix), on mobile (iOS and Android), and on personal computers (OS X and -nix), so this is part of a set that I will be expanding upon as time permits:
In an effort to make this cross-platform, I'm working on a Docker image, and will update this article at a later time. I didn't want to hold up publishing this article, so for now, I'm covering the process from an OS X perspective.
If you know how to use -nix systems, then you should be able to follow this guide with slight changes for your particular flavor.
(
  brew update
  brew install nvm
  brew install yarn --without-node
  yarn global add \
    create-react-app \
    serve \
    json \
    goog-webfont-dl \
  ;
)
Set up your login shell to behave identically to a non-login shell:
~/.bashrc is executed for interactive, non-login shells (e.g. GNU Screen), whereas
~/.bash_profile is executed for login shells (e.g. opening a terminal window). I tend to place the bulk of my configuration in
~/.bashrc and source it from ~/.bash_profile, so that my environment is the same for both of my daily uses:
source $HOME/.bashrc
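One way to wire this up, if you don't already have it, is to append that line to ~/.bash_profile (the existence guard is just a convention, not something the rest of this article depends on):

cat <<'EOF' >> ~/.bash_profile
# Delegate to ~/.bashrc so login and non-login shells behave the same.
[ -r "$HOME/.bashrc" ] && source "$HOME/.bashrc"
EOF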
Now that
~/.bashrc is being sourced, set up a directory for NVM, and then set the environment variables:
mkdir -pv ~/.nvm
cat <<EOF >> ~/.bashrc
# NVM
export NVM_DIR=~/.nvm
source $(brew --prefix nvm)/nvm.sh
EOF
You can either close your shell and re-open it,
or run
source ~/.bash_profile
To verify that required software is working:
( echo -n "NVM: "; command -v nvm >/dev/null 2>&1 && echo "PASS" || echo >&2 "FAIL"; echo -n "Create React App: "; create-react-app --version >/dev/null 2>&1 && echo "PASS" || echo >&2 "FAIL"; )
Node has 2 major release streams to choose from: LTS or Stable. By using NVM, you can use multiple versions at the same time. When you visit the Node website, you'll find a similar set of choices that provide information about both versions:
nvm install has an alias for the Stable release, but not for the LTS release (there's a discussion about it on NVM's issues page).
I'm using the Stable release for this article.
nvm install stable
nvm alias default stable
Store your site's FQDN as a variable for use throughout the rest of the shell commands in this article:
read -p "Website FQDN : " fqdnSite
Create an application for your FQDN in
~/src/:
create-react-app ~/src/sites/${fqdnSite} \
  && \
  cd $_ \
;
After a moment, it will produce output similar to:
Success! Created ${fqdnSite} at ~/src/sites/${fqdnSite}
...
  cd ~/src/sites/${fqdnSite}
  yarn start

Happy hacking!
Your freshly-created application structure should now resemble:
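For Create React App 1.x (current when this was written), the generated layout looked roughly like this:

${fqdnSite}/
  node_modules/
  package.json
  yarn.lock
  public/
    favicon.ico
    index.html
    manifest.json
  src/
    App.css
    App.js
    App.test.js
    index.css
    index.js
    logo.svg
    registerServiceWorker.js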
Add Node packages at a local level to a new project:
yarn add \
  material-ui \
  webfontloader \
;
There seems to be a bit of confusion around following Material-UI's installation instructions to include the Roboto font for offline use and/or hosted separately from Google (whose font service can be blocked in some places).
I'll admit, it took me a hot minute of reading several sites to figure it all out.
I've tried to document the process to make this as turnkey as possible. If you know of a better way, please let me know so that I can update this article.
goog-webfont-dl \
  --font 'Roboto' \
  --styles '300,400,500,700' \
  --all \
  --destination ~/src/sites/${fqdnSite}/src/fonts/roboto/ \
  --out ~/src/sites/${fqdnSite}/src/fonts-roboto.css \
  --prefix 'fonts/roboto' \
;
Set your project's homepage as a relative path:
json \
  --in-place \
  -f ~/src/sites/${fqdnSite}/package.json \
  -e 'this.homepage="./"' \
;
It is convenient to organize programs and libraries into self-contained directories, and then provide a single entry point to that library. There are three ways in which a folder may be passed to
require() as an argument.
“Node : Folders as Modules”, 2017-09-30 17:20:04 UTC
Node has an ordered sequence for entry point selection:
- package.json
- index.js
- If main is not set in package.json, it continues down the aforementioned list.
Based on Stack Overflow answer : difference between app.js and index.js in Node.js
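For illustration, a minimal (hypothetical) package.json whose main field points the folder's entry point somewhere other than index.js:

{
  "name": "example-lib",
  "version": "1.0.0",
  "main": "lib/entry.js"
}

With this in place, require('./example-lib') resolves to lib/entry.js; if main were absent, Node would fall back to index.js in that folder.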
Boilerplate JavaScript has been set up at
~/src/sites/${fqdnSite}/src/index.js
The panel on the left is the original (as of 2017-09-30), and the panel on the right shows the changes needed to support the additional packages (the Roboto fonts fetched by
goog-webfont-dl, loaded by
webfontloader):
import React from 'react';
import ReactDOM from 'react-dom';
import './index.css';
import App from './App';
import registerServiceWorker from './registerServiceWorker';

ReactDOM.render(<App />, document.getElementById('root'));
registerServiceWorker();
import React from 'react';
import ReactDOM from 'react-dom';
import './index.css';
import App from './App';
import registerServiceWorker from './registerServiceWorker';
import './fonts-roboto.css';
import WebFont from 'webfontloader';

WebFont.load({
  custom: {
    families: [ 'Roboto:300,400,500,700', 'sans-serif' ],
  },
  loading: function(){ console.log('WebFonts loading'); },
  active: function(){ console.log('WebFonts active'); },
  inactive: function(){ console.log('WebFonts inactive'); }
});

ReactDOM.render(<App />, document.getElementById('root'));
registerServiceWorker();
In
index.js, the 4th line,
import App from './App';, is what pulls in the next section of code, and the prior Entry point section discusses some of the structural nuances. This particular layout is due to the way Create React App structures initial projects.
If you skip this step and go straight to the next one, you'll still be able to view
a fresh installation of your front-end application in your browser.
Feel free to try it, then make the following changes, and then try it again. See the difference?
Boilerplate JSX has been set up at
~/src/sites/${fqdnSite}/src/App.js
The panel on the left is the original (as of 2017-09-30), and the panel on the right shows the changes that need to be made in order to integrate Material-UI:
import React, { Component } from 'react';
import logo from './logo.svg';
import './App.css';
import MuiThemeProvider from 'material-ui/styles/MuiThemeProvider';
import RaisedButton from 'material-ui/RaisedButton';

class App extends Component {
  render() {
    return (
      <MuiThemeProvider>
        <div className="App">
          <div className="App-header">
            <img src={logo} className="App-logo" alt="logo" />
            <h1 className="App-title">Welcome to React</h1>
          </div>
          <p className="App-intro">
            To get started, edit <code>src/App.js</code> and save to reload.
            <RaisedButton label="Material UI Raised Button" />
          </p>
        </div>
      </MuiThemeProvider>
    );
  }
}

export default App;
If you compare the two panes of code, you'll notice that I've added a few lines.
In the header:
import MuiThemeProvider from 'material-ui/styles/MuiThemeProvider';
import RaisedButton from 'material-ui/RaisedButton';
In the body:
<MuiThemeProvider> and
</MuiThemeProvider> wrap everything else.
<header> tags were changed to
<div> tags:
<header className="App-header"> →
<div className="App-header">
</header> →
</div>
<RaisedButton label="Material UI Raised Button" /> was added to test Material-UI and the fonts (if the fonts aren't working, the letters won't be bold).
To start a test server for viewing changes as you make them in
~/src/sites/${fqdnSite}/
yarn start
Are you ready to deploy your React website with assets to a framework such as Django and/or a web server such as Nginx?
yarn build
~/src/sites/${fqdnSite}/build/
When you built your project, static files were generated and linked together. You can test the results against a local web server that functions identically to your production deployment (e.g. using Nginx to serve your content).
serve \
  -s ~/src/sites/${fqdnSite}/build \
  --clipless \
;
Even though Create React App was used as a starting point, there is no vendor lock-in. When you're ready for your Node project to stand on its own, you simply run this command for the configuration and build dependencies to be moved directly into your project, and all connections to Create React App to be severed.
You should only run this command if you are sure of your decision.
yarn eject
This article serves only as an introduction to creating a front-end with React, and as documentation of my steps along the way.
There are many different ways to structure your React application. As you progress, and your program becomes more complex, especially if you're working on a team, organization of your code and assets will be just as important.
Relating to structure, I suggest that you read:
Saving the best for last:
Buy and read The Road to learn React by Robin Wieruch.
Buy and read Fullstack React by Anthony Accomazzo, Ari Lerner, Clay Allsopp, David Guttman, Tyler McGinnis, and Nate Murray.
Watch React video tutorials by Ben Awad.
|
https://thad.getterman.org/2017/09/30/building-a-front-end-app-with-react-and-material-ui
|
CC-MAIN-2019-47
|
en
|
refinedweb
|
How to separate Drag and Swipe from Click and Touch events
Gestures like swipe and drag events are a great way to introduce new functionalities into your app.
Everyone has a favourite library, but there's a chance it has no version for the framework you are using, or that the existing port is old and has issues.
PhotoSwipe is an image gallery; its React wrapper library has unnecessary dependencies, deprecated lifecycles and issues that prompted the creation of react-photoswipe-2.
With these problems, it's unwise to depend on something that is no longer supported.
This is how to create a React wrapper for PhotoSwipe, but the basic principles are the same for all other libraries. You need to import the package and expose its APIs.
First, we need to import the library and bind it to React. We will create a component in a file called photoSwipe.js. It will be a functional component, but you could convert it to a class component with ease. The imports will be:
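(The following is a plausible reconstruction, assuming PhotoSwipe v4's published dist paths:)

import React, { useEffect, useRef } from 'react';
import PhotoSwipe from 'photoswipe/dist/photoswipe';
import PhotoSwipeUI_Default from 'photoswipe/dist/photoswipe-ui-default';
import 'photoswipe/dist/photoswipe.css';
import 'photoswipe/dist/default-skin/default-skin.css';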
The library wants us to include two CSS and two JavaScript files. We know that the CSS rules will apply globally when the page loads, but how do we access the JavaScript files? We treat each JS file as a module and give it a name so we can access it later in this scope.
In our case we bind the two files to the React component under the names
PhotoSwipe and
PhotoSwipeUI_Default.
The second step is to add all the needed DOM elements. We do that in the
return. We need to convert the
HTML to
JSX, so change every
class to
className,
tabindex to
tabIndex and
aria-hidden to
ariaHidden.
Next step is to initialize PhotoSwipe constructor:
const photoSwipe = new PhotoSwipe(pswpElement, PhotoSwipeUI_Default, props.items, options);
The constructor has 4 params:
pswpElement is the first
<div> in the return, the one with
className="pswp". To get a reference to it we use the useRef Hook. We initialize the variable with
let pswpElement = useRef(null); and then add the ref to the
<div>:
ref={node => { pswpElement = node; }}.
PhotoSwipeUI_Default is the import that we already have;
props.items is the data array from which we will build all the slides:
// build items array
const items = [
  { src: '', w: 600, h: 400 },
  { src: '', w: 1200, h: 900 }
];
options is the object where we define gallery options:
// define options (if needed)
const options = {
  // optionName: 'option value'
  // for example:
  index: 0 // start at first slide
};
The PhotoSwipe constructor should be initialized only once, when the component loads, and we must bind the events that tell React when the gallery opens and closes. All of this should live in the useEffect Hook.
In the final code below we can see the code needed in the useEffect Hook. Notice that the useEffect Hook takes an array as a second argument. This means the Hook will execute every time the props or the gallery options change.
Now we can change props in the parent component. For example, we can notify the parent component that the gallery has closed by listening for PhotoSwipe's
destroy and
close events. Also, through props we know when to open the gallery, because changing props from the parent will fire the useEffect Hook. This is the general way to wire up events and life-cycles.
Our main component should have thumbnails, the gallery, the images' data and methods to open and close PhotoSwipe.
Image data for PhotoSwipe should be an array of objects.
The methods for opening and closing, as you can see in the final code below, are just simple functions that change state and pass it through props to the wrapper. The only difference is that when we interact with the grid of thumbnails we need to open a different image. To do that, we add another property to the state: the index.
Creating a wrapper in React is simple enough once we understand how the wrapper and our project communicate and how to wire up events and life-cycles. In this guide we did all of this, and it is the foundation for creating and extending React wrappers.

const PhotoSwipeWrapper = props => {
  let pswpElement = useRef(null);

  const options = {
    index: props.index || 0,
    closeOnScroll: false,
    history: false
  };

  useEffect(() => {
    const photoSwipe = new PhotoSwipe(pswpElement, PhotoSwipeUI_Default, props.items, options);
    if (photoSwipe) {
      if (props.isOpen) {
        photoSwipe.init();
        photoSwipe.listen('destroy', () => { props.onClose(); });
        photoSwipe.listen('close', () => { props.onClose(); });
      }
      if (!props.isOpen) {
        props.onClose();
      }
    }
  }, [props, options]);

  return (
    <div className="pswp" tabIndex="-1" role="dialog" aria-hidden="true" ref={node => { pswpElement = node; }}>
      <div className="pswp__scroll-wrap">
        <div className="pswp__container">
          <div className="pswp__item" />
          <div className="pswp__item" />
          <div className="pswp__item" />
        </div>
        <div className="pswp__ui pswp__ui--hidden">
          <div className="pswp__top-bar">
            <div className="pswp__counter" />
            <button className="pswp__button pswp__button--close" title="Close (Esc)" />
            <button className="pswp__button pswp__button--share" title="Share" />
            <button className="pswp__button pswp__button--fs" title="Toggle fullscreen" />
            <button className="pswp__button pswp__button--zoom" title="Zoom in/out" />
            <div className="pswp__preloader">
              <div className="pswp__preloader__icn">
                <div className="pswp__preloader__cut">
                  <div className="pswp__preloader__donut" />
                </div>
              </div>
            </div>
          </div>
          <div className="pswp__share-modal pswp__share-modal--hidden pswp__single-tap">
            <div className="pswp__share-tooltip" />
          </div>
          <button className="pswp__button pswp__button--arrow--left" title="Previous (arrow left)" />
          <button className="pswp__button pswp__button--arrow--right" title="Next (arrow right)" />
          <div className="pswp__caption">
            <div className="pswp__caption__center" />
          </div>
        </div>
      </div>
    </div>
  );
};

export default PhotoSwipeWrapper;
import React, { useState, Fragment } from 'react';
import PhotoSwipeWrapper from './photoSwipe';

export default () => {
  const [isOpen, setIsOpen] = useState(false);
  const [index, setIndex] = useState(0);

  const items = [
    { src: '', w: 600, h: 400 },
    { src: '', w: 1200, h: 900 }
  ];

  const handleOpen = index => {
    setIsOpen(true);
    setIndex(index);
  };

  const handleClose = () => {
    setIsOpen(false);
  };

  return (
    <Fragment>
      <div>
        {items.map((item, i) => (
          <div
            key={i}
            onClick={() => {
              handleOpen(i);
            }}
          >
            Image {i}
          </div>
        ))}
      </div>
      <PhotoSwipeWrapper
        isOpen={isOpen}
        index={index}
        items={items}
        onClose={handleClose}
      />
    </Fragment>
  );
};
Gestures like swipe and drag events are a great way to introduce new functionalities into your app.
Create a simple sticky header only with functional components and React Hooks with no npm packages or other complicated functionality.
Easy to fallow steps to bootstrap Aurelia CLI with Pug (Jade) and Webpack. With working metadata passed from webpack config file.
Here is a simple React app example using Material UI. The problem I stumbled is how to add JSS withStyles into Higher-Order Components (HOC).
|
https://pantaley.com/blog/Create-React-wrapper-for-PhotoSwipe/
|
CC-MAIN-2019-47
|
en
|
refinedweb
|
H (named aitch /ˈeɪtʃ/ or haitch /ˈheɪtʃ/ in Ireland and parts of Australasia; plural aitches or haitches)
Haitch is an HTTP Client written in Swift for iOS and Mac OS X.
Features
- Full featured, but none of the bloat
- Easy to understand, Builder-based architecture
- Request/Response injection allowing for "plug-in" functionality
- Extensible Response interface so you can design for whatever specific response your app requires
Swift version
Haitch
0.7+ and
development require Swift 2.2+. If you're using a version of Swift earlier than 2.2, you should use Haitch version
0.6.
Installation
CocoaPods
Add the following line to your
Podfile:
pod 'Haitch', '~> 0.8'
Then run
pod update or
pod install (if starting from scratch).
Carthage
Add the following line to your
Cartfile:
github "goposse/haitch" ~> 0.8
Run
carthage update and then follow the installation instructions here.
The basics
Making a request is easy
let httpClient: HttpClient = HttpClient()
let req: Request = Request.Builder()
  .url(url: "", params: params)
  .method("GET")
  .build()

httpClient.execute(req) { (response: Response?, error: NSError?) -> Void in
  // deal with the response data (NSData) or error (NSError)
}
JSON
Getting back JSON is simple
client.execute(request: req, responseKind: JsonResponse.self) { (response, error) -> Void in
  if let jsonResponse: JsonResponse = response as? JsonResponse {
    print(jsonResponse.json)   // .json == AnyObject?
  }
}
Custom Responses
If you use SwiftyJSON you could create a custom Response class to convert the
Response data to the
JSON data type. It’s very easy to do so.
import Foundation import SwiftyJSON public class SwiftyJsonResponse: Response { private (set) public var json: JSON? private (set) public var jsonError: AnyObject? public convenience required init(response: Response) { self.init(request: response.request, data: response.data, statusCode: response.statusCode, error: response.error) } public override init(request: Request, data: NSData?, statusCode: Int, error: NSError?) { super.init(request: request, data: data, statusCode: statusCode, error: error) self.populateJSONWithResponseData(data) } private func populateJSONWithResponseData(data: NSData?) { if data != nil { var jsonError: NSError? = nil let json: JSON = JSON(data: data!, options: .AllowFragments, error: &jsonError) self.jsonError = jsonError self.json = json } } }
Why is there no
sharedClient (or some such)?
Because it’s about your needs and not what we choose for you. You should both understand AND be in control of your network stack. If you feel strongly about it, subclass
HttpClient and add it yourself. Simple.
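For example, a minimal sketch of that kind of subclass (AppHttpClient is a made-up name; it simply layers a shared instance on top of Haitch's HttpClient):

import Haitch

public class AppHttpClient: HttpClient {
  // One process-wide client, if that is what your app actually wants.
  public static let sharedClient: AppHttpClient = AppHttpClient()
}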
Why should I use this?
It’s up to you. There are other fantastic frameworks out there but, in our experience, we only need a small subset of the things they do. The goal of Haitch was to allow you to write modular, reusable notworking logic that matches your specific requirements. Not to deal with the possiblity of "what if?".
Haitch is sponsored, owned and maintained by Posse Productions LLC. Follow us on Twitter @goposse. Feel free to reach out with suggestions, ideas or to say hey.
Security
If you believe you have identified a serious security vulnerability or issue with Haitch, please report it as soon as possible to [email protected] Please refrain from posting it to the public issue tracker so that we have a chance to address it and notify everyone accordingly.
License
Haitch is released under a modified MIT license. See LICENSE for details.
Latest podspec
{ "name": "Haitch", "version": "0.8", "license": "Posse", "summary": "Simple HTTP for Swift", "homepage": "", "social_media_url": "", "authors": { "Posse Productions LLC": "[email protected]" }, "source": { "git": "", "tag": "0.8" }, "platforms": { "ios": "8.0", "osx": "10.9" }, "source_files": "Source/**/*.swift", "requires_arc": true }
Thu, 28 Jul 2016 09:16:04 +0000
|
https://tryexcept.com/articles/cocoapod/haitch
|
CC-MAIN-2019-47
|
en
|
refinedweb
|
Here you can find the source of deleteTempDir(File dir)
public static void deleteTempDir(File dir)
//package com.java2s;
import java.io.File;

public class Main {
    /**
     * Recursively delete specified directory.
     */
    public static void deleteTempDir(File dir) {
        // TODO(cpovirk): try Directories.deleteRecursively if a c.g.c.unix dep is OK
        if (dir.exists()) {
            for (File f : dir.listFiles()) {
                if (f.isDirectory()) {
                    deleteTempDir(f);
                } else {
                    f.delete();
                }
            }
            dir.delete();
        }
    }
}
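A hypothetical call site, just to show the intended usage (the directory name is made up for illustration):

// Recursively remove a scratch directory under the system temp directory.
File scratch = new File(System.getProperty("java.io.tmpdir"), "my-scratch-dir");
Main.deleteTempDir(scratch);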
|
http://www.java2s.com/example/android-utility-method/file-delete/deletetempdir-file-dir-19d44.html
|
CC-MAIN-2019-47
|
en
|
refinedweb
|
Running Hibernate Validator in the JBoss Fuse
Environment
- JBoss Fuse 6.0
Issue
I try to run Hibernate Validator [1] in the OSGi environment. When I attempt to create the Validator instance I see the following message:
javax.validation.ValidationException: Unable to create a Configuration, because no Bean Validation provider could be found. Add a provider like Hibernate Validator (RI) to your classpath.
    at javax.validation.Validation$GenericBootstrapImpl.configure(Validation.java:271)
    at javax.validation.Validation.buildDefaultValidatorFactory(Validation.java:110)
[1]
Resolution
The default Hibernate ValidatorFactory lookup is not OSGi-friendly. In order to successfully run Hibernate Validator in an OSGi environment you need to use a custom ValidationProviderResolver that explicitly returns the HibernateValidator instance. The snippet below demonstrates such a configuration.
public class HibernateValidationProviderResolver implements ValidationProviderResolver {

    @Override
    public List getValidationProviders() {
        return singletonList(new HibernateValidator());
    }

}
The snippet below demonstrates how to wire the custom ValidationProviderResolver into the ValidatorFactory.
Configuration<?> configuration = Validation.byDefaultProvider().providerResolver(
    new HibernateValidationProviderResolver()
).configure();
ValidatorFactory factory = configuration.buildValidatorFactory();
Validator validator = factory.getValidator();
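For completeness, a small hypothetical usage of the resulting validator (the Order class and its constraint exist only for this illustration):

public class Order {
    @NotNull
    private String id;
}

Set<ConstraintViolation<Order>> violations = validator.validate(new Order());
System.out.println(violations.size()); // prints 1, because id is null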
|
https://access.redhat.com/solutions/734273
|
CC-MAIN-2019-47
|
en
|
refinedweb
|
Crash in [@ mozilla::gfx::OpenVRSession::SetupContollerActions]
Categories: Core :: WebVR, defect, critical
People: Reporter: marcia, Assigned: daoshengmu
Details: Keywords: crash, regression
Attachments: 1 file
This bug is for crash report bp-e64c4758-7a08-454c-a9e7-182e20190701.
Seen while looking at nightly crash data: 21 crashes/5 installs on 69 nightly. No URLs for the nightly crashes. There are also a few single-install crashes that happened in 66 and 67.
Top 10 frames of crashing thread:
0 xul.dll bool mozilla::gfx::OpenVRSession::SetupContollerActions gfx/vr/service/OpenVRSession.cpp:820
1 xul.dll bool mozilla::gfx::OpenVRSession::Initialize gfx/vr/service/OpenVRSession.cpp:311
2 xul.dll void mozilla::gfx::VRService::ServiceInitialize gfx/vr/service/VRService.cpp:260
3 xul.dll nsresult mozilla::detail::RunnableMethodImpl< xpcom/threads/nsThreadUtils.h:1176
4 xul.dll MessageLoop::DoWork ipc/chromium/src/base/message_loop.cc:523
5 xul.dll base::MessagePumpDefault::Run ipc/chromium/src/base/message_pump_default.cc:35
6 xul.dll MessageLoop::RunHandler ipc/chromium/src/base/message_loop.cc:308
7 xul.dll MessageLoop::Run ipc/chromium/src/base/message_loop.cc:290
8 xul.dll base::Thread::ThreadMain ipc/chromium/src/base/thread.cc:192
9 xul.dll `anonymous namespace'::ThreadFunc ipc/chromium/src/base/platform_thread_win.cc:19
It looks like the only way to hit this crash is if
vr::VRInput() is null [1]. It is odd and only happened on one install; I will keep an eye on it.
[1]
I have no idea so far; we should add logs to help check the error message.
I think we should check if
vr::VRInput() is null and its EVRInputError. Then, ask for
Shutdown().
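A rough sketch of the kind of guard being proposed (this is not the actual patch; manifestPath stands in for whatever path the real code passes):

// Bail out early if the OpenVR input API is not ready yet, so the caller
// can Shutdown() and retry instead of crashing.
vr::IVRInput* input = vr::VRInput();
if (!input) {
  return false;
}
vr::EVRInputError err = input->SetActionManifestPath(manifestPath.c_str());
if (err != vr::VRInputError_None) {
  return false;
}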
This patch should resolve this issue, but I would like to wait for Bug 1564203 to be solved to make sure there is no other regression.
Pushed by dmu@mozilla.com: Checking if OpenVR vrinput() is null when doing initialization. r=kip
Does this patch need a Beta approval request?
Comment on attachment 9076625 [details]
Bug 1562679 - Checking if OpenVR vrinput() is null when doing initialization.
Beta/Release Uplift Approval Request
- User impact if declined: This crash of
OpenVRSession::SetupContollerActions will continue to happen.
- This is the right fix for OpenVR runtime stability. For some users, the VR input API is not ready yet when the VR system is trying to launch. We would like to give it an early return and ask for a retry rather than crash.
- String changes made/needed:
Comment on attachment 9076625 [details]
Bug 1562679 - Checking if OpenVR vrinput() is null when doing initialization.
Fixes a WebVR crash. Approved for 69.0b5.
|
https://bugzilla.mozilla.org/show_bug.cgi?id=1562679
|
CC-MAIN-2019-47
|
en
|
refinedweb
|
You can follow the interrupted sleep tutorial by NoMi Design to learn just how to set things up.
My code, however, is just like this example provided on Engblaze.com, except that I've added some serial communications so I can see visually that it's working, and I re-attach the interrupt so that I can keep bringing the device out of sleep every time it goes back to sleep.
//remove the space between '<' and 'avr'.
#include < avr/interrupt.h>
#include < avr/power.h>
#include < avr/sleep.h>
#include < avr/io.h>

void setup()
{
    Serial.begin(9600);
    pinMode(13, OUTPUT);   // pin 13 LED is used below to indicate sleep/wake
}

void loop()
{
    // Stay awake for 1 second, then sleep.
    // LED turns off when sleeping, then back on upon wake.
    delay(2000);
    Serial.println("Entering Sleep Mode");
    sleepNow();
    Serial.println(" ");
    Serial.println("I am now Awake");
}

void sleepNow()
{
    // Choose our preferred sleep mode:
    set_sleep_mode(SLEEP_MODE_PWR_SAVE);

    interrupts();
    // Set pin 2 as interrupt and attach handler:
    attachInterrupt(0, pinInterrupt, HIGH);
    //delay(100);

    // Set sleep enable (SE) bit:
    sleep_enable();

    // Put the device to sleep:
    digitalWrite(13, LOW);   // turn LED off to indicate sleep
    sleep_mode();

    // Upon waking up, sketch continues from this point.
    sleep_disable();
    digitalWrite(13, HIGH);  // turn LED on to indicate awake
}

void pinInterrupt()
{
    detachInterrupt(0);
    attachInterrupt(0, pinInterrupt, HIGH);
}
Don't mind any stray 'avr's at the end of the includes. It's a glitch the website keeps doing when I post #includes at the top of the code on the blog.
Sleep Modes
Finally, it's important to discuss the types of sleep modes that you can choose from. There are 6 sleep modes available on the Arduino Uno (ATmega328):
- SLEEP_MODE_IDLE – least power savings
- SLEEP_MODE_ADC
- SLEEP_MODE_EXTENDED_STANDBY
- SLEEP_MODE_PWR_SAVE
- SLEEP_MODE_STANDBY
- SLEEP_MODE_PWR_DOWN – most power savings
I've decided to go with SLEEP_MODE_PWR_SAVE, as used in the code above.
You're welcome to ask questions about this code if you aren't sure what's going on. The bit about setting up the pins on the Arduino hardware is explained more in the Engblaze post.
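For example, switching the sketch above to the deepest mode is a one-line change (note that in power-down the ATmega328's edge-triggered external interrupts can't wake it, so you'd want a low-level or pin-change interrupt instead of the HIGH mode used above):

// In sleepNow(), choose the deepest sleep mode instead:
set_sleep_mode(SLEEP_MODE_PWR_DOWN);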
VERY Interesting! However in both Arduino 0023 and 1.01 I get this error when pasting this code:
---------------------( COPY )----------------------
sketch_oct11a.cpp:1:41: error: C:\Users\TerryKing\Desktop\ArduinoDev\arduino-1.0.1\hardware\arduino\cores\arduino/avr interrupt.h="interrupt.h": Invalid argument
-----------------( END COPY )----------------------
Am I missing some library??
Thanks!
Regards, Terry King
...In The Woods in Vermont, USA
terry@yourduino.com
It should be '#include <avr/interrupt.h>' without the quotes and with no space. Blogger reformatted how the code was supposed to display. I've fixed it now.
You can give this a try instead:
#include < avr/interrupt.h>
#include < avr/power.h>
#include < avr/sleep.h>
#include < avr/io.h>
with no space between < and avr.
I know for sure that those work in my code. Let me know how it goes. If so, I'll update the code above to reflect this edit.
I don't get the setting of the pins as inputs or outputs and the pull-ups. I'm trying to get a FIO-based app as energy efficient as possible.
Besides pins, I do:
-power down sleep
-ADC disable
-AC disable
-BOD disable
This gets me down to 108 uA, of which I assume the majority to be 'unsaveable' due to it being used by the voltage regulator (~100uA).
When I add in your pin code (copy-paste, input and pull-up), it goes up to 155 uA while sleeping. So I don't see how this is saving power, and different sources on the web seem to disagree. Any explanation?
PS: only FIO board being used, no peripherals/sensors/XBEE. 9V battery powered
It could be because you are using a 9V battery. Try using a 3.3V Li-Ion battery and then let me know. It could also be the use of the PWR_SAVE mode. Try PWR_DOWN to see if you save more power.
Seems there was a problem with pin 13. Although I did put it as input, the pull-up seems to make the LED draw a tiny amount of current, even making it glow, albeit very dim.
Anyway, after I tested again (already using power-down sleep), the result is that there isn't any difference between all 4 combinations (in/out and high/low).
The 9V battery is only used in development/testing. For the real deal I'll be switching to a 40 Ah 6 V battery. The device must last several years at a calculated average current of 1 mA... I also have no LiPo at my disposal atm.
For now I'll keep the code in; perhaps I haven't been in situations where the difference is more profound... currently at a stable 105 uA sleep current. Thanks anyway!
Hi, may I have the libraries for avr/interrupt, avr/power, avr/sleep and avr/io? I could not find the link to download them from the web. If it's possible, could you zip the files? Thank you and sorry for the trouble.
They are libraries that come with the Arduino software. Download the latest version of Arduino and you will have them.
Hi, I understand it already, thank you so much! However I am getting an error when verifying. Is it that the function cannot work with the Arduino version that I have? Below is the error that I get.
sleepmode.ino: In function 'void sleepNow()':
sleepmode:32: error: 'SLEEP_MODE_PWR_SAVE' was not declared in this scope
sleepmode:32: error: 'set_sleep_mode' was not declared in this scope
sleepmode:40: error: 'sleep_enable' was not declared in this scope
sleepmode:44: error: 'sleep_mode' was not declared in this scope
sleepmode:47: error: 'sleep_disable' was not declared in this scope
Hi, for the error that I asked you about earlier: I solved it. So sorry to trouble you; it was because I had forgotten to remove the space between '<' and 'avr'.
Hi Tamir, thanks for the very useful tutorial!
I've experienced an issue with the Serial.print before sleepNow(); and just figured out that it had no time to transfer the text. You might consider giving it a delay(30) before executing sleepNow(), just in order to properly Serial.print the text of choice.
.
.
.
Serial.println("Entering Sleep Mode");
delay(30);
sleepNow();
.
.
.
Interesting input on the delay bit. I hadn't thought of that. Thanks for sharing!
As the post says, it's anything with an ATmega168/328P. I was using a Fio. I know this example works, because I've used it many times. I don't post example code unless I've tried it and used it in my projects.
Also, which documentation are you referring to? You should use the chip manufacturer's manuals, not Arduino's.
Just a question, if the Arduino uses bluetooth as a serial connection, and I set the Arduino to sleep, will it still provide enough power to the Bluetooth to wake it up via serial?
It should. I was able to successfully wake the Arduino from sleep using XBee commands, so I would think that Bluetooth is similar enough to work.
Hope this helps!
|
http://tae09.blogspot.com/2012/10/arduino-low-power-tutorial.html
|
CC-MAIN-2017-34
|
en
|
refinedweb
|
load the JavaScript libraries just to do one or more Ajax calls before data is actually shown to the user. Still, you usually need the same Ajax calls later on to refresh some parts of the UI. So I went looking for a nice and easy way to preload some data and avoid those "load-time" Ajax calls.
To begin with, I always like to keep the number of JavaScript functions to the minimum possible, so I implemented this function:
function updateModules(data, userContext, methodName) {
    if (data) {
        // do whatever you need with the data
    } else {
        // Do ajax call
        PageMethods.GetModules(updateModules, errorHandler);
    }
}
That allows the function to act with 3 different behaviours:
// Will invoke the Ajax call
updateModules();

// Will skip the Ajax call and process the data right away
updateModules(someData);

// This is the handler for a successful ASP.Net Ajax call
updateModules(data, context, methodName);
With this function I’ve pretty much every possible usage scenario covered, so all that was missing was a way to preload the data in the first request. This can be handled natively by ASP.Net by serializing the “GetModules” method response on the server side. On top of it, ASP.Net allows you to register a script block entirely from the “C#” side. So, I simply needed to add this to my Page_Load event:
using System.Web.Script.Serialization;
...
protected void Page_Load(object sender, EventArgs e)
{
    ...
    Page.ClientScript.RegisterClientScriptBlock(
        GetType(),
        "preloadedModules",
        "updateModules(" + (new JavaScriptSerializer()).Serialize(GetModules()) + ");",
        true);
}
Alternatively you could also assign it to a variable and use it when ready.
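For instance, the variable flavour of the same trick might look like this (preloadedModules is just an illustrative name):

Page.ClientScript.RegisterClientScriptBlock(
    GetType(),
    "preloadedModules",
    "var preloadedModules = " + (new JavaScriptSerializer()).Serialize(GetModules()) + ";",
    true);

and then your own script can call updateModules(preloadedModules) whenever it is ready.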
In the end, my page will load a lot faster, not only because it doesn't have to process multiple Ajax requests but also because the ASP.NET engine would otherwise have to initialize a lot of code before processing and serializing that method's returned object for each of them. This little optimization trick can save even more load time if you have several Ajax calls being done when the page loads.
Tags: ajax, ASP.Net, ExtJs
|
http://www.alexandre-gomes.com/?p=401
|
CC-MAIN-2017-34
|
en
|
refinedweb
|
System Calls Cls and Clear
Use the function call system("cls") or system("clear") so that the screen is cleared at the beginning of the program.
What happens when the output is redirected?
In file heads_or_tails.h:
#include <stdio.h>
#include <stdlib.h>
#define MAXWORLD 100
int get_call_from_user(void);
void play(int how_many);
void prn_final_report(int win, int lose, int how_many);
void prn_instruction(void);
void report_a_win(int coin);
void report_a_loss(int coin);
int toss(void);
I need to insert the clear-screen call at the beginning of the program so that the screen is cleared.
Solution Preview
#include <stdlib.h>
int system(const char *command);
The system() function passes the string pointed to by command to the host environment to be executed by a command processor in an implementation-dependent manner. For example, in DOS you issue ...
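A minimal sketch of the idea, with the platform difference handled by a crude #ifdef (this is illustrative, not part of the original solution text):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
#ifdef _WIN32
    system("cls");      /* handled by the DOS/Windows command processor */
#else
    system("clear");    /* handled by the POSIX shell */
#endif
    printf("Heads or tails?\n");
    /* ... rest of the program ... */
    return 0;
}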
Solution Summary
Understanding of the System Calls Cls and Clear
|
https://brainmass.com/computer-science/processor-architectures/system-calls-cls-and-clear-95254
|
CC-MAIN-2017-34
|
en
|
refinedweb
|
by Rick Anderson, Suhas Joshi
This tutorial illustrates the steps to migrate an existing web application with user and role data created using SQL Membership to the new ASP.NET Identity system. This approach involves changing the existing database schema to the one needed by ASP.NET Identity and hooking the old/new classes up to it. After you adopt this approach, once your database is migrated, future updates to Identity will be handled effortlessly.
For this tutorial, we will take a web application template (Web Forms) created using Visual Studio 2010 to create user and role data. We will then use SQL scripts to migrate the existing database to tables needed by the Identity system. Next we'll install the necessary NuGet packages and add new account management pages which use the Identity system for membership management. As a test of migration, users created using SQL membership should be able to log in and new users should be able to register. You can find the complete sample here. See also Migrating from ASP.NET Membership to ASP.NET Identity.
Getting started
Creating an application with SQL Membership
We need to start with an existing application that uses SQL membership and has user and role data. For the purpose of this article, let's create a web application in Visual Studio 2010.
Using the ASP.NET Configuration tool, create 2 users: oldAdminUser and oldUser.
Create a role named Admin and add 'oldAdminUser' as a user in that role.
Create an Admin section of the site with a Default.aspx. Set the authorization tag in the web.config file to enable access only to users in Admin roles. More information can be found here
View the database in Server Explorer to understand the tables created by the SQL membership system. The user login data is stored in the aspnet_Users and aspnet_Membership tables, while role data is stored in the aspnet_Roles table. Information about which users are in which roles is stored in the aspnet_UsersInRoles table. For basic membership management it is sufficient to port the information in the above tables to the ASP.NET Identity system.
Migrating to Visual Studio 2013
- Install Visual Studio Express 2013 for Web or Visual Studio 2013 along with the latest updates.
- Open the above project in your installed version of Visual Studio. If SQL Server Express is not installed on the machine, a prompt is displayed when you open the project, since the connection string uses SQL Express. You can either choose to install SQL Express or as work around change the connection string to LocalDb. For this article we'll change it to LocalDb.
Open web.config and change the connection string from .\SQLExpress to (LocalDb)\v11.0. Remove 'User Instance=true' from the connection string.
- Open Server Explorer and verify that the table schema and data can be observed.
The ASP.NET Identity system works with version 4.5 or higher of the framework. Retarget the application to 4.5 or higher.
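After retargeting, the web.config compilation settings end up looking something like this (the exact version string depends on the framework you choose):

<system.web>
  <compilation debug="true" targetFramework="4.5" />
  <httpRuntime targetFramework="4.5" />
</system.web>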
Build the project to verify that there are no errors.
Installing the Nuget packages
In Solution Explorer, right-click the project > Manage NuGet Packages. In the search box, enter "Asp.net Identity". Select the package in the list of results and click install. Accept the license agreement by clicking on "I Accept" button. Note that this package will install the dependency packages: EntityFramework and Microsoft ASP.NET Identity Core. Similarly install the following packages (skip the last 4 OWIN packages if you don't want to enable OAuth log-in):
- Microsoft.AspNet.Identity.Owin
- Microsoft.Owin.Host.SystemWeb
- Microsoft.Owin.Security.Facebook
- Microsoft.Owin.Security.Google
- Microsoft.Owin.Security.MicrosoftAccount
Microsoft.Owin.Security.Twitter
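Equivalently, the same packages can be installed from the Package Manager Console (versions omitted so NuGet picks the current ones):

Install-Package Microsoft.AspNet.Identity.EntityFramework
Install-Package Microsoft.AspNet.Identity.Owin
Install-Package Microsoft.Owin.Host.SystemWeb
Install-Package Microsoft.Owin.Security.Facebook
Install-Package Microsoft.Owin.Security.Google
Install-Package Microsoft.Owin.Security.MicrosoftAccount
Install-Package Microsoft.Owin.Security.Twitter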
Migrate database to the new Identity system
The next step is to migrate the existing database to a schema required by the ASP.NET Identity system. To achieve this we run a SQL script which has a set of commands to create new tables and migrate existing user information to the new tables. The script file can be found here.
This script file is specific to this sample. If the schema for the tables created using SQL membership is customized or modified the scripts need to be changed accordingly.
How to generate the SQL script for schema migration
For ASP.NET Identity classes to work out of the box with the data of existing users, we need to migrate the database schema to the one needed by ASP.NET Identity. We can do this by adding new tables and copying the existing information to those tables. By default ASP.NET Identity uses EntityFramework to map the Identity model classes back to the database to store/retrieve information. These model classes implement the core Identity interfaces defining user and role objects. The tables and the columns in the database are based on these model classes. The EntityFramework model classes in Identity v2.1.0 and their properties are as defined below
We need to have tables for each of these models with columns corresponding to the properties. The mapping between classes and tables is defined in the
OnModelCreating method of the
IdentityDBContext. This is known as the fluent API method of configuration and more information can be found here. The configuration for the classes is as mentioned below
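The exact mapping code belongs to the Identity assemblies, but a representative sketch of what such fluent configuration looks like in EF6 is:

protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    base.OnModelCreating(modelBuilder);

    // Illustrative only: map entities to the Identity table names and
    // require a user name, in the style of IdentityDbContext.
    modelBuilder.Entity<IdentityUser>().ToTable("AspNetUsers");
    modelBuilder.Entity<IdentityUser>().Property(u => u.UserName).IsRequired();
    modelBuilder.Entity<IdentityRole>().ToTable("AspNetRoles");
}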
With this information we can create SQL statements to create new tables. We can either write each statement individually or generate the entire script using EntityFramework PowerShell commands which we can then edit as required. To do this, in VS open the Package Manager Console from the View or Tools menu
- Run command "Enable-Migrations" to enable EntityFramework migrations.
- Run command "Add-migration initial" which creates the initial setup code to create the database in C#/VB.
- The final step is to run "Update-Database –Script" command that generates the SQL script based on the model classes.
This database generation script can be used as a start where we'll be making additional changes to add new columns and copy data. The advantage of this is that we generate the
_MigrationHistory table which is used by EntityFramework to modify the database schema when the model classes change for future versions of Identity releases.
The SQL membership user information had other properties in addition to the ones in the Identity user model class namely email, password attempts, last login date, last lock-out date etc. This is useful information and we would like it to be carried over to the Identity system. This can be done by adding additional properties to the user model and mapping them back to the table columns in the database. We can do this by adding a class that subclasses the
IdentityUser model. We can add the properties to this custom class and edit the SQL script to add the corresponding columns when creating the table. The code for this class is described further in the article. The SQL script for creating the
AspnetUsers table after adding the new properties would be
CREATE TABLE [dbo].[AspNetUsers] ( [Id] NVARCHAR (128) NOT NULL, [UserName] NVARCHAR (MAX) NULL, [PasswordHash] NVARCHAR (MAX) NULL, [SecurityStamp] NVARCHAR (MAX) NULL, [EmailConfirmed] BIT NOT NULL, [PhoneNumber] NVARCHAR (MAX) NULL, [PhoneNumberConfirmed] BIT NOT NULL, [TwoFactorEnabled] BIT NOT NULL, [LockoutEndDateUtc] DATETIME NULL, [LockoutEnabled] BIT NOT NULL, [AccessFailedCount] INT NOT NULL, [ApplicationId] UNIQUEIDENTIFIER NOT NULL, [LegacyPasswordHash] NVARCHAR (MAX) NULL, [LoweredUserName] NVARCHAR (256) NOT NULL, [MobileAlias] NVARCHAR (16) DEFAULT (NULL) NULL, [IsAnonymous] BIT DEFAULT ((0)) NOT NULL, [LastActivityDate] DATETIME2 NOT NULL, [MobilePIN] NVARCHAR (16) NULL, [Email] NVARCHAR (256) NULL, [LoweredEmail] NVARCHAR (256) NULL, [PasswordQuestion] NVARCHAR (256) NULL, [PasswordAnswer] NVARCHAR (128) NULL, [IsApproved] BIT NOT NULL, [IsLockedOut] BIT NOT NULL, [CreateDate] DATETIME2 NOT NULL, [LastLoginDate] DATETIME2 NOT NULL, [LastPasswordChangedDate] DATETIME2 NOT NULL, [LastLockoutDate] DATETIME2 NOT NULL, [FailedPasswordAttemptCount] INT NOT NULL, [FailedPasswordAttemptWindowStart] DATETIME2 NOT NULL, [FailedPasswordAnswerAttemptCount] INT NOT NULL, [FailedPasswordAnswerAttemptWindowStart] DATETIME2 NOT NULL, [Comment] NTEXT NULL, CONSTRAINT [PK_dbo.AspNetUsers] PRIMARY KEY CLUSTERED ([Id] ASC), FOREIGN KEY ([ApplicationId]) REFERENCES [dbo].[aspnet_Applications] ([ApplicationId]), );
Next we need to copy the existing information from the SQL membership database to the newly added tables for Identity. This can be done through SQL by copying data directly from one table to another. To add data into the rows of a table, we use the
INSERT INTO [Table] construct. To copy from another table we can use the
INSERT INTO statement along with the
SELECT statement. To get all the user information we need to query the aspnet_Users and aspnet_Membership tables and copy the data to the AspNetUsers table. We use the
INSERT INTO and
SELECT along with
JOIN and
LEFT OUTER JOIN statements. For more information about querying and copying data between tables, refer to this link. Additionally the AspnetUserLogins and AspnetUserClaims tables are empty to begin with since there is no information in SQL membership that maps to this by default. The only information copied is for users and roles. For the project created in the previous steps, the SQL query to copy information to the users table would be
INSERT INTO AspNetUsers(Id,UserName,PasswordHash,SecurityStamp,EmailConfirmed, PhoneNumber,PhoneNumberConfirmed,TwoFactorEnabled,LockoutEndDateUtc,LockoutEnabled,AccessFailedCount, ApplicationId,LoweredUserName,MobileAlias,IsAnonymous,LastActivityDate,LegacyPasswordHash, MobilePIN,Email,LoweredEmail,PasswordQuestion,PasswordAnswer,IsApproved,IsLockedOut,CreateDate, LastLoginDate,LastPasswordChangedDate,LastLockoutDate,FailedPasswordAttemptCount, FailedPasswordAnswerAttemptWindowStart,FailedPasswordAnswerAttemptCount,FailedPasswordAttemptWindowStart,Comment) SELECT aspnet_Users.UserId,aspnet_Users.UserName,(aspnet_Membership.Password+'|'+CAST(aspnet_Membership.PasswordFormat as varchar)+'|'+aspnet_Membership.PasswordSalt),NewID(), 'true',NULL,'false','true',aspnet_Membership.LastLockoutDate,'true','0', aspnet_Users.ApplicationId,aspnet_Users.LoweredUserName, aspnet_Users.MobileAlias,aspnet_Users.IsAnonymous,aspnet_Users.LastActivityDate,aspnet_Membership.Password, aspnet_Membership.MobilePIN,aspnet_Membership.Email,aspnet_Membership.LoweredEmail,aspnet_Membership.PasswordQuestion,aspnet_Membership.PasswordAnswer, aspnet_Membership.IsApproved,aspnet_Membership.IsLockedOut,aspnet_Membership.CreateDate,aspnet_Membership.LastLoginDate,aspnet_Membership.LastPasswordChangedDate, aspnet_Membership.LastLockoutDate,aspnet_Membership.FailedPasswordAttemptCount, aspnet_Membership.FailedPasswordAnswerAttemptWindowStart, aspnet_Membership.FailedPasswordAnswerAttemptCount,aspnet_Membership.FailedPasswordAttemptWindowStart,aspnet_Membership.Comment FROM aspnet_Users LEFT OUTER JOIN aspnet_Membership ON aspnet_Membership.ApplicationId = aspnet_Users.ApplicationId AND aspnet_Users.UserId = aspnet_Membership.UserId;
In the above SQL statement, information about each user from the aspnet_Users and aspnet_Membership tables is copied into the columns of the AspnetUsers table. The only modification done here is when we copy the password. Since the encryption algorithm for passwords in SQL membership used 'PasswordSalt' and 'PasswordFormat', we copy that too along with the hashed password so that it can be used to decrypt the password by Identity. This is explained further in the article when hooking up a custom password hasher.
This script file is specific to this sample. For applications which have additional tables, developers can follow a similar approach to add additional properties on the user model class and map them to columns in the AspnetUsers table. To run the script,
Open Server Explorer. Expand the 'ApplicationServices' connection to display the tables. Right click on the Tables node and select the 'New Query' option
In the query window, copy and paste the entire SQL script from the Migrations.sql file. Run the script file by hitting the 'Execute' arrow button.
Refresh the Server Explorer window. Five new tables are created in the database.
Below is how the information in the SQL membership tables are mapped to the new Identity system.
aspnet_Roles --> AspNetRoles
aspnet_Users and aspnet_Membership --> AspNetUsers
aspnet_UsersInRoles --> AspNetUserRoles
As explained in the above section, the AspNetUserClaims and AspNetUserLogins tables are empty. The 'Discriminator' field in the AspNetUsers table should match the model class name, which is defined in a later step. Also, the PasswordHash column is in the form 'encrypted password|password format|password salt', matching the concatenation in the SQL script above. This enables the special SQL membership crypto logic so that you can reuse old passwords, which is explained later in the article.
Creating models and membership pages
As mentioned earlier, the Identity feature uses Entity Framework by default to talk to the database for storing account information.
In our sample, the AspNetRoles, AspNetUserClaims, AspNetUserLogins and AspNetUserRoles tables have columns that match the existing implementation of the Identity system, hence we can reuse the existing classes to map to these tables. The AspNetUsers table has some additional columns which are used to store extra information from the SQL membership tables. This can be mapped by creating a model class that extends the existing 'IdentityUser' implementation and adds the additional properties.
Create a Models folder in the project and add a class User. The name of the class should match the data added in the 'Discriminator' column of the 'AspNetUsers' table.
The User class should extend the IdentityUser class found in the Microsoft.AspNet.Identity.EntityFramework dll. Declare the properties in class that map back to the AspNetUser columns. The properties ID, Username, PasswordHash and SecurityStamp are defined in the IdentityUser and so are omitted. Below is the code for the User class that has all the properties
public class User : IdentityUser
{
    public User()
    {
        CreateDate = DateTime.Now;
        IsApproved = false;
        LastLoginDate = DateTime.Now;
        LastActivityDate = DateTime.Now;
        LastPasswordChangedDate = DateTime.Now;
        LastLockoutDate = DateTime.Parse("1/1/1754");
        FailedPasswordAnswerAttemptWindowStart = DateTime.Parse("1/1/1754");
        FailedPasswordAttemptWindowStart = DateTime.Parse("1/1/1754");
    }

    public System.Guid ApplicationId { get; set; }
    public string MobileAlias { get; set; }
    public bool IsAnonymous { get; set; }
    public System.DateTime LastActivityDate { get; set; }
    public string MobilePIN { get; set; }
    public string LoweredEmail { get; set; }
    public string LoweredUserName { get; set; }
    // ... the remaining legacy membership properties set in the constructor above
    // (CreateDate, IsApproved, the various date and counter columns) follow the same pattern
}
An Entity Framework DbContext class is required in order to persist data from the models back to tables and to retrieve data from tables to populate the models. The Microsoft.AspNet.Identity.EntityFramework dll defines the IdentityDbContext class, which interacts with the Identity tables to retrieve and store information. IdentityDbContext<TUser> takes a 'TUser' class, which can be any class that extends the IdentityUser class.
Create a new class ApplicationDBContext that extends IdentityDbContext under the 'Models' folder, passing in the 'User' class created in step 1
public class ApplicationDbContext : IdentityDbContext<User> { }
User management in the new Identity system is done using the UserManager<TUser> class defined in the Microsoft.AspNet.Identity.EntityFramework dll. We need to create a custom class that extends UserManager, passing in the 'User' class created in step 1.
In the Models folder create a new class UserManager that extends UserManager<User>
public class UserManager : UserManager<User> { }
The passwords of the users of the application are encrypted and stored in the database. The crypto algorithm used in SQL membership is different from the one in the new Identity system. To reuse old passwords we need to selectively check passwords using SQL membership's algorithm when old users log in, while using the crypto algorithm in Identity for new users.
The UserManager class has a property 'PasswordHasher' which stores an instance of a class that implements the 'IPasswordHasher' interface. This is used to encrypt/decrypt passwords during user authentication transactions. In the UserManager class defined in step 3, create a new class SQLPasswordHasher and copy the below code.
public class SQLPasswordHasher : PasswordHasher { public override string HashPassword(string password) { return base.HashPassword(password); } public override PasswordVerificationResult VerifyHashedPassword(string hashedPassword, string providedPassword) { string[] passwordProperties = hashedPassword.Split('|'); if (passwordProperties.Length != 3) { return base.VerifyHashedPassword(hashedPassword, providedPassword); } else { string passwordHash = passwordProperties[0]; int passwordformat = 1; string salt = passwordProperties[2]; if (String.Equals(EncryptPassword(providedPassword, passwordformat, salt), passwordHash, StringComparison.CurrentCultureIgnoreCase)) { return PasswordVerificationResult.SuccessRehashNeeded; } else { return PasswordVerificationResult.Failed; } } } //This is copied from the existing SQL providers and is provided only for back-compat. private string EncryptPassword(string pass, int passwordFormat, string salt) { if (passwordFormat == 0) // MembershipPasswordFormat.Clear return pass; byte[] bIn = Encoding.Unicode.GetBytes(pass); byte[] bSalt = Convert.FromBase64String(salt); byte[] bRet = null; if (passwordFormat == 1) { // MembershipPasswordFormat.Hashed HashAlgorithm hm = HashAlgorithm.Create("SHA1"); if (hm is KeyedHashAlgorithm) { KeyedHashAlgorithm kha = (KeyedHashAlgorithm)hm; if (kha.Key.Length == bSalt.Length) { kha.Key = bSalt; } else if (kha.Key.Length < bSalt.Length) { byte[] bKey = new byte[kha.Key.Length]; Buffer.BlockCopy(bSalt, 0, bKey, 0, bKey.Length); kha.Key = bKey; } else { byte[] bKey = new byte[kha.Key.Length]; for (int iter = 0; iter < bKey.Length; ) { int len = Math.Min(bSalt.Length, bKey.Length - iter); Buffer.BlockCopy(bSalt, 0, bKey, iter, len); iter += len; } kha.Key = bKey; } bRet = kha.ComputeHash(bIn); } else { byte[] bAll = new byte[bSalt.Length + bIn.Length]; Buffer.BlockCopy(bSalt, 0, bAll, 0, bSalt.Length); Buffer.BlockCopy(bIn, 0, bAll, bSalt.Length, bIn.Length); bRet = hm.ComputeHash(bAll); } } return Convert.ToBase64String(bRet); }
Resolve the compilation errors by importing the System.Text and System.Security.Cryptography namespaces.
The EncryptPassword method encrypts the password according to the default SQL membership crypto implementation. This is taken from the System.Web dll. If the old app used a custom implementation then it should be reflected here. We need to define two other methods, HashPassword and VerifyHashedPassword, which use the EncryptPassword method to hash a given password or to verify a plain-text password against the one existing in the database.
The SQL membership system used PasswordHash, PasswordSalt and PasswordFormat to hash the password entered by users when they register or change their password. During the migration all the three fields are stored in the PasswordHash column in the AspNetUser table separated by the '|' character. When a user logs in and the password has these fields, we use the SQL membership crypto to check the password; otherwise we use the Identity system's default crypto to verify the password. This way old users would not have to change their passwords once the app is migrated.
Declare the constructor for the UserManager class and pass this as the SQLPasswordHasher to the property in the constructor.
public UserManager() : base(new UserStore<User>(new ApplicationDbContext())) { this.PasswordHasher = new SQLPasswordHasher(); }
Create new account management pages
The next step in the migration is to add account management pages that will let a user register and log in. The old account pages from SQL membership use controls that don't work with the new Identity system. To add the new user management pages follow the tutorial at this link starting from the step 'Adding Web Forms for registering users to your application' since we have already created the project and added the NuGet packages.
We need to make some changes for the sample to work with the project we have here.
- The Register.aspx.cs and Login.aspx.cs code-behind classes use the UserManager from the Identity packages to create a User. For this example, use the UserManager added in the Models folder by following the steps mentioned earlier.
- Use the User class we created instead of IdentityUser in the Register.aspx.cs and Login.aspx.cs code-behind classes. This hooks our custom user class into the Identity system.
- The part to create the database can be skipped.
The developer needs to set the ApplicationId for the new user to match the current application's ID. This can be done by querying the ApplicationId for this application in the Register.aspx.cs class and setting it on the user object before the user is created.
Example:
Define a method in the Register.aspx.cs page to query the aspnet_Applications table and get the application ID for the application name:
private Guid GetApplicationID() { using (SqlConnection connection = new SqlConnection(ConfigurationManager.ConnectionStrings["ApplicationServices"].ConnectionString)) { string queryString = "SELECT ApplicationId from aspnet_Applications WHERE ApplicationName = '/'"; //Set application name as in database SqlCommand command = new SqlCommand(queryString, connection); command.Connection.Open(); var reader = command.ExecuteReader(); while (reader.Read()) { return reader.GetGuid(0); } return Guid.NewGuid(); } }
Now set this on the user object:
var currentApplicationId = GetApplicationID(); User user = new User() { UserName = Username.Text, ApplicationId=currentApplicationId, …};
Use an old username and password to log in as an existing user, and use the Register page to create a new user. Also verify that the users are in the expected roles.
Porting to the Identity system also lets you add Open Authentication (OAuth) to the application. Please refer to the sample here, which has OAuth enabled.
Next Steps
In this tutorial we showed how to port users from SQL membership to ASP.NET Identity, but we didn't port Profile data. In the next tutorial we'll look into porting Profile data from SQL membership to the new Identity system.
You can leave feedback at the bottom of this article.
Thanks to Tom Dykstra and Rick Anderson for reviewing the article.
|
https://docs.microsoft.com/en-us/aspnet/identity/overview/migrations/migrating-an-existing-website-from-sql-membership-to-aspnet-identity
|
CC-MAIN-2017-34
|
en
|
refinedweb
|
I'm trying to open a file from a directory, and the file name is not fixed; it could be anything, and there are many files in the directory. I create a list of the files in the directory. Now I want to open the first file of the directory (the first item in the list). How could I do that? Thanks in advance.
- Code: Select all
#here I create a list
import os
L = os.listdir("data_small")
file_object = open("data_small/L[1]","r") #<--------- here needs change
#some code goes here
file_object.close()
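A minimal sketch of the fix, assuming the entries in data_small are regular files: index into the list and join the name with the directory, instead of putting the variable inside the string literal.

import os

L = os.listdir("data_small")                   # names only, not full paths
first_path = os.path.join("data_small", L[0])  # L[0] is the first file in the listing
file_object = open(first_path, "r")
#some code goes here
file_object.close()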
|
http://www.python-forum.org/viewtopic.php?f=6&t=2662&p=3154
|
CC-MAIN-2017-34
|
en
|
refinedweb
|
Last month we announced the Developer Preview of Windows Azure Active Directory (Windows Azure AD) which kicked off the process of opening up Windows Azure Active Directory to developers outside of Microsoft. You can read more about that initial release here.
Today we are excited to introduce the Windows Azure Authentication Library (referred to as AAL in our documentation), a new capability in the Developer Preview which gives .NET developers a fast and easy way to take advantage of Windows Azure AD in additional, high value scenarios, including the ability to secure access to your application’s APIs and the ability to expose your service APIs for use in other native client or service based applications.
The AAL Developer Preview offers an early look into our thinking in the native client and API protection space. It consists of a set of NuGet packages containing the library bits, a set of samples which will work right out of the box against pre-provisioned tenants, and essential documentation to get started.
Before we start, I want to note that developers can of course write directly to the standards based protocols we support in Windows Azure AD (WS-Fed and OAuth today and more to come). That is a fully supported approach. The library is another option we are making available for developers who are looking for a faster & simpler way to get started using the service.
In the rest of the post we will describe in more detail what’s in the preview, how you can get involved and what the future might look like.
What’s in the Developer Preview of the Windows Azure Authentication Library
The library takes the form of a .NET assembly, distributed via NuGet package. As such, you can add it to your project directly from Visual Studio (you can find more information about NuGet here).
AAL contains features for both .NET client applications and services. On the client, the library enables you to programmatically:
- Prompt the user for credentials via a browser pop-up and obtain a token for calling a service
- Obtain a token by supplying the user's credentials directly
- Leverage service principal credentials for obtaining tokens for server to server service calls
The first two features can be used for securing solutions such as WPF applications or even console apps. The third feature can be used for classic server to server integration.
The Windows Azure Authentication Library gives you access to the feature sets from both Windows Azure AD Access Control namespaces and Directory tenants. All of those features are offered through a simple programming model. Your Windows Azure AD tenant already knows about many aspects of your scenario: the service you want to call, the identity providers that the service trusts, the keys that Windows Azure AD will use for signing tokens, and so on. AAL leverages that knowledge, saving you from having to deal with low level configurations and protocol details. For more details on how the library operates, please refer to Vittorio Bertocci’s post here.
To give you an idea of the amount of code that the Windows Azure Authentication Library can save you, we’ve updated the Graph API sample from the Windows Azure AD Developer Preview to use the library to obtain the token needed to invoke the Graph. The original release of the sample contained custom code for that task, which accounted for about 700 lines of code. Thanks to the Windows Azure Authentication Library, those 700 lines are now replaced by 7 lines of calls into the library.
On the service side, the library offers you the ability to validate incoming tokens and return the identity of the caller in form of ClaimsPrincipal, consistent with the behavior of the rest of our development platform.
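As a rough illustration of the shape of the client-side programming model (the class and method names below are assumptions made for this sketch, not necessarily the preview's exact API surface), acquiring a token and attaching it to a service call might look like this:

// Sketch only: AuthenticationContext/AcquireToken are illustrative names for this example.
var authContext = new AuthenticationContext("https://mytenant.accesscontrol.windows.net");
var token = authContext.AcquireToken("https://myservice.example.org/"); // prompts the user or uses stored credentials

var client = new System.Net.Http.HttpClient();
client.DefaultRequestHeaders.Add("Authorization", "Bearer " + token.AccessToken);
var response = client.GetAsync("https://myservice.example.org/api/values").Result;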
Together with the library, we are releasing a set of samples which demonstrate the main scenarios you can implement with the Windows Azure Authentication Library. The samples are all available as individual downloads on the MSDN Code via the following links:
- Native Application to REST Service – Authentication via Browser Popup
- Native Application to REST Service – Authentication via User Credentials
- Server to Server Authentication
- Windows Azure AD People Picker – Updated with AAL
To make it easy to try these samples, all are configured to work against pre-provisioned tenants. They are complemented by comprehensive readme documents, which detail how you can reconfigure Visual Studio solutions to take advantage of your own Directory tenants and Windows Azure AD Access Control namespaces.
If you visit the Windows Azure Active Directory node on the MSDN documentation, you will find that it has been augmented with essential documentation on AAL.
What You Can Expect Going Forward
As we noted earlier, the Developer Preview of AAL offers an early look into our thinking in the native client and API protection space. To ensure you have the opportunity to explore and experiment with native apps and API scenarios at an early stage of development, we are making the Developer Preview available to you now. You can provide feedback and share your thoughts with us by using the Windows Azure AD MSDN Forums. Of course, because it is a preview some things will change moving forward. Here are some of the changes we have planned:
More Platforms
The Developer Preview targets .NET applications, however we know that there are many more platforms that could benefit from these libraries.
For client applications, we are working to develop platform-specific versions of AAL for WinRT, iOS, and Android. We may add others as well. For service side capabilities, we are looking to add support for more languages. If you have feedback on which platforms you would like to see first, this is the right time to let us know!
Convergence of Access Control Namespace and Directory Tenant Capabilities
As detailed in the first Developer Preview announcement, as of today there are some differences between Access Control namespaces and Directory tenants. The programming model is consistent across tenant types: you don’t need to change your code to account for it. However, you do get different capabilities depending on the type of tenant. Moving forward, you will see those differences progressively disappear.
Library Refactoring
The assembly released for the Developer Preview is built around a native core. The reason for its current architecture is that it shares some code with other libraries we are using for adding claims based identity capabilities to some of our products. The presence of the native core creates some constraints on your use of the Developer Preview of the library in your .NET applications. For example, the “bitness” of your target architecture (x86 or x64) is a relevant factor when deciding which version of the library should be used. Please refer to the release notes in the samples for a detailed list of the known limitations. Future releases of the Windows Azure Authentication Library for .NET will no longer contain the native core.
Furthermore, in the Developer Preview, AAL contains both client- and service-side features. Moving forward, the library will contain only client capabilities. Service-side capabilities such as token validation, handling of OAuth2 authorization flows and similar features will be delivered as individual extensions to Windows Identity Foundation, continuing the work we began with the WIF extensions for OAuth2.
Windows Azure Authentication Library and Open Source
The simplicity of the Windows Azure Authentication Library programming model in the Developer Preview also means that advanced users might not be able to tweak things to the degree they want. To address that, we are planning to release future drops of AAL under an open source license. Developers will be able to fork the code, change things to fit their needs and, if they so choose, contribute their improvements back to the mainline code.
The Developer Preview of AAL is another step toward equipping you with the tools you need to fully take advantage of Windows Azure AD. Of course we are only getting started, and have a lot of work left to do. We hope that you’ll download the library samples and NuGets, give them a spin and let us know what you think!
Finally, thanks to everyone who has provided feedback so far! We really appreciate the time you’ve taken and the depth of feedback you’ve provided. It’s crucial for us to assure we are evolving Windows Azure AD in the right direction!
- By Alex Simons, Director of Program Management, Active Directory Division
|
https://azure.microsoft.com/ko-kr/blog/introducing-a-new-capability-in-the-windows-azure-ad-developer-preview-the-windows-azure-authentication-library/
|
CC-MAIN-2017-34
|
en
|
refinedweb
|
Lead Maintainer: Trevor Livingston
swaggerize-express is a design-driven approach to building RESTful apis with Swagger and Express.
swaggerize-express provides the following features:
See also:
There are already a number of modules that help build RESTful APIs for node with swagger. However, these modules tend to focus on building the documentation or specification as a side effect of writing the application business logic.
swaggerize-express begins with the swagger document first. This facilitates writing APIs that are easier to design, review, and test.
This guide will let you go from an api.json to a service project in no time flat.
First install generator-swaggerize (and yo if you haven't already):

$ npm install -g yo
$ npm install -g generator-swaggerize
Now run the generator.
$ mkdir petstore && cd $_
$ yo swaggerize
Follow the prompts (note: make sure to choose express as your framework choice).
When asked for a swagger document, you can try this one:
You now have a working api and can use something like Swagger UI to explore it.
var swaggerize = require('swaggerize-express');

app.use(swaggerize({
    api: require('./api.json'),
    docspath: '/api-docs',
    handlers: './handlers'
}));
Options:
api - a valid Swagger 2.0 document.
docspath - the path to expose api docs for swagger-ui, etc. Defaults to /.
handlers - either a directory structure for route handlers or a premade object (see Handlers Object below).
express - express settings overrides.
After using this middleware, a new property will be available on the app called swagger, containing the following properties:
api - the api document.
routes - the route definitions based on the api document.
Example:
var http = require('http');
var express = require('express');
var swaggerize = require('swaggerize-express');

var app = express();

var server = http.createServer(app);

app.use(swaggerize({
    api: require('./api.json'),
    docspath: '/api-docs',
    handlers: './handlers'
}));

server.listen(8000, 'localhost', function () {
    app.swagger.api.host = server.address().address + ':' + server.address().port;
});
Api path values will be prefixed with the swagger document's basePath value.
The options.handlers option specifies a directory to scan for handlers. These handlers are bound to the api paths defined in the swagger document.
handlers
  |--foo
  |  |--bar.js
  |--foo.js
  |--baz.js
Will route as:
foo.js => /foo
foo/bar.js => /foo/bar
baz.js => /baz
The file and directory names in the handlers directory can also represent path parameters.
For example, to represent the path /users/{id}:

handlers
  |--users
  |  |--{id}.js
This works with directory names as well:
handlers
  |--users
  |  |--{id}.js
  |  |--{id}
  |  |  |--foo.js

To represent /users/{id}/foo.
Each provided javascript file should export an object containing functions with HTTP verbs as keys.
Example:
module.exports = {
    get: function (req, res) { ... },
    put: function (req, res) { ... },
    ...
};
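As a concrete (hypothetical) example, a handler file for the /users/{id} path above might read the path parameter from Express's req.params:

// handlers/users/{id}.js
module.exports = {
    get: function (req, res) {
        // Express populates req.params.id from the {id} segment of the route.
        res.json({ id: req.params.id });
    }
};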
Handlers can also specify middleware chains by providing an array of handler functions under the verb:
module.exports = {
    get: [
        function one(req, res, next) { ... },
        function two(req, res, next) { ... },
        function handler(req, res) { ... }
    ],
    ...
};
The directory generation will yield this object, but it can be provided directly as options.handlers.
Note that if you are programmatically constructing a handlers object this way, you must namespace HTTP verbs with $ to avoid conflicts with path names. These keys should also be lowercase.
Example:
{
    'foo': {
        '$get': function (req, res) { ... },
        'bar': {
            '$get': function (req, res) { ... },
            '$post': function (req, res) { ... }
        }
    },
    ...
}
Handler keys in files do not have to be namespaced in this way.
If a security definition exists for a path in the swagger document, and an appropriate authorize function exists (defined using x-authorize in the securityDefinitions as per swaggerize-routes), then it will be used as middleware for that path.
In addition, a requiredScopes property will be injected onto the request object to check against.
For example:
//x-authorize: auth_oauth.js
module.exports = function authorize(req, res, next) {
    // Verify the caller's token/scopes against req.requiredScopes, then continue or fail the request.
    validate(req, req.requiredScopes, function (error) {
        next(error);
    });
};
|
https://www.npmjs.com/package/swaggerize-express
|
CC-MAIN-2017-34
|
en
|
refinedweb
|
First thing, rcov
compare controller size, model size, look for largest files
then wc -l on app/models
Biggest files are most important and/or biggest messes
Observer functionality with any kind of external resource go in sweepers often
observe ActiveRecord::Base
uber-sweeper
weird
not necessarily bad, certainly unusual
before_create - initializes magic number for global config - needs description, and probably relocation - no intention revealed - should be described for what it is
assigning id directly -> "major leaky abstraction"
Chad dissed some of his own code from Rails Recipes - live and learn
set associations through the association code - the value of semantic code
use Symbol#to_proc
"never make an action have more than five lines."
"whenever it's more than five lines, it's bad."
Eric Evans - Domain-Driven Design - huge recommendation
(And Kent Beck Smalltalk Best Practice Patterns)
def fullname, def reversed_fullname, no, set a :reverse keyword or even a reverse method on the attribute itself (not even the class)
Don't you think rules like "Never make X have more than N lines" are just arbitrary and silly? Why not six or seven or even four?
Maybe they should at least pick a number that allows the incredibly common create action with XML support to exist without being considered "bad".
It's notes, dude. It's not meant as an endorsement. It's just so I remember what was said. And yes, of course a hard and fast rule is silly. These aren't hard and fast rules. They're guidelines. You're taking this way too seriously.
def fullname, def reversed_fullname, no, set a :reverse keyword or even a reverse method on the attribute itself (not even the class)
Giles, can you expand on this any? I presume for the sake of the example, it does something sensible like like person.fullname is 'Jamie Macey' and person.reversed_fullname is 'Macey Jamie', how would a :reverse keyword help with that?
This comment has been removed by the author.
|
http://gilesbowkett.blogspot.com/2007/08/rails-edge-notes-chad-fowler-marcel.html?showComment=1188159540000
|
CC-MAIN-2017-34
|
en
|
refinedweb
|
[following up on some compile options]

hi there all... :)

> A great idea. Are you familiar with the automake / autoconf tools?
> If yes, why don't you go ahead and create the necessary files?
> This upgrade would greatly increase the usability of sword.

hmmm, just had a quick look (I'm not familiar with autoconf, etc), and unfortunately I don't have time to get to know it before 1.5.2, but perhaps I'll make myself familiar with it? unless someone else already understands how to get it all up and running in 5 seconds?

but a quick hack that may work (not sure?): I added the lines:

UNAME_ARCH = `uname -m`
UNAME_CPU = ${UNAME_ARCH}

under the "#compiler" section in the Makefile.cfg and changed the intel check from -m486 to be:

ifeq ($(system),intel)
CFLAGS += -mcpu=${UNAME_CPU} -march=${UNAME_ARCH}
endif

NOW, I've only tested this under Mandrake 7.2, so have no idea what this shall do to other systems... ;) but that's my 2 cents of effort with the amount of time I have free this week...

what it's doing is this: the string "`uname -m`" is being passed to the command line, and so each time gcc is being called, uname is being called to figure out the arch of the machine...

Anyway, warned you it was a hack... :)

nic... :)

ps: thanks for that info on WINE... just thought it might be easier to combine all frontend efforts into a single project, and cut down duplicated effort... :)

--
"Morality may keep you out of jail, but it takes the blood of Christ to keep you out of Hell." -- Charles Spurgeon
-------------------------------------------------
Interested in finding out more about God, Jesus, Christianity and "the meaning of life"? Visit
-------------------------------------------------
|
http://www.crosswire.org/pipermail/sword-devel/2001-June/012130.html
|
CC-MAIN-2014-49
|
en
|
refinedweb
|
P.S., RAID is a waste of your goddamned time and money. Is your personal computer a high-availability server with hot-swappable drives? No? Then you don't need RAID, you just need backups.
100% agree. I've had too many close calls with broken RAID implementations to even consider bothering with it anymore. I just back up using a similar rsync command to what you proposed. I'm planning to buy an external 750GB hard drive specifically for backups when they get just a bit cheaper.
RAID = Taking the multiple failure rate of JBOD and giving you one chance to recover from a fuck up instead of zero. (Note: See maximum universe irony above).
The problem is, you can't back it all up. Tape drives are too small, and you need another RAID of disks to hold all the data. There exist boxes that let you combine hard drives into a "portable" box, but at this point you might as well replace "hard drives" with "computers" in your post above.
The best solution is to figure out which data is (A) Critical and static (i.e. archivable, like photos or for me a Genealogy project) or (B) Critical and dynamic (Email, Recent work/CVS). DVD-R's though a pain in the ass, are decent for A as it might be a once a year activity to burn three disks and store them in different locations (if it's 100GB or less, stop thinking about tape and get thine ass to a burner). The dynamic stuff could be taken offsite, but really it's likely not that huge and an offsite mirror would do the trick, if you can figure out what's *really* important.
Then there's (C) which is all that crap on your hard drive (Farscape, Bleach, tentacle porn, whatever your poison) which should be classified as "shit that I can download again if my house burned down" and not stressed over.
Not enough people do A (I do), almost nobody does (B), and people who fret about the loss of (C) need to get on with their lives.
Forget classifying data. It's too error prone.
Making backups must be a brain-dead easy activity for two reasons:
That means burning "critical static data" to DVDs is fine as far as that goes, but it should only be a fall-back defence; the same data should always be included along with every "critical dynamic data" backup because You Will Forget To Include Something Important. If you want to exclude "shit I can download again", more power to ya, but your backups should still include everything by default, and only specifically exclude the directories where that stuff goes.
It also means: forget tapes. Way too slow and way too little capacity, which means way too fucking painful; also, way too much operator involvement. In fact, forget backing up to dedicated media like DVDs and tapes. It's passè. It used to be worthwhile when the backup media had much more capacity at much lower per-megabyte prices than hard drives. But the per-megabyte price of hard drives is now so ridiculously low and their capacity relative to backup media so large that it's far easier to just use extra hard drives as backup media.
Tapes useless? I'd have to say that's highly dependant on what you are doing. For the home user then yes, the home user generally only has data of sentimental or low economic value. For commercial enterprises or research organizations - especially larger ones with high value data they're a much better choice than rotating media. The initial investment is higher but the media price per GB is very comparable (a 300GB SDLT is $70 (and as low as $45 in bulk)) and the archival storage life is 30+ years.
For a home user, yeah, tapes are useless.
I understood jwz's advice to be aimed at home users and that's what I talked about.
Organisations have the resources to pay people for making backups, which means laziness is not an issue and operator error can be negated by rigorous procedure and routine tests. Obviously the rules differ when the premises differ.
But backups are increasingly important for all of us, and conventional wisdom has not caught up with the change in premises for home users. It's vital that we get away from that one-track thinking.
Also, most "enterprises" and "organizations" can afford to have a robot do the boring work of changing tapes.
Yeah. Though there's lots more tedious work than just changing tapes:
(Missing the last point in particular is a classic way to lose - but if you use tapes and only own a single machine, how are you going to do that? In contrast, if you copy everything to a second hard disk, then a test restore is as simple as booting from it.)
Dear readers, do not listen to this person.
Especially not at 5am on a sleepless night, when I ramble on like a senile goat.
Dare I ask why?
I have to disagree with this. On at least 3 occasions now I've had a drive fail and lost absolutely 0 data because of it, thanks to RAID. This was on a machine running FreeBSD and serving my web sites, email and DNS, so maybe it stretches you definition of 'not a server', but the fact that the machine stayed up and running until I could deal with the failed drive later that day is a godsend for me.
But of course I *also* have backups for that machine as well.
You also would have lost absolutely 0 data if you used that second drive as a backup. And the time it takes to rebuild the RAID array is the same or longer as the time it takes to restore the backup. And you don't have both drives on the same power supply. And you don't have the option of the OS deciding to take a shit on both drives' data simultaneously.
I say again, RAID is a waste of your goddamned time and money.
But hey, if you want to spend twice as much money on drives for no appreciable benefit, and also spend a huge amount of time debugging your RAID config, and re-learning which arcane command you have to use to rebuild the array when you need to remember that again two years from now, more power to you.
RAID protects the data you've added or amended since you last ran your backup. Like everything since 5am if you use your cronjob example.
"OMG,
threefour drives is so expensive! That sounds like a hassle!"
Unless you lose the drive exactly after the backup, you'll lose data, at least on a mail server with any real volume. And unlike restoring from a backup, you can still use the machine while rebuilding the array. More important, you're still up and running until you do replace the failed drive.
Yes, I do have to relearn how to rebuild the array every time I lose a drive, but that takes less time that it does to go to the store and by a replacement. And that's only because I'm using software raid; every hardware raid I've seen is very easy to use.
Finally, you're recommending a total of 3 drives; going with RAID1 means one more drive, so hardly twice the money. (Remember I said raid is not a replacement for backups.)
I'm using two 500GB drives in RAID 1 on a Windows box. The RAID 1 is provided by the onboard SATA component of the motherboard, so I spent nothing extra to gain RAID functionality.
Given that I have 3-400GB of data, I need a second 500GB drive for backing my data up whether I use RAID or not, and RAID simplifies the process. Setting it up was not at all complicated, I just had to read the motherboard manual and then use the semi-graphical user interface that you can access during the boot process.
I think that many home users are in the same place I am, which is to find themselves in an option of 1) spending a brief amount of time setting up a RAID 1 array which then automatically provides at least some redundancy as long as you leave it that way or 2) telling themselves they'll make regularly backups and then inevitably forgetting or slacking off. And I just don't get the "spend twice as much money" thing, given that RAID cards are cheap, or RAID is built onto motherboards these days. All you're really paying for is two hard drives, and isn't that what it would cost to do backups your way anyway?
You have to do backups anyway, so all RAID does is add one drive (or add two drives, if your backup disk is also RAIDed.)
You may think dicking around with BIOS settings is "not at all complicated" but I characterize it somewhat differently, more along the lines of, "please stab me in the eyes with chopsticks instead."
I'm done talking about RAID now. All of you can stop.
Um.. if RAID is your backup, I don't see how it just "adds one drive".
If you use RAID, YOU HAVE TO DO REAL BACKUPS ANYWAY, OR YOU WILL GET SCREWED.
RAID also fails to protect you from the "Oops my stupid self/cat/parent/interloper accidentally deleted a lot of data", because RAID copies your mistakes too.
All my photos are on redundant Firewire drives, and I'm reaching the point where I'm going to have to swap them all up a larger capacity. I foresee this as some kind of infinite game of leapfrog.
I did a survey paper on holographic data storage way back in college -- where the hell is my gazillobyte holographic data cube?
I have an Enterprise 450 server with about a dozen hard drives. Is a ZFS RAID array okay?
No. ZFS uses one drive worth of parity. Your failure rate on 2 of 12 drives is going to be worse than a single drive.
It's using raidz2, so actually three would have to fail. Granted, the purpose isn't really to act as a place to store backups. It's acting as a fileserver--I just happen to be using it as an additional place to store backups; hence, to lose data, three hard drives plus the hard drives on this machine would have to fail in fairly quick succession.
The universe tends toward maximum irony. Don't push it.
Somewhere along the line I can see that being included in the set of most memorable quotes of all time.
The irony is, it won't.
It already has.
"The perversity of the Universe tends towards a maximum."
As a counter-PSA, I'll point out that rsync doesn't behave entirely correctly on HFS. (Yes, even with Apple's improved version. It still screws up some random bits of metadata in the backup.) If this matters to you, you can either use ditto and live with the fact that it isn't incremental, or shell out $30 for SuperDuper, which is totally worth it.
What does it screw up?
(Whoa, stupid completion tricks on the subject. I just typed "metadata" there and Safari filled in the rest.)
rsync screws up BSD flags, certain types of utime data, and ACLs. Here is a much more exhaustive treatment of the topic.
Ok, so, nothing that even remotely matters?
Not unless you mind the incorrect timestamps, basically. Also, I'm really uncomfortable with how rsync handles resource forks - it seems to work most of the time, but it also has a tendency to spit out unsettling error messages.
Of course, if you don't have anything that uses resource forks (and unless you've still got pre-OS X stuff floating around, you probably don't), this is a non-issue as well.
The only thing that I'm aware of that uses resource forks in a non-disposable way at all any more is Finder, with text-clippings and web-location files. (Curse you, Finder. That's totally unnecessary.) I suppose it's possible that some application bundles have stuff inside resource forks, but I don't know of any. I guess the only way to find out would be to try and restore from a backup without copying the resources and see what breaks. I strongly suspect the answer would be "almost nothing".
import os
for path, dirnames, filenames in os.walk('/'):
    for f in filenames:
        try:
            if os.stat(os.path.join(path, f, '..namedfork/rsrc')).st_size > 0:
                print os.path.join(path, f)
        except:
            pass
Note that I said "in a non-disposable way".
Lots of things still write resource forks, for example, Photoshop and Illustrator write resource forks into every saved JPG and PDF. But as far as I can tell, there's nothing in those forks that matters.
It looks like all fonts are in resource forks (why?). But aside from that resource forks do seem to be less common than they used to be.
yes, fonts are in resource forks, and I could never figure out why either ...
Well, fonts targeted at MacOS are, probably because that gives them compatibility with older OS9 apps, but OSX can deal with .ttf and .otf files (and some others) just fine, in fact most of the non-OS bundled fonts I have installed are non-resource forked.
There's probably a way to bulk de-rez them if you're never going to use OS9.
the "why" is probably just historical inertia.
fonts on the old macos could be added to any file. so if you used a funky font in a Word document, you could make sure it looked&printed ok on any target mac by simply attaching the font to the document. as Resources, they could travel invisibly with the document's user-data.
nowadays, people have forgotten about the (to my mind, lovely) idea of fonts not being fixed to a particular os installation/config. and i'd be surprised if macosx respected a document's embedded fonts and used them for its display.
The other one I'd watch out for is applications which store resources in resource forks. Checking my own apps, I noticed at least a few which look as though they'd turn into pumpkins if their resource forks were removed.
Lots of Mac fonts, too.
I'm currently doing backups to a disk attached to a NSLU2 over SSH and a wireless network (saves me the hassle of plugging the laptop into the network each night) via an rsync process at 4 AM. Three issues:
1. All HFS+ metadata is lost since the remote disk is ext3 on Linux.
2. For some weird reason, rsync can't set timestamps on symlinks.
3. The backup job appears to pull a ton of stuff into disk cache, making system performance somewhat sluggish in the mornings.
Maybe I should mount as SSHFS instead of using SSH and let OS X create those ._ fork files.
rsync 3 is in active development and just recently became dependable as a replacement to OS X's built-in version.
:pserver:cvs@pserver.samba.org:/cvsroot
no acls (no loss) but excellent support for forks and metadata. also fixes OS X's large directory crashes and memory leaks.
the only hangs i've seen have been when the target disk is full.
been using this for weeks for nightly backups of several large servers and it's been fine.
Thank you for this. You just resolved a very large number of headaches.
rsync 3 command line i use:
$ rsync --verbose --archive --xattrs --delete --stats --delete-excluded --exclude-from=excludes-system / /Volumes/backup 2>&1 >> log-backup-laptop
excludes-system looks like this:
/.vol/*
/Network/*
/Volumes/*
/automount/*
/cores/*
/dev/*
/afs/*
/private/tmp/*
/private/var/launchd/*
/private/var/imap/socket/*
/private/var/run/*
/private/var/servermgrd/*
/private/var/spool/postfix/*
/private/var/tmp/*
/private/var/vm/*
/proc/*
/tmp/*
Yep. That article put the fear into me wrt Mac backups. I'm currently using psync to an external drive, but after following the links in these comments, I'll be checking up on psync and going through the other possible tools with Backup Bouncer, a set of torture tests for Mac backup tools which some helpful soul has put together.
just use rsyncX, plain rsync patched to handle the resource forks. add the --eahfs flag (rsyncX only) to turn on the hfs special stuff. the install of rsyncX gives you a useless gui and a new rsync in /usr/local/bin.
There's no need for rsyncX; the version of rsync that Apple ships in /usr/bin/ already supports the -E option for preserving resource forks (--extended-attributes).
Strangely, with the laptops I have, the display is way more prone to outright fail than the hard drive. I only wish there were a way to matter-compile a back-up of one of those.
I don't do this: I have a different backup policy.
I work on laptops.
When I buy a new laptop, I probably sell the last but one. The old one then becomes the hot backup machine. I never have less than two similarly-configured work machines.
(If the new laptop's drive is much larger than the old laptop, consider buying the old lappie a new hard disk.)
Use rsync to backup data from the current work machine to the old one every week. Meanwhile, every day (or more often) use rsync to backup critical data -- work files, email archives, not random MP3s -- onto an 8Gb thumb drive that lives on a keyring. Every couple of days, use rsync to backup the critical data to a server 500 miles away.
When you boil it down, this is essentially the same level of cover as jwz's spare drive policy; it just uses a different kind of drive enclosure (the laptop).
Oh, as for Windows ... use rsync. (You did install Cygwin, right?)
Horrible confession time: my most recent laptop is running vista. And it's staying that way for, oh, another month or so (until Ubuntu Gusty Gibbon is stable and I have a spare few days to spend getting the wifi drivers to work). Again: rsync is available for Windows.
Alternatively, if you don't want to fight with Cygwin (I mean, isn't that the Windows equivalent of "recompile your kernel"?), you can try cwrsync.
If you've upgraded to an Intel based Mac from PPC, you can't rsync the entire drive. While applications on Mac OS X can be universal binaries, the system bits are platform specific.
However, you could sync your home folder and applications.
If I recall correctly, this limitation should be resolved when Leopard ships.
of course when Leopard ships you can just use Apple's fancy-pants rsync in the form of Time Machine.
vista has robocopy.exe
If, for some reason, you don't feel like futzing with rsync, the built-in XCOPY command in DOS isn't too bad, really. XCOPY /C/R/I/K/E/Y/D/H C:\ F:\Backups\CDrive will copy your entire C: drive, maintaining permissions and continuing after any error, only copying files whose dates are more recent than the ones already on the backup. This is inferior to rsync in a bunch of ways -- no --delete option to keep your backups clean, and the date-based system is pretty simplistic -- but superior in that you can remember the switches (Steve Irwin warning D.H. Lawrence not to pet the crocodile, perhaps? "Crikey, D.H.!!!") and it's already there and functional on every MS system since DOS 3.
Posted from an Ubuntu machine. I only know MS-DOS because I've been using it for the last twenty-odd years, but I can give up anytime, you know?
its nice idea!
Go buy two more drives of the same size or larger. If the drive in your computer is SATA2, get SATA2. If it's a 2.5" laptop drive, get two of those. Brand doesn't matter, but physical size and connectors should match.
Well, brand matters sometimes, as the typical user won't find out that models with seemingly identical sizes (e.g. 400 GB) but of different brands have different capacities (off by some bytes). That is no problem for rsync, but if you back up your disk with dd (or any other partition copy tool) so you have a replacement to plug in, you should have the same capacity (or more).
Also, raid: I use a raid 1 setup on my main workstation (WinXP) and on a server (debian). Pro: backup disk is always up to date, reads are double as fast. Cons: If your disk does not die 'naturally', but from any physical cause (PC falls down, electrical problems), probably both disks are gone.
Any chance you tell us what prompted you to write this post?
Which is exactly why my advice does not include the use of "dd".
I say again: RAID is a waste of your goddamned time and money.
(Option++;): Buy a DDS-4 tape drive, a copy of Retrospect and a bunch of DDS-4 tapes. Divide the tapes in half -- one set is the current backup set the other is the "keep at a friend's house" backup set. Swap between the two.
I backup four systems this way (two laptops, two desktops) and have saved my ass many, many times.
This fails miserably because the price point for the tape drive and tapes is far higher than that of the two hard drives and their external enclosures. Really.
The idea is to do this sanely and cheaply when confronted with the problem as a home user.
It doesn't "fail miserably" at all. It simply costs money. How much are daily, incremental backups and an off-site copy worth to you?
I have four systems that need daily, incremental backups, three mac and one XP. Sure, I could buy EIGHT hard drives and enclosures and do all sorts of fiddly moving drives around, or I could invest a tape drive, tapes and software and not have to fool with it.
I could also save a LOT of money by not using Macs and just using bsd on generic PCs. I wouldn't be nearly as productive, but I'd be saving money!
There's one prerequisite you're missing, and that's the home user bit. You are not the typical home user because you have a myriad of machines under your care. The details in this case are trivial. I.e., I don't care if you're some supernerd with all four machines sitting on your desk, nor do I care if you're just some parental figure doing tech support for your children.
My claim is that tapes and tape drives are expensive compared to hard drives and external hard drive enclosures in a typical home user setting. The best indicators of that are to consider the mean lifetime of a tape, the price per gigabyte of storage that a tape affords you, the assurance of data integrity (i.e., how do you know that your data won't rot before you end up needing it?), and the amount of human intervention in the process of performing a backup and recovery.
I'll leave the proof of that as an exercise to you because I don't care to pursue this further than the riling-up-shit-on-the-internets-to-blow-off-steam stage.
(Also, you suck for using conjunctive elimination. Just FYI.)
Tapes, like RAID, are a waste of your goddamned time and money.
When you back up to a drive, and that drive is sitting on your desk and mounted, you know when the drive has gone bad because your computer starts yelling errors at you. When you back up to a tape or a DVD, that media is just sitting on your shelf silently rotting and you won't know it's gone bad until that horrible day in the future when you try to read it.
Never, ever, ever back up to anything except a connected, live file system.
I second this.
My experience recovering data from tapes over the past ~15 years is good enough that I'm unlikely to change based on your fear that all my tapes are "silently rotting".
So no, they haven't been a waste of my time or money.
>except a connected, live file system.
Which can be erased in real time along with all your data or destroyed in real time during an earthquake/fire/flood/whatever.
Agreed. If you care about your data you want offsite and you want archival quality. Tapes work and to be perfectly honest, with all the tapes we have at work, I've never heard of one of them getting bit rot. We have a few petabytes of tape on line (two silos controlled by an old J90) at all times and if we've ever had a user lose data because of a bad tape I've not heard about it in the past 12 years.
J90? You guys must not pay your own power bills.
So no, they haven't been a waste of my time or money.
According to the subsequent followup comments by the original poster, it's only a waste of "goddamned" time and money. If you've got regular old non-damned time and money, go for it!
And yes, with the cost of drives, tapes can suck it.
To drive home one of Jamie's points, ad nauseam, RAID sucks. It sucks, among all the other ways, for the same reason that tapes suck. The data can go bad behind the scenes, and you will know nothing about it until it's too late.
A story: A friend of mine decided that his new work machine was going to have RAID to protect his data. RAID-1 mirroring. Our work didn't do backups, so he thought RAID was the next best thing. Paid good money for the system.
Then, a few months later, he went to open a very important file he hadn't used in a while and got a disk error. Oh noes! He runs disk check, and finds that, somehow, his file system has become corrupted, and there are now lots of files that are corrupt.
But how can this happen he thinks? I had RAID, why isn't the second disk OK? Cue sickening realisation...
Because, of course, RAID works at the block level. If the filesystem gets screwed up on one disk, the RAID sub-system faithfully reproduces that filesystem corruption on the second disk. Now you have two identically corrupt disks. RAID, however, has done its job correctly - all it cares about is that the blocks on disk 1 are now on disk 2. It doesn't care about your *data*. That is your job.
RAID is an availability system, not a backup system. Don't buy a car if you need a boat.
Ob-tapes-suck-quip: I fondly remember that the failure rate of DAT tapes, back when I used them for UNIX backups in the 90s, was about one in two. Maybe modern tapes are better, but most people wouldn't know, because almost no-one ever tries regular trial restores to make sure the damn backups actually, you know, worked. Tapes suck.
I had a client I had to explain this to, who was rather cheap and didn't understand the concept. Two mirrors were kept, one live and one off-site, swapped once every week. The solution was twofold: a bootable backup that could only be as stale as a week, and quicker read times (at the cost of slower write times, which for this machine was not an issue at all).
He insisted I show him exactly what needed to be done to the SCSI controller to stop the mirror and boot off of the drive sitting in the swap bay. He said this was just in case I was on vacation or unable to come into the office and he had to do it himself during an emergency.
Truth be told, when he accidentally hacked off all of his client's web site content, instead of calling me in to deal with recovery (his excuse for being a cheap bastard was that he called me twice and got voice mail), he attempted to boot off of the live backup himself. The only issue was that the live backup drive was just a copy of the same missing sites on the boot drive. Being the smart person he thought he was, he ended up futzing with pretty much every changeable option the SCSI controller card had to offer, since from his observations he was still booting off of the "bad" drive and not the "good" live backup.
In the end I got paid double time to fix his mess over the weekend.
I've always used RAID systems, normally RAID 5 arrays in the belief that the data was secure - not necessarily so.
3 months ago I had a server fail, a server backed up using DDS-4 tapes, only the tapes or the drive or something was corrupted and the data couldn't be read.
Luckily the data was recovered by a data recovery company, but for the privilege I was relieved of £1200.
In my experience all the backups in the world will not help you, unless you check that the data stored on them is intact, free from corruption, and stored away from the server in question.
is there a reason why rsync would be better than using carbon copy cloner?
Carbon Copy Cloner uses rsync. They are identical.
They're the same, under the hood. CCC (the new version of which, by the way, is excellent) is a shiny Cocoa wrapper around rsync. For scheduled jobs it seems to use its own ccc_helper background app instead of using cron, but other than that it's essentially a GUI way to do what's suggested here.
The `ditto` command can also be used to make one-off bootable backups, preserving the resource forks (for fonts, I guess, or any mac os classic applications you may be using if you're still running that ancient copy of quark?)
Other utilities like the carbon copy cloner can be used for this (useful for the 'third' backup drive option that you leave in your desk).
The 10.5 'time travelling' thing seems to just wander your filesystem looking for modified files and backing up previous revisions of the files, so at least things are looking up in the automated Mac OS X backup world(since this post is obviously inspired by a friend or relative's drive crashing?).
Personally, I use rdiff-backup instead of rsync so that I get incremental backups, and eventually I'll by rsyncing that backup offsite.
But yeah, what he said. :)
Semantics aside, how is rsync not ending up being incremental? If I use the switches indicated in the original post, it says "only copy the new shit" and "delete all the old shit I deleted". It doesn't have to do what I would normally think of as a Level 0 more than one time, when the drive is synchronized the first time.
Seriously asking, not just yanking your chain or playing. Am I missing something?
Is it just because rsync automatically looks at the entire list of files first before determining what has changed? Or is there some other thing?
Thanks
Because rdiff-backup actually keeps every incremental version. So if you run it 10 times, you actually have 10 different backups you can restore from.
Ah, okay, that is different. Thanks for the info.
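For anyone curious, the rdiff-backup flow looks roughly like this; a minimal sketch, assuming rdiff-backup is installed and the destination is a local directory (paths are just examples):

# Each run stores a current mirror plus reverse-diffs of earlier states.
rdiff-backup /Users/me /Volumes/Backup/me

# Restore a file as it existed 10 days ago.
rdiff-backup --restore-as-of 10D /Volumes/Backup/me/Documents/notes.txt /tmp/notes-10-days-ago.txt

# Optionally prune increments older than a month.
rdiff-backup --remove-older-than 1M --force /Volumes/Backup/me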
I do a complete backup at my office on to an external drive with SuperDuper! and then an rsync to StrongSpace of my Documents, Mail, Address Book, and Pictures. I've found this to be useful because not only do I have a completely off-site secondary storage, but I also can browse from a friend's house and find a photo or document I have on StrongSpace.
That looks like a lot more typing for zero benefit.
The main benefit is that if your crontab just runs "rsync -vaxE --delete --ignore-errors / /Volumes/Backup/" when your backup drive isn't mounted, you end up with a duplicate directory full of your user data at /Volumes/Backup. (This happened to me a few times.) It gets even stupider when you plug your Backup drive in and then you've got /Volumes/Backup (the directory) and /Volumes/Backup.1 (the new mount point for your backup drive).
A secondary benefit is that once you've got a launchd script in ~/Library/LaunchAgents/ everything is fire-and-forget. The backup script gets backed up along with everything else, which means that whenever I do a clean reinstall and restore the LaunchAgent gets restored along with everything else and my backups continue without further intervention.
Launchd also lets you control your jobs a little better. If I ever want to do a special backup (I'm leaving on a 10pm plane flight and I'll miss the 3AM sync) I can run the script early by running "launchctl start com.tongodeon.backup". Launchd isn't a huge thing of course. Mostly I just wanted to make sure I understood how it works, more or less.
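A minimal sketch of that kind of guard, assuming the backup volume mounts at /Volumes/Backup (names are illustrative):

#!/bin/sh
# Only back up when the backup volume is actually mounted, so rsync can't
# accidentally fill a bare /Volumes/Backup directory on the boot disk.
if ! mount | grep -q ' /Volumes/Backup '; then
    echo "Backup volume not mounted; skipping." >&2
    exit 1
fi
sudo rsync -vaxE --delete --ignore-errors / /Volumes/Backup/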
Does one no longer have to bless a Mac backup drive before it's bootable? Did that come in with Tiger?
A bit-by-bit copy of all the data on the drive will include the blessing. It's not in the partition table.
Even if the drive wasn't copied bit-by bit, all that matters is that the destination disk was created as a GUID partition table. All I had to do to upgrade the disk in my laptop was 1) create GUID partition on replacement drive with Disk Utility; 2) rsync the old drive to the new one; 3) swap them and reboot.
If the machine is a PPC, I believe blessing the replacement volume is still necessary.
StartupDisk will do the blessing. Just open System Prefs, pick Startup Disk, and click on your disk. Done.
I hereby invoke the jwz-fanboy-advice=baiting.
I've got an iBook with a tiny hard drive, which I've been cloning to an external drive using CarbonCopy Cloner whenever I remember and/or whenever Apple Software Updater runs. Not a lot of important data here, but every once in awhile I dump my documents folder onto a CD and put it away somewhere.
I remember hearing horror stories of a graphic artist somewhere who had impeccable backup technique, except that they were all sitting by his computer when his house burned down. Leetle paranoid after hearing of that.
Eventually I'll upgrade my desktop paperweight and have something running OSX, at which point I will come back here and follow the above advice.
Your house need not burn down to get fucked.
$20,000 reward for stolen memories
Basic rules of IT:
1. Information wants to be free.
2. It's not a matter of "if" but "when" it will fail.
3. People do what they can do, not what they should do.
My desktop system is hardware RAID1 to buy me time when the first drive does fail.
I back up to one of these weekly. "RAID-X" allows me to grow the volume size dynamically, just by popping in a larger drive at a time and letting the volume re-balance, without having to do a full backup and restore.
It handles CIFS (Supports AD auth), NFS (UDP only for now), AFP, FTP, Rsync, http, has built-in iTunes and Slimserver streaming and more..
Comes in a rack mount model too.
Just some thoughts:
1. "My desktop system is hardware RAID1 to buy me time when then the first drive does fail."
RAIDs tend to use the same disks. Actually, identical types, sometimes taken from the same production lot.
Exercise for the reader: go figure out the probability of them failing at similar times. Also take into account damage caused by over-current, etc.
2. ""RAID-X" which allows my to grow the volume size dynamically just by popping in a larger drive at a time and letting the volume re-balance without having to do a full backup and restore."
Do a disaster recovery exercise. Do it while your primary hard drive is OK (onto a new, spare, empty drive).
Better yet, do it after doing another backup onto some other computer/drive at a friend's house 50 miles away.
Couple of years a friend of mine got a RAID controller for his work PC, some disks and - before production use - tried out its balancing capabilities, result of a simulated hardware fail, drive exchange, etc.
It was not a cheap RAID solution, and the vendor claimed everything would be fault-tolerant and even hot-pluggable.
Net result: five trials, five complete data losses.
Option 3
When Leopard comes out, use Time Machine, the thing that's been making my weekends disappear for the past few months.
Time Machine looks cool and all.... but I'm supposed to trust a version 1.0 of a really-complex backup utility with all my precious bits?
I'm thinking I need to get a real backup scheme working BEFORE switching to Leopard and its time machine.
Time Machine is fairly useful if you're the everyday Joe who's got a desktop machine and thinks that getting back on your two feet after a system crash of sorts will require some time.
Running Time Machine on my MBP, the thing has a knack for starting a versioned backup without really informing you. If you have the Finder opened (I use Path Finder) you'll see a rotating sync animation where the eject button should be next to your backup volume. This tiny thing is to let you know that A) There's an automatic backup happening right now and B) Don't fuck with me.
If you ignore B by just simply not noticing that there's a backup in progress because you didn't have a look, put your machine to sleep, unplug all USB devices, stuff the machine in your bag, bike to a net cafe, then wake the thing up, you'll notice that...
- Finder is frozen.
- Path Finder vomits violently when you try to eject your backup drive's volume.
- The Time Machine preference pane becomes unresponsive.
- sudo umount -f /Volumes/backup doesn't do shit.
- You'll need to do a cold reboot to make your MBP functional again.
- And when you go home you'll have the unpleasant surprise that the backup volume on the external drive is completely toasted and unfixable.
Applause for Apple in creating Time Machine, a backup solution not for me.
Did I also mention the drive isn't bootable in any way? You must have an OS X Leopard disc handy to recover any (I mean all) of your data during a full HD crash, that is, after you've replaced the drive in your machine. Here's a link to the most useful/detailed information I've found on this new feature...
Most people seem to think that they don't need backups. It might not be productive to call them idiots, but masturbating on drugs isn't productive (except in the white sticky sense) either, and that's totally worth it too.
As someone else pointed out, SuperDuper really is a lovely bit of backup software for those allergic to doing things in the Terminal, and having a bootable backup is supremely useful as well as wise.
As an additional tip, paranoia (and with backups, every little helps) has me keeping my backup drives switched off whenever I'm not actually doing a backup. No one can wipe or mess up a drive remotely if it's not connected.
As a former computer store owner, I can safely say that virtually nobody backs up their data. I know this, because data recovery was my #2 source of income after virus removal. Then you get bizarre happenings, like the woman who bought a 250GB external drive, plugged it in, and thought it would 'magically' back up her data. Her primary hard disk failed, she brought it in all smiles expecting me to restore it from the backup USB drive. Boy, she was sure unhappy to find out that it was empty!
Personally, I use an online backup service for anything important. I just rsync directly to them via cron every morning at 5am, and it's done. I think it's something like $5.00 a month or so for a couple of gigs. Takes care of the "keep your backups off-site" problem, too.
-RS.
That's one of the things Maxtor's "OneTouch" line is meant to solve -- once you've plugged the drive in and configured it, you just push the button on front whenever you want a backup. You can also do scheduled backups too, of course.
I just ordered another of these from Fry's. 750 GB, $185.
Learned this the hard way. Can't agree enough.
You sir, are wise. I had a "maximum irony" episode back in february, where a software installation/config failure was followed by intermittent hardware failure followed by complete drive failure on my colo server followed by a stolen laptop.
Now, amazon s3 and rdiff-backup are my daily friends. I have learned my backup lesson many times before, but I had become lax. That's when the universe strikes.
I more or less do exactly this. I do use OSX's software RAID-1 on some file servers, but this is backed up still using this procedure. The software RAID is there merely to prevent losing a day's worth of data, which is decent value for your $100 SATA drive.
You always have to back up. And you always have to have an off-site backup. Good post.
Make sure to test booting off your external drive. MacBooks can only boot off firewire (sigh). The partition type of your external drive matters; the apple disk utility defaults to Apple Partition Map, which won't boot an intel mac. Make sure to pick GUID. This will boot an intel mac. And trust me, it really sucks thinking you have a bootable backup when you actually don't.
Intel Macs can boot from USB drives as well as FireWire (I've done this first-hand on my MacBook). The partition table type is still significant and does need to be GUID.
Is it tempting fate to have the "live" backup (i.e. not the my-house-burned-down external backup locked in some other location) be to a second internal drive? It seems to beg for a lightning strike or exploding power supply that would take out both.
Yup, that's sure one good way.
Regrettably, it'll do nothing for me when I discover that I didn't save the original when I cropped a photo too tight (or otherwise over-edited it) two months ago. ... that I know I had Frank in my address book last summer when I called him about a trip. And for all sorts of other situations where you need multiple instances of a file. (And I sure as H don't want to overwrite my CURRENT address book with one from 9 months ago to save one entry, and burn a couple dozen.)
Dunno why anybody in this business assumes HIS practices are right for everybody else, too.
I imagine we'll find ways that the OMG Time Machine!!! isn't so neat, either -- perhaps, for somebody who maintains multiple, large databases that have small tweaks once a day, and needs detailed rollbacks. Easy way to max out the 750GB external that "should easily" hold all the data. But we're getting pretty specialized here.
And your 3rd drive is a fine idea for off-site, even WITH TM.
Thanks for pointing out that Backup Matters, and that you don't need Leopard to do it. But I'm not sure that's what Time Machine is about.
You can limit what Time Machine hits, say just your home directory minus Music, Movies and Downloads. Then have rsync, CCC or Super Duper deal with the rest on its own time.
Sir, thank you for a fine post, mostly for the switches on rsync.
For someone who dwells in the city and knows Jake, I've got a good feeling we've either met at some point in time or our paths will eventually cross.
Hrm I can't get this to work. I'm syncing to an external fw drive with the command:
sudo rsync -vaxE --delete --ignore-errors / /Volumes/Backup/
Rsync finishes with:
rsync error: some files could not be transferred (code 23) at /SourceCache/rsync/rsync-24.1/rsync/main.c(717)
And my backup doesn't boot. Well, it almost does, but complains about mDNSResponder spawning too fast or something.
Hrm, well I did a full mirror using SuperDuper, which created a bootable backup. Then using the same rsync command, I now have a bootable drive. Maybe some external drives still need the 'bless'?
Or, something didn't get copied the first time. It would have been interesting to run a diff to see. But, now that SuperDuper has done its thing, they are likely identical again.
I just verified that I can boot my Intel iMac off my external USB2 backup drive... I've never run SuperDuper or anything other than the usual rsync incantation on it. So, I dunno what went wrong on your end...
Just to clear up the confusion, I've tried making a mirror to my backup disk three times now. My backup drive has both usb2 and firewire. The first time I used usb; the other two times I used firewire. I think the connection type is irrelevant.
First time: Used usb2. Formatted external drive using disk util, didn't mess with any settings. Assumed osx would do the right thing and use a GUID partition since I have a macbook. It used Apple Partition Map. Couldn't boot from this at all, didn't even show up as an option.
Second time: Used firewire. Re-formatted as GUID this time. Did an rsync. Holding down option, I could see my external drive when I booted, though it was named 'EFI Boot'. Booted, but dropped to console mode flashing "mDNSResponder: invalid system call; respawning too fast" or something.
Third time: Used firewire. Did a full mirror using SuperDuper, which re-formatted my drive yet again. After that completed, I rsynced. Drive showed up as 'Backup' when I booted with the option key, and worked fine.
I'm guessing SuperDuper just un-checked 'ignore ownership on this volume.' as unstablehuman mentioned. I'll re-format and do another mirror tonight, see if that works.
I hate computers.
The first time I tried to create a clone of my drive using rsync, I got the exact same mDNSRepsonder error when I tried to boot, along with an error that the OS expected some files to be owned by root that weren't. After a little googling I learned while rsync will preserve ownership and permissions, the OS defaults to ignoring that information on external drives. This is what I did to fix it:
1) In disk utility, create an HFS+ Journaled partition using the GUID partition map (For intel-based macs)
2) Select the newly-created drive in the finder, do a Command-i, expand "Ownership & Permissions" and ensure that "Ignore ownership on this volume" is NOT checked
3) run the rsync command:
sudo rsync -vaxEH --delete --ignore-errors / /Volumes/Backup/
(the H switch is to preserve hardlinks, just in case)
4) Order Thai food and drink a few beers
5) Reboot system. Hold down option key to boot off of backup drive to test.
I think that you missed a second detail that you need in order for the disk to be bootable. I think that you also (may?) need to do bless -verbose -folder "/Volumes/whatever/System/Library/CoreServices" -bootinfo after an rsync.
Not on my core duo iMac.
More to the point, not coming from the main disk in your core duo iMac, because you haven't screwed up the permissions in some way and you never had "ignore ownership" enabled on your target volumes. bless(8) is cheap and guarantees that the appropriate EFI is present for the OS you've duped. (It's not clear to me what it would do on a PowerPC mac, yet, but I'll tinker with my Powerbook some time in the future.)
Rather, that some people may need, depending on the state of Disk Utility's opinion about whether their permissions need to be "fixed" and so forth.
I followed the original instructions, but Boot Camp Utils had troubles seeing the new startup volume and refused to boot into OS X (using the volume selector that comes up when holding Option on bootup, the startup disk had "EFI Boot" as the only entry).
Running bless with the --bootefi switch fixed this (note that this command differs from yours in the --bootefi switch; I cargo-culted this so that the --info output is the same as it was with the previous volume).
Thanks for the fine article. While I've had automatic rsync backups to other machines for a long time, this is far nicer. However I have one issue with this: Spotlight.
Whenever I would reconnect my backup disk, Spotlight would start indexing the backup disk. Not much sense in that. Adding the disk to Spotlight's Privacy list would immediately stop the indexing process, but it would do the same thing the next day (I have a MacBook, so I disconnect the backup drive during the day, then attach it in the evening so the nightly cronjob can do its thing).
Obviously rsync overwrites the backup drive's Spotlight settings with those from the system drive. After a while of procrastination I finally went and found the culprit. And a fix.
The file /Volumes/BackupDrive/.Spotlight-V100/Store-V1/Exclusions.plist contains the items to ignore on this drive. Since I don't have anything for Spotlight to ignore on my system disk, this gets carried over to the backup drive.
The solution is simple: Add your backup drive to Spotlight's Privacy list. Then copy the file and put it with your backup script. In the end just copy it back to the backup drive after rsync has finished (as root):
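The command itself did not survive in this copy of the comment; a plausible sketch of what it looked like, with the paths treated as assumptions:

sudo cp ~/backup-scripts/Exclusions.plist \
    /Volumes/BackupDrive/.Spotlight-V100/Store-V1/Exclusions.plist   # run as root after rsync finishes; adjust paths to taste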
Hope this helps somebody. Annoyed the heck out of me.
I'm a little late to the party, but...
When running rsync as root, and you have FileVault enabled for your account, it will not back your /Users/you directory up. It will copy the encrypted image of your profile.
Your /Users/you directory will be empty, and there will he a /Users/.you directory with a you.sparseimage file in it. That's the encrypted home directory.
Is this a good thing, or a bad thing? It depends. By using FileVault, you are now automatically encrypting your personal files in all the backups you make. This will not be a problem if you are backing up to a bootable drive (like is suggested). But for pulling off individual files from a backup, that is impossible.
Running a separate rsync as your user for your home directory should do the trick. This will leave all your personal files unencrypted on a backup volume somewhere though.
If you have or have been tempted to copy anything else to the drive and if that is no longer on the source drive when you copy you will delete the contents. I personally would use --delete but put the contents one level down instead of a root and sacrafice the ability to boot from the drive but yet still get true incremental backups without erasing anything else at the root level.
Source: http://www.jwz.org/blog/2007/09/psa-backups/
Oracle Corp. introduces updated version of Java
To overcome the recent security loopholes in Java, software giant Oracle Corp. has released a new and updated version of the Java programming language that runs inside web browsers and protects it from hackers and hacking.
The updated version
Source: http://www.roseindia.net/discussion/48205-Oracle-Corp.-introduces-updated-version-of-Java.html
Thanks to all for their comments on “what's wrong with this code”.
I will confess to making a tactical error in presenting the code. I started only showing a single error, and then I went back and showed another one.
Ones that people commented on:
Not checking for InnerException to be null
I didn't intend this one, so +1 for my readers
Datastore not getting tested in the use
I hadn't intend this to be a full, usable class, so there's other code not shown that makes this a non-error.
Error in constructor
This was the error that I added, which just confused this issue. Whidbey may catch this one - I'm not sure.
Not rethrowing in the catch
This was the error I was intending to highlight. The code I wrote swallows all errors that weren't of the proper type.
There are really two issues with this code. The first is the more obvious one - the fact that I'm dealing with one type of exception, and not rethrowing all the other types.
The more subtle issue is that the API that I'm calling is flawed. APIs should never force their users to have to depend on the inner exception for everything. If you find your users writing code like that, you haven't given them enough richness in the set of exceptions that you throw.
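To make that concrete, here is a minimal sketch of the pattern being described; apart from TargetInvocationException, the type and method names are illustrative rather than taken from the original post:

try
{
    dataStore.Save(item);   // hypothetical call into the API under discussion
}
catch (TargetInvocationException e)
{
    if (e.InnerException is DataStoreException)   // DataStoreException is made up for the example
    {
        HandleDataStoreProblem(e.InnerException);
    }
    else
    {
        throw;   // rethrow whatever we don't actually know how to handle
    }
}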
TechEd is rapidly approaching, and I'm signed up to do a “C# Best Practices” kind of talk.
Derek said that I should write about the virtues of OneNote, which is, in his words, “one sweet app...”
I've been using it off and on, and while I think it's a nice too, it has a few drawbacks for the kind of notes I do. For the C# language design notes, for example, we've been using Word for a long time, and it lets me do everything I want - keep the notes numbered and outlined, do tables, and have a big document with all the notes. OneNote doesn't appear to support the more formal sort of note taking that I do, though it is nice for other kinds of note taking.
So, I guess I'll have to forgo my endorsement, at least for now.
Min provides a new link to his excellent document on debugging the debugger.
Source: http://blogs.msdn.com/b/ericgu/archive/2004/03.aspx
heya,
i have a c++ header in *.h format. I have put
#include <iostream.h>
#include <fareylist.h>
but am given the error: unable to open include (referring to fareylist.h)
i am running the borland turbo c++ compiler.
My question is what is the difference between #include <iostream.h> and plain #include <iostream>
is it legal to write #include "iostream.h"?
and what could be the causing my error with the unable to open include?
The files are all saved in the same directory so i presumed it should be in my path, but im not sure.
tia
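For context (this note is not part of the archived thread): <iostream> is the standard C++ header, which places everything in namespace std, while <iostream.h> is the older pre-standard header that compilers like Borland Turbo C++ shipped. #include "iostream.h" is legal syntax, but quotes are normally reserved for your own headers; the "unable to open include" error usually means the directory holding fareylist.h is not on the compiler's include path, since angle brackets search only the system include directories. A minimal standard-C++ sketch:

#include <iostream>          // standard header; names live in namespace std
// #include "fareylist.h"    // quotes rather than angle brackets for a header next to your source

int main() {
    std::cout << "hello" << std::endl;
    return 0;
}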
Source: http://cboard.cprogramming.com/cplusplus-programming/58047-include-files.html
RoaminSMPP 1.0
RoaminSMPP is a SMPP library written in C#.
RoaminSMPP is a SMPP library written in C#. Intended to be fully SMPP v3.4 compliant and extensible for future versions of the spec.
This software is meant to provide an open source edition of the basic SMPP v3.4 functionality. It includes PDU definitions for all of the PDUs in the spec.
- last updated on:
- October 12th, 2008, 12:05 GMT
- price:
- FREE!
- developed by:
- Chris Bouzek
- sourceforge.net
- license type:
- LGPL (GNU Lesser General Public License)
- category:
- ROOT \ Communications \ Email
What's New in This Release:
- License changed to GNU Lesser General Public License, V3. This should make combination with other (older) open source licenses such as the original BSD license possible.
- Added the remainder of the code to the project; I will no longer be working on this app.
- I've replaced much of the older code with new (hopefully more useful) code. This new set of namespaces includes both incoming and outgoing functionality for all the PDUs. Note also that this release is GPL only-I have dropped the QPL portion of the license from here on out. I also fixed the dates on this changelog-for some reason I had them a year off.
- Host of changes. The PDUFactory and several incoming PDUs were changed for compatibility after some testing. Added some documentation tags. Fixed TLVTable so that null tags are simply dropped (i.e. trying to set an optional param to null does nothing). Also added the debug file and xml file to the binary release. Also changed the zip files so that they store relative path info rather than full path info (to those of you who downloaded the early release-sorry about that).
Source: http://linux.softpedia.com/get/Communications/Email/RoaminSMPP-34884.shtml
On Thursday 26 March 2009 10:26:13 pm Russ Allbery wrote:
> "Gerald I. Evenden" <address@hidden> writes:
> > I am quite new to using this system but managed to get it to make a
> > distribution of a shared library. The first try was, however, simple
> > and straight forward.
> >
> > However, I want selectively add two files to the library based upon the
> > condition that they are *not* present on the target computer. In
> > configure.ac I added the lines:
> >
> > AC_CHECK_FUNCS([strcasecmp strncasecmp],[CASECMP=1],[CASECMP=0])
> > AM_CONDITIONAL([NOCASECMP],[test CASECMP = 0])
>
> You need a $CASECMP here, not CASECMP.

That's interesting. I had no idea although I guess I should have.

> However, more fundamentally, you're reinventing AC_REPLACE_FUNC, which you
> probably don't want to do. Check the Autoconf manual for AC_REPLACE_FUNC,
> which does exactly what you're trying to do, even down to how it adds
> additional source files to a variable that you can use in Makefile.am.

I had seen it but, again, I was unsure how it was going to work. I will
reread. Also, I am not replacing functions with a new name but adding the
source with the same entry names to be compiled into the new library. The
documentation is a bit fuzzy to me in this area.

> > In Makefile.am I added the lines:
> >
> > if NOCASECMP
> > libproject_la_sources += strcasecmp.c strncasecmp.c
>
> You need an endif here, which may also be part of the problem with the
> errors you're seeing.

The monkey-see-monkey-do example I was working from did not have an endif.
So much for my reading material.

> Taking yet another step back, do you have a target system in mind that
> doesn't have strcasecmp? It's required by POSIX and is usually in the set
> of things that aren't worth bothering checking for unless you're porting
> to Windows (which particularly for shared libraries involves quite a bit
> of work that Automake can't entirely help with).

This is a tryout case that will become more complex later when the library
grows to its 150 or so files, and there may be other places where optional
inclusion may be a factor, especially where some potential users will accept
the X11 license I distribute under but don't want to be involved with GPL
and will want to use alternate software. As far as the case insensitive
routines, yes, part of the motive was because of M$ but I heard some
rumblings that they may be missing at some other OSs.

Many thanks for the comments.
--
The whole religious complexion of the modern world is due to the absence
from Jerusalem of a lunatic asylum.
-- Havelock Ellis (1859-1939) British psychologist
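For readers following along, a rough sketch of where the thread points: AC_REPLACE_FUNCS in configure.ac, plus either the replacement-object variable or the closed conditional in Makefile.am. Treat this as illustration only, not the poster's final files:

# configure.ac -- let Autoconf supply strcasecmp.c/strncasecmp.c when the
# functions are missing from the target system
AC_REPLACE_FUNCS([strcasecmp strncasecmp])

# Makefile.am -- for a libtool library, link in whatever replacement objects
# configure decided were needed
libproject_la_LIBADD = $(LTLIBOBJS)

# Or, keeping the original AM_CONDITIONAL approach, note the endif (and that
# the Automake variable is spelled _SOURCES, uppercase):
# if NOCASECMP
# libproject_la_SOURCES += strcasecmp.c strncasecmp.c
# endif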
Source: http://lists.gnu.org/archive/html/automake/2009-03/msg00066.html
04 June 2012 16:47 [Source: ICIS news]
LONDON (ICIS)--Russia's Acron has responded to growing opposition to its tender offer for shares in Poland's ZAT, arguing that a combination would benefit the Polish producer.
“In combining with the Acron Group, the ZAT Group would gain access to a [cheap] raw material base, an extensive network of logistics and sales, as well as access to new technologies and business experience in the international market,” said Acron vice president Vladimir Kantor.
“We believe that our offer to purchase shares is attractive to ZAT shareholders,” he added.
However, the management board of ZAT has come out against the offer.
It issued a statement saying that the board was “strongly” against the offer as it does not take into account the fair value of the company or incorporate its long-term strategy.
Looking at the latest developments, Prague-based investment bank Wood & Company concluded that Acron is “highly unlikely” to be successful in its bid for a majority stake in ZAT, a producer of nitrogen and multi-component fertilizers, caprolactam (capro), polyamide 6, oxo-alcohols, plasticisers and titanium dioxide (TiO2) which is controlled by voting rights held by the Polish treasury ministry.
“According to ZAT's management, the proposed price does not reflect the potential synergies to be unlocked upon the takeover and threatens the execution of the ZAT group's market consolidation strategy,” the bank said in a note to investors.
“Also according to management, uncertainties include the further development of the construction polymers segment, oxo-alcohols and plasticizers, and TiO2. The abandonment of, or limited investment in these products, would, in management’s opinion, be detrimental to the development of the group,” it added.
Shareholders have been invited by Acron to accept its offer in a bid window that runs from 6 June to 22 June.
($1 = €0.81)
($1 = Zl 3.55, €1 = Zl 4
Source: http://www.icis.com/Articles/2012/06/04/9566530/russias-acron-responds-to-growing-opposition-to-zat.html
This is the best explanation of the further compilation to machine code.
"In C#, however, more support is given for the further compilation of the intermediate language code into native code."
That's really deep.
The differences between C# and Java are very small. C# has enumerators and operator overloading, which are nice, but nothing earth shattering. Consider the hello world in C# vs hello world in Java.
From that page, in C#
using System;
public class HelloWorld
{
public static void Main()
{
Console.WriteLine("Hello World! From Softsteel Solutions");
}
}
Compare this with helloworld in Java.
public class HelloWorld
{
public static void main(String[] args) {
System.out.println("Hello World");
}
}
Vs C++
#include <iostream>
using namespace std;
int main() {
cout << "Hello World" << endl;
return 0;
}
You see that C# and Java are much closer to each other than to C++.
My biggest problem with Java is the lack of templates, so one cannot use compile time typesafe container classes. Is C# going to fix that? Or would that require straying from Java too far?
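For what it's worth (this aside is not from the original thread), C# 2.0 later added generics, which provide exactly the compile-time type-safe containers being asked about. A minimal sketch:

using System.Collections.Generic;

class GenericsDemo
{
    static void Main()
    {
        List<int> numbers = new List<int>();   // container typed at compile time
        numbers.Add(42);
        // numbers.Add("hello");               // would not compile
        int first = numbers[0];                // no cast needed
        System.Console.WriteLine(first);
    }
}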
Source: http://cboard.cprogramming.com/brief-history-cprogramming-com/140-net-why-2-print.html
Another suggestion: let's suppose I want to generate bytecode from the
result of the Parser.
Or that I parsed an expression and I know the type of every
BSHAmbiguousName in a certain context and want to know then the type
returned by the expression.
This would be quite easy to implement "externally" if SimpleNode had a
"visit" method (as well as its subclasses of course).
Anybody interested ? I can code this quickly and I need it.
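A minimal sketch of the kind of hook being proposed; the names are illustrative and this is not BeanShell's actual API:

// A visitor interface for the parser's node classes.
public interface NodeVisitor {
    Object visit(SimpleNode node);
}

// And, inside SimpleNode (overridden by subclasses where useful):
public Object accept(NodeVisitor visitor) {
    return visitor.visit(this);
}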
Hi, this is my first post in this list and I hope it is appropriate. If
not, let me know.
I'm trying to integrate BeanShell in a commercial software not only to
provide scripting but also to support dynamically created methods in
also dynamically created classes.
For the dynamically created classes, we generate the bytecode and load
ourself for various reason, mainly easy integration in our screen &
persistence system.
But on top of normal, stored attributes in these classes, I would like
to support some formulas too. Basically, the idea is to allow a
power-user to add its own types in the system and stuff like "field
c=field a+field b"... Nothing new but helps a lot to make a commercial
system more flexible.
Instead of parsing and compiling/interpreting the formulas myself, I
would like to use BeanShell of course. But a lot of access control comes
in the way.
Here are a few examples:
- I want to check in realtime if the formula the user enters is valid.
To do so, the system calls a method like this:
public LabeledBoolean checkFormula()
{
    if (formula != null && formula.length() > 0)
    {
        Parser parser = new Parser(new StringReader(formula));
        try
        {
            parser.Expression();
        }
        catch (ParseException exception)
        {
            return new LabeledBoolean(false, exception.getMessage());
        }
    }
    return LabeledBoolean.OK;
}
and display the results near the formula field on the screen. But if I
want to check more things, for instance the syntaxic tree out of the
parser to detect any BSHAmbiguousName and checks its name against the
valid ones, i.e. the ones declared in my dynamic class, I can't.
Because even SimpleNode is package protected, not public.
Isn't it strange that "popNode" is public but returns a package-only class?
- I find also difficult to remove the default imports. For instance, I
do not want the scripts in the system to access Swing. I can of course
give my own namespace and detect such access but the simplest solution
would be to extends NameSpace and to override loadDefaultImports.
But even if I do so, I cannot easily put this MyNameSpace as the global
one.
Because the interpreter constructor does:
BshClassManager bcm = BshClassManager.createClassManager( this );
if ( namespace == null )
this.globalNameSpace = new NameSpace( bcm, "global");
And if I want to pass it MyNameSpace, it has to be built before and so cannot contain the link to BshClassManager.createClassManager( this )...
Ok, I know the BshClassManager is not used a lot in NameSpace and I
could pass null but will it still work in the future...
- Same thing for getNameResolver and the names Hashtable? Even if I
found workaround, it would have been nice to have them as protected to
connect the NameSpace to my own objects (importObject is not enough.
Works fine for fields and methods but not for bean properties)
I know that moving something from "package" access to "protected" has a
lot of impact because it becomes part of the contract of the class with
the external world and cannot be changed easily later on without
breaking source code compatibility but in some cases, it would also
extend the useability of BeanShell a lot.
Waiting for your answers and back to coding...
Source: http://sourceforge.net/p/beanshell/mailman/beanshell-developers/?viewmonth=200702&viewday=14
EclipseLink/Release/2.4.0/JAXB RI Extensions/XML Location
Design Documentation: @XmlLocation. The @XmlLocation annotation is one of the JAXB RI's proprietary extensions - it allows the user to specify a property on the JAXB object that will be updated (upon unmarshalling) with that object's XML location information (i.e. the line number, column number, and system ID that points to this object's location in the XML input).
This document will outline the design for an EclipseLink equivalent to this extension.
Requirements
- Deliver an @XmlLocation annotation in the EclipseLink library that will provide the same functionality as the Sun extension.
- Line number
- Column number
- System ID, if applicable
- Support drop-in-replacement functionality if the user is currently using the Sun versions of this annotation (com.sun.xml.bind.annotation.XmlLocation or com.sun.xml.internal.bind.annotation.XmlLocation)
- Have zero impact on memory/performance if the user is not using @XmlLocation.
Behaviour
If an object containing an @XmlLocation property is unmarshalled, a Locator object will be created and set on the property, containing the XML location info.
Not all unmarshal sources will be able to provide XML location information. For example, unmarshalling from a File would be able to give you line, column and system ID (filename); system ID is not available when unmarshalling from an InputStream; unmarshalling from a Node would give you no XML location information at all.
Configuration
In order to use @XmlLocation, the user must first have a property on their Java object (either a field or get/set pair) of type org.xml.sax.Locator. The user can then specify that this property should be used to track XML location by using either EclipseLink Annotations or XML Bindings.
Properties marked with @XmlLocation should also be marked as @XmlTransient, as the location information is not relevant in a marshalled document.
EclipseLink will also recognize the Sun versions of this annotation (com.sun.xml.bind.annotation.XmlLocation and com.sun.xml.internal.bind.annotation.XmlLocation).
Annotations
package org.eclipse.persistence.oxm.annotations;

import static java.lang.annotation.ElementType.FIELD;
import static java.lang.annotation.ElementType.METHOD;
import static java.lang.annotation.RetentionPolicy.RUNTIME;

import java.lang.annotation.Retention;
import java.lang.annotation.Target;

@Target({METHOD, FIELD})
@Retention(RUNTIME)
public @interface XmlLocation {}
XML Bindings
eclipselink_oxm_2_4.xsd:
... (schema excerpt not preserved in this copy) ...
Config Options
The @XmlLocation feature does not expose any configuration options, it is merely a tagging annotation that indicates the property to be used for tracking XML location information.
Examples
The following examples refer to this XML instance document:
<?xml version="1.0" encoding="UTF-8"?> <customer> <id>1872874</id> <name>Bob Smith</name> </customer>
Example 1
This example shows the most basic use case; the Locator field is annotated with @XmlLocation (or, in XML Bindings, the "xml-element" has its "xml-location" attribute set to "true").
Annotations:
import javax.xml.bind.annotation.*;
import org.eclipse.persistence.oxm.annotations.XmlLocation;
import org.xml.sax.Locator;

@XmlRootElement
public class Customer {

    public int id;
    public String name;

    @XmlLocation
    @XmlTransient
    public Locator locator;

    @Override
    public String toString() {
        String loc = " noLoc";
        if (locator != null) {
            loc = " L" + locator.getLineNumber() + " C" + locator.getColumnNumber()
                    + " " + locator.getSystemId();
        }
        return "Customer(" + name + ")" + loc;
    }

}
Equivalent XML Bindings:
...
<java-types>
    <java-type ...>
        <xml-root-element />
        <java-attributes>
            <xml-element ... />
            <xml-element ... />
            <xml-transient ... />
        </java-attributes>
    </java-type>
</java-types>
...
When a Customer is unmarshalled, the Locator field is automatically set to contain the XML location information for that object. By default, if that object was then marshalled back to XML, the XML location information would be written out as well.
Unmarshalling and Marshalling:
File f = new File("D:/temp/instance.xml");
Customer c = (Customer) jaxbContext.createUnmarshaller().unmarshal(f);
System.out.println(c);

// Output:
// Customer(Bob Smith) L15 C35 file:/D:/temp/instance.xml
Example 2
Accessor methods can be annotated instead of the actual Java field:
import javax.xml.bind.annotation.*;
import org.eclipse.persistence.oxm.annotations.XmlLocation;
import org.xml.sax.Locator;

@XmlRootElement
public class Customer {

    private int id;
    private String name;
    private Locator locator;

    @XmlLocation
    @XmlTransient
    public Locator getLocator() {
        return this.locator;
    }

    public void setLocator(Locator l) {
        this.locator = l;
    }

    ...

}
Design
When processing XML through our various parsing mechanisms (UnmarshalRecord, SAXDocumentBuilder, XMLStreamReaderReader, etc), a Locator object is supplied by the underlying parser libraries. This Locator is constantly updated throughout the unmarshalling process (with the location of the currently parsed node). When parsing a startElement(), if that element equates to a Descriptor that is location-aware, the Locator will be cloned at that point, and stored on the UnmarshalRecord, indicating the XML Location of that object in XML.
- New annotation: org.eclipse.persistence.oxm.annotations.XmlLocation
- New boolean "xml-location" flag on "xml-transient" and "xml-element" in XML Bindings
- Must be added to new 2.4 schema - eclipselink_oxm_2_4.xsd
- Corresponding updates to o.e.p.jaxb.xmlmodel classes
- New field on XMLDescriptor: boolean isLocationAware
- New field on UnmarshalRecord: Locator xmlLocation (each object instance has its own UnmarshalRecord, and this will be set only if Descriptor isLocationAware)
- New field on Property: boolean isXmlLocation
- Property is set up for isXmlLocation in AnnotationsProcessor and XMLProcessor
- MappingsGenerator: in generateMapping(), create a specialized XMLCompositeObjectMapping for the Locator property
- New class: NullInstantiationPolicy - we want to create a regular mapping for the Locator, but will never be instantiating it like a regular mapping - its value will be set manually during parsing. Plus, Locator does not have a default constructor. This InstantiationPolicy simply returns null for buildNewInstance() (a sketch follows this list).
- New constants in XMLConstants: LOCATOR_CLASS and LOCATOR_CLASS_NAME
- New JAXBException ("XmlLocation is only allowed on properties of type org.xml.sax.Locator, but [{0}] is of type [{1}].")
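A minimal sketch of the NullInstantiationPolicy idea described above; the package, base class and override signature are assumptions for illustration, not the actual EclipseLink source:

public class NullInstantiationPolicy extends InstantiationPolicy {
    @Override
    public Object buildNewInstance() {
        // The Locator value is assigned manually during parsing, and
        // org.xml.sax.Locator has no default constructor, so never build one here.
        return null;
    }
}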
Source: http://wiki.eclipse.org/index.php?title=EclipseLink/Release/2.4.0/JAXB_RI_Extensions/XML_Location&diff=304458&oldid=304430
Getting Started
This article shows how to plot multiple charts, or Chart Areas, on a single Chart control.
Refer to the image below:
Four different Chart Areas are plotted on a single Chart control.
Prerequisites: MVS used: 2008/2010, DB used: Northwind.
Code Explained Database
Stored Procedure used: PROC_CALL_MULTICHARTS, which will output four different sets of rows.
1. ProductName vs. UnitPrice. -- Chart Type used Column.
2. Country vs. Orders. -- Chart Type used SplineArea.
3. Employee vs. Orders. -- Chart Type used Bubble.
4. Company vs. Orders. -- Chart Type used Range.
Source Page
<asp:Chart ID="Chart1" runat="server"> ... </asp:Chart>
Code Behind
Namespaces used:
using System.Web.UI.DataVisualization.Charting;
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;
Method BindRecords:
1. Below line of code will clear any already created Series and ChartAreas.
Chart1.Series.Clear();
Chart1.ChartAreas.Clear();
2. Opening the DB connection and calling the stored procedure.
3. SqlDataAdapter will store all the four set of rows and later filling to Dataset.
4. Creating a List Collection for creating ChartArea.
List<string> ChartAreaList = new List<string>();
ChartAreaList.Add("Products");
ChartAreaList.Add("Country");
ChartAreaList.Add("Employee");
ChartAreaList.Add("Company");
5. Doing a for-loop over the DataSet's table count, a new ChartArea and Series are added to the Chart1 control.
6. Setting the ChartArea features like X/Y titles, BackColor, axes etc. by calling the method SetChartAreaFeatures.
7. Setting AlignWithChartArea (A value that represents the name of a ChartArea object to which this chart area should be aligned).
8. Finally, plotting the chart by calling the DataBindXY method.
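A condensed sketch of steps 5-8 inside the loop; the DataSet handling is simplified and the variable names are assumptions (the full BindRecords method is in the attached sample):

for (int i = 0; i < ds.Tables.Count; i++)
{
    // Step 5: one ChartArea and one Series per result set
    ChartArea area = new ChartArea(ChartAreaList[i]);
    Chart1.ChartAreas.Add(area);

    Series series = new Series(ChartAreaList[i]);
    series.ChartArea = ChartAreaList[i];
    Chart1.Series.Add(series);

    // Step 6: titles, BackColor, axes etc. would be set here,
    // e.g. via the article's SetChartAreaFeatures helper.

    // Step 7: align every area with the first one
    if (i > 0)
        area.AlignWithChartArea = ChartAreaList[0];

    // Step 8: bind X/Y values from the current DataTable (column order assumed)
    DataTable dt = ds.Tables[i];
    series.Points.DataBindXY(dt.DefaultView, dt.Columns[0].ColumnName,
                             dt.DefaultView, dt.Columns[1].ColumnName);
}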
Sample code and sql script attached, download and test the code.
Conclusion
Hope the article helped to plot multiple charts on single chart control.
Source: http://www.c-sharpcorner.com/uploadfile/suthish_nair/multiple-charts-or-chartarea-on-a-single-chart-control-mschart/
19 October 2009 10:07 [Source: ICIS news]
LONDON (ICIS news)--The CECA division of ?xml:namespace>
The Awards, now in their sixth year, continue to be sponsored by Dow Corning as lead sponsor.
CECA’s entry, a novel surfactant formulation to reduce the environmental impact of preparing and laying asphalt road surfaces, also won the closely contested Best Product Innovation category, sponsored for the first time, by US-based consultancy CRA International.
In a year that attracted a record number of entries from around the globe, the five judges had a difficult task to select the category winners, said John Baker, global editor for custom publishing at ICIS and manager of the Awards.
“The quantity and quality of the entries confirm the Awards are growing in status”, he added. “With the number of sponsors increasing, we felt the time was right to nominate an overall winner for the first time this year.”
The winners in this year’s four categories are:
Best Product Innovation: CECA/Arkema (France) - A novel surfactant formulation to reduce the environmental impact of preparing and laying asphalt road surfaces
Best Innovation by an SME: Oxford Catalysts/Velocys (UK/US) - A microchannel reactor for the distributed production of third-generation biofuels
Best Business Innovation: DSM (Netherlands)
Best Innovation in Corporate Social Responsibility (CSR): Tata Chemicals (India)
Additionally, Lucite International (UK) was nominated for a special mention in the Best Product Innovation category for its development of Alpha technology for production of methylmethacrylate.
These five innovative entries were chosen as the best from the shortlist of 13 entrants, details of which were published earlier this year in the 3 August issue of ICIS Chemical Business.
Stephanie Burns, chairman, president and CEO of lead sponsor Dow Corning, noted: “Successful innovation takes courage. It requires a clear vision and a strategy. But more importantly it takes commitment and tenacity.
“The winners of these awards are proof of what can be achieved when a company makes innovation an integral element of its business and pursue that goal with steadfast determination and commitment.”
Neil Checker, vice president for chemicals at category sponsor CRA, added: “We are particularly encouraged by the unprecedented number of high quality submissions … a fact that reinforces our view that innovation remains vibrant in the chemical industry despite the current industry environment.”
Full details of the winning entries and interviews on innovation strategy with the two sponsors are published in the 19 October issue of ICIS Chemical Business. They can also be seen on the ICIS Innovation Awards 2009 website.
Source: http://www.icis.com/Articles/2009/10/19/9255998/ceca-is-overall-icis-innovation-awards-2009-winner.html
16 October 2013 23:01 [Source: ICIS news]
HOUSTON (ICIS)--Illinois and Iowa remain the leading candidates to host Cronus Chemicals' planned ammonia plant.
With the decision having been delayed previously due to cost reviews, Delaware-based Cronus said the two states are still the frontrunners for the facility, initially projected to be in operation by 2016.
Officials said that other states were reviewed in the beginning but the decision is now between locations in Mitchell County, Iowa, and a site in Illinois.
The questioning of site location gained traction earlier this week when Illinois Governor Pat Quinn said that other states were possible choices for the project. At the time he said he still thought Illinois would be selected.
Cronus spokesperson Dave Lundy said the company still anticipates making an announcement on which location has been chosen within the next several weeks.
The facility is anticipated to have an annual production capacity of 800,000 tons of ammonia, which would be converted into 1.4m tons of granular urea.
Construction of the plant is expected to take up to 30 months. The plant will employ about 225 full-time workers when fully operational, as well as 1,000-2,000 during the construction phase.
Cronus Chemicals is headed by veteran fertilizer executive Erzin Atac, the former president of Transammonia Fertilizer, and is owned by Swiss and Turkish investors.
Source: http://www.icis.com/Articles/2013/10/16/9715931/illinois-or-iowa-still-choice-for-cronus-chemicals-ammonia-plant.html
#include <stdio.h>
int fputc(int c, FILE *stream);

The fputc() function shall write the byte specified by c (converted to an unsigned char) to the output stream pointed to by stream, at the position indicated by the associated file-position indicator for the stream (if defined), and shall advance the indicator appropriately. The st_ctime and st_mtime fields of the file shall be marked for update between the successful execution of fputc() and the next successful completion of a call to fflush() or fclose() on the same stream or a call to exit() or abort().
Upon successful completion, fputc() shall return the value it has written. Otherwise, it shall return EOF, the error indicator for the stream shall be set, and errno shall be set to indicate the error.
The fputc() function shall fail if either the stream is unbuffered or the stream's buffer needs to be flushed, and:
The fputc() function may fail if:
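As an aside (not part of the original manual page), a minimal sketch of checking the return value described above:

#include <stdio.h>

int main(void)
{
    FILE *fp = fopen("out.txt", "w");   /* file name is arbitrary */
    if (fp == NULL)
        return 1;

    if (fputc('A', fp) == EOF) {        /* EOF signals failure; errno is set */
        perror("fputc");
        fclose(fp);
        return 1;
    }

    fclose(fp);
    return 0;
}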
The following sections are informative.
Examples: None.
Application Usage: None.
Rationale: None.
Future Directions: None.
ferror() , fopen() , getrlimit() , putc() , puts() , setbuf() , ulimit() , the Base Definitions volume of IEEE Std 1003.1-2001, <stdio.h>
Source: http://www.makelinux.net/man/3posix/F/fputc
In a competitive world, there is a definite edge to developing applications as rapidly as possible. This can be done using PyGTK which combines the robustness of Python and the raw power of GTK. This article is a hands on tutorial on building a scientific calculator using pygtk.
Well, let me quote from the PyGTK source distribution:
"This archive contains modules that allow you to use gtk in Python programs. At present, it is a fairly complete set of bindings. Despite the low version number, this piece of software is quite useful, and is usable to write moderately complex programs." - README, pygtk-0.6.4
We are going to build a small scientific calculator using pygtk. I will explain each stage, in detail. Going through each step of this process will help one to get acquainted with pygtk. I have also put a link to the complete source code at the end of the article.
This package is available with almost every Linux distributions. My explanation would be based on Python 1.5.2 installed on a Linux RedHat 6.2 machine. It would be good if you know how to program in python. Even if you do not know python programming, do not worry ! Just follow the instructions given in the article.
Newer versions of this package are available from:
The tutorial has been divided into three stages. The code and the corresponding output are given with each stage.
First we need to create a window. Window is actually a container. The buttons tables etc. would come within this window.
Open a new file, stage1.py, using an editor. Write the following lines to it:
from gtk import *

win = GtkWindow()

def main():
    win.set_usize(300, 350)
    win.connect("destroy", mainquit)
    win.set_title("Scientific Calculator")
    win.show()
    mainloop()

main()
First line is for importing the methods from the module named gtk. That means we can now use the functions present in the gtk library.
Then we make an object of type GtkWindow and name it as win. After that we set the size of the window. The first argument is the breadth and the second argument is the height. We also set the title of our window. Then we call the method by name show. This method will be present in case of all objects. After setting the parameters of a particular object, we should always call show. Only when we call the show of a particular object does it becomes visible to the user. Remember that although you may create an object logically; until you call show of that object, the object will not be physically visible.
We connect the signal delete of the window to a function mainquit. The mainquit is an internal function of the gtk by calling which the presently running application can be closed. Do not worry about signals. For now just understand that whenever we delete the window (may be by clicking the cross mark at the window top), the mainquit will be called. That is, when we delete the window, the application is also quit.
mainloop() is also an internal function of the gtk library. When we call the mainloop, the launched application waits in a loop for some event to occur. Here the window appears on the screen and just waits. It is waiting in the 'mainloop', for our actions. Only when we delete the window does the application come out of the loop.
Save the file. Quit the editor and come to the shell prompt. At the prompt type :
python stage1.py
Remember, you should be in Xwindow to view the output.
A screen shot of output is shown below.
Let us start writing the second file, stage2.py. Write the following code to stage2.py:
from gtk import *

# (The widget declarations from the original listing -- rows, cols,
#  button_strings, win, text, box, table, the button array and the close
#  button -- were lost when this page was archived; they are described in
#  the text below.)

def main():
    win.set_usize(300, 350)
    win.connect("destroy", mainquit)
    win.set_title("Scientific Calculator")
    # (text box, box and table setup lost in archiving)
    for i in range(rows*cols):          # loop header reconstructed
        y,x = divmod(i, cols)
        table.attach(button[i], x,x+1, y,y+1)
        button[i].show()
    close.show()
    box.pack_start(close)
    win.show()
    mainloop()

main()
The variables rows and cols are used to store the number of rows and columns of buttons, respectively. Four new objects -- the table, the box, the text box and a button -- are created. The argument to GtkButton is the label of the button, so close is a button with the label "close".

The array button_strings is used to store the labels of the buttons. The symbols that appear in the keypad of a scientific calculator are used here. The variable button is an array of buttons. The map function creates rows*cols buttons. The label of each button is taken from the array button_strings, so the ith button will have the ith string from button_strings as its label. The range of i is from 0 to rows*cols-1.

We insert a box into the window. To this box we insert the table. And into this table we insert the buttons. The corresponding show of the window, table and buttons are called after they are logically created. With win.add we add the box to the window.

Use of text.set_editable(FALSE) will set the text box as non-editable. That means we cannot externally add anything to the text box by typing. The text.set_usize sets the size of the text box and text.insert_defaults inserts the null string as the default string into the text box. This text box is packed into the start of the box.

After the text box we insert the table into the box. Setting the attributes of the table is trivial. The for loop inserts 4 buttons into each of the 9 rows. The statement y,x = divmod(i, cols) divides the value of i by cols and keeps the quotient in y and the remainder in x.

Finally we insert the close button into the box. Remember, pack_start will insert the object into the next free space available within the box.
Save the file and type
python stage2.py
A screen shot of the output is given below.
Some functions are to be written to make the application do the work of a calculator. These functions have been termed the backend. These are the lines that are to be typed into scical.py. This is the final stage. The file scical.py contains the finished output. The program is given below:
from gtk import *
from math import *

toeval=' '

# (As in the previous listing, the widget declarations -- rows, cols,
#  button_strings, win, text, box, table, button and close -- were lost
#  when this page was archived.)

def myeval(*args):
    global toeval
    try:
        b=str(eval(toeval))
    except:
        b= "error"
        toeval=''
    else:
        toeval=b
    text.backward_delete(text.get_point())
    text.insert_defaults(b)

def mydel(*args):
    global toeval
    text.backward_delete(text.get_point())
    toeval=''

def calcclose(*args):
    global toeval
    myeval()
    win.destroy()

def print_string(args,i):
    global toeval
    text.backward_delete(text.get_point())
    text.backward_delete(len(toeval))
    toeval=toeval+button_strings[i]
    text.insert_defaults(toeval)

def main():
    win.set_usize(300, 350)
    win.connect("destroy", mainquit)
    win.set_title("Scientific Calculator: scical (C) 2002 Krishnakumar.R, Share Under GPL.")
    # (text box, box and table setup lost in archiving)
    for i in range(rows*cols):          # loop header reconstructed
        if i==(rows*cols-2):
            button[i].connect("clicked",myeval)
        elif (i==(cols-1)):
            button[i].connect("clicked",mydel)
        else:
            button[i].connect("clicked",print_string,i)
        y,x = divmod(i, 4)
        table.attach(button[i], x,x+1, y,y+1)
        button[i].show()
    close.show()
    close.connect("clicked",calcclose)
    box.pack_start(close)
    win.show()
    mainloop()

main()
A new variable, toeval, has been included. This variable stores the string that is to be evaluated. The string to be evaluated is present in the text box at the top. This string is evaluated when the = button is pressed. This is done by calling the function myeval. The string contents are evaluated using the python function eval and the result is printed in the text box. If the string cannot be evaluated (due to some syntax errors), then the string 'error' is printed. We use try and except for this process.

All the buttons except the clear, the close and the =, will trigger the function print_string. This function first clears the text box. It then appends the string corresponding to the button pressed to the variable toeval and then displays toeval in the text box.

If we press the close button then the function calcclose is called, which destroys the window. If we press the clear button then the function mydel is called and the text box is cleared. In the function main, we have added the 3 new statements to the for loop. They are for assigning the corresponding functions to the buttons. Thus the = button is attached to the myeval function, the clear is attached to mydel and so on.

Type

python scical.py

at the shell prompt and you have the scientific calculator running.

8. Conclusion

This article was published in issue 78 of Linux Gazette, May 2002.
Source: http://www.tldp.org/LDP/LGNET/issue78/krishnakumar.html
The webhelpers.feedgenerator module provides an API for programmatically generating syndication feeds from a Pylons application (your TurboGears 2.1 application is a particular configuration of Pylons).
The feed generator is intended for use in controllers, and generates an output stream. Currently the following feeds can be created by using the appropriate class:
All of these format specific Feed generators inherit from the SyndicationFeed() class and you use the same API to interact with them.
Example controller method:
from helloworld.lib.base import BaseController
from tg.controllers import CUSTOM_CONTENT_TYPE
from webhelpers.feedgenerator import Atom1Feed
from pylons import response
from pylons.controllers.util import url_for

class CommentsController(BaseController):

    @expose(content_type=CUSTOM_CONTENT_TYPE)
    def atom1(self):
        """Produce an atom-1.0 feed via feedgenerator module"""
        feed = Atom1Feed(
            title=u"An excellent Sample Feed",
            link=url_for(),
            description=u"A sample feed, showing how to make and add entries",
            language=u"en",
        )
        feed.add_item(title="Sample post",
                      link=u"",
                      description="Testing.")
        response.content_type = 'application/atom+xml'
        return feed.writeString('utf-8')
To have your feed automatically discoverable by your user’s browser, you will need to include a link tag to your template/document’s head. Most browsers render this as a small RSS icon next to the address bar on which the user can click to subscribe.
<head> <link rel="alternate" type="application/atom+xml" href="./atom1" /> </head>
Normally you will also want to include an in-page link to the RSS page so that users who are not aware of or familiar with the automatic discovery can find the RSS feed. FeedIcons has a downloadable set of icons suitable for use in links.
<a href="./atom1"><img src="/images/feed-icon-14x14.png" /> Subscribe</a>
The various feed generators will escape your content appropriately for the particular type of feed.
SyndicationFeed: Base class for all syndication feeds. Subclasses should provide write().
add_item(): Adds an item to the feed.
Source: http://www.turbogears.org/2.1/docs/modules/thirdparty/webhelpers_feedgenerator.html
I've been using Xcode for quite some time now, and I just installed it on my MacBook (I had it installed before, but I accidently erased my hard disk). Now I'm just trying to rewrite my game engine, so I started off with a simple cPoint2D class.
When I try to compile it, the linker keeps failing because "all the symbols in cPoint2D are undefined". I checked to make sure that cPoint2D.cpp was getting compiled, and it was. Then I tried moving all the code into main.cpp, and the problem went away. So, I figure this is more of a compiler issue than with my code, so I posted it here in the Tech Board.
But I just don't get it.......
Here is my code, just for you to observe:
Please do not criticize me for bad class design. This is a work in progress.
cPoint2D.h:
// cPoint2D - basic class for 2D points
#ifndef CPOINT2D_H
#define CPOINT2D_H
#include <iostream>
using namespace std;
template <class T>
class cPoint2D {
public:
T x, y;
T operator*(cPoint2D<T> &value);
cPoint2D<T> operator+(cPoint2D<T> *value);
void print();
};
typedef cPoint2D<float> cfloat2D;
typedef cPoint2D<int> cint2D;
#endif
Basically, the compiler doesn't cough on this; it's just whenever I try to call any of the member functions from cPoint2D that I get a linker error saying that the function is undefined.
cPoint2D.cpp:
// cPoint2D - basic class for 2D points
#include "cPoint2D.h"
template <class T>
T cPoint2D<T>::operator*(cPoint2D<T> &value) {
return x*(value->x) + y*(value->y);
}
template <class T>
cPoint2D<T> cPoint2D<T>::operator+(cPoint2D<T> *value) {
value->x += x;
value->y += y;
return value;
}
template <class T>
void cPoint2D<T>::print() {
cout << "cPoint2D: x=" << x << "; y=" << y << endl;
}
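For context (this note is not part of the archived thread): the undefined-symbol errors are the usual result of defining template member functions in a separate .cpp file; the definitions have to be visible in every translation unit that instantiates cPoint2D<T>, so the common fix is to keep them in the header (or #include the .cpp at the bottom of the header). A minimal sketch, which also uses '.' member access since the parameters are references:

// cPoint2D.h -- definitions kept with the declaration
#include <iostream>

template <class T>
class cPoint2D {
public:
    T x, y;

    // dot product
    T operator*(const cPoint2D<T> &value) const {
        return x * value.x + y * value.y;
    }

    void print() const {
        std::cout << "cPoint2D: x=" << x << "; y=" << y << std::endl;
    }
};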
Source: http://cboard.cprogramming.com/tech-board/81059-why-earth-xcode-%2A-%5E%25-acting-up.html
[Front-page masthead and advertising column, illegible in this scan apart from the dateline: Wednesday, January --, 188-.]
The action of the Marshal in putting a stop to the raffle or lottery which was to have been held the other evening deserves commendation. We do not think that the gentlemen, who for charitable objects got up the lottery, for a moment considered that they were breaking the law; laws have so often been winked at, and allowed to remain as a dead letter, that it was no wonder that they did not realize that what they were doing was illegal, and we sympathize with them for their trouble. We agree that the affair of Saturday night may be expected to have a wholesome effect in more ways than one. It has been too long the practice to condone the sins that one race is given to, while damning precisely similar sins in another race. We would like to see the Marshal strike higher, and put a stop to the gambling which is carried on, on by no means a light scale, not by Chinese, but by Anglo-Saxons. One or two lessons would be sufficient.
... will survive the present depression in the latter industry and rise to more solid success when they have finally distributed their eggs instead of keeping them all in one basket.
We hope our citizens are enjoying the condition of the roads just at present. They are really charming, and considering the amount of money that has been voted out of the tax payers' pockets to pay for them, the said tax payers must be thoroughly satisfied. We recommend anyone who would thoroughly enjoy a drive to take a turn on King street, and if he survives that, to try Punchbowl street, which is a morass, or School street, which is full of large pit falls. We have been in places where the roads are bad, but we think we have never been in a city where they are so systematically neglected as here. Sometimes the excuse is made that our climate is against the roads, that the rains injure them. We have no difficulties to fight against similar to those countries which have to contend against the destructive powers of frost and snow. Our streets are a positive disgrace to a civilized town. All owners of carriages are directly injured by the condition of them, for the wear and tear on these vehicles is greatly increased; the pedestrian is injured in the extra strain upon his boots; and when both carriage folk and pedestrians think what they have just paid out in taxes, they must really be delighted at the small value they receive for their dollars.
Tnx Hawaiian Government seem to lie
mighty leisurely over the Mflrv matter,
but from an article in the official organ it
eeems that they are not asleep, and that
the care of draughting a reply has lieen
handed over to the most able oi tue Jlin
iters. There is to be no answering the
nclv nnetion-. "inconsiderately.'' So far
the ftovernment are in the right: with
euch a weighty thing liefore them, it lio
iMMves them to take the greatest care lest
anotber false fteplie added to tho al
ready taken. It is to lie hoil however,
that "they will not delay too long and wear
out the iatience of the Government which
i nppoTtmg tue claims oi tue Mmimr.
As to the claim made bv the oriran. of
"patriotic conduct" on the part of tlio Ha
waiian Jlmi-ters, inilicwoni-01 tuor-asi-crn
prophet it is "lwsh." Tlieir motives
may have been well intention's!, but the
affair was condncted in a mot blundering
manner, and far from Iieinir 'patriotic
conduct,' it was Hilnudt-ringcondnct1 of
the CTosest kind. In the end the jikjai
landed her iiasM'ncers, as she had asked
to do m the hrst instance, and w lien landed
ther wTe far more dangerous to the health
of the community, and fnr more care was
required to eaTe the country from an epi
demic, than would baie lieen at first. No
1 letter evidence of this can be put in than
the opinion of the Port Physician, Doctor
J nnisean. tlad lus advice IH-en lollowetl.
the Mmlnr matter would never have as
sumed the prujiortions it has.
The French war in China seems to have arrived at a state when the least said is soonest mended, from a French point of view. The French have evidently got an affair on their hands which is harder to manage than they supposed. An intelligent Chinese gentleman, speaking of the war the other day, said the days when China was willing to pay a high price for the purpose of avoiding a war are passed. In the days of the Taeping rebellion it was unavoidable, but now that the Empire is at peace internally, he considered it in an almost impregnable position. "Let the French take a portion of our sea board," said he, "what can they do with it? They cannot maintain themselves there for long; it is too expensive and will bring them no advantage."

A letter recently received from Paris has this view of the question: "The Chinese have no reason to complain of the debates in the Chamber upon the war. It has been established that they were not responsible for the Langson blunder, that their publishing the text of the Tientsin Treaty in the Journal Officiel of Pekin was an earnest to seal peace, and the offer of three and a half millions of francs for the Langson slaughtering was not unreasonable." It is a sort of white elephant that Mons. Ferry has put on the hands of the French nation, and how to get out of the broil in the best way is what the rulers of that nation are puzzling their brains over. Hence the lack of news.
Though we have enough misery here in Hawaii, when we go below the surface and see the terrible distress caused by leprosy, yet we may consider ourselves blessed above many of those lands which are our superiors in wealth and population. Take this picture from New York as given by the Tribune; the night alluded to is December 30th:

"At the New street station thirty men and women were lodged, and others were turned away, there being no room for them. At the Elizabeth street station the room for the lodgers was limited. At the Oak street station the same. The sergeant in charge preferred to let them huddle like sardines rather than turn them out into the bitter night. One hundred and ninety men were sleeping there at ten o'clock, of all sorts and kinds, some respectably dressed and almost shoeless, rags scarcely covering their shivering limbs. On all faces was that hunted look which told that the grim wolf of poverty, hunger and sin was following their footsteps night and day."
A private letter from Liverpool speaks of "a great deal of distress this winter," and the writer thinks the depth of misery has not been reached. "Business is so bad that there are immense numbers of people out of employment, and although everything is very cheap, what is the use if people have no money to buy with?" The same letter states that on the east coast of England they are worse off than around Liverpool. In all the board schools they are giving the children dinners, and many of them get nothing else to eat!

Depression is coming upon us rapidly, and there will doubtless be discomfort in many a modest household here before the present crisis is over, but we thank Heaven, fervently, that the want and misery which comes elsewhere is unknown here, and can hardly by any combination of circumstances come about. Truly in this is Hawaii blessed.
tm take pari. An Immetiae
jih IhhI kem ereelMl la front f Kaalmmaav
Qliili la Wwa.lfeLft. and m prrfarei
tarK. 1W aleaner afalW- IwoQEbl an of
UHfMaMMraaaalleaeaera fnwa MokAal annitwr-iwawlaaatiwia-alatirT
cam la whale UaUa. The
ft IT la i'iK.la Hie KbaJl t ma liana and 11a
ffW Hie erJeaw of UaikB, lkau and Makawao
wMM-aiwelna. 11 ta Iteaaflit tbere were 1JW
Uauctwa api were WaslJnl fur lie the rratdeats
fwafaLa. Ssjmm partara entertauied A) and 2
aswall eaww 1 ttewr tinan n. loeetaUalhNl liecaa
a rta m rV ll) and csmtinoed w:
vtaa a rwaerltom was talen ap to defraj- ex.
Igm mm ii la KN- rewwh rtfj. llde&llr llls-rallt waa
tweawsW XUe a? all rlawva threa in I reelf
'lMllM"r- - xajMd ta -par vS atl tudalxed.
aa 3ta anatatnationa rf xiie dereat arlanla
tat -twear IHSHUli aaa-ttiletaerita were frj
all ilailatj la the eatxamttee anvanied to aoree.
ifJfaaJ aaad eajuar eumaj-rrs were rere acreraUv
aiiwliiwini laate the pruaUtDje wuh wturheaeh
eesaww awswerr tbe wscmwi propiainded. lliee
1 laaaajlai iillini nanilliii I n an 1 1 1 1 anil lln ii
eawj w nwrwtnaa warm Bnan. Saae f the acbnola
iaaaateaVawI rWleus from the lale-naUanal
lioa aewwawaaad were welt tawled.
11-awTaliad fallowed thru-wan plan wfatadr and
aawal aayiatiw ta-iwupt and well drilled the
I uiaiill wewad not deride that any cn sdaaa
tbr iiB. ItlslbeDrauKwiiif Ibewnt.
S laSi' ilU xkmw an uaiu anwwrvd lo
TwaelerM and eclaaars were re.
ina-haMy wait dreed and the dl-wdae wa iim
laal mml la ! I Ima I a ifl i it TIk pravMere of the
Ii i aVwis the V wmjtatalatewca nf the eiwuniaai
hv. "twwaadwwwl Xaahaaiana t'hiLrch bad wade
wawwJaalcadabaah ef tl Mwrra whieh were aiane.
laewe wwre few! In Unel.a. ntMHern laics, axal
wnWtt bamaa aaT w sswaasjned at the feast so
taf..lii iiat i thai all liaj entttogpt to eal. ll.K.
i.XalBoLalaw si 4a ttfTUad I rota lloawlala tw at.
ileatd, auad ra?etbia- wath a rwlatv f Hawaiian
feoaea craeed tlae daw- dance the wbulrdae. lu
aawwwutc awiecteana Isn racii vt the school
anaavaiin-nrlncatMtaseallathebatb wtuchwaf
SilWai t rrawrteaa.
We ran aay tliat "SYw 1 ear IHj a -was kept in
Xtsilaka ia araoa enliiteJ riiirciia raaa.
wiefl. Tliere warre arrra IsalilMtti ca,e4a f ra
It 1 1 Lai and taaasty f rout the dlSVrrlU wuiasea m
.'Taaan. CarneaT theta weasoojed the wadmaoe with
m Happr 3ear aear aadtbe writer for hinewU and
for tar awdirmr nsaro ano beartU, a -Hap,
"Sew lear to tlaeaa andtiop that taejtrray all
fawe a Baslr La rem a annre omnd rennaan aa
3aw Tver' l-wai. latOTTrt, JiKn
"attaLw. Jan. -aa.
WrKlalej- Urew-tlntC
TW -ataatancton cwrrrataaadent of the f 7rcreW
JailarwTaUngtnaAexdleaa aOfWalier xb, enrals
-aa (aOaara aat tae lawtnar at the L'nlted NtateafVaa.
a!talfcrxtT
Hajar "aklunleT. ltaintuireWlert to the
Wawl laawaas, wwa em thrSoor wdil and rereiredtb
cirwwTal&laUtara ucLla Ira-iad Ihe LandahaatBca
1 - 1 ij
ana-aavaiai. the aweanbera saT both rsdrieal paruea
HSir"r"' aaw, aelb, XaraaWaB aif f HArt." It
will la lana iiilniial that he was wnaeated daiics
"tar latter pszt nf the laat wewaon wf the Hosae on
awxrerr partiaan crowxats. wndjTarlur on that
iwiwiiiai iwaili the praxlicrjoe in arordi acbatastt.
aj HiaX "Vaamxaujawtaoaiwicns La will rrlnra
wMy art as aanxxont TOO la the face of tlm
ijerrlx fn?rec it" i imc rrrjrt.
The silver question remains in statu quo, but there can be no doubt of the ultimate result: the silver will have to come down to its bullion value, or a large portion of it will have to be sent out of the country. In the United States, as has been pointed out by one of the ablest trade journals, silver is an "unwelcome commodity which the people with great unanimity and constancy continue to reject." Here in Hawaii, with a population of say 75,000 people and a million Kalakaua dollars, the amount of silver is about $13 per capita. No country can stand such an amount of this coin. It is easy to see that even now the drain has commenced on the gold, and it is not difficult to predict that in a given time, if the Government keeps delaying action in the matter, there will be none left, and exchange will run up to any figure under 20 per cent that the holders may choose to ask.

All delays are dangerous, but delay in such an affair is fatal. While the Government is playing fast and loose with the question, the fabric of financial prosperity is being undermined, and is likely to come crashing about our ears. In this view of the situation we are but voicing the opinions of the best financial men here. If the Government do not act for themselves, the silver question will find its own solution: gold will go, prices will advance, and great misfortune and discomfort will fall on those members of the community who are least able to bear it, namely the mechanics and those who have to live upon wages or small salaries.
The year has commenced with anything but a pleasant aspect. Retrenchment is the order of the day and every one begins to feel the pressure. Men who heretofore have been sanguine begin to take a very gloomy view of the future and prophesy many evil things that will befall us soon. It is best, however, to look at the bright side of the medal rather than the dull one. If disaster does come upon the country, it may reel under the blow for a moment, but we have that confidence in its natural resources, and in the energy of the men who are doing their utmost to develop them, that recovery will be quick. More than ever should our agriculturists now look out for fresh crops and not stake all, as has been done in the past, upon one staple, sugar. The reward from this industry has been so handsome, the fortunes realized from it have been so quickly made, that it is no wonder that it has engrossed the capital, the time and the brains of the community. But the days of sugar paying as handsome a profit as formerly seem to be numbered, and nothing is left but to turn to something else. Because sugar ceases to pay, are our fertile valleys and plains to relapse into their pristine luxuriance of fern and brush wood, yielding little or nothing for the use of man? We do not think so. We believe, for instance, that there is a future for us in fruit culture. We hope a great deal from Ramie cultivation, and we look forward to our being able to raise some of our own needed supplies, for which at present we are entirely dependent on the Coast. The islands survived the loss of the whaling industry and have thriven well upon sugar; they will survive the present depression in the latter industry and rise to more solid success when they have finally distributed their eggs instead of keeping them all in one basket.
Da. X. B. Eveksox contributes a very
readable Hawaiian Legend, or. more prop
erly a folk-story of tho fabulous order, to
tue JIatriHiajk Aimahae ana .innvw, just
issued by Mr. T. G. Thrum. While tho
Jiwirfe iq rhnrnrterislie of the earlv legends
of tho group, it is to be regretted that it
has been so shaied as to seriously distort
tho history oi the islands, and set at deii
ance tho accepted chronological lino of the
Jungs ol Ualiu.
Tn the stnrv Kanlii rebels against Kaku
hihowa, the King of Oaliu, and is finally
conqnerd, and then Kakuhihewa resigns
the Kingdom to tho vntttlrd hero of the
tale, Kalelealnaka. All this is a perversion
of history. Xo such king as tho latter
ever ruled over Oalin. Kakuhehewa was
ono of tho most distinguished and in
teUiirent of tho earlv sovereigns of Oahu.
The island wasnnited under his reign, and
his court was noted for its brilliancy. He
was contemporaneous with tho chivalric
Lonoikamakahiki, king of Hawaii, and was
visited by tho latter after tho difficulty
with his Queen had driven him into exile.
Ho lived to an old age, and was duly snc
cceded nv nis son.
As for tho rebel Kaulii. who is made to
marry the two daughters of Kakuhihewa,
he wits the great grandson of that sove
reign, and was also distinguished as one
of tho most warlike and encrcetic ol tue
early Oahuan Kings. Tho Government
Ol luu i:iauu ll IIUII,-,, lUluvl Iim Uln",
and it is not improbable that he extended
his foreign conquests to a portion of tho
island of Kauai. He lived to an extrcmo
old age, and when ho died the trusted
custodian of his bones pounded them to
flonr, mixed tho mass with invited
the chiefs to n feast, and thus nnnrned
the bones of his royal master in a hun
ilred living sepulchres.
This story is well known, and such nn
neccessary perversions of history as are
found in the leirend nnder notice are cal
culated to mislead thoso who aro seeking
to unravel the snarl of early Hawaiian
history,
The table of the principal exports of the
Hawaiian Islands for tho year ending De
cember 31st, l&W, is published in our com
mercial column, lho hirures standing:
$7TrlQSJ32, against 7,'J21,727.11, or a
gain of ?."3,181.71. Considering that some
portion of the goods may havo been in
voiced too high, at least by some firms, and
tho stnallness of tho apparent advance in
tho value of our exports, tho chances aro
tliat the cxiwrts havo really fallen lielow
tho value of tho former year. Yet if we
look at tho amount of our trreat staplo
which has been turned out, them has been
n largo increase: 11.273 tons of sugar over
and abovo what wc produced last year is
no mean increase. Had prices been any
way favorable, in spito of tho decrease in
paddy and nee, instead oi such a paltry
increase ns S53.1SL7L there would havo
1 uui, jm incnuwint . wiuil lf a-onsiniy-
froin one hall to tureo quarters ol a mil
lion ol dollars: tnodiUerenco iielween pay
ing no dividend as has been tho caso with
some plantations, and making a handsome
profit
To form some idea of tho lo-s entailed
by the present depression, let us como
down from the treneral to the particular:
n rougli estimate lias uecu giieu oi iuu
loss on tho country at large. It will bo
probably more easy to realizo if wo take a
couple of samplo plantations. On one
which wo have information of tho differ
ence in the price of suirar has rnado a dif
ference in tho returns of S&U000. On an
other the difference has lieen $50,000. In
tho latter instance, the plantation, after a
year's work paid its expenses, and left just
M,U"W to bo divided among tno stocK
holders. Tho plantation is n rrood sized
ono and has cost between 500,000 and
JGOO,000 to start. The return on tho money
invested has been about .OOG or three fifths
of ono per cent. It is plain to every one
that at such returns sugar cannot long bo
irofitably cultivated. This year tho loss
las come upon tho capitalist: tho laborer
nuu tue nmzau tins rvceitisi ins iuu ikij :
but such a stato of thinss cannot continue,
nnd the waces of tho laboring classes will
fail, and this by no action.of planters and
capitalists, but bv the national cause of
capital going ont of tho business and so
contracting iuu amount oi laour requireo,
ize how great is their responsibility Lep
rosy has been used as n political engine,
both by the President of tho Board of
Health, and wc are told, by the King him
self. It is said and as far as it has been
possible for our researches to go, we have
lOllOWOU up me cviueuin. it is sum, mm
fne Hie alre of riminfT nonnlaritr lho law
of segregation which stands on our statute
book has been defied, and wo must say we
believe tho statement to be true. Last
Satnrday we learned of a leprous China
man who was offering money to bo allowed
to stay out tlireo months longer scventy
fivo dollars was the sum mentioned. His
efforts were unavailing, we are told, and he
was sent to Kakaako. Good. Whence did
ho get lho idea that by means oi bribery ho
could save himself I That is a matter to
which at pre-ent we can givo no accurate
solution, any more than wc can givo one
to the conduct of anotlier son oi tno
Flowery Kingdom who wanted to get gold
from tho Custom House at tho timo that
exchango was so high. His demand was
refused, naturally, but ho dropped a re
mark that ho could tret it at -tho Foreign
Office. Enauirr at tho Treasury Depart
incnt shows that tho Foreign Office did
draw considerabe gold at that time, but it
was to save exchango in paying onr envoys
abroad.
Wo throw asido tho suspicions which two
such statements by these Chinese would
naturally call up in tho minds of thinking
men. Wo will frankly say that wo don't
believo tliat tlie v.uinaman at tlie uustom
Hons had any tangible ground for what
he stated. A hat, however, must bo tho
estimato these men pnt upon tho Gov
erntnent of this country? Tho Chinese
are a shrewd people, and they can sco as
quickly, aye, nnd more quickly than most
peoplo into the motives of thoscih author
lty over thera. AVo regard tho offer made
bv that miserable Chinaman as simply
tho result of study on his part. 'Peoplo
are let ofF said he. Arguing by his lights
and with a knowledgo of the venality of
Eastern officials, ho could seo but one rea
son why thev wero let eo. and he mado his
liltlo oner, lie did not understand mat
motives other than money could influence
officials. Wo know Iictter, we know 'hat
the motives of this mistaken leniency havo
been dictited purely by a hunger for pop
ularity. But seo to what a position tho
nation has been brought On tho ono
hand charged with a corrupting inflnence
upon all who come hero: on tho other bo
lieved to 1 ready to contimio corrupting
for tho payment of money, it is timo mat
wo should "reform it altogether." Let
our authorities do their duty in the mat
tor of leprosy, and unpopular as they are.
Tcnal and corrupt as wo believe them to
be, thev will cam for themselves credit
and will condone for many of their sins.
The statements of Judge Davidson were highly colored generalities; we wish the Government had given us the power to brand them as downright falsehoods.

The sweeping denunciation of Hawaii by Mr. Davidson is one of those half truths which is so hard to meet. As Tennyson puts it:

"A lie which is half a truth is ever the blackest of lies,
A lie which is all a lie may be met and fought outright,
But a lie which is part a truth is a harder matter to fight."
That leprosy is ono of tho crealest evils
that this unfortunate nation is suffering
from cannot bo denied. That it claims its
victims from young and old alike cannot
be denied: but that the casual stranger
coming into our midst is likely to contract
the disease, as long as he does notwilfnlly
run into ilnnger, does not court tlie disease
by immoral conduct is absolutely absurd.
There is no fear of Leprosy to tuosewho
behave themselves properly, sleep in de
cent houses, mingle not too familiarly with
thoso who may havo tho disease. Leprosy
as far as we know, is not a diseaso liko
small pox or cholera, to lc caught by the
mere contact, or tho mere fact of lieing in
the room with tho unfortunate sufferer.
Wc may safely say that no passing stranger,
that no visitor, save of his own immorality,
has carried from here tho seeds of this
disease in his system. We seo so many
wuo nave attended to and mingled Willi
lepers passing scot tree, and our cxion
enco extends over a period of seventeen
years, tuat we leei mat mere is no neces
sity for such a statement as Jiultro David
son made; no good ground for an argu
mcnt on tho premises as ho suited it
On the other hand wo who live here, who
have our children to bring up and educate
here, have a clear right to demand that all
chance of such talk as Judge Davidson
indulged in should be removed. If the
Government had but done their duty, ii
thev had showed in any shape or manner
. i , . . . , i . ,
itieir ui?-iiv to assist iuu cuuuuuuuy, micu
charges could not be brought, and we
should not be under Uionecessity ol held
ingthis halt truth. In tho present state
of medical knowledge of this disease, one
thing, and ono only, gives perfect safety
to a community. That thing is strict seg
regation. Were segregation thoroughly
carried out, we might have given tho lie
direct to me statement maae in oan dose.
As matters now stand ue cannot do so.
The Board of Health is criminally lax in
this matter. It is a question to bo grap
pled with in no dilletantio fashion, it is one
to be met and fought with ungloved hands.
Neither words nor phrases can explain it
away. Leprosy is here. Leprosy is made
a handle by vihich the good name of this
country is made to stink in the nostrils of
every ono who hears of Hawaii nei. It is
the duty of those who arc in chta-geof the
Government of this country to so meet
tho question that there can be no trouble
for those in charge of the public press of
this country to refute an accusation such
as that of J udge Davidson. On the heads
of those who pretend to administer our
affairs lies the blame in this matter.
Unfortunately the men at the head of
affairs do not know their duty, do not real
NOTES.
Tiie Knuxror of German recent I r subjected
himself to the thoaiht-KR(linfi" power of fatnart
Cumberland, liis Majesty thought of tbc
rear oi dm coronnuon ns runt; oi rnmsin, anu
Mr. camber. and wrote it at tue nrit imen.pt.
How oon will onr cinchona trees be ready we
ondf-r? The Drico of nnininc is mine rsDidlv
nnd there is quite an excitement in the drag trade.
The Kc Yorfc 7WtffMe says the rise is not dae to
pccnlatire purchase, bat to n falling off in sup
ply, ii me trees m uavaii wereomy ieaay, now
won u ue .1 nne iiroo 10 nni meir usric on mo mt-
Glaxcixo oyer a child's boot the other div en
titled n.e dreadfal cotuin" we were snnried
to find Rlletton in one of the conversations to tho
Rconrce of Ientwr on these Hlanda. Frit her I).ira-
ienit great twlf sacrifice is highly praied and the
rctnnrk mado that by "isolating the lepers, it is
boned to stamn ont the disease." (joroinc ncros.4
tots in inch n form caTe one rather a shock.
There was a time when these islamlswere associat
ed in the child mind with the death of a cclebra
ed navigator. Now they are beng associated with
nureaa aiscase onia 10 ncavcn we coaiu reiaic
the statement!
Tdeee is nothin" like a rleardefinition. Evola
tion is rightly termed "a change from nn indefinite
incoherent homogeneity to a definite coherent het
erogenlty, throngh oontinnoos differentiations and
integrations." For the benefit of the moderately
edncated, CUautlei Jtmrnal translates the above
into, "tvoiotion u a cuanga irora no-nowiu on-talk-aboatable
alMiLenens to a dorae-howish and
in-ceneral talL-abontable not-at-all-likenesn. bv
continioaB something-ebeiGcations nnd stiet-to-gttherations."
To all of which wo modestly call
the attention of Mr. Castle when neit he writes
Wfran the BObjectof evolntion.
F. Gebicit A Co- in their circalar of November
23. reported the fines r market as showing a better
tone early in the week, bat tlifc improvement was
not 01 long do ration, as tue market relapsed, into
dullness, with prices rolimr the sanie as the pro
ceeding week. In cane sugars Demarara crj suls
suowea a angm improvement, wuu increasea
sales, bnt nnces at the close of the week showed
preceeuing twk. aii ouer Rons were not mncn
inqoired for. In tha market for beet-root snsar
prices gradually advanced, an active demand pre
vailing in me, earner part of tbe wtck, iuia
good condition did not last long, and price re
ceded, closing lower thin tbe nrevion? week The
prices for unstoved goods advanced early in the
weeK. ddi recoueu ana cioteu at tiie proceed in
week a prices.
Loud Texnibox has been no idle man since bis
elevation to the peerage as the followlug extract
mill shew, as no great interval bis clupsed since
he published his last play, lie hat aHo written
Fome short poems, onoofTihich appeared in the
nizLTTe of last week. An exchange says ; Lord
Tennyson's new drama, " 1 horn as a Heckct," which
baa appeared, is dedicated to Karl Kelbornc, Lord
High Chancellor. In the preface tho poet 8aysthe
work fs not intended in its prnfteut form tomrtt
the exigencies of the modern theater. The drama
is too long for acting. The two principal scenes
are a visit of Queen Eleanor to ltosamund and the
murder of a IteckeL Koiamnnd is Enmmoned to
choose death by poison or stabbing,' and rejects
both. The Queen is theu about to rtnb her. when
a ItecLrt appears upon the scene in time to prevent
the deed, lie upbraids Ihe Queen and advises her
to retire to a cement. Jiosamaiid is filled with
gratitude for her rescue, and attempts to rescue
Jtecket front the men who are sworn to murder
bim. After a Jtecket had been murdered itosa-
mund if found kneeling over his corpse in a catbe-
1 Jeadst e fxrs has the followin lelative to tLe
Itritisb Shipping industry. It says that: In a Ice-
iuid at Xaatrrfniui rtt'niij aji.n.aiiiiiucia in con
trastincthe British mercantile marine with that
of other countries stated that one-third of the
tonnage ol tue world was nnd. r ti e Jlntiu Hag,
the English having doubted their tonnae within
the lata twenty) ears. The French had doubled
their slm five rear, but he did not think nc-
land need have fear of French competition, though
nnder their I want v sv-teni thev wero makmr cood
progress. The English commerce had held its
own and employed IW.OOO seamen, U0.O00 mater
nnu ai,wu ocruneii omccrs. iueimcccisol Unttsh
commerce was due ton variety of cause, the one
being the predominance of its naval "j-ower. If
tlie working of this great mercantile machinery be
forced into other gronves parsed from the hand, of
those by which it bad attained its present great
ness nnd efficiency, or te hampered by official
clogging or red taneism. as had been the caso with
America. Holland, Spain an J France, then let
.Uagiand prepare tor mat decline wuicu sometimes
v-emeu tue ioreuouing 01 many 01 ner wise men.
Hetboucht the improvement of Khionim? should
be left to the improving ftptrit of tho age rather
man permit micrierence wit 11 tee luventtve genius
01 tue nation oy too muca government supervi
sion through hard nnd fust rules of construction.
McStarey, a prominent carriage manufacturer
ol iwiiuiguam, England, ma series of useful hict
on their preservation, savs tint a carriage should be
kept in an air-tight coach-house, with a moderate
amount 01 iigut, otherwise tue colors will be des
troyed. There p hould be no communication between
the stable and the cor.ch-housc. Themanureheap
w i'u ruuuiu JiaiHj ue krnb at inr nvray as possioie.
Ammonia cracks varnish, and fade the colors both
of painting and lining. A carriage should never,
nnder any circumstances, be put away dirty. In
washing a carriage, keep ont of the sun. and have
the lever end of the 'ita.-tt.n rovrrivl with lcnlhpp.
Vw plenty of water, which apply (where practicable-,
with a hose or syringe, taking care that the
watrr Uaii.t driven into tbe body to the injur) nf
the lining. When forced water it not attainable,
use for the body a Urge, soft sponge. This, when
Maturated, squeeze over the rrinels. and by the
flow down of the water the dirt will soften and
harmlessly run nil; then finish with a soft chamois
leather and oil ilk hanJLerchler. The fume re
marks apply to tbe underworks and wheel, except
that when ihemod is well soaked. & aoft mop,
free frnra auy hard substance in the head, may be
nsod. Never use a "spoke bruh" which, in con
junction with the grit from the road, acts like
.tand-piperon the varnish, scratching it, and of
course removing all gloss. Never allow water to
dry itself on the Caunage, as it invariably leaves
stains. Ho careful to greao the bearings of the
fore carriage so as to allow it to turn freely. Ex.
mine a carriage occasionally, and whenever a
bolt or slip appears to be getting looje. tighten it
up with a wrench, and always lute littte repairs
done at once. Never drai out or back a carnage
into a coach-hoc with the bonnes attached, as
more accidents occur from this than from any
other cause.
wiiHnincrtassnf nrly a million and a quart- !
er gallons over 13S3, It follows, therefore, that '
we muit be dependent upon foreign countries for
tbe bulk of our sugar, and should the demand In
crease in the mdm ratio ia this and subaequent
years a in ine pv, a very tenons economic ques
tion is riK- on the batis of international ex-
rhf.nr-fiA. This ii obvioui from tha faet that mr
chief sources of supply are countries which buy
tctt utile in mam irom us. it loiiows, there
fore, that sound policy would suggest changes iu
this respect beneficial to this country. In other
word, the preference should be given to countries
which take American manufactures in exchango
for their own raw products. This U the policy
which England has success fullyparsi led. It ad
mits free of duty all raw products of manufacture
thus creating a market for these, and simulta
nNmslj alfo a demand for English manufactures.
n nere b nuu-uiouuiiicauiiag roonirj kui iu raw
products Itbere also will it buy its supply of ma
nufactured articles, H will not tie its money in
the mouth of its sack and earrv it to some other
country to spend it. This policy i what has giv
en Great Hntain, the manufacturing and trading
supremacv among ue nations. lAnn power pre
vents tbe Umted States from challenging Hnush
manufactures in tbe open marketa of tbe world,
but a new departure, looking to the same end, has
been made by the negotiation of commercial
treaties which will increase foreign purchases con
siderably. It ii not the purponeof this article to
discuss the policy of commercial treaties: their
drift in thh cae is only alluded to. and that drift
is dearly in the aiircuou or tree trade, under the
miv of commercial reciprocitr. with sugar-pro
ducing oountne.
We bare had ample experience ol the working
of the Hawaiian treaty to be able to estimate pret
ty accurately now ine Mexican and bpnni"ti
Indian treaties are likely to work, in 1S75, tho
vear before tho Hawaiian treat v went into effect,
our total trade with that country was $ 1,S),1V5
hich was about an average of the eeven previous
.tars. In ISS. seven foil vears under thn treat v.
tbe trade had increased to (12,01 IJBS. Mil this
was not alL The San Francisco AffrrJUnf shows
that about $.tW0,000 were expended in this coun
try on account 01 Hawaiian planters and trauo in
thebuildineandeauimiincot wood and iron sail
strain vessels and tho manufacture of iron pipes
lor irrigation in tne loiana. 1 no iron vessels were
built at rhiladelphia, by Cramp A Son; tho
wooden ones w-ere built and equipped on the
Pacific coast. A sngat refinerv at San Franci-o,
costing considerably over I.OiO.OiO. likewise re
united from tho Hawaiian treaty. These aro alt
rrnrudactive investments entirelv outsiJe com
mercial exchanges, and man bo credited to the
treatr. About m.U-0.000 of Amencnn capital has
also been invested in the inland i, although it U to
be regretted that tue Hawaiian ?ugar uompany
wis not a soccers. Itut tue tauil I lei entireir
with the manacentent. It follows, therefore, that
a similar treaty with Mexico and Spanish Wert
indies tiuouiu create a uenianu lor tue pnNiucis 01
American labor, and thus develop new and im
portant channel of profit to American labor nnd
capital. That England anticipates this i appir-
eut irom ner anxiety to negoiiaie a similar ircaiv
in favor of her Went Indian possessions, which
would otherwise be cut off from a market. Tins
will more fully appear if the condition of tbe
Euroiauansui-arsunnlv be considered.
lue consumption 01 canenugnr in Europe 1 or
the three vears ended March 31. re-inectivelv, nc
cording 10 nrnrCaur, an English authoritvon tho
subject. was,iulS$l, 2,m tons; 19S2, 2 llV. tons;
isj, zm tons and tue total mock in tue cuiei
markets of Europe on March 31, lNS3,was TfH ton.
Itut on tho 17th of Mar, 1-t, the Imports of sucar
by England to date exceeded thtne of 1SS3 by
4u.i tons, w line uie uenverie-; snoweu a uecreaso 01
3,701 tons. Tho fctocl( iu the I.ritih markets nn
Aiay 17 vi imn year was nim tons, against -i i,
Gil tons in 1SS3 and 217,7tr tons in 1SS This ac
cumulation of sugar in England, and the largo
production of beet sugar on tho European conti
nent, sufficiently account for tho depren-ed state
of the eucar market. "The consumption on the
European continent is estimated at about 2,T00,tt)
tons of sugar in lf$t-Si, and there Is a vinble pro
duct of beet sugar this season of 2T.'W."ilCX)0 tons, so
that cane sugar win not nave mncn sale lucre, 11
nnv. this season. Germany will have a surplus of
about S00.000 ton?. Hot the area of sugar-cane
cultivation U rapidly extending in the South Paci
fic New South Wales and Queensland mado M,
871 tons of cane sugar this Reason, the Australian
con samp; ion being aDout uutuuu tons per annum.
Fiji is turning out large quantities of cane sugar,
and plantations Are projected in Samoa. It is
evident from these considerations that the price of
sugar will decline, although consumption may in
creaio propoitionately. The consumption per head
in the United States and Australia i-t about 70
pounds per annum. Mr. Gladstone anticipates
that the consumption of sugar In England will vet
reacn iu pounas per ueau ns society auvanccs.
The Sucv Trade.
The following letter appeared in tbe latest num
ber of Bratlttrtrt' and it well worth the while of
the perusal of all thoe interested in the Sagar
Industry.
The consumption of sugar ner capita of popula
tion is greater In tho United States than in any
other country of the world. It is largely used aa
an article of food, and in a variety of ways in
manufactures. One would naturally expect that
this country, from it& vast extent of territory
available for sugar-raising, would be comparatively
self -Kostaining, but such is not the fact. The home
supply ia not a very important factor in the case.
Louisiana is the great eugar-growing section, acd
its average vjcld per annum tn tbe past three yeara
has been 2i33US,a32 pound . Boctsurar is an in
significant item, the Alrarado factory in California
being the only ona now ia operation in the coun
try. Professor W. II. Wfley, Chemist of the Na
tional Department ol Agriculture, states in his tut
report that the outlook is more hopeful for the
sugar beet than for sorghum, although the depart
ment is horWnl of the final sacccss of bo'h. It is
veil to cherish hope, but the promise of Mcees
in this department is exceedingly remote. The
) ield ft Borghum sugar this year will not exceed
l.UOG0 pounds, and of eyrup about 30,000,000
gallons.
The total importation last fiscal ear was 2,733,
tlfs W7 pounds, of which I2iIi8,G. 1 pounds cane
from Hawaii on tbe free list, Thiswasanin
create over the imports of 1S5 of GlSiD.1,773
Xauods, or nearly five years Hawaiian supply and
two ami half years LousUna supply in the
pat4 year we likewise imported 34J2S,G4t gallons
mousses, of which 163,31 gallons came from Ha-
Sorghum Sugar.
At the Fourth Annual Convention of Fruit
Growers, held lately at San Jose California, Dr.
v. uauuen 01 axa jrranciico dj special request
gave his views on tbe manufacture of Sorghum.
Dr. Hidden: Sir, Chairman and gentlemen of
the Convention: Sncar makim? in tho United
States is destined to bo one of the leading indus
tries and its profit when properly managed will be
greater than any of the leading staple crop now
grown in the country. In the btate of California
we have not only the climate and soil for the culti
vation of tho finar cane, in certain sections, espe
cially Southern California, but in the whole state,
the soil and climate is better adapted for tho culti
vation of the sorghum than i a any Stato of the
Union: but the creat drawback to the raisineof
the Borghum in the nU ha-t not only teen for the
wnu.ui ci.irucD iniia cumuii) aim 1110 uuw-
lcdccof the making of bujarnr sirup from the
aame, but owing to the large importations f raw
sugar from other countries ainouutiug to hundred
of millions of dollars annually. Of course the
consumption of sugar and Evrup is large, and the
question has of ten como to me. "Why should wo
import into this country a commodity lu such
common use, when wo have a climate and soil not
only capable of producing suthcienl for home con
sumption but for exportation, if necessary r Then,
again, it i an industry which can bo made profita
ble, Tbe cultivation of sorghum and tho prodnc
tion of suzar. therefore, though still in an expert
mental stage, is receiving great attcntiou, sud I
trust the timo U not far distant when each and
everv owner of a small farm will be so situated
lat Uai fan mil raily roaluoo, lut inmufiictaro nt
11 small coat, all the sugar be need, enongh and to
fpare, especially in the State of California, for it
is well known that nnd land capable of producing
corn or wheat will produce sorghum. Then the
question conies, what vanety of sorghum Ubest,
nnd that which contains the most saccharine mat
ter. I can only say, when Gen. 1 Dao was at tho
head of the Iicptrtment of Agnculturo at Wath
ington, D. Xf lie gave tbe subject of sugar syrup
a lhoroagh invcstimtion. and tho result wai nuitt
suflicicnt to extablioh tho fact that both sugar and
roolaises of a superior quality dwelt to a largo de
giro in the sorghum plant, and as a consequence a
new departure has been taken; and to-day sagar
and molassei of a very superior quality, having a
groat commercial value, iinn ctablished fuel, and
will warrant thecmp'.oymeDt of energy and capital.
I shall not enter into any detail of each variety
imported into tin countrvl but will I onto others.
who have devoted more tune than 1 have to its
cult a re, to immrl information on that subiect.
My nbj'-ct in spunking to-night H fimpl? to slate
a tact mat sugar aca syrup can o, mauuiacturcd
from tbe sorghum plant, and tho process is ho
simple tint nnvone who is interested iu the wel
fare of a State, and subsUIs upon the productions
of tho soil, ought not to pay a duty on any article
that so witty of culture And so profitable to the
prouueer. ivcn tuose wuo no not wi-n to manu
facture the buznr. certainlv can make the by run:
nnd in all fruit proving districts, where it is well
knonn so much is wasted for want of a market
profitable for its sale. With tho cnltuntion of tho
sorghum nnd the manufacture of the syrup, all
fruits that am now wasted could bo made into
preserves and would nlwava find a ready market
ana csiawisu an ituiu-try mat would give employ
ment to everv hona.hoId in the rural dutrii-tn-
Frora experiment. I luade when In Georgia, in
the manufacture of Bugar from tho sorghum
plint, I found it necetmry first to filter the juice
aa prcadcd from the cane, and by oxidization du
ring the filter-! ion, by the action of certain gasca,
to neutralize tho aciJa and pemhable matter
which all cane juices contain, takiug cam not to
have it exposed to the air as eiposuro destroys its
crytlallizablc properties to a great extent. Hence,
it ront lo conveyed from the prets to the filter
and be in an enclosed or hermetically pealed re
ceiver; from there it can ba run into the. evapora
tor and then reduced into tbe form of syrup or
sugar, as suits the fancy of the raanuNctorer. Of
course there are many improvements in the means
employed aa regards evaporators, but at somo fu
ture period I intend to practically demonstrate be
fore this Convention of fruit grow era, tho process
by which all vegetable juices can be decolorized
and clarified, especially that of the sorghum plant
so that tbe Eyrup and sngir obtained therefrom
can be made light in color and retain all Its sac
charine properties and be far sweeter than the re
fined Bugars of the present day. The necessity of
importing raw sugnr for tho purpose of refining
it ana selling 11 at a great prout, will soon be
thing of the past, I front.
1XT11K CIKtTITCOUKTOK TIIK
JL M Judicial Circuit of the IUnaiUa Klrlm
K.LAKAU.,nytheiiraceI tied. rite llaaUa
Islands, htxa
TnJXU. II. rElLEM.Mamhalor Thr Klard,
or his Deputy la tha 3d Judicial Clrcait. - (taiiTiao:
Yon are hrtrby rommaaded to innoa T. 9. FICK
AIID Defendant, tn case he shall tie written answer
it itta twenty day a tier semce aerrrr. i t aad
tpear oeiorvine uircuu voanii lae ffrpiemorr Term
lloue. Valohlan, la the Island of Hawaii, en TlU'Ks
IAY,the 4th day of September ant. illv eloeh a.
toshowranse woylheclalmof EMMA IL FICKAKD.
1-uiaiiB.sQoq'a not oe aw arum n"f pursuant to tae
tenor of her annexed, prtiliao.
And hate yon then there this nt. with fall return
of jour prneeedlnsatbereoa.
(eal) af oar Supreme Court, thu Sad dar r
May, A.D.
M , Ditiit roaran. Clerk r Cirrnlt Coart.
To which Summons the Xarhal mad the follewta-
retnra:
llarlntr tnadedill!entseareb fattt.a.ithLti mmiHBi
T. f. llckard, he la not to he fonnd la the KlatrdiMn.
1 do herer.y return the ummoaa not mnl this ita
oay oi uecemoer, iw u
r.u. ii. sui'LU, jiaraal
Iherehreertlmhat the wtiktn and (HMniBu .
true atd faithful copy of th" ot1;1d1 .ummoaa Uae4
la the llhcl for Divorce Emma R. Firbard t. T. I.
I1ckard,andalsoof the Marshals return thrtvtn, and
that la the meantime, an attested copy of tar alt
summons be prtated as pretlhd by tne SUtae. re
qulriaz the said respondeni to answer at tht said Srp
I ember term.
In witness whereof. I haw bcrennto eel my
hind this 11th day of December. A. D.IKM.
DANIEL lDlTTKC
toil 6t Clerk 3d Judicial Circuit Cwntt. ItawalL
tx TiiKcnicuiT ronrrorTiiK
X 3d Jadlrifl Circuit of the HawatUn Kingdom.
KAL.KAUA. br tha Grace nf t,oduf the I Una i tan
I stand. Kn:
TJ. II. SOI'EIL Em-Sarsha f 'h- Kla-das.
hl Deputy UatBTtso-
Yon ar hereby eommaaded t uainoa FETE IT
VALENTINE,Dcfendantincasehr itbail Sir artttra
answer witnin twenty nays atler setMce nereor t be
and appear before the ald Circuit Court at the ISnseas
ber Terra ihereof. to be fcolden al the Conit Koom nf the
Court Houe. Walmca. la the Island of IlawalLnn
Thurday, the&th day of November nest. at ttf o'clock,
mtosbowcaasewhythe claim of AKAN A VALES
TINE w. Ilantltr should sot ba awarded hvrprsa
mi in tue itiiui ui ner inmiiii x-.i.ion.
And hare yja then there this Writ niib 'ull return
of your procerdln-s thereon.
Seat of the I Chlf J tire nf Oar kinrrm I'miS
Supreme Conrtt this SIt day of October. A. D. tt
Clerk of Circuit Court.
To which summons the Marshal made the faUowias
return i
Havine made dillrcnt aeareh for thr within nintim-
et reter Valrnttne, aad aa he caanot be foand la
lata Mnrilom l hereby retarn the sum mom net wrrtl
Marshal
I herchr cenlfr that the within nd forv-uJa? Ui
true and faithful copy of the original tumm-we taxl
In tho libel for dlrurce Akin Valentine t. FeUr
Valeallne. and also of the Marshal return thereto, aad
that la Ihe meantime an attrird copy of tha said
summons be printed ta the A'toioa and Ilawartas
Gaiittk for six (aituccraslTe weeks as ifeecribet by
tbe Matate, rr)ufHaa the said respandmt lo answer
at said term.
In witness whereof 1 harr h n unto s t my hand tV
rtthdnyof Norrmber. A.l. lL
DANIEL lUSTEIL
liKX fit Clerk Third Judicial Circuit l art IUwi
Guardian's Sale of Heal Estate !
W"OTICK IS IIKIiniJY c;t"-2C
Ll that be vlriue of am ni-ilr I n al Kv fT,lf Jutwaa
J odd, at Chambers, on the 19th day of DeerMber A. I.
11, 1 will sell at Fnbllc AncUoa. on SATl'KDAV
Janaary ITtb, .. D. two, at IS o'clock anon at Alikolaat
Hale, llonolain, all the risht. titk and lMret mt
Itobert McLean, minor, la and t all that certain ptce.
or parcel of laud allnated en School btrrct In Uonolaln
aforesaid, and more particularly described n follow,
lo wit: Comment t air on ci.ool Mreet at a pviat Ti
feet distance from the corner of tarrie K. Godfrey a
piece on ihe Mania corner of Xanana aad School
streets and adjoin tar the property of Am one Georz
d Can ha. tbe bunntUry ran alonr ttf property mi
said Cuaha la a nocthatrrIy dlrectioa 1U fret, thence
In a south-easter ty illrectlud alons ppperty of saM
Cunbaarj feet. Ihcace alonj; property of W. E. FoeUr
la a S south westerly direction tai feet, thr-tcs south'
easterly along property of said Fostei fl feet a la.
thence aluns property tKlonatajr to Cnrrir E. U"ltnj
in n ot..a-wesieriy atreciion. reel n in. to sc1mmH
Street In a westerly direction lo point nt connaenc--meat,
brln? the same premlres conTeyed Id said Minwr
by deed of Kobcrl Gray, dated SepUaaber i;th. IiU,
aad recorded with Hawaliaa ieltrrr Dc.-d Ibwk
lt, paea 1-11. $S and itti.
Said premises will be inild at th. up r prie r Ss
ThonsandlMla. UUBEICT GRAY,
fiuardlaa of Itobert McLean.
Ilonolnln, II. I. Dec. tth. ltl. Pm tt
MARSHAL'S SALE
BYV1UTL KOFA WU1TUF V.XIU
cation Issued out of the Saprease Cnnrt la fasor
of Kan ihe w. et al. inalalllT. asainst D. Kaaht. LMrn
dint.for tbe sum or fn.73, 1 has- le ted upon and
shall offer for sale in front uf Allhilam Hal at li
o'clock noon of
Saturday, the 24th day ot January, 1B85, 1
to the bUbeft bidder for cash all thr right title and
Interest of the said D. Kaahi la aad to all Uat piece or
parcel or Land known at Taukoa. allnated In llonolain
aina. Island of Oahu, a will mart- fellj appear In Roy
al luteal No. YJ. nnlesa said jadnent, interest,
ro-ti and my ezpensea are preeionsly paid.
Deeds at eipease of pare hater.
IVT For farther pantcaUrs apple toW A Kiaaar
. JNO. IL sOrEXL Manhal
Jlunolalu. Dec 17th. lis I, Vi 'A
Notice ol Temporary Administrator.
rrilh UXJ1KIWIOXKU, IIAVIXCi
X b-rn daly appointed Administrator of th Estate
of the latn Franklin IL Endcra M. D.. of Wallah n.
deceased, (by Hon. A. Foraaudt, Ci rrU Jndei, calls
upon atl perona Indebted to said stair to aaatve lm
medlale paytnenl to Ihe nadcraizned. and all persons
hirlnc claims azalast said esUU are rraeoted to pre
seat the same with proper vuuehers whether eecnml by
mort-a-e or othrfwUa lo mi al my oSlo in ike Court
House. Wallaka, wltt, I a si month from dair or be
forevrr barred. THO W EVERETT
Temp. Admr. Est Fraokitn 11 Erden M. D.
Wallnka. Maat, December Sth twt MO t
AilniItiNtrators N'oticr.
nlIKUXI)KU.SIOKI) GIVto XO-
X tic that h haa been duly appolaled Adntlalsfi
tor with ihe wilt attached, of thr EUf of CAIT
THUM S S FENCE It, late of IIUo, IsUad ot Hawaii
deceased. All persona having aay rUtaa against the
said estate aro notified that they nasi prr-nt tbe sanr
daiyTnifd,audwlthproir eouchera to W under
si-ned wllhla six months from the dt- .f uu nolk
or they will he forrrer barred aad alt persona wwms
the said estate are requeued u makr immediate pei
meat to tbe sadef stoned. i KlTTRKlMiK
-dr. Ei.Tlo peu r fclfb Hin.iiirbe4.
Hllo, Hawaii, Oct. 3rd. 1H na
P. DALTON
No. 92 King Street.
Once more lOtlclU ih patron z- mml - ipport u -sC
who for twenty yr, knw aad
dealt with kirn
Plain Talk Pays Always
Fmahasfornuuyyt-ar- w..rk-i ! awl endeavored
to pleas every class of thecoma natty from th hlglKst
la the bad down to the hamblet nf the wotklV
cUstes, aad he can say thatdarin? that t.m be f "
made sa enemy or lest a entoBer ..h. ka
put his hand to the fhrw. aad t a. --)! able awA
lnr to Clre honest wotk. Cood nMtrrUl. and fair r
for money as ever yet was done la th- Hawaiian!
Uads. HAS ALWAYS OX IIAXD
Single & Double Harness
Express Harness,
Plantation Harness,
Whips, (Spurs,
Chamois, Spongos,
Brushes, and
Everything Requisite for the Stable.
a rctt use or
English & Sydney Saddles,
SMJ Ctou,, BUaltlr, e.. ,),,,, ,uxk.
-WUtk,ka,,ot,r)tk,cmta igge
|
http://chroniclingamerica.loc.gov/lccn/sn83025121/1885-01-07/ed-1/seq-2/ocr/
|
CC-MAIN-2014-49
|
en
|
refinedweb
|
C:Custom Resource Files
Do you demand more out of life? Are you tired of having your BMPs and WAVs flapping naked in the wind, for all to see? You, my friend, need to gird your precious assets in a custom resource file!
Disgusting imagery aside, custom resource files are an essential part of any professional game. When was the last time you purchased a game and found all of the sprites, textures, sound, or music files plainly visible within the game's directory tree? Never! Or at least, hardly ever!
So, what is a custom resource file? It is a simple repository, containing the various media files needed by your game. Say you have 100 bitmaps representing your game's tiles and sprites, and 50 wave files representing your sound effects and music; all of these files can be lumped into a single resource file, hiding them from the prying eyes of users.
File Format
The format you use for your custom resource file is up to you; encryption and compression algorithms can easily be incorporated. For the purposes of this tutorial however, I'll keep things simple. Here's a byte-by-byte outline of my simple resource file format:
The Header
The header contains information describing the contents of the resource, and indicating where the individual files stored within the resource can be located.
- First 4 bytes
- An int value, indicating how many files are stored within the resource.
- Next 4n bytes
- Where n is the number of files stored within the resource. Each 4 byte segment houses an int which points to the storage location of a file within the body of the resource. For example, a value of 1234 would indicate that a file is stored beginning at the resource's 1234th byte.
The Body
The body contains filename strings for each of the files stored within the resource, and the actual file data. Each body entry is pointed to by a header entry, as mentioned above. What follows is a description of a single body entry.
- First 4 bytes
- An int value, indicating how many bytes of data the stored file contains.
- Next 4 bytes
- An int value, indicating how many characters comprise the filename string.
- Next n bytes
- Each byte contains a single filename character, where n is the number of characters in the filename string.
- Next n bytes
- The stored file's data, where n is the file size.
Example Resource File
Examples tend to make things clearer, so here we go. Numbers on the left indicate location within the file (each segment is one byte), while data on the right indicates the values stored at the given location.
BYTELOC      DATA        EXPLANATION
*******      ****        ***********
0-3          3           (Integer indicating that 3 files are stored in this resource)
4-7          16          (Integer indicating that the first file is stored from the 16th byte onward)
8-11         41          (Integer indicating that the second file is stored from the 41st byte onward)
12-15        10058       (Integer indicating that the third file is stored from the 10058th byte onward)
16-19        9           (Integer indicating that the first stored file contains 9 bytes of data)
20-23        8           (Integer indicating that the first stored file's name is 8 characters in length)
24-31        TEST.TXT    (8 bytes, each encoding one character of the first stored file's filename)
32-40        Testing12   (9 bytes, containing the first stored file's data, which happens to be some text)
41-44        10000       (Integer indicating that the second stored file contains 10000 bytes of data)
45-48        9           (Integer indicating that the second stored file's name is 9 characters in length)
49-57        TEST2.BMP   (9 bytes, each encoding one character of the second stored file's filename)
58-10057     ...         (10000 bytes, representing the data stored within TEST2.BMP. Data not shown!)
10058-10061  20000       (Integer indicating that the third stored file contains 20000 bytes of data)
10062-10065  9           (Integer indicating that the third stored file's name is 9 characters in length)
10066-10074  TEST3.WAV   (9 bytes, each encoding one character of the third stored file's filename)
10075-30074  ...         (20000 bytes, representing the data stored within TEST3.WAV. Data not shown!)
If we had a copy of the file described above it would be 30075 bytes in size, and it would contain all of the data represented by the files TEST.TXT, TEST2.BMP and TEST3.WAV. Of course, this file format allows for arbitrarily large files; all we need now is a handy-dandy program that can be used to store files in this format for us!
Resource Creator Source
In order to create a tool capable of storing files in our simple custom format, we need a few utility functions. We'll start off slow.
int getfilesize(char *filename)
{
    struct stat file;   //This structure will be used to query file status

    //Extract the file status info
    if(!stat(filename, &file))
    {
        //Return the file size
        return file.st_size;
    }

    //ERROR! Couldn't get the filesize.
    printf("getfilesize: Couldn't get filesize of '%s'.", filename);
    exit(1);
}
The getfilesize function accepts a pointer to a filename string, and uses that pointer to populate a stat struct. If the call to stat succeeds (returns zero), we'll be able to return an int containing the file size, in bytes. We'll need this function later on!
int countfiles(char *path)
{
    int count = 0;              //This integer will count up all the files we encounter
    DIR *dir;                   //The directory stream
    struct dirent *entry;       //A single entry within the directory
    struct stat file_status;    //Used to check whether an entry is a file or a directory

    //Initialize the directory stream with the given path and step into that directory
    if ((dir = opendir(path)) == NULL)
    {
        perror("opendir failure");
        exit(1);
    }
    chdir(path);

    //Loop over every entry in the directory
    while ((entry = readdir(dir)) != NULL)
    {
        //Skip the "." and ".." entries, or we'd recurse into them forever
        if (strcmp(entry->d_name, ".") && strcmp(entry->d_name, ".."))
        {
            //Query the entry; if stat fails, simply skip it
            if (stat(entry->d_name, &file_status) == 0)
            {
                if (S_ISDIR(file_status.st_mode))
                {
                    //We've found a directory, chdir into it and call
                    //countfiles again (recursion) and add the result to the count total
                    count += countfiles(entry->d_name);
                    chdir("..");
                }
                else
                {
                    //We've found a file, increment the count
                    count++;
                }
            }
        }
    }

    //Make sure we close the directory stream
    if (closedir(dir) == -1)
    {
        perror("closedir failure");
        exit(1);
    }

    //Return the file count
    return count;
}
Things get interesting now. The code above describes a handy little countfiles function, which will recurse through the subdirectories of a given path, and count all of the files it encounters along the way. To do this, a DIR directory stream structure is initialized with a given path value. This directory stream can be exploited repeatedly by the readdir function to obtain pointers to dirent structures, which contain information on a given file within the directory. As we loop, the readdir function will fill the dirent structure with data describing a different file within the directory until all files have been exhausted. When no files are left to describe, readdir will return NULL and the while loop will cease!
Now, if we look within the while loop, some cool stuff is going on. First, strcmp is being used to compare the name of a given file entry to the strings "." and ".."; this is necessary, as otherwise the "." and ".." values will be recognized as directories and recursed into, creating a nasty infinite loop!
If the entry->d_name value passes the test, it is then passed to the stat function, in order to fill out the stat structure, called file_status. If stat returns a non-zero value, something must be wrong with the file, and it is simply skipped. On a zero (success) result, execution continues, and S_ISDIR is employed, allowing us to check if the file in question is a directory, or not. If it is a directory, the countfiles function is called recursively. If it is not a directory, then the count variable is simply incremented, and the loop moves on to the next file!
void findfiles(char *path, int fd)
{
    DIR *dir;                   //The directory stream
    struct dirent *entry;       //A single entry within the directory
    struct stat file_status;    //Used to check whether an entry is a file or a directory

    //Initialize the directory stream with the given path and step into that directory
    if ((dir = opendir(path)) == NULL)
    {
        perror("opendir failure");
        exit(1);
    }
    chdir(path);

    //Loop over every entry in the directory
    while ((entry = readdir(dir)) != NULL)
    {
        //Skip the "." and ".." entries
        if (strcmp(entry->d_name, ".") && strcmp(entry->d_name, ".."))
        {
            //Query the entry; if stat fails, simply skip it
            if (stat(entry->d_name, &file_status) == 0)
            {
                if (S_ISDIR(file_status.st_mode))
                {
                    //We've found a directory, chdir into it and call
                    //findfiles again (recursion), passing the new directory's path
                    findfiles(entry->d_name, fd);
                    chdir("..");
                }
                else
                {
                    //We've found a file, pack it into the resource file
                    packfile(entry->d_name, fd);
                }
            }
        }
    }

    //Make sure we close the directory stream
    if (closedir(dir) == -1)
    {
        perror("closedir failure");
        exit(1);
    }

    return;
}
You may notice that the code above is quite similar to that contained within the countfiles function. I could have removed the code duplication through the use of function pointers, or various other means, but I believe code readability would have suffered; and this is meant to be a quick and dirty resource file creator. Nothing fancy! Besides, the upside is that most of this code is already familiar to us.
Basically, the findfiles routine loops recursively through the subdirectories of a given path (just like countfiles), but instead of counting the files, it determines their filename strings and passes them to the packfile function.
So, bring on the packfile function:
void packfile(char *filename, int fd)
{
    int totalsize = 0;  //This integer will be used to track the total number of bytes written to file

    //Handy little output
    printf("PACKING: '%s' SIZE: %i\n", filename, getfilesize(filename));

    //In the 'header' area of the resource, write the location of the file about to be added
    lseek(fd, currentfile * sizeof(int), SEEK_SET);
    write(fd, &currentloc, sizeof(int));

    //Seek to the location where we'll be storing this new file info
    lseek(fd, currentloc, SEEK_SET);

    //Write the size of the file
    int filesize = getfilesize(filename);
    write(fd, &filesize, sizeof(filesize));
    totalsize += sizeof(int);

    //Write the LENGTH of the NAME of the file
    int filenamelen = strlen(filename);
    write(fd, &filenamelen, sizeof(int));
    totalsize += sizeof(int);

    //Write the name of the file
    write(fd, filename, strlen(filename));
    totalsize += strlen(filename);

    //Write the file contents
    int fd_read = open(filename, O_RDONLY);     //Open the file
    char *buffer = (char *) malloc(filesize);   //Create a buffer for its contents
    read(fd_read, buffer, filesize);            //Read the contents into the buffer
    write(fd, buffer, filesize);                //Write the buffer to the resource file
    close(fd_read);                             //Close the file
    free(buffer);                               //Free the buffer
    totalsize += filesize;                      //Add the file size to the total number of bytes written

    //Increment the currentloc and current file values
    currentfile++;
    currentloc += totalsize;
}
This function is really the heart of the program; it takes a file and stores it within the resource as a body entry (which we described above, in the file format section). packfile accepts a filename pointer and an integer file descriptor fd as arguments. It then goes on to store file size, filename, and file data within the resource file (which is referenced with the fd file descriptor). The variables currentfile and currentloc are globals, described in the next segment of code. Basically, they contain values which instruct the packfile function where to create and store this new body entry's data.
Putting these utility functions together is now fairly simple. We just need a Main function, and some includes:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <dirent.h>
#include <sys/stat.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/param.h>

//O_BINARY only exists on Windows; make it a no-op elsewhere
#ifndef O_BINARY
#define O_BINARY 0
#endif

//Function prototypes
int getfilesize(char *filename);
int countfiles(char *path);
void packfile(char *filename, int fd);
void findfiles(char *path, int fd);

int currentfile = 1;  //This integer indicates what file we're currently adding to the resource.
int currentloc = 0;   //This integer references the current write-location within the resource file

int main(int argc, char *argv[])
{
    char pathname[MAXPATHLEN+1];  //This character array will hold the app's working directory path
    int filecount;                //How many files are we adding to the resource?
    int fd;                       //The file descriptor for the new resource

    //Store the current path
    getcwd(pathname, sizeof(pathname));

    //How many files are there?
    filecount = countfiles(argv[1]);
    printf("NUMBER OF FILES: %i\n", filecount);

    //Go back to the original path
    chdir(pathname);

    //How many arguments did the user pass?
    if (argc < 3)
    {
        //The user didn't specify a resource file name, go with the default
        fd = open("resource.dat", O_WRONLY | O_EXCL | O_CREAT | O_BINARY, S_IRUSR);
    }
    else
    {
        //Use the filename specified by the user
        fd = open(argv[2], O_WRONLY | O_EXCL | O_CREAT | O_BINARY, S_IRUSR);
    }

    //Did we get a valid file descriptor?
    if (fd < 0)
    {
        //Can't create the file for some reason (possibly because the file already exists)
        perror("Cannot create resource file");
        exit(1);
    }

    //Write the total number of files as the first integer
    write(fd, &filecount, sizeof(int));

    //Set the current conditions
    currentfile = 1;                                       //Start off by storing the first file, obviously!
    currentloc = (sizeof(int) * filecount) + sizeof(int);  //Leave space at the beginning for the header info

    //Use the findfiles routine to pack in all the files
    findfiles(argv[1], fd);

    //Close the file
    close(fd);

    return 0;
}
The code in function main is primarily concerned with creating the resource file (either giving it the name "resource.dat", or using a string passed as a command-line argument), counting up the files and storing this value in the header, and then calling the findfiles function which loops through all subdirectories and makes use of packfile to pack them into the resource. The user must specify a path command-line argument when executing the program, as this path value will be passed as the initial argument to the findfiles routine. All files within the given path will be found and packed into the resource file! A sample execution:
UNIX:    ./customresource resource myresource.dat
WINDOWS: customresource resource myresource.dat
Calling the program with the command-line arguments shown above would result in a resource file called myresource.dat being created, containing all files found within the directory called "resource" (including its subdirectories).
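Reading files back out is left to the tutorials linked below, but as a rough sketch — assuming exactly the layout described above, with no compression or encryption, and a resource written on a machine with the same int size and byte order — a consumer could look up a stored file by walking the header:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

//Find a named file inside a resource built with the format above and report its size
int main(int argc, char *argv[])
{
    if (argc < 3)
    {
        fprintf(stderr, "usage: %s <resourcefile> <storedfilename>\n", argv[0]);
        return 1;
    }

    FILE *res = fopen(argv[1], "rb");
    if (res == NULL)
    {
        perror("fopen");
        return 1;
    }

    //First 4 bytes: how many files are stored in the resource
    int filecount = 0;
    fread(&filecount, sizeof(int), 1, res);

    for (int i = 0; i < filecount; i++)
    {
        //Header entry i holds the byte offset of body entry i
        int offset = 0;
        fseek(res, (long)(sizeof(int) + i * sizeof(int)), SEEK_SET);
        fread(&offset, sizeof(int), 1, res);

        //Body entry: file size, filename length, filename, then the data itself
        int filesize = 0, namelen = 0;
        fseek(res, offset, SEEK_SET);
        fread(&filesize, sizeof(int), 1, res);
        fread(&namelen, sizeof(int), 1, res);

        char name[256] = {0};
        if (namelen <= 0 || namelen >= (int)sizeof(name))
        {
            continue;   //Name too long for this simple sketch, skip the entry
        }
        fread(name, 1, namelen, res);

        if (strcmp(name, argv[2]) == 0)
        {
            //The file's data follows immediately; fread(buffer, 1, filesize, res) would load it
            printf("Found '%s': %i bytes of data starting at byte %i\n",
                   name, filesize, offset + 2 * (int)sizeof(int) + namelen);
            fclose(res);
            return 0;
        }
    }

    printf("'%s' was not found in the resource\n", argv[2]);
    fclose(res);
    return 1;
}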
Source code
- To download the sample source code (and some media files to play with), click here.
- NOTE: The above source code will not work with MSVC++, as the dirent.h header file is not included with VC++! If you are using VC++, please download this source code instead (provided by Drew Benton).
Related tutorials
So, we have the power to create custom resource files on a whim... now what? Try one of these lovely tutorials:
|
http://content.gpwiki.org/index.php/C:Custom_Resource_Files
|
CC-MAIN-2014-49
|
en
|
refinedweb
|
Management Views¶
A management view is a view configuration that applies only when the URL is prepended with the manage prefix. The manage prefix is usually /manage, unless you've changed it from its default by setting a custom substanced.manage_prefix in your application's .ini file.
This means that views declared as management views will never show up in your application's "retail" interface (the interface that normal unprivileged users see). They'll only show up when a user is using the SDI to manage content.
There are two ways to define management views:
- Using the substanced.sdi.mgmt_view decorator on a function, method, or class.
- Using the substanced.sdi.add_mgmt_view() Configurator (aka. config.add_mgmt_view) API.
The former is most convenient, but they are functionally equivalent. mgmt_view just calls into add_mgmt_view when found via a scan.
Declaring a management view is much the same as declaring a "normal" Pyramid view using pyramid.view.view_config with a route_name of substanced_manage. For example, each of the following view declarations will register a view that will show up when the /manage/foobar URL is visited:
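A representative registration — the view callable name foobar and the template path used here are placeholders, not code taken from Substance D itself — might look like:

from substanced.sdi import mgmt_view

@mgmt_view(name='foobar', renderer='templates/foobar.pt')
def foobar(context, request):
    return {}

# or, imperatively, at configuration time:
# config.add_mgmt_view(foobar, name='foobar', renderer='templates/foobar.pt')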
The above is largely functionally the same as this:
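In plain Pyramid terms, with the same placeholder names, that is roughly:

from pyramid.view import view_config

@view_config(route_name='substanced_manage', name='foobar',
             permission='sdi.view', renderer='templates/foobar.pt')
def foobar(context, request):
    return {}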
Management views, in other words, are really just plain-old Pyramid views with a slightly shorter syntax for definition. Declaring a view a management view, however, does do some extra things that make it advisable to use rather than a plain Pyramid view registration:
- It registers introspectable objects that the SDI interface uses to try to find management interface tabs (the row of actions at the top of every management view rendering).
- It allows you to associate a tab title, a tab condition, and cross-site request forgery attributes with the view.
- It uses the default permission sdi.view.
So if you want things to work right when developing management views, you'll use @mgmt_view instead of @view_config, and config.add_mgmt_view instead of config.add_view.
As you use management views in the SDI, you might notice that the URL includes @@, known as "goggles". For example, a URL ending in /@@contents is the URL for seeing the folder contents. The @@ is a way to ensure that you point at the URL for a view and do not instead get some resource with the __name__ of contents. You can still get to the folder contents management view without the goggles, unless that folder contains something named contents.
mgmt_view View Predicates¶
Since mgmt_view is an extension of Pyramid's view_config, it re-uses the same concept of view predicates as well as some of the same actual predicates:
- request_type, request_method, request_param, containment, attr, renderer, wrapper, xhr, accept, header, path_info, context, name, custom_predicates, decorator, mapper, and http_cache are supported and behave the same.
- permission is the same but defaults to sdi.view.
The following are new view predicates introduced for mgmt_view:
tab_title takes a string for the label placed on the tab.
tab_condition takes either a callable that returns True or False, or a literal True or False. If you supply a callable, it is passed context and request. The resulting boolean determines whether the tab is listed in a given situation.
tab_before takes the view name of a mgmt_view that this mgmt_view should appear before (covered in detail in the next section.)
tab_after takes the view name of a mgmt_view that this mgmt_view should appear after. Also covered below.
tab_near takes a "sentinel" from substanced.sdi (or None) that makes a best effort at placement independent of another particular mgmt_view. Also covered below. The possible sentinel values are:
substanced.sdi.LEFT
substanced.sdi.MIDDLE
substanced.sdi.RIGHT
Tab Ordering¶
If you register a management view, a tab will be added in the list of tabs. If no mgmt view specifies otherwise via its tab data, the tab order will use a default sorting: alphabetical order by the tab_title parameter of each tab (or the view name if no tab_title is provided.) The first tab in this tab listing acts as the "default" that is open when you visit a resource. Substance D does, though, give you some options to control tab ordering in larger systems with different software registering management views.
Perhaps a developer wants to ensure that one of her tabs appears first in the list and another appears last, no matter what other management views have been registered by Substance D or any add-on packages. @mgmt_view (or the imperative call) allows a keyword of tab_before or tab_after. Each takes the string tab name of the management view to place before or after. If you don't care (or don't know) which view name to use as a tab_before or tab_after value, use tab_near, which can be any of the sentinel values MIDDLE, LEFT, or RIGHT, each of which specifies a target "zone" in the tab order. Substance D will make a best effort to do something sane with tab_near.
As in many cases, an illustration is helpful:
from substanced.sdi import LEFT, RIGHT

@mgmt_view(
    name='tab_1',
    tab_title='Tab 1',
    renderer='templates/tab.pt'
    )
def tab_1(context, request):
    return {}

@mgmt_view(
    name='tab_2',
    tab_title='Tab 2',
    renderer='templates/tab.pt',
    tab_before='tab_1'
    )
def tab_2(context, request):
    return {}

@mgmt_view(
    name='tab_3',
    tab_title='Tab 3',
    renderer='templates/tab.pt',
    tab_near=RIGHT
    )
def tab_3(context, request):
    return {}

@mgmt_view(
    name='tab_4',
    tab_title='Tab 4',
    renderer='templates/tab.pt',
    tab_near=LEFT
    )
def tab_4(context, request):
    return {}

@mgmt_view(
    name='tab_5',
    tab_title='Tab 5',
    renderer='templates/tab.pt',
    tab_near=LEFT
    )
def tab_5(context, request):
    return {}
This set of management views (combined with the built-in Substance D management views for Contents and Security) results in:
Tab 4 | Tab 5 | Contents | Security | Tab 2 | Tab 1 | Tab 3
These management view arguments apply to any content type that the view is registered for. What if you want to allow a content type to influence the tab ordering? As mentioned in the content type docs, the tab_order parameter overrides the mgmt_view tab settings, for a content type, with a sequence of view names that should be ordered (and everything not in the sequence, after.)
Filling Slots¶
Each management view that you write plugs into various parts of the SDI UI. This is done using normal ZPT fill-slot semantics:
- page-title is the <title> in the <head>
- head-more is a place to inject CSS and JS in the <head> after all the SDI elements
- tail-more does the same, just before the </body>
- main is the main content area
SDI API¶
All templates in the SDI share a common "layout". This layout needs information from the environment to render markup that is common to every screen, as well as the template used as the "main template."
This "template API" is known as the SDI API. It is an instance of the sdiapi class in substanced.sdi.__init__.py and is made available as request.sdiapi.
The template for your management view should start with a call to request.sdiapi:

<div metal:use-macro="request.sdiapi.main_template">
The request.sdiapi object has other convenience features as well. See the Substance D interfaces documentation for more information.
Flash Messages¶
Often you perform an action on one view that needs a message displayed by another view on the next request. For example, if you delete a resource, the next request might confirm to the user "Deleted 1 resource." Pyramid supports this with "flash messages."
In Substance D, your applications can make a call to the sdiapi such as:
request.sdiapi.flash('ACE moved up')
...and the next request will process this flash message:
- The message will be removed from the stack of messages
- It will then be displayed in the appropriate styling based on the "queue"
The sdiapi provides another helper:
request.sdiapi.flash_with_undo('ACE moved up')
This displays a flash message as before, but also provides an Undo button to remove the previous transaction.
- title, content, flash messages, head, tail
|
http://docs.pylonsproject.org/projects/substanced/en/latest/mgmtview.html
|
CC-MAIN-2014-49
|
en
|
refinedweb
|
The problems solved by file storage virtualization products might not be real enough to motivate buyers
Networked-attached storage devices are a common tool for saving files throughout the government. NAS’ objective is to streamline storage. Instead of adding storage to every server, NAS lets multiple servers share storage provided by NAS appliances connected to the network. NAS devices generally support several file-sharing protocols, so people can more easily access their files.
But NAS can create administrative problems. In time, organizations may accumulate multiple NAS units from different manufacturers, or they may own so many systems from one vendor that they must establish several separate file systems. To address those management challenges, vendors have created NAS virtualization products.
The products give users a unified and stable view of where the systems store files, even though the files may be physically located on different vendors’ storage systems and transferred as needed.
Specialized companies such as Acopia Networks, BlueArc and NeoPath Networks offer NAS virtualization. Joining them are NAS device manufacturers. For example, EMC offers its Rainfinity storage virtualization product, which it acquired in 2005. Network Appliance (NetApp) offers its
V-Series virtualization platform and Virtual File Manager products. The typical product offering is an appliance that attaches to the network between the users and the NAS devices.
Those vendors seek to do more than tame mixed NAS environments with their virtualization wares. Industry executives say their products can also ease data migration and provide a steppingstone to tiered storage, which involves placing data on the most cost-effective layer of storage.
But as promising as the technology appears to be and with some vendors reporting only a handful of government sales, anecdotal evidence suggests that agencies are not flocking to NAS virtualization.
Douglas Hughes, a service engineer who helps run a storage service at NASA’s Jet Propulsion Laboratory, said he hasn’t looked at storage virtualization recently. JPL Information Services’ storage utility uses a number of NetApp NAS devices.
The situation is similar at the San Diego Supercomputer Center, where a spokeswoman said virtualization technology is interesting, but the center is not using it.
Jeff White, a technical specialist at CDW Government, agreed that customers aren’t yet clamoring for NAS virtualization. “We haven’t seen a whole lot of demand for it,” he said. “At the end of the day, it’s almost kind of a luxury product.”
In an era of budget cuts, storage spending focuses instead on items such as primary storage, disaster recovery and security, White said.
Nevertheless, TheInfoPro reports that interest in NAS virtualization among enterprise information technology buyers is growing quickly and emphasizes that managers are starting to feel some pain from trying to keep pace with the growing need for storage.
The case for virtualization
Storage consultants and vendors suggest several reasons for using NAS virtualization products.
The technology works by creating a layer that masks the physical location of data. Client devices and servers are no longer mapped to specific physical storage devices.
Instead, the virtualization appliance maps the physical location of data to a logical address — the one a user employs to access a file. Administrators can create policies to govern the management of data in the virtualized environment.
“The premise of virtualization is breaking the physical binding between the front end and the back end,” said Brendon Howe, senior director and general manager of NetApp’s V-Series business unit.
With virtualization, users or programs can tap a drive on a network to access files “without knowing physically where the drive resides or how many [file storage devices] it is on,” said Kirby Wadsworth, senior vice president of marketing and business development at Acopia.
Storage administrators also benefit. By creating a single storage pool, virtualization harmonizes heterogeneous NAS environments. White called this management boon the biggest benefit of NAS virtualization.
“Instead of having 40 different file servers…this gives you one management console for all of them,” he said. Virtualization, he added, creates one homogenous file system out of the multiple
file systems found in different NAS boxes.
To accomplish this, some vendors offer global namespace management, a feature that provides a single view of file systems spanning multiple or mixed NAS devices.
Jack Norris, EMC’s vice president of marketing for Rainfinity, called global namespace “one of the key building blocks of file virtualization.” He said a namespace functions in much the same way as the Internet’s Domain Name System, which converts domains into IP addresses.
Similarly, “the function of a namespace is to provide an abstraction layer so that end users are not tied to a physical address but are accessing a logical name,” Norris said.
The ability of virtualization to harmonize multivendor NAS settings eases management and opens purchasing options. Norris said virtualization lets customers buy hardware with the best price and performance rather than sticking with the same brand of equipment.
Specific benefits
Howe said many of the issues NAS virtualization seeks to address are identical to those that users experience with storage-area networks, the specialized networks that connect servers to storage devices for the exchange of low-level pieces of data called blocks. SAN virtualization products have been available for a few years and operate on a basis similar to NAS virtualization.
File virtualization with NAS boxes facilitates data migration and tiered storage, according to industry executives.
In the case of migration, virtualization lets organizations take files off an old NAS box and move them to a new machine without disrupting users, Wadsworth said. Migration occurs in the background, because users remain attached to the virtual presence of the file during the process, he added.
Acopia’s virtualization technology copies the file contents from the old storage device to the new device. When the copy is complete, the new physical location of the file is updated in the appliance’s virtual-to-physical mapping tables, but the virtual address remains the same, Wadsworth said. If users accessed their files on their G drive, they continue to do so.
“The administrator is free to relocate data at any time without having to worry about the impact on end users,” Norris said. He said some virtualization customers have reported performing migrations in one-tenth the time they would normally take.
Similarly, industry executives say NAS virtualization enables storage tiering. Tiers may include high-end primary disk storage, near-line storage featuring cheaper disk technology and, lastly, archival storage media, usually tape. Organizations pursuing the tiered approach create policies for moving data from one tier to the next, based on the data’s value to the organization.
NAS virtualization automates processes related to the movement of data according to storage policies, Wadsworth said.
Customers seeking NAS virtualization benefits could pay more than $50,000 for an enterprise-class appliance. EMC’s Rainfinity appliance costs $80,000. Acopia’s midrange product comes in just under the $100,000 mark, while its high-end machines cost more than $100,000. Acopia’s entry-level appliance costs less than $50,000, however. NetApp’s V-Series pricing starts at $15,670 for the GF270c, an entry-level model.
Customer interest
Vendors report awakening customer interest in NAS virtualization.
“The adoption curve is really starting to pick up,” Norris said. He cited an example of a government agency using Rainfinity to migrate more than 33,800 file systems.
“But it’s not at the point where everyone has completed a file virtualization deployment,” Norris added.
Wadsworth agreed that NAS virtualization deals are happening. But as for the scale of adoption, “I wouldn’t say it was widespread,” he said.
Norris said he expects the market to grow rapidly in the next year.
White said he thinks the demand for NAS virtualization will eventually materialize.
“I think it’s a great idea, but demand isn’t there for it yet,” he said.
The worlds of network-attached storage virtualization and storage-area network virtualization will converge in the next few years, some industry executives say.
The integration of NAS and SAN gear is already under way. Single gateways that allow users and applications to access file-level NAS and block-level SANs have been available from several vendors for a few years.
Brendon Howe, senior director and general manager of Network Appliance’s V-Series business unit, said he believes a similar merger of separate NAS and SAN virtualization products will also occur.
“NAS virtualization, when considered by itself, is a technology…that doesn’t have a unique set of customer problems versus SAN virtualization,” Howe said. He said the argument for having distinct, stand-alone NAS and SAN virtualization platforms will diminish in time.
Ashish Nadkarni, principal consultant at storage consultant GlassHouse Technologies, said creating a unified virtualization product is doable. But he added that vendors will have to consider performance issues and whether customers will want to put all their virtualization eggs in one basket.
Network-attached storage virtualization may not be red hot in the government, but storage market watchers suggest the overall market is gaining momentum.
TheInfoPro, a New York-based market research firm, reported that adoption has doubled in recent months. A survey of Fortune 1,000 storage managers conducted in fall 2005 pegged the file virtualization base at 7.5 percent. By spring 2006, 14.2 percent of the storage professionals polled said they had file virtualization in use.
TheInfoPro’s heat index, which tracks spending commitments, ranks file virtualization as sixth out of the 16 technologies the company monitors. File virtualization ranked toward the bottom of earlier assessments.
File virtualization, meanwhile, stands at 16 out of 20 in TheInfoPro’s adoption index. Robert Stevenson, a managing director at TheInfoPro, said a technology that ranks high on the heat index and low on the adoption index “shows a lot of room for growth.”
Stevenson said his firm is seeing considerable interest in file virtualization. “Clearly, people have a lot of data mobility challenges, and file virtualization helps with that,” he said.
Kirby Wadsworth, senior vice president of marketing and business development at Acopia Networks, identified large NAS deployments as a virtualization sweet spot. In organizations with hundreds of users who access a home directory, administrators can change the storage environment without taking all the users off-line.
Wadsworth also cited performance-intensive applications that involve accessing large files, such as satellite image analysis.
Anand Iyengar, founder and chief technology officer of NeoPath Networks, said virtualization is also valuable to organizations that are upgrading to new storage platforms. He said the company’s File Director product provides for a nondisruptive migration path. Iyengar said an application seeking to access files won’t notice that the data has moved from one storage server to another.
|
http://fcw.com/articles/2006/10/02/a-little-too-virtual.aspx
|
CC-MAIN-2014-49
|
en
|
refinedweb
|
Mesoderm - Schema class scaffold generator for DBIx::Class
version 0.122290
use Mesoderm; use SQL::Translator; use DBI; my $dbh = DBI->connect($dsn, $user, $pass); my $sqlt = SQL::Translator->new(dbh => $dbh, from => 'DBI'); $sqlt->parse(undef); my $scaffold = Mesoderm->new( schema => $sqlt->schema, schema_class => 'My::Schema', ); $scaffold->produce(\*STDOUT);
Mesoderm creates a scaffold of code for DBIx::Class using a schema object from SQL::Translator. At time of writing the version of SQL::Translator required is not available on CPAN and must be fetched directly from github.
The result is a hierarchy of packages describes below. Moose is used so that any custom methods needed to be added to the result or resultset classes can be done by writing Moose::Role classes. This allows separation between generated code and written code.
Mesoderm defines methods to map table names to class names, relationships and columns to accessor methods. It is also possible to have any table, relationship or column excluded from the generated model. If the defaults do not meet your needs, then it is trivial to subclass Mesoderm and provide overrides.
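As a sketch only — the package name My::Scaffold and the tmp_ rule below are invented for illustration, and exclude is one of the overridable methods documented below — such a subclass might look like:

package My::Scaffold;
use Moose;
extends 'Mesoderm';

# Leave anything whose name starts with "tmp_" out of the generated model.
sub exclude {
  my ($self, $thing) = @_;
  return 1 if $thing->can('name') && $thing->name =~ /^tmp_/;
  return $self->SUPER::exclude($thing);
}

1;

A My::Scaffold object would then be constructed and used exactly as Mesoderm is in the synopsis.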
Given a schema_class name of Schema and a schema containing a single table foo_bars, the following packages would be created or searched for with the default settings.
Top level schema class. The user needs to provide this themselves. See "Example Schema Class".
The main generated package that will be a Moose::Role to be consumed into the top level schema class. See "The _scaffold Role"
Although the model generated is a hierarchy of packages, it is expected that all generated code be in one file loaded as Schema::_scaffold. This file contains all the generated code and should never be modified.
A subclass of DBIx::Class::Schema that will be used to register the generated classes.
Schema::FooBar will be the result class for the table foo_bars.
During scaffolding Module::Pluggable will be used to search for Schema::Role::FooBar, which should be a Moose::Role class. If it exists then it will be consumed into Schema::FooBar.
Schema::ResultSet::FooBar is the resultset class for the table foo_bars.
During scaffolding Module::Pluggable will be used to search for Schema::ResultSet::Role::FooBar, which should be a Moose::Role class. If it exists then it will be consumed into Schema::ResultSet::FooBar.
The _scaffold will define methods for each resultset. In our example above it will define a method foo_bar.
It also has a method dbic which will return the DBIx::Class::Schema object.
The minimum requirement for a schema class is that it provides a method connect_args. The result of calling this method will be passed to the connect method of DBIx::Class::Schema.
package Schema;
use Moose;

with 'Schema::_scaffold';

sub connect_args {
  return @args_for_dbix_class_connect;
}

1;
Some other useful additions
# delegate txn_* methods to the DBIx::Class object itself
has '+dbic' => (handles => [qw(txn_do txn_scope_guard txn_begin txn_commit txn_rollback)]);

# Fetch a DBI handle
sub dbh { shift->dbic->storage->dbh; }
With our example schema, searching of the foo_bars table would be done with

my $schema = Schema->new;
$schema->foo_bar->search({id => 27});
Required. A SQL::Translator::Object::Schema object that the scaffolding will be generated from.
Required. Package name that the scaffold will be generated for. The actual package created will be a Moose::Role with the name schema_class plus ::_scaffold
Name of method to generate that when called on any result row or result set will return the parent Mesoderm schema object. Defaults to schema
Optional. Namespace used by default to prefix package names generated for DBIx::Class result classes. Defaults to schema_class
Optional. Namespace used by default to prefix package names generated for DBIx::Class result set classes. Defaults to result_class_namespace plus ::ResultSet
Optional. Namespace that will be searched, during scaffolding, for roles to add to result classes. The generated code will include with statements for any role that is found during scaffolding. Defaults to result_class_namespace plus ::Role
Optional. Namespace that will be searched, during scaffolding, for roles to add to result set classes. The generated code will include with statements for any role that is found during scaffolding. Defaults to resultset_class_namespace plus ::Role
Returns a list of DBIx::Class components to be loaded by the result class
Returns a list of DBIx::Class components to be loaded by the resultset class
Returns a list of Moose::Role classes to be consumed into the result class. Default is to join result_role_namespace with table_class_element, if the module can be found by Module::Pluggable
Returns a list of Moose::Role classes to be consumed into the resultset class. Default is to join resultset_role_namespace with table_class_element, if the module can be found by Module::Pluggable
Returns a hash reference which will be serialized as the arguments passed to add_column
Provides a hook to allow inserting objects to have default values set on columns if no value has been specified. It should return valid perl code that will be inserted into the generated code and will be evaluated in a scalar context
Return a boolean to determine if the passed object should be excluded from the generated model. Default: 0
Returns name for a relationship. Default is to call the method based on the relationship type.
Return relationship accessor name. Default is to call to_singular or to_plural with the name for the foreign table. Which is called depends on the arity of the relationship
Return the accessor name for the column. Default it to return the column name.
Return name for the result class. Default is to join result_class_namespace with table_class_element
Return name for the resultset class. Default is to join resultset_class_namespace with table_class_element
Return moniker used to register result class with DBIx::Class::Schema. Default is to call to_singular with the lowercase table name
Return package name element that will be prefixed with result_class_namespace, resultset_class_namespace, result_role_namespace and resultset_role_namespace to generate class names. Default takes the table_moniker and title-cases based on _ as a word separator
Utility method to return singular form of $word. Default implementation uses "to_S" in Lingua::EN::Inflect::Number
Utility method to return plural form of $word. Default implementation uses "to_PL" in Lingua::EN::Inflect::Number
Create a relationship which is the opposite of the given relationship.
Return boolean to indicate if the table is a mapping table and many to many mapping relationships need to be created
Generate code and write to filehandle
Build a Mesoderm::Relationship object given a constraint
Build a Mesoderm::Mapping object given a relationship for a many to many mapping
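Tying the pieces together, the produce method from the synopsis can just as easily be given a file handle; the output path below is only a suggestion:

open my $fh, '>', 'lib/My/Schema/_scaffold.pm'
  or die "Cannot write scaffold: $!";
$scaffold->produce($fh);
close $fh;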
DBIx::Class, Moose, Moose::Role, SQL::Translator
At time of writing the version required is not available on CPAN and needs to be fetched from github.
Graham Barr <gbarr@cpan.org>
This software is copyright (c) 2010 by Graham Barr.
This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.
|
http://search.cpan.org/~gbarr/Mesoderm-0.122290/lib/Mesoderm.pm
|
CC-MAIN-2014-49
|
en
|
refinedweb
|
22 February 2011 17:38 [Source: ICIS news]
TORONTO (ICIS)--The political unrest in oil-rich Libya could weigh on consumer confidence in Germany, consumer research group GfK said on Tuesday.
Rolf Burkl, senior research consultant for Nuremberg-based consumer research GfK, said the unrest in the region could jeopardise oil supplies to the west, which would, in turn, drive up energy prices and hurt consumers immediately.
However, absent those fears, the underlying outlook for consumer confidence in Germany remained positive, he said.
GfK’s monthly consumer confidence index was forecast to rise to 6.0 points in March, from 5.8 points in February, Burkl said.
Falling unemployment had given consumers “greater planning security” when deciding on larger purchases, he said.
Meanwhile,
In related news, BASF’s energy unit, Wintershall, said on Tuesday it had managed to evacuate 31 people – nine employees and 22 relatives - of its international staff from Libya.
“Intensive efforts are currently underway to get the remaining employees in Libya out of the country,” the company said.
The company said on Monday it would evacuate international staff from Libya.
Wintershall employs 453 people in Libya.
Wintershall has been active in exploration and production in Libya for decades.
BASF’s share price was down 0.68% to €59.49 ($80.39) on Tuesday.
|
http://www.icis.com/Articles/2011/02/22/9437623/libya-unrest-may-hit-germany-consumer-confidence-research-group.html
|
CC-MAIN-2014-49
|
en
|
refinedweb
|
Recent Notes
Displaying keyword search results 1 - 10
With StringBuffer/StringBuilder:
public class ReverseString { private static...
Without StringBuffer/StringBuilder:
public class ReverseString { private static...
A utility to generate SQL insert statements for Oracle for one table, or a set of tables. It doesn't cover all possibilities but should be good enough for most cases.
import java.io.*; import java.sql.*; import ...
To generate insert statements for multiple tables, simply put the table names in a file, one per line, and use the -f switch.
|
http://www.xinotes.net/notes/keywords/length/default/private/
|
CC-MAIN-2014-49
|
en
|
refinedweb
|
This is a quick post to demonstrate a very useful was of programmatically populating the models (i.e. database) of a Django application.
The canonical way to accomplish this is fixtures - the loaddata and dumpdata commands, but these seem to be more useful when you already have some data in the DB. Alternatively, you could generate the JSON information loadable by loaddata programmatically, but this would require following its format exactly (which means observing how real dumps are structured). One could, for the very first entries, just laboriously hammer them in through the admin interface. As programmers, however, we have a natural resentment for such methods.
Since Django apps are just Python modules, there's a much easier way. The very first chapter of the Django tutorial hints at the approach by test-driving the shell management command, which opens a Python shell in which the application is accessible, so the model classes can be imported and through them data can be both examined and created.
The same tutorial also mentions that you can bypass manage.py by pointing DJANGO_SETTINGS_MODULE to your project's settings and then calling django.setup(). This provides a clue on how the same steps can be done from a script, but in fact there's an even easier way.
There's no need to bypass manage.py, since it's a wonderful convenience wrapper around the Django project administration tools. It can be used to create custom management commands - e.g. your own commands parallel to shell, dumpdata, and so on. Not only that creating such commands gives you a very succinct, boilterplate-free way of writing custom management scripts, it also gives you a natural location to house them, per application.
Here's some simple code that adds a couple of tags into a blog-like model. Let's say the application is named blogapp:
from django.core.management.base import BaseCommand
from blogapp.models import Post, Tag

class Command(BaseCommand):
    args = '<foo bar ...>'
    help = 'our help string comes here'

    def _create_tags(self):
        tlisp = Tag(name='Lisp')
        tlisp.save()
        tjava = Tag(name='Java')
        tjava.save()

    def handle(self, *args, **options):
        self._create_tags()
This code has to be placed in a file within the blogapp/management/commands directory in your project. If that directory doesn't exist, create it. The name of the script is the name of the custom command, so let's call it populate_db.py. Another thing that has to be done is creating __init__.py files in both the management and commands directories, because these have to be Python packages. The directory tree will look like this:
blogapp
├── admin.py
├── __init__.py
├── management
│   ├── commands
│   │   ├── __init__.py
│   │   └── populate_db.py
│   └── __init__.py
├── models.py
... other files
That's it. Now you should be able to invoke this command with:
$ python manage.py populate_db
All the facilities of manage.py are available, such as help:
$ python manage.py help populate_db
Usage: manage.py populate_db [options] <foo bar ...>

our help string comes here

Options:
  ...
Note how help and args are taken from the Command class we defined. manage.py will also pass custom positional arguments and keyword options to our command, if needed. More details on writing custom management commands are available in this Django howto.
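For instance, sticking with the Tag model from the snippet above (the argument handling here is only an illustration, not part of the original script), positional arguments arrive in *args, so the handle method of our Command class could be rewritten to create whatever tags were named on the command line:

    def handle(self, *args, **options):
        # "python manage.py populate_db lisp java" -> args == ('lisp', 'java')
        for name in args:
            Tag.objects.get_or_create(name=name.capitalize())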
Once you start playing with such a custom data entry script, some of the existing Django management commands may come in very useful. You can see the full list by running manage.py help, but here's a list of those I found handy in the context of this post.
For dumping, dumpdata is great. Once your data grows a bit, you may find it useful only to dump specific models, or even specific rows by specifying primary keys with --pks. I also find the --indent=2 option to be essential when doing the default JSON dumps.
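For example, with the blogapp application and Tag model used earlier (the primary-key values are made up for illustration), dumping just the tags looks like:

$ python manage.py dumpdata blogapp.Tag --indent=2
$ python manage.py dumpdata blogapp.Tag --indent=2 --pks=1,2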
The flush command will clear the DB for you. A handy "undo" for those very first forays into entering data. Be careful with this command once you have real data in the DB.
Finally, the sqlall command is very useful when you're trying to figure out the structure of your models and the connections between them. IMHO model problems are important to detect early in the development of an application.
To conclude, I just want to mention that while custom management commands live within applications, nothing ties them to a specific app. It is customary for Django management commands to accept app and model names as arguments. While a data entry command is naturally tied to some application and model, this doesn't necessarily have to be the case in general. You can even envision an "app" named my_custom_commands which you can add to projects and reuse its functionality between them.
|
http://eli.thegreenplace.net/2014/02/15/programmatically-populating-a-django-database/
|
CC-MAIN-2014-49
|
en
|
refinedweb
|
Yyy Xxx wrote:
> I don't see a problem with this specific backward
> compatibility issue.
>
> 1. Non-namespace names like "test:echo" is,
> IMHO, not a common practice.
Common practice is, unfortunately, not the test for backward compatibility. Also, backward compatibility breaks do occur in Ant1. It just seems that it is a moveable feast.
>
> 2. If there were build files like this in the field,
> it would be trivial to fix them.
Tell that to the guy struggling with the arsDigita build files on ant-user.
Most incompatible changes to the build file syntax are trivial to
change, but that does not mean they are acceptable (cf. jarfile->destfile)
>
> 3. The XML standards warn about using colons in names
How many ant users read the XML standards? :-) I've had people
complaining about Ant's inability to nest <!-- --> style comments.
Conor
--
To unsubscribe, e-mail: <mailto:ant-dev-unsubscribe@jakarta.apache.org>
For additional commands, e-mail: <mailto:ant-dev-help@jakarta.apache.org>
|
http://mail-archives.eu.apache.org/mod_mbox/ant-dev/200207.mbox/%3C3D2B772D.5090002@cortexebusiness.com.au%3E
|
CC-MAIN-2021-10
|
en
|
refinedweb
|
Talking to a Bluetooth Arduino RGB Lamp from C# for Continuous Integration
I previously posted some C# code that I use to gather build status information in a Continuous Integration environment. No CI environment is complete without the Big Red Build light. In my case, I'm using a custom dual RGB LED lamp controlled by an Arduino. This build light communicates through a SparkFun BlueSmirf Bluetooth adapter that appears as a COM port on a Windows PC after pairing. The BlueSmirf talks to the Arduino over its RX/TX pins, making it simple to communicate with on the Arduino using its Serial library. There are newer versions of the BlueSmirf that appear as HID devices for driverless communication, but I still like the simplicity of the COM interface and haven't upgraded yet.
GitHub
The source is available on GitHub at.
Arduino Communication
ArduinoDualRGB.cs acts as a cover for the remote device. This code shares some of the same Java-esque characteristics of all my C# code. It mostly passes StyleCop and Code Analysis!
/// /// Written by Joe Freeman joe@freemansoft.com /// Arduino RGB adapter for Arduino build light firmware used for 2, 4 and 32 RGB lamp build lights. /// /// Standard commands are /// color: ~c#[red][green][blue]; /// where red, green and blue have values 0-15 representing brightness /// blink: ~b#[red on][green on][blue on][red off][green off][blue off]; /// where on and off have values 0-15 representing the number of half seconds. namespace BuildWatcher { using System; using System.IO.Ports; using System.Text; using log4net; public class ArduinoDualRGB { /// <summary> /// log4net logger /// </summary> private static ILog log = log4net.LogManager.GetLogger(typeof(ArduinoDualRGB)); /// <summary> /// command prefix /// </summary> private static byte standardPrefix = (byte)'~'; /// <summary> /// last character of commands /// </summary> private static byte standardSuffix = (byte)';'; /// <summary> /// the command to chagne a color /// </summary> private static byte colorCommand = (byte)'c'; /// <summary> /// the command to change a blink rate /// </summary> private static byte blinkCommand = (byte)'b'; /// <summary> /// Serial port we communicate with Arduino over /// </summary> private SerialPort device; /// <summary> /// Initializes a new instance of the <see cref="ArduinoDualRGB"/> class. a proxy for the Arduino controlled dual RGB unit /// </summary> /// <param name="device">Serial port the device is connected two. Can be virtual com port for bluetooth</param> /// <param name="canReset">determines if the device can be reset through DTR or if is actually reset on connect</param> public ArduinoDualRGB(SerialPort device, bool canReset, int numLamps) { if (device == null) { throw new ArgumentNullException("device", "Device is required"); } else { this.device = device; } if (canReset) { //// can we reset with DTR like this? device.DtrEnable = true; //// the firmware starts with the string "initialized" System.Threading.Thread.Sleep(250); byte[] readBuffer = new byte["initialized".Length]; for (int i = 0; i < readBuffer.Length; i++) { readBuffer[i] = (byte)this.device.ReadByte(); log.Debug("read " + i); } log.Debug("Hardware initialized returned string: " + readBuffer); } else { string trashInBuffer = device.ReadExisting(); if (trashInBuffer.Length > 0) { log.Debug("Found some cruft left over in the channel " + trashInBuffer); } } TurnOffLights(numLamps); } /// <summary> /// Turns off the number of lamps specified /// </summary> /// <param name="numLamps">number of lamps to clear</param> public void TurnOffLights(int numLamps) { for (int deviceNumber = 0; deviceNumber < numLamps; deviceNumber++) { this.SetColor(deviceNumber, 0, 0, 0); this.SetBlink(deviceNumber, 2, 0); } } /// <summary> /// sets the color of one of the lamps using RGB /// </summary> /// <param name="deviceNumber">Number of lights in a device 0-1</param> /// <param name="red">value of red 0-15</param> /// <param name="green">vlaue of green 0-15</param> /// <param name="blue">vlaue of 0-15</param> public void SetColor(int deviceNumber, int red, int green, int blue) { byte[] buffer = new byte[7]; buffer[0] = standardPrefix; buffer[1] = colorCommand; buffer[2] = this.ConvertIntToAsciiChar(deviceNumber); buffer[3] = this.ConvertIntToAsciiChar(red); buffer[4] = this.ConvertIntToAsciiChar(green); buffer[5] = this.ConvertIntToAsciiChar(blue); buffer[6] = standardSuffix; this.SendAndWaitForAck(buffer); } /// <summary> /// Sets the blink rate of one of the lamps. 
All bulbs in a lamp blink at the same rate and time /// </summary> /// <param name="deviceNumber">lamp number in device 0-1</param> /// <param name="onTimeHalfSeconds">blink on time 0-15</param> /// <param name="offTimeHalfSeconds">blink off time 0-15</param> public void SetBlink(int deviceNumber, int onTimeHalfSeconds, int offTimeHalfSeconds) { byte[] buffer = new byte[10]; buffer[0] = standardPrefix; buffer[1] = blinkCommand; buffer[2] = this.ConvertIntToAsciiChar(deviceNumber); buffer[3] = this.ConvertIntToAsciiChar(onTimeHalfSeconds); buffer[4] = this.ConvertIntToAsciiChar(onTimeHalfSeconds); buffer[5] = this.ConvertIntToAsciiChar(onTimeHalfSeconds); buffer[6] = this.ConvertIntToAsciiChar(offTimeHalfSeconds); buffer[7] = this.ConvertIntToAsciiChar(offTimeHalfSeconds); buffer[8] = this.ConvertIntToAsciiChar(offTimeHalfSeconds); buffer[9] = standardSuffix; this.SendAndWaitForAck(buffer); } /// <summary> /// Converts a number ot it's hex ascii equivalent /// </summary> /// <param name="number">input between 0-15 </param> /// <returns>ASCII character Hex equivalent of the number </returns> public byte ConvertIntToAsciiChar(int number) { if (number < 0 || number > 15) { throw new ArgumentException("number out of single digit hex range " + number); } byte result; if (number > 9) { result = (byte)('A' + number - 10); // we start at 10 } else { result = (byte)('0' + number); } return result; } /// <summary> /// Sends a message and waits on the return ack /// </summary> /// <param name="buffer">bytes to be sent to arduino</param> private void SendAndWaitForAck(byte[] buffer) { log.Debug("Sending: " + Encoding.UTF8.GetString(buffer, 0, buffer.Length)); this.device.Write(buffer, 0, buffer.Length); System.Threading.Thread.Sleep(20); //// should handle timeout with exception catch block //// always replies with the command plus a + or - key. '+' means command understood byte[] readBuffer = new byte[buffer.Length + 1]; for (int i = 0; i < buffer.Length + 1; i++) { readBuffer[i] = (byte)this.device.ReadByte(); } log.Debug("Received ack: " + Encoding.UTF8.GetString(readBuffer, 0, readBuffer.Length)); } } }
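As a usage illustration, a minimal console program might drive the lamp like this. The COM port name, baud rate, and lamp count are assumptions that depend on your BlueSmirf pairing and firmware configuration:

using System.IO.Ports;
using BuildWatcher;

class LampDemo
{
    static void Main()
    {
        // Virtual COM port created when the BlueSmirf is paired (name/baud are assumptions)
        SerialPort port = new SerialPort("COM5", 115200);
        port.ReadTimeout = 2000;
        port.Open();

        // A Bluetooth connection can't reset the Arduino over DTR, so canReset = false
        ArduinoDualRGB lamp = new ArduinoDualRGB(port, false, 2);

        // Lamp 0: solid green (build passing)
        lamp.SetColor(0, 0, 15, 0);
        lamp.SetBlink(0, 2, 0);

        // Lamp 1: red, blinking one second on / one second off (build failing)
        lamp.SetColor(1, 15, 0, 0);
        lamp.SetBlink(1, 2, 2);

        port.Close();
    }
}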
|
https://joe.blog.freemansoft.com/2012/06/talking-to-bluetooth-arduino-rgb-lamp.html
|
CC-MAIN-2021-10
|
en
|
refinedweb
|
Hello everyone, I'm trying to create a website with Flask and Python. I have set up the db, and this is the code for my login logic:
@app.route('/login/', methods=["POST", "GET"])
def loginpage():
    c, conn = connection()
    try:
        if request.method == "GET":
            return render_template("login.html", message=message)
        if request.method == "POST":
            data = c.execute("SELECT * FROM user WHERE username = '%s'" % (request.form['username']))
            data = c.fetchone()[2]
            if data == request.form['password']:
                message = "Success"
                session['loged-in'] = True
                session['username'] = "flag{lollellul}"
                return render_template("login.html", message=message)
            else:
                message = "Unknown user"
                return render_template("login.html", message=message)
    except Exception as e:
        message = str(e)
        return render_template("login.html", message=message)
But when I test it, it always returns this error: 'NoneType' object has no attribute '__getitem__'. I edited the code, but it is still no better.
Can anybody help me with it?
Is this the complete code? The error says:
'NoneType' object has no attribute '__getitem__'
But I cannot see __getitem__ being used anywhere in the code.
The fetchone() method fetches the result of the query one row at a time. When there are no more rows to fetch, it returns None. So, when using fetchone(), it is a good idea to check the result with an if condition right after fetching. Try this:
c.execute("SELECT * FROM user WHERE username = '%s'" % (request.form['username']))
fetch_result = c.fetchone()
if fetch_result:
    data = fetch_result[2]
    # Here goes the rest of the logic
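For completeness, here is a sketch of how the whole route might look with that check in place. It assumes, as in the question, that the password is stored in column index 2; the string-formatted SQL is kept from the original, although a parameterized query would be safer:

@app.route('/login/', methods=["POST", "GET"])
def loginpage():
    message = ""
    c, conn = connection()
    try:
        if request.method == "POST":
            c.execute("SELECT * FROM user WHERE username = '%s'" % (request.form['username']))
            fetch_result = c.fetchone()
            if fetch_result and fetch_result[2] == request.form['password']:
                message = "Success"
                session['loged-in'] = True
                session['username'] = request.form['username']
            else:
                message = "Unknown user"
    except Exception as e:
        message = str(e)
    return render_template("login.html", message=message)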
|
https://www.edureka.co/community/35136/help-me-solve-the-code?show=40912
|
CC-MAIN-2021-10
|
en
|
refinedweb
|
Procedural Level Generation in Games Tutorial: Part 1
A tutorial on procedural level generation using the Drunkard Walk algorithm.
Note from Ray: This is a brand new Sprite Kit tutorial released as part of the iOS 7 Feast. Enjoy!
Most games you play have carefully designed levels that always remain the same. Experienced players know exactly what happens at any given time, when to jump and what button to press. While this is not necessarily a bad thing, it does to some extent reduce the game’s lifespan with a given player. Why play the same level over and over again?
One way to increase your game’s replay value is to allow the game to generate its content programmatically – also known as adding procedurally generated content.
In this tutorial, you will learn to create tiled dungeon-like levels using an algorithm called the Drunkard Walk. You will also create a reusable
Map class with several properties for controlling a level’s generation.
This tutorial uses Sprite Kit, a framework introduced with iOS 7. You will also need Xcode 5. If you are not already familiar with Sprite Kit, I recommend you read the Sprite Kit Tutorial for Beginners on this site. For readers who are not yet ready to switch to Sprite Kit, fear not. You can easily rewrite the code in this tutorial to use Cocos2d.
Getting Started
Before getting started, let’s clear up one possible misconception: procedural should not be confused with random. Random means that you have little control over what happens, which should not be the case in game development.
Even in procedurally generated levels, your player should be able to reach the exit. What would be the fun of playing an endless runner like Canabalt if you got to a gap between buildings that would be impossible to jump? Or playing a platformer where the exit is in a place you cannot reach? In this sense, it might be even harder to design a procedurally generated level than to carefully craft your level in Tiled.
I assume, being the bad-ass coder that you are, that you scoff at such cautionary statements. To get started, download the starter project for this tutorial. Once downloaded, unzip the file and open the project in Xcode, and build and run. You should now see a screen similar to this:
The starter project contains the basic building blocks of the game, including all necessary artwork, sound effects and music. Take note of the following important classes:
Map: Creates a basic 10×10 square that functions as the level for the game.
MapTiles: A helper class that manages a 2D grid of tiles. I will explain this class later in the tutorial.
DPad: Provides a basic implementation of a joystick to control the player’s character, a cat.
MyScene: Sets up the Sprite Kit scene and processes game logic.
Spend a few moments getting familiar with the code in the starter project before moving on. There are comments to help you understand how the code works. Also, try playing the game by using the DPad at the bottom-left corner to move the cat to the exit. Notice how the start and exit points change every time the level begins.
The Beginnings of a New Map
If you played the starter game more than once, you probably discovered that the game isn’t very fun. As Jordan Fisher writes in GamaSutra, game levels, especially procedurally generated ones, need to nail these three criteria to be successful:
- Feasibility: Can you beat the level?
- Interesting design: Do you want to beat it?
- Skill level: Is it a good challenge?
Your current level fails two of these three criteria: The design is not very interesting, as the outer perimeter never changes, and it is too easy to win, as you can always see where the exit is when the level starts. Hence, to make the level more fun, you need to generate a better dungeon and make the exit harder to find.
The first step is to change the way you generate the map. To do so, you’ll delete the
Map class and replace it with a new implementation.
Select Map.h and Map.m in the Project Navigator, press Delete and then select Move to Trash.
Next go to File\New\New File…, choose the iOS\Cocoa Touch\Objective-C class and click Next. Name the class Map, make it a Subclass of SKNode and click Next. Make sure the ProceduralLevelGeneration target is selected and click Create.
Open Map.h and add the following code to the
@interface section:
@property (nonatomic) CGSize gridSize; @property (nonatomic, readonly) CGPoint spawnPoint; @property (nonatomic, readonly) CGPoint exitPoint; + (instancetype) mapWithGridSize:(CGSize)gridSize; - (instancetype) initWithGridSize:(CGSize)gridSize;
This is the interface that
MyScene expects for the
Map class. You specify here where to spawn the player and exit, and create some initializers to construct the class given a certain size.
Implement these in Map.m by adding this code to the
@implementation section:
+ (instancetype) mapWithGridSize:(CGSize)gridSize { return [[self alloc] initWithGridSize:gridSize]; } - (instancetype) initWithGridSize:(CGSize)gridSize { if (( self = [super init] )) { self.gridSize = gridSize; _spawnPoint = CGPointZero; _exitPoint = CGPointZero; } return self; }
Here you add a stub implementation that simply sets the player spawn and exit points to
CGPointZero. This will allow you to have a simple starting point – you’ll fill these out to be more interesting later.
Build and run, and you should see the following:
Gone are the borders of the map and the feline hero gets sucked right into the exit, making the game unplayable – or really, really easy if you are a glass-half-full kind of person. Not really the a-maze-ing (pun intended) game you were hoping for, right? Well, time to put down some floors. Enter the Drunkard Walk algorithm.
The Drunkard Walk Algorithm
The Drunkard Walk algorithm is a kind of random walk and one of the simplest dungeon generation algorithms around. In its simplest implementation, the Drunkard Walk algorithm works as follows:
- Choose a random start position in a grid and mark it as a floor.
- Pick a random direction to move (Up, Down, Left or Right).
- Move in that direction and mark the position as a floor, unless it already is a floor.
- Repeat steps 2 and 3 until a desired number of floors have been placed in the grid.
Nice and simple, eh? Basically, it is a loop that runs until a desired number of floors have been placed in the map. To allow the map generation to be as flexible as possible, you will start implementing the algorithm by adding a new property to hold the number of tiles to generate.
Open Map.h and add the following property:
@property (nonatomic) NSUInteger maxFloorCount;
Next, open Map.m and add the following method:
- (void) generateTileGrid { CGPoint startPoint = CGPointMake(self.gridSize.width / 2, self.gridSize.height / 2); NSUInteger currentFloorCount = 0; while ( currentFloorCount < self.maxFloorCount ) { currentFloorCount++; } }
The above code begins to implement step 1 in the basic Drunkard Walk algorithm loop, but there is one significant difference. Can you spot it?
[spoiler title="Solution"]
startPoint is defaulted to the center of the grid instead of a random position. You do this to prevent the algorithm from butting up against the edges and getting stuck. More about that in the second part of the tutorial.[/spoiler]
generateTileGrid begins by setting a start position and then enters a loop that runs until the
currentFloorCount is equal to the desired number of floors defined by the
maxFloorCount property.
When you initialize a
Map object, you should invoke
generateTileGrid to ensure that you create the grid. So, add the following code to
initWithGridSize: in Map.m, after the
_exitPoint = CGPointZero line:
[self generateTileGrid];
Build and run to make sure the game compiles as expected. Nothing has changed since the last run. The cat is still sucked into the exit and there are still no walls. You still need to write the code to generate the floor, but before you do that, you need to understand the
MapTiles helper class.
Managing the Tile Grid
The
MapTiles class is essentially a wrapper for a dynamic C array that will manage a 2D grid for the
Map class.
Note: If you're wondering why I choose to use a C array instead of an
NSMutableArray, it comes down to personal preference. I generally do not like boxing primitive data types like integers into objects and then unboxing them again to use them, and since the
MapTiles grid is just an array of integers, I prefer a C array.
The
MapTiles class is already in your project. If you've taken a glance through and feel you understand how it works well, feel free to skip ahead to the next section, Generating the Floor.
But if you're unsure about how it works, keep reading to learn how to recreate it step-by-step, and I'll explain how it works along the way.
To start, select MapTiles.h and MapTiles.m in the Project Navigator, press Delete and then select Move to Trash.
Go to File\New\File..., choose the iOS\Cocoa Touch\Objective-C class and click Next. Name the class MapTiles, make it a subclass of NSObject and click Next. Be sure the ProceduralLevelGeneration target is selected and click Create.
In order to make it easy to identify the type of tile, add this enum below the
#import statement in MapTiles.h:
typedef NS_ENUM(NSInteger, MapTileType) { MapTileTypeInvalid = -1, MapTileTypeNone = 0, MapTileTypeFloor = 1, MapTileTypeWall = 2, };
If later on you want to extend the
MapTiles class with further tile types, you should put those in this
MapTileType enum.
Note: Notice the integer values you assign to each of the enums. They weren't picked at random. Look in the tiles.atlas texture atlas and click the 1.png file, and you will see that it is the texture for the floor just as
MapTileTypeFloor has a value of 1. This makes it easy to convert the 2D grid array into tiles later on.
Open MapTiles.h and add the following properties and method prototypes between
@interface and
@end:
@property (nonatomic, readonly) NSUInteger count; @property (nonatomic, readonly) CGSize gridSize; - (instancetype) initWithGridSize:(CGSize)size; - (MapTileType) tileTypeAt:(CGPoint)tileCoordinate; - (void) setTileType:(MapTileType)type at:(CGPoint)tileCoordinate; - (BOOL) isEdgeTileAt:(CGPoint)tileCoordinate; - (BOOL) isValidTileCoordinateAt:(CGPoint)tileCoordinate;
You've added two read-only properties:
count provides the total number of tiles in the grid and
gridSize holds the width and height of the grid in tiles. You'll find these properties handy later on. I'll explain the five methods as you implement the code.
Next, open MapTiles.m and add the following class extension right above the
@implementation line:
@interface MapTiles () @property (nonatomic) NSInteger *tiles; @end
This code adds a private property
tiles to the class. This is a pointer to the array that holds information about the tile grid.
Now implement initWithGridSize: in MapTiles.m after the
@implementation line:
- (instancetype) initWithGridSize:(CGSize)size { if (( self = [super init] )) { _gridSize = size; _count = (NSUInteger) size.width * size.height; self.tiles = calloc(self.count, sizeof(NSInteger)); NSAssert(self.tiles, @"Could not allocate memory for tiles"); } return self; }
You initialize the two properties in
initWithGridSize:. Since the total number of tiles in the grid is equal to the width of the grid multiplied by the grid height, you assign this value to the
count property. Using this count, you allocate the memory for the tiles array with
calloc, which ensures all variables in the array are initialized to 0, equivalent to the enumerated variable
MapTileTypeNone.
As ARC will not manage memory allocated using
calloc or
malloc, you should release the memory whenever you deallocate the
MapTiles object. Before
initWithGridSize: but after
@implementation, add the
dealloc method:
- (void) dealloc { if ( self.tiles ) { free(self.tiles); self.tiles = nil; } }
dealloc frees the memory when you deallocate an object and resets the
tiles property pointer to avoid it pointing to an array that no longer exists in memory.
Apart from the construction and deconstruction, the
MapTiles class also has a few helper methods for managing tiles. But before you start implementing these methods, you need to understand how the tiles array exists in memory versus how it is organized in a grid.
Figure 1: How
calloc organizes the variables in memory. Each number is the index of the variable in memory.
When you allocate memory for the tiles using
calloc, it reserves n bytes for each array item, depending on the data type, and puts them end-to-end in a flat structure in memory (see figure 1).
This organization of tiles is hard to work with in practice. It is much easier to find a tile by using an (x,y) pair of coordinates, as illustrated in Figure 2, so that is how the
MapTiles class should organize the tile grid.
Thankfully, it is very easy to calculate the index of a tile in memory from an (x,y) pair of coordinates since you know the size of the grid from the
gridSize property. The numbers outside the square in Figure 2 illustrate the x- and y-coordinates, respectively. For example, the (x,y) coordinates (1,2) in the grid will be index 9 of the array. You calculate this using the formula:
index in memory = y * gridSize.width + x
With this knowledge, you can start implementing a method that will calculate an index from a pair of grid coordinates. For convenience, you will also create a method to ensure the grid coordinates are valid.
In MapTiles.m, add the following new methods:
- (BOOL) isValidTileCoordinateAt:(CGPoint)tileCoordinate { return !( tileCoordinate.x < 0 || tileCoordinate.x >= self.gridSize.width || tileCoordinate.y < 0 || tileCoordinate.y >= self.gridSize.height ); } - (NSInteger) tileIndexAt:(CGPoint)tileCoordinate { if ( ![self isValidTileCoordinateAt:tileCoordinate] ) { NSLog(@"Not a valid tile coordinate at %@", NSStringFromCGPoint(tileCoordinate)); return MapTileTypeInvalid; } return ((NSInteger)tileCoordinate.y * (NSInteger)self.gridSize.width + (NSInteger)tileCoordinate.x); }
isValidTileCoordinateAt: tests if a given pair of coordinates is within the bounds of the grid. Notice how the method checks to see if it is outside of the bounds and then returns the opposite result, so if the coordinates are outside the bounds, it returns
NO, and if they are not outside of the bounds, it returns
YES. This is faster than checking if the coordinates are within the bounds, which would require the conditions to be AND-ed together instead of OR-ed.
tileIndexAt: uses the equation discussed above to calculate an index from a pair of coordinates, but before doing this, it tests if the coordinates are valid. If not, it returns
MapTileTypeInvalid, which has a value of -1.
With the math in place, it is now possible to easily create the methods to return or set the tile type. So, add the following two methods after
initWithGridSize: in MapTiles.m:
- (MapTileType) tileTypeAt:(CGPoint)tileCoordinate { NSInteger tileArrayIndex = [self tileIndexAt:tileCoordinate]; if ( tileArrayIndex == -1 ) { return MapTileTypeInvalid; } return self.tiles[tileArrayIndex]; } - (void) setTileType:(MapTileType)type at:(CGPoint)tileCoordinate { NSInteger tileArrayIndex = [self tileIndexAt:tileCoordinate]; if ( tileArrayIndex == -1 ) { return; } self.tiles[tileArrayIndex] = type; }
The two methods calculate the index from the pair of coordinates passed using the
tileIndexAt: method you just added and then either set or return the
MapTileType from the
tiles array.
Last but not least, add a method to determine if a given pair of tile coordinates is at the edge of the map. You'll later use this method to ensure you do not place any floors at the edge of the grid, thereby making it impossible to encapsulate all floors behind walls.
- (BOOL) isEdgeTileAt:(CGPoint)tileCoordinate { return ((NSInteger)tileCoordinate.x == 0 || (NSInteger)tileCoordinate.x == (NSInteger)self.gridSize.width - 1 || (NSInteger)tileCoordinate.y == 0 || (NSInteger)tileCoordinate.y == (NSInteger)self.gridSize.height - 1); }
Referring to Figure 2 above, notice that border tiles would be any tile with an x-coordinate of 0 or
gridSize.width – 1, since the grid indices are zero-based. Equally, a y-coordinate of 0 or
gridSize.height – 1 would be a border tile.
Finally, when testing it's nice to be able to see what your procedural generation is actually generating. Add the following implementation of
description, which will output the grid to the console for easy debugging:
- (NSString *) description { NSMutableString *tileMapDescription = [NSMutableString stringWithFormat:@"<%@ = %p | \n", [self class], self]; for ( NSInteger y = ((NSInteger)self.gridSize.height - 1); y >= 0; y-- ) { [tileMapDescription appendString:[NSString stringWithFormat:@"[%i]", y]]; for ( NSInteger x = 0; x < (NSInteger)self.gridSize.width; x++ ) { [tileMapDescription appendString:[NSString stringWithFormat:@"%i", [self tileTypeAt:CGPointMake(x, y)]]]; } [tileMapDescription appendString:@"\n"]; } return [tileMapDescription stringByAppendingString:@">"]; }
This method simply loops through the grid to create a string representation of the tiles.
That was a lot of text and code to take in, but what you've built will make the procedural level generation much easier, since you can now abstract the grid handling from the level generation. Now it's time to lay down some ground.
Generating the Floor
You're going to place ground or floor tiles procedurally in the map using the Drunkard Walk algorithm discussed above. In Map.m, you already implemented part of the algorithm so that it finds a random start position (step 1) and loops a desired number of times (step 4). Now you need to implement steps 2 and 3 to generate the actual floor tiles within the loop you created.
To make the
Map class a bit more flexible, you'll start by adding a dedicated method to generate a procedural map. This will also be handy if you later need to regenerate the map.
Open Map.h and add the following method declaration to the interface:
- (void) generate;
In Map.m, add the following import to the top of the file:
#import "MapTiles.h"
Add the following code right above the
@implementation line:
@interface Map () @property (nonatomic) MapTiles *tiles; @end
The class extension holds one private property, which is a pointer to a
MapTiles object. You'll use this object for easy grid handling in the map generation. You're keeping it private since you don't want to change the
MapTiles object from outside the
Map class.
Next, implement the
generate method in Map.m:
- (void) generate { self.tiles = [[MapTiles alloc] initWithGridSize:self.gridSize]; [self generateTileGrid]; }
First the method allocates and initializes a
MapTiles object, then it generates a new tile grid by calling
generateTileGrid.
In Map.m, go to
initWithGridSize: and delete this line:
[self generateTileGrid];
You deleted that line because map generation should no longer occur immediately when you create a
Map object.
It's time to add the code to generate the floor of the dungeon. Do you remember the remaining steps of the Drunkard Walk algorithm? You choose a random direction and then place a floor at the new coordinates.
The first step is to add a convenience method to provide a random number between two values. Add the following method in Map.m:
- (NSInteger) randomNumberBetweenMin:(NSInteger)min andMax:(NSInteger)max { return min + arc4random() % (max - min + 1); }
You'll use this method to return a random number between min and max, both inclusive.
Return to
generateTileGrid and replace its contents with the following:
CGPoint startPoint = CGPointMake(self.tiles.gridSize.width / 2, self.tiles.gridSize.height / 2); // 1 [self.tiles setTileType:MapTileTypeFloor at:startPoint]; NSUInteger currentFloorCount = 1; // 2 CGPoint currentPosition = startPoint; while ( currentFloorCount < self.maxFloorCount ) { // 3 NSInteger direction = [self randomNumberBetweenMin:1 andMax:4]; CGPoint newPosition; // 4 switch ( direction ) { case 1: // Up newPosition = CGPointMake(currentPosition.x, currentPosition.y - 1); break; case 2: // Down newPosition = CGPointMake(currentPosition.x, currentPosition.y + 1); break; case 3: // Left newPosition = CGPointMake(currentPosition.x - 1, currentPosition.y); break; case 4: // Right newPosition = CGPointMake(currentPosition.x + 1, currentPosition.y); break; } //5 if([self.tiles isValidTileCoordinateAt:newPosition] && ![self.tiles isEdgeTileAt:newPosition] && [self.tiles tileTypeAt:newPosition] == MapTileTypeNone) { currentPosition = newPosition; [self.tiles setTileType:MapTileTypeFloor at:currentPosition]; currentFloorCount++; } } // 6 _exitPoint = currentPosition; // 7 NSLog(@"%@", [self.tiles description]);
This is what the code is doing:
- It marks the tile at coordinates startPoint in the grid as a floor tile and therefore initializes currentFloorCount with a count of 1.
- currentPosition is the current position in the grid. The code initializes it to the startPoint coordinates where the Drunkard Walk algorithm will start.
- Here the code chooses a random number between 1 and 4, providing a direction to move (1 = UP, 2 = DOWN, 3 = LEFT, 4 = RIGHT).
- Based on the random number chosen in the above step, the code calculates a new position in the grid.
- If the newly calculated position is valid and not an edge, and does not already contain a tile, this part adds a floor tile at that position and increments currentFloorCount by 1.
- Here the code sets the last tile placed to the exit point. This is the goal of the map.
- Lastly, the code prints the generated tile grid to the console.
Build and run. The game runs with no visible changes, but it fails to write the tile grid to the console. Why is that?
[spoiler title="Solution"]You never call
generate on the
Map class during
MyScene initialization. Therefore, you created the map object but don't actually generate the tiles.[/spoiler]
To fix this, go to MyScene.m and in
initWithSize:, replace the line
self.map = [[Map alloc] init] with the following:
self.map = [[Map alloc] initWithGridSize:CGSizeMake(48, 48)]; self.map.maxFloorCount = 64; [self.map generate];
This generates a new map with a grid size of 48 by 48 tiles and a desired maximum floor count of 64. Once you set the
maxFloorCount property, you generate the map.
Build and run again, and you should see an output that resembles something similar to, but probably not exactly like (remember, it's random), the following:
HOORAY!! You have generated a procedural level. Pat yourself on the back and get ready to show your masterpiece on the big – or small – screen.
Converting a Tile Grid into Tiles
Plotting your level in the console is a good way to debug your code but a poor way to impress your player. The next step is to convert the grid into actual tiles.
The starter project already includes a texture atlas containing the tiles. To load the atlas into memory, add a private property to the class extension of Map.m, as well as a property to hold the size of a tile:
@property (nonatomic) SKTextureAtlas *tileAtlas; @property (nonatomic) CGFloat tileSize;
Initialize these two properties in
initWithGridSize:, just after setting the value of
_exitPoint:
self.tileAtlas = [SKTextureAtlas atlasNamed:@"tiles"]; NSArray *textureNames = [self.tileAtlas textureNames]; SKTexture *tileTexture = [self.tileAtlas textureNamed:(NSString *)[textureNames firstObject]]; self.tileSize = tileTexture.size.width;
After loading the texture atlas, the above code reads the texture names from the atlas. It uses the first name in the array to load a texture and stores that texture's width as
tileSize. This code assumes textures in the atlas are squares (same width and height) and are all of the same size.
Note: Using a texture atlas reduces the number of draw calls necessary to render the map. Every draw call adds overhead to the system because Sprite Kit has to perform extra processing to set up the GPU for each one. By using a single texture atlas, the entire map may be drawn in as few as a single draw call. The exact number will depend on several things, but in this app, those won't come into play. To learn more, check out Chapter 25 in iOS Games by Tutorials, Performance: Texture Atlases.
Still inside Map.m, add the following method:
- (void) generateTiles { // 1 for ( NSInteger y = 0; y < self.tiles.gridSize.height; y++ ) { for ( NSInteger x = 0; x < self.tiles.gridSize.width; x++ ) { // 2 CGPoint tileCoordinate = CGPointMake(x, y); // 3 MapTileType tileType = [self.tiles tileTypeAt:tileCoordinate]; // 4 if ( tileType != MapTileTypeNone ) { // 5 SKTexture *tileTexture = [self.tileAtlas textureNamed:[NSString stringWithFormat:@"%i", tileType]]; SKSpriteNode *tile = [SKSpriteNode spriteNodeWithTexture:tileTexture]; // 6 tile.position = tileCoordinate; // 7 [self addChild:tile]; } } } }
generateTiles converts the internal tile grid into actual tiles by:
- Two for loops, one for x and one for y, iterate through each tile in the grid.
- This converts the current x- and y-values into a CGPoint structure for the position of the tile within the grid.
- Here the code determines the type of tile at this position within the grid.
- If the tile type is not an empty tile, then the code proceeds with creating the tile.
- Based on the tile type, the code loads the respective tile texture from the texture atlas and assigns it to a SKSpriteNode object. Remember that the tile type (integer) is the same as the file name of the texture, as explained earlier.
- The code sets the position of the tile to the tile coordinate.
- Then it adds the created tile node as a child of the map object. This is done to ensure proper scrolling by grouping the tiles to the map where they belong.
Finally, make sure the grid is actually turned into tiles by inserting the following line into the
generate method in Map.m, after
[self generateTileGrid]:
[self generateTiles];
Build and run — but the result is not as expected. The game incorrectly places the tiles in a big pile, as illustrated here:
The reason is straightforward: When positioning the tile, the current code sets the tile's position to the position within the internal grid and not relative to screen coordinates.
You need a new method to convert grid coordinates into screen coordinates, so add the following to Map.m:
- (CGPoint) convertMapCoordinateToWorldCoordinate:(CGPoint)mapCoordinate { return CGPointMake(mapCoordinate.x * self.tileSize, (self.tiles.gridSize.height - mapCoordinate.y) * self.tileSize); }
By multiplying the grid (map) coordinate by the tile size, you calculate the horizontal position. The vertical position is slightly more complicated. Remember that the coordinates (0,0) in Sprite Kit represent the bottom-left corner. In the tile grid, the position of (0,0) is the top-left corner (see Figure 2 above). Hence, in order to correctly position the tile, you need to invert its vertical placement. You do this by subtracting the tile's y-position in the grid from the total height of the grid and multiplying the result by the tile size.
Revisit
generateTiles and change the line that sets
tile.position to the following:
tile.position = [self convertMapCoordinateToWorldCoordinate:CGPointMake(tileCoordinate.x, tileCoordinate.y)];
Also, change the line that sets
_exitPoint in
generateTileGrid to the following:
_exitPoint = [self convertMapCoordinateToWorldCoordinate:currentPosition];
Build and run – oh no, where did the tiles go?
Well, they are still there – they're just outside the visible area. You can easily fix this by changing the player's spawn position. You will apply a simple yet effective strategy where you set the spawn point to the position of the
startPoint in
generateTileGrid.
Go to
generateTileGrid and add the following line at the very bottom of the method:
_spawnPoint = [self convertMapCoordinateToWorldCoordinate:startPoint];
The spawn point is the pair of screen coordinates where the game should place the player at the beginning of the level. Hence, you calculate the world coordinates from the grid coordinates.
Build and run, and take the cat for a walk around the procedural world. Maybe you will even find the exit?
Try playing around with different grid sizes and max number of floor tiles to see how it affects the map generation.
One obvious issue now is that the cat can stray from the path. And we all know what happens when cats stray, right? All the songbirds of the world shiver. So, time to put up some walls.
Adding Walls
Open Map.m and add the following method:
- (void) generateWalls { // 1 for ( NSInteger y = 0; y < self.tiles.gridSize.height; y++ ) { for ( NSInteger x = 0; x < self.tiles.gridSize.width; x++ ) { CGPoint tileCoordinate = CGPointMake(x, y); // 2 if ( [self.tiles tileTypeAt:tileCoordinate] == MapTileTypeFloor ) { for ( NSInteger neighbourY = -1; neighbourY < 2; neighbourY++ ) { for ( NSInteger neighbourX = -1; neighbourX < 2; neighbourX++ ) { if ( !(neighbourX == 0 && neighbourY == 0) ) { CGPoint coordinate = CGPointMake(x + neighbourX, y + neighbourY); // 3 if ( [self.tiles tileTypeAt:coordinate] == MapTileTypeNone ) { [self.tiles setTileType:MapTileTypeWall at:coordinate]; } } } } } } } }
- The strategy applied by generateWalls is to first loop through each tile of the grid.
- It does this until it identifies a floor tile (MapTileTypeFloor).
- It then checks the surrounding tiles and marks these as walls (MapTileTypeWall) if no tile is placed there already (MapTileTypeNone).
The inner for loops (after //2) might seem a bit strange at first. It looks at each tile that surrounds the tile at coordinate (x,y). Take a peek at Figure 3 and see how the tiles you want are one less, equal to, and one more than the original index. The two for loops give just that, starting at -1 and looping through to +1. Adding one of these integers to the original index inside the for loop, you find each neighbor.
What if the tile you're checking is at the border of the grid? In that case, this check would fail, as the index would be invalid, correct?
Yes, but luckily this situation is mitigated by the
tileTypeAt: method on the
MapTiles class. If an invalid coordinate is sent to
tileTypeAt:, the method will return a
MapTileTypeInvalid value. Consider the line after
//3 in
generateWalls and notice it only changes the tile to a wall tile if the returned tile type is
MapTileTypeNone.
To generate the wall tiles, go back to
generate in Map.m and add the following line of code after
[self generateTileGrid] and before
[self generateTiles]:
[self generateWalls];
Build and run. You should now see wall tiles surrounding the floor tiles. Try moving the cat around – notice anything strange?
Walls are kind of pointless if you can walk right through them. There are several ways to fix this problem, one of which is described in the Collisions and Collectables: How To Make a Tile-Based Game with Cocos2D 2.X, Part 2 tutorial on this site. In this tutorial, you will do it a bit differently by using the built-in physics engine in Sprite Kit. Everyone likes new tech, after all.
Procedural Collision Handling: Theory
There are many ways you could turn wall tiles into collision objects. The most obvious is to add a
physicsBody to each wall tile, but that is not the most efficient solution. Another way, as described by Steffen Itterheim, is to use the Moore Neighborhood algorithm, but that is a tutorial in its own right.
Instead, you will implement a fairly simple method where connected wall segments are combined into a single collision object. Figure 4 illustrates this method.
The method will iterate over all tiles in the map using the following logic:
- Starting at (0,0), iterate the tile grid until you find a wall tile.
- When you find a wall tile, mark the tile grid position. This is the starting point for the collision wall.
- Move to the next tile in the grid. If this is also a wall tile, then increase the number of tiles in the collision wall by 1.
- Continue step 3 until you reach a non-wall tile or the end of the row.
- When you reach a non-tile or the end of the row, create a collision wall from the starting point with a size of the number of tiles in the collision wall.
- Start the iteration again, go back to step 2 and repeat until you've turned all wall tiles in the grid into collision walls.
Note: The method described here is very basic and could be optimized further. For instance, you could iterate the map both horizontally and vertically. Iterating the map horizontally would omit all collision walls that are the size of one tile. You would then pick these up when iterating the map vertically, further decreasing the number of collision objects, which is always a good thing.
It's time to put theory into practice.
Procedural Collision Handling: Practice
Look at
initWithSize: in MyScene.m and see that the code to activate the physics engine is already in the starter project. Since Ray did an excellent job explaining how to set up the physics engine in the Sprite Kit for Beginners tutorial, I'll only explain it here in the context of procedural level generation.
When the code creates the
physicsBody of the player object, it sets it to collide with walls by adding the
CollisionTypeWall to the
collisionBitMask. That way, the physics engine will automatically bounce the player off any wall objects.
However, when you created the walls in
generateWalls, you didn't create them as physics objects – only as simple
SKSpriteNodes. Hence, when you build and run the game the player will not collide with the walls.
You're going to simplify wall collision object creation by adding a helper method. Open Map.m and add the following code:
// Add at the top of the file together with the other #import statements #import "MyScene.h" // Add with other methods - (void) addCollisionWallAtPosition:(CGPoint)position withSize:(CGSize)size { SKNode *wall = [SKNode node]; wall.position = CGPointMake(position.x + size.width * 0.5f - 0.5f * self.tileSize, position.y - size.height * 0.5f + 0.5f * self.tileSize); wall.physicsBody = [SKPhysicsBody bodyWithRectangleOfSize:size]; wall.physicsBody.dynamic = NO; wall.physicsBody.categoryBitMask = CollisionTypeWall; wall.physicsBody.contactTestBitMask = 0; wall.physicsBody.collisionBitMask = CollisionTypePlayer; [self addChild:wall]; }
This method creates and adds an
SKNode to the map with the passed position and size. It then creates a non-moveable physics body for the node the size of the node, and ensures that the physics engine performs collision handling when the player collides with the node.
It's time to implement the collision wall generation. Add the following method:
- (void) generateCollisionWalls { for ( NSInteger y = 0; y < self.tiles.gridSize.height; y++ ) { CGFloat startPointForWall = 0; CGFloat wallLength = 0; for ( NSInteger x = 0; x <= self.tiles.gridSize.width; x++ ) { CGPoint tileCoordinate = CGPointMake(x, y); // 1 if ( [self.tiles tileTypeAt:tileCoordinate] == MapTileTypeWall ) { if ( startPointForWall == 0 && wallLength == 0 ) { startPointForWall = x; } wallLength += 1; } // 2 else if ( wallLength > 0 ) { CGPoint wallOrigin = CGPointMake(startPointForWall, y); CGSize wallSize = CGSizeMake(wallLength * self.tileSize, self.tileSize); [self addCollisionWallAtPosition:[self convertMapCoordinateToWorldCoordinate:wallOrigin] withSize:wallSize]; startPointForWall = 0; wallLength = 0; } } } }
Here you perform the six steps described earlier.
- You iterate through each row until you find a wall tile. You set a starting point (tile coordinate pair) for the collision wall and then increase the wallLength by one. Then you move to the next tile. If this is also a wall tile, you repeat these steps.
- If the next tile is not a wall tile, you calculate the size of the wall in points by multiplying the tile size, and you convert the starting point into world coordinates. By passing the starting point (as world coordinates in pixels) and size (in pixels), you generate a collision wall using the addCollisionWallAtPosition:withSize: helper method you added above.
Go to
generate in Map.m and add the following line of code after
[self generateTiles] to ensure the game generates collision walls when it generates a tile map:
[self generateCollisionWalls];
Build and run. Now the cat is stuck within the walls. The only way out is to find the exit – or is it?
Where to Go from Here?
You've earned a basic understanding of how to generate procedural levels in your game. Here is the full source code for the first part of the tutorial.
In the second part of this tutorial, you will extend the map generation code even further by adding rooms. You'll also make map generation more controllable by adding several properties that will influence the process.
If you have any comments or suggestions related to this tutorial, please join the forum discussion below.
|
https://www.raywenderlich.com/2637-procedural-level-generation-in-games-tutorial-part-1
|
CC-MAIN-2021-10
|
en
|
refinedweb
|
Use Azure portal to create a Service Bus namespace and a queue
This quickstart shows you how to create a Service Bus namespace and a queue using the Azure portal. It also shows you how to get authorization credentials that a client application can use to send/receive messages to/from the queue.
What are Service Bus queues?
Service Bus queues support a brokered messaging communication model. When using queues, components of a distributed application do not communicate directly with each other; instead they exchange messages via a queue, which acts as an intermediary (broker).
Prerequisites
To complete this quickstart, make sure you have an Azure subscription. If you don't have an Azure subscription, you can create a free account before you begin.
Create a queue in the Azure portal
On the Service Bus Namespace page, select Queues in the left navigational menu.
On the Queues page, select + Queue on the toolbar.
Enter a name for the queue, and leave the other values with their defaults.
Now, select Create.
Next steps
In this article, you created a Service Bus namespace and a queue in the namespace. To learn how to send/receive messages to/from the queue, see one of the following quickstarts in the Send and receive messages section.
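As a taste of what those quickstarts cover, sending and receiving a message from Python might look roughly like this. It assumes the azure-servicebus package (version 7 or later); the connection string and queue name are placeholders (the connection string comes from the namespace's shared access policies in the portal):

from azure.servicebus import ServiceBusClient, ServiceBusMessage

CONNECTION_STR = "<namespace connection string>"
QUEUE_NAME = "<queue name>"

with ServiceBusClient.from_connection_string(CONNECTION_STR) as client:
    # send one message to the queue
    with client.get_queue_sender(QUEUE_NAME) as sender:
        sender.send_messages(ServiceBusMessage("Hello, Service Bus queue"))

    # receive and complete any available messages
    with client.get_queue_receiver(QUEUE_NAME, max_wait_time=5) as receiver:
        for msg in receiver:
            print(str(msg))
            receiver.complete_message(msg)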
|
https://docs.microsoft.com/en-gb/azure/service-bus-messaging/service-bus-quickstart-portal
|
CC-MAIN-2021-10
|
en
|
refinedweb
|
NAME
mq_overview - overview of POSIX message queues
Library interfaces and system calls
In most cases the mq_*() library interfaces listed above are implemented on top of underlying system calls of the same name. Deviations from this scheme are indicated in the following list.
The RLIMIT_MSGQUEUE resource limit, which places a limit on the amount of space that can be consumed by all of the message queues belonging to a process's real user ID, is described in getrlimit(2).
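As a brief illustration of these interfaces, a minimal program might create a queue and exchange one message like this. The queue name and sizes are arbitrary; link with -lrt on older glibc:

#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <sys/types.h>

int main(void)
{
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 128 };

    /* Create (or open) a queue named /mymq, readable/writable by the owner */
    mqd_t mqd = mq_open("/mymq", O_CREAT | O_RDWR, 0600, &attr);
    if (mqd == (mqd_t)-1) { perror("mq_open"); return 1; }

    const char *msg = "hello";
    if (mq_send(mqd, msg, strlen(msg) + 1, 0) == -1)   /* priority 0 */
        perror("mq_send");

    char buf[128];              /* must be at least mq_msgsize bytes */
    unsigned prio;
    ssize_t n = mq_receive(mqd, buf, sizeof(buf), &prio);
    if (n >= 0)
        printf("received %zd bytes: %s (priority %u)\n", n, buf, prio);

    mq_close(mqd);
    mq_unlink("/mymq");
    return 0;
}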
Mounting the message queue filesystem
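A queue can be made visible as a file by mounting the mqueue filesystem and then reading the corresponding file; a typical session looks roughly like this (the mount point and the mymq queue are illustrative):

# mkdir /dev/mqueue
# mount -t mqueue none /dev/mqueue
# cat /dev/mqueue/mymq
QSIZE:129 NOTIFY:2 SIGNO:0 NOTIFY_PID:8260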
These fields are as follows:
- QSIZE
- Number of bytes of data in all messages in the queue (but see BUGS).
- NOTIFY_PID
- If this is nonzero, then the process with this PID has used mq_notify(3) to register for asynchronous message notification.
Linux implementation of message queue descriptors
For a discussion of the interaction of POSIX message queue objects and IPC namespaces, see ipc_namespaces(7).
Linux does not currently (2.6.26) support the use of access control lists (ACLs) for POSIX message queues.
BUGS.
|
https://man.archlinux.org/man/mq_overview.7.en
|
CC-MAIN-2021-10
|
en
|
refinedweb
|
In this article, we will learn how to find all matches to the regular expression in Python. The RE module’s
re.findall() method scans the regex pattern through the entire target string and returns all the matches that were found in the form of a Python list.
Table of contents
- How to use re.findall()
- Example to find all matches to a regex pattern
- finditer method
- Regex find all word starting with specific letters
- Regex to find all word that starts and ends with a specific letter
- Regex to find all words containing a certain letter
- Regex findall repeated characters
How to use re.findall()
Before moving further, let’s see the syntax of the
re.findall() method.
Syntax
re.findall(pattern, string, flags=0)
pattern: regular expression pattern we want to find in the string or text
string: It is the variable pointing to the target string (In which we want to look for occurrences of the pattern).
Flags: It refers to optional flags. By default, no flags are applied. For example, the re.I flag is used to perform case-insensitive matching.
The regular expression pattern and target string are the mandatory arguments, and flags are optional.
Return Value
The
re.findall() scans the target string from left to right as per the regular expression pattern and returns all matches in the order they were found.
It returns an empty list if it fails to locate any occurrences of the pattern in the target string.
Example to find all matches to a regex pattern
In this example, we will find all numbers present inside the target string. To achieve this, let’s write a regex pattern.
Pattern:
\d+
What does this pattern mean?
- The \d is a special regex sequence that matches any digit from 0 to 9 in a target string.
- The + metacharacter indicates that the number can contain a minimum of one and a maximum of any number of digits.
In simple words, it means to match any number inside the following target string.
target_string = "Emma is a basketball player who was born on June 17, 1993. She played 112 matches with scoring average 26.12 points per game. Her weight is 51 kg."
As we can see in the above string ’17’, ‘1993’, ‘112’, ’26’, ’12’, ’51’ number are present, so we should get all those numbers in the output.
Example
import re

target_string = "Emma is a basketball player who was born on June 17, 1993. She played 112 matches with scoring average 26.12 points per game. Her weight is 51 kg."

result = re.findall(r"\d+", target_string)

# print all matches
print("Found following matches")
print(result)

# Output ['17', '1993', '112', '26', '12', '51']
Note:
First of all, I used a raw string to specify the regular expression pattern i.e
r"\d+". As you may already know, the backslash has a special meaning in some cases because it may indicate an escape character or escape sequence to avoid that we must use raw string.
finditer method
The
re.finditer() works exactly the same as the
re.findall() method except it returns an iterator yielding match objects matching the regex pattern in a string instead of a list. It scans the string from left-to-right, and matches are returned in the iterator form. Later, we can use this iterator object to extract all matches.
In simple words,
finditer() returns an iterator over MatchObject objects.
But why use
finditer()?
In some scenarios, the number of matches is high, and you could risk filling up your memory by loading them all using
findall(). Instead, by using finditer(), you can get all possible matches in the form of an iterator object, which will improve performance.
This means finditer() returns a lazy iterator that produces results only as you consume them. Please refer to this Stackoverflow answer to get to know the performance benefits of iterators.
finditer example
Now, Let’s see the example to find all two consecutive digits inside the target string.
import re target_string = "Emma is a basketball player who was born on June 17, 1993. She played 112 matches with a scoring average of 26.12 points per game. Her weight is 51 kg." # finditer() with regex pattern and target string # \d{2} to match two consecutive digits result = re.finditer(r"\d{2}", target_string) # print all match object for match_obj in result: # print each re.Match object print(match_obj) # extract each matching number print(match_obj.group())
Output:
re.Match object; span=(49, 51), match='17' 17 re.Match object; span=(53, 55), match='19' 19 re.Match object; span=(55, 57), match='93' 93 re.Match object; span=(70, 72), match='11' 11 re.Match object; span=(103, 105), match='26' 26 re.Match object; span=(106, 108), match='12' 12 re.Match object; span=(140, 142), match='51' 51
More use
- Use finditer to find the indexes of all regex matches
- Regex findall special symbols from a string
Regex find all word starting with specific letters
In this example, we will see solve following 2 scenarios
- find all words that start with a specific letter/character
- find all words that start with a specific substring
Now, let’s assume you have the following string:
target_string = "Jessa is a Python developer. She also gives Python programming training"
Now let’s find all word that starts with letter p. Also, find all words that start with substring ‘py‘
Pattern:
\b[p]\w+\b
- The \b is a word boundary, then p inside the square brackets [] means the word must start with the letter 'p'.
- \w+ means one or more alphanumeric characters after the letter 'p'.
- In the end, we used \b to indicate a word boundary, i.e. the end of the word.
Example
import re target_string = "Jessa is a Python developer. She also gives Python programming training" # all word starts with letter 'p' print(re.findall(r'\b[p]\w+\b', target_string, re.I)) # output ['Python', 'Python', 'programming'] # all word starts with substring 'Py' print(re.findall(r'\bpy\w+\b', target_string, re.I)) # output ['Python', 'Python']
Regex to find all word that starts and ends with a specific letter
In this example, we will see solve following 2 scenarios
- find all words that start and ends with a specific letter
- find all words that start and ends with a specific substring
Example
import re target_string = "Jessa is a Python developer. She also gives Python programming training" # all word starts with letter 'p' and ends with letter 'g' print(re.findall(r'\b[p]\w+[g]\b', target_string, re.I)) # output 'programming' # all word starts with letter 'p' or 't' and ends with letter 'g' print(re.findall(r'\b[pt]\w+[g]\b', target_string, re.I)) # output ['programming', 'training'] target_string = "Jessa loves mango and orange" # all word starts with substring 'ma' and ends with substring 'go' print(re.findall(r'\bma\w+go\b', target_string, re.I)) # output 'mango' target_string = "Kelly loves banana and apple" # all word starts or ends with letter 'a' print(re.findall(r'\b[a]\w+\b|\w+[a]\b', target_string, re.I)) # output ['banana', 'and', 'apple']
Regex to find all words containing a certain letter
In this example, we will see how to find words that contain the letter ‘i’.
import re target_string = "Jessa is a knows testing and machine learning" # find all word that contain letter 'i' print(re.findall(r'\b\w*[i]\w*\b', target_string, re.I)) # found ['is', 'testing', 'machine', 'learning'] # find all word which contain substring 'ing' print(re.findall(r'\b\w*ing\w*\b', target_string, re.I)) # found ['testing', 'learning']
Regex findall repeated characters
For example, you have a string:
""Jessa Erriika""
As the result you want to have the following matches:
(J, e, ss, a, E, rr, ii, k, a)
Example
import re

target_string = "Jessa Erriika"

# This '\w' matches any single character
# and then its repetitions (\1*) if any.
matcher = re.compile(r"(\w)\1*")
for match in matcher.finditer(target_string):
    print(match.group(), end=", ")

# output J, e, ss, a, E, rr, ii, k, a,
|
https://pynative.com/python-regex-findall-finditer/
|
CC-MAIN-2021-10
|
en
|
refinedweb
|
jobs with the REST or WebDriver API.
There are three status badges that correspond to the three states of a finished test: Passing, Failed, and Unknown.
With the browser matrix, you can keep track of the test status for various browser/platform/operating system combinations.
Choose a Sauce account to associate with your project. If you just have one project, you can use your main Sauce account name. If you have multiple projects, you will want to create a sub-account for each project.
Run your tests for a given project on Sauce using that account's username and access key. If you are logged in as the account you want to use, you can find your credentials on the account page. If you are logged in as a parent account, you can see your sub-account usernames and access keys on the sub-accounts page.
Make sure to set a build number, a pass/fail status, and a visibility (either 'public', 'share' or 'public restricted') for every test that runs. You will be able to see that these are set correctly by checking that your tests say "Pass" or "Failed" instead of "Finished" and that a build number is visible in the UI.
Note: If your tests don't have a build or pass/fail status, you'll get the "Unknown" image for security reasons.
Adding the Standard Badge
You can copy/paste the following Markdown into your GitHub README:
[]()
Or you can add the following HTML to your project site:
<a href=""> <img src="" alt="Sauce Test Status"/> </a>
Adding the Browser Matrix Widget
You can copy/paste the following Markdown into your GitHub README:
[]()
Or you can add the following HTML to your project site:
<a href=""> <img src="" alt="Sauce Test Status"/> </a>
Status Images for Private Accounts
To display the build status of a private Sauce account, you need to provide a HMAC token generated from your username and access key. Here is how you could generate one in python:
Note: if you don't have python on your system, check out this link for HMAC
First start the python interpreter with the following command:
python
Then run the following code in the interpreter to generate a query param to add to badge image URLs:
from hashlib import md5
import hmac

"?auth=" + hmac.new("philg:45753ef0-4aac-44a3-82e7-b3ac81af8bca", None, md5).hexdigest()
Once the auth token has been obtained, it can be appended to one of the above badge URLs as the value of the auth query like this:
?auth=AUTH_TOKEN
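Note that the snippet above assumes Python 2, where hmac.new accepts a str key. On Python 3 the key must be bytes, so a rough equivalent (reusing the same illustrative username:access_key value from above) would encode the key first:

from hashlib import md5
import hmac

# same example credentials as above; replace with your own "username:access_key"
key = "philg:45753ef0-4aac-44a3-82e7-b3ac81af8bca".encode("utf-8")
print("?auth=" + hmac.new(key, None, md5).hexdigest())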
|
https://wiki.saucelabs.com/plugins/viewsource/viewpagesrc.action?pageId=48366187
|
CC-MAIN-2021-10
|
en
|
refinedweb
|
propeller2.h for C compilers
in Propeller 2
Now that there are several C compilers working for P2, it seems like we should try to get consensus on some things. It'd be nice if code could be easily ported between compilers. It'll be very confusing and frustrating for users if there isn't at least a baseline commonality between C compilers on the platform (naturally each compiler will have its own particular features and advantages, but lots of code *should* be portable between them!).:
extern void _drvh(unsigned p); // set pin p as output and drive it high
extern unsigned _testp(unsigned p); // test pin p for input
// and so on
Not every intrinsic will map to just one instruction, e.g. _coginit will have to do a "setq" for the parameter to pass to the COG. We'll probably also want accessor functions for things like the builtin COG registers, counter, and so forth.
I've been bumping into this issue with Spin2. I have yet to add the mode-specific smart pin intrinsics, but I'm thinking it will be necessary to have the new silicon in hand to make sure I've named things well and covered the functions properly. Having these in every language would go a long way to standardize things.
I'll make a list of what I've got, so far, and post it in this thread.
I totally agree when you say that there should be consensus. Regarding your point 1, I think the macro should be named "-D__propeller2__". It aligns perfectly with the name of the library, which is "propeller2.h".
On another note, functions that address pins should use the smallest variables possibles. For example, "_Bool getpin(unsigned char pin)" instead of "int getpin(int pin)". Or, another example, "void togpin(unsigned char pin)" instead of "void getpin(int pin)". I'm not assuming that getpin() and togpin() exist or will exist, but these are just examples of functions that don't need 32 bit integers. If I recall correctly, there are equivalent functions inside the existing "propeller.h" that use "int".
Better yet, the new lib could use int8_t, int16_t or int32_t (and also int64_t, if possible), when dealing with pins or memory addresses. It is more correct and very portable. It also shows your intention either as a developer or as a programmer. Although intx_t types correspond to unsigned long, int, short and char primitives, their correspondence to primitives always depends on the host and target system (tall order here, but feasible). In this case, being the P2 is always the target of choice, you can define the intx_t types accordingly, so that they have their fixed size as described (you only have to take into account if the host system is 32-bit or 64-bit, as "long int" size will vary, either 32-bit or 64-bit, IIRC). The types "size_t" and "ssize_t" could be implemented as well.
Kind regards, Samuel Lourenço
I'm going to have to disagree with you here. A minor point is that "_Bool" is a C99 invention, and if I recall correctly Catalina only supports C89; this could be worked around. The more serious concern is that "int" and "unsigned int" are by definition the natural and most efficient sizes for the platform. Casting them to/from "char" could require code to be inserted by the compiler. That is: needs to be compiled like: where the cast to (unsigned char) masks out the upper bits, and the cast back to (int) does zero extension. That's adding unnecessary instructions to the operation. "unsigned char" is already wider than we need, so it doesn't really add any error checking. I think we should allow the compiler the freedom to use the most efficient integer size. For current P2 compilers that's 32 bits, but one could imagine a compiler designed to save memory in which the most efficient size might be different. "int" and "unsigned int" are required by the standard to be at least 16 bits, but otherwise they're supposed to be whatever the compiler deems "best".
Regards,
Eric
On first glance it mostly looks ok. I am ok with predefining __propeller2__ on the P2 and with using an underscore to lead function names (this is pretty standard for C).
I am not so sure about using structures. Catalina is fine with this, but it might impede users coming from Spin. I think we should probably just stick with "unsigned int" for compatibility reasons. Sophisticated C users can use unions (which can be defined in the header files) to decode the values, but the functions themselves should return the same values as the Spin equivalents.
If you want to drive people away from using C on Propeller, by all means use the ridiculous C99 types like "_Bool" and "uint32_t"
The structures would only be for functions which return multiple values. Spin2 allows a function to return more than one result; for example to rotate x,y by an angle you could write something like the following in Spin2: which would set both x and y to the 2 values returned by "rotxy". It looks like a few of the functions Chip has proposed are like this, and it makes sense in the context (where the CORDIC does return two 32 bit results). To do this in C I think we have a few choices:
(a) use a struct for the returned result:
This is the most natural way in C to represent a multi-valued return, I think, and produces good code in GCC. fastspin's struct handling in C is still pretty shaky, but if we go this way it'll give me more incentive to get that working properly
(b) pass a pointer for one of the results: This is also fairly natural C, and works well if _rotxy is a function, but it does make for memory traffic so is potentially less efficient (the "struct" version can keep everything in registers, but this one has to force y into HUB memory).
(c) return a 64 bit result in "unsigned long long", with the first result in the low 32 bits and second in the high 32 bits: This should also be quite efficient, but it only handles the case of two returned values. This may be all we need, but in principle Spin2 can allow for more than that (I'm not sure what the limit is in Chip's compiler, but fastspin allows up to eight). I'm also not sure if we should force all C compilers for P2 to implement "unsigned long long" as 64 bits.
Another possibility is that we could hide the actual method of multiple assignment "under the hood" with a macro, something like: I'm a little uncomfortable with this because it looks like a function call but doesn't act like one (the first arguments are modified even though they don't have & in front of them). If we were working in C++ we could use references so it wouldn't be an issue, but I want the header file to work in plain C as well.
My own preference is for option (a), but I'm not married to any of them. If the other compiler writers all agree on some other standard then I'll go along with the consensus.
Yes, in general we want an abstract API, so this discussion is a subset of that one. It seems like a tractable subset. i.e. one that we can perhaps come to agreement on fairly quickly, I hope.
I actually have a use case right now for propeller2.h. I have a VGA driver that I'd like to have working in all of the C compilers; I've got it working in fastspin, RiscV, and p2gcc, but I had to hack up a header file for p2gcc, and then realized that I had no idea how to port it to Catalina as well. Having a single header file that would cover all 4 compilers would make this much nicer.
Sure. Use whatever names you like.
Aargh! I didn't know about that. That is really ugly. Spin needs proper types, not this kind of typographical workaround for their absence.
So yes, I guess we can at least ameliorate this madness in C by using functions that return structures. There is a good reason why this facility is rarely used in C, but in this case I guess it is a better solution than any of the alternatives
That makes sense. I'll be using the functions that the library has to offer, instead of changing the registers myself, then. I've made the wrong assumption that my code would be more efficient. Actually, I've resorted to recreate functions to change pins just because the ones in the library used "int" types, in the line of that wrong assumption.
Why is it ridiculous? Ridiculous is to have a standard that vaguely describes the size of ints. And then you get things like longs having 32-bit or 64-bit, depending on the machine. I'm aware that uintx_t types are derived from these primitives, but they are a step in the right direction. As an example, check the code attached. Would you implement this using primitives whose size is not guaranteed to be known?
Kind regards, Samuel Lourenço
Good example. The type "unsigned char" must exist according to the C standard. The type uint8_t need not. In the case of your code, you would of course simply redefine uint8_t to be unsigned char, and probably get away with it.
But why should you need to do so?
Beauty is in the eye of the beholder. How would proper types resolve the issue of needing to return multiple values when rotating cartesian coordinates?
About the primitive sizes, the only thing that the C specification guarantees is that a char is the smallest type (it can be signed or unsigned; signedness is ambiguous for chars without the "signed" or "unsigned" modifiers). A short can be larger than or equal in size to a char. An int can be larger than or equal in size to a short. A long can be larger than or equal in size to an int. A long long can be larger than or equal in size to a long. This is pretty ambiguous, and a nightmare to deal with. Thus, a long has the same size as an int (32-bit) on a 32-bit machine, and the same size as a long long (64-bit) on a 64-bit machine. If I recall correctly, an int can have the same size as a short (16-bit) on a 16-bit machine. Caveats, caveats, caveats!
Having said that, I understand why many people avoid uintx_t types, or _t types in general, like the plague: essentially because they are derived types. But by a different reason and in the same measure, ambiguous types like int, short or long, should be avoided, especially at low level, hardware related stuff. Each type is useful on its own context.
Kind regards, Samuel Lourenço
Anyway, most users are not dumb, and those that are don't have the capacity to learn anything, no matter how much "simplified" is the language or use. Plus, inclusivity and over-simplification promotes lack of versatility and stifles creativity. Many people use their Arduinos to blink a LED, in a 555 timer fashion, and not much more than that. I see far more potential on the P2.
Kind regards, Samuel Lourenço
Let's not get into language wars. I know that C can do everything that Spin2 can do (and vice versa!), because I've written compilers for P2 for both of them. Some things are easier to express in C, some are easier to express in Spin2. Not every person likes every language, and not every language is the "best" tool for every task. Anyway, it's all moot -- this discussion is specifically about making the various C compilers for P2 work well together, so the C language is a given here. If you don't like C, please avoid this thread
Speaking for myself, I wouldn't be willing to develop for P2 without C, for two reasons:
- C is like a second language for me, and a good platform is provided for Propeller;
- Spin requires me to learn it, and if I'm not mistaken, it is interpreted using bytecodes (not my cup of tea).
In a nutshell, C is a mandatory requisite for me. Without it, I wouldn't consider P2 as an option. In fact, non-proprietary C support was one of the main reasons I choose P1.
kind regards, Samuel Lourenço
I think you are on the right track. I cannot comment about C since I've never really done much C code.
Chip is going to implement the multiple return values in spin2 because its required.
(x,y) = some_call(a,b)
Python also has a similar set of calls from little I know yet - still learning as I go with my work job.
So to me, it would make sense to support this type of call.
And yes please, use the same propeller.h header for all the languages.
Fastspin is fast becoming the compiler of choice to compile spin, C, micropython, basic, and spin2 shortly. This amazing effort is bringing all these languages together, and can output pasm or pasm2, and micropython, and hopefully spin2 bytecode in the future. Let's get behind Eric who is doing amazing things here
Sure you can have C functions that modify or operate on more than one variable at the same time (by passing these variables by address). The call would be different. It would be something in the line of "some_call (a, b, &x, &y)", being some_call a void function, and "x", "y" the output variables that are being passed by address. Alternatively, you can have "x = some_call_x (a, b); y = some_call_y (a, b)" if you don't mind having separate functions for x and y.
Regarding your last point, mind that "propeller.h" is C/C++ specific. You can write similar headers/include files for other languages though, but they will be specific for the language they target. You can't therefore have the same header/include file universal to every language. You can have, and should have, the same header file fit for different C/C++ compilers, if that's what you mean.
Kind regards, Samuel Lourenço
For now, I'd like to be able to write some C code for the P2 that all the compilers (Catalina, fastspin, p2gcc, RiscV gcc) can use. If we can make the standard C functions for P2 as similar as possible to what Chip's doing for Spin2, so much the better -- it will lessen the learning curve for those going between those languages.
The struct style "IntVec2 foo(int x,int y)" is perhaps the most nice-looking and helps reduce the amount of variables that need declaring ("IntVec2 pos1,pos2;" instead of "int pos1x,pos1y,pos2x,pos2y;"). OTOH, in C, you can't overload a version that takes the struct as its parameter, so you'd have to do some minor jank like this: "pos2 = foo(pos1.x,pos1.y)"
The "pass two pointers" style "void foo(int x, int y, int *xr, int *yr)" is also good, since it is very flexible (you can write the results into separate variables, a struct, an interleaved array, two separate arrays, etc, all without too much jank). Though it requires more work on the compiler side to optimize out the hub access where possible.
The "pass one pointer, return the other" style "int foo(int x, int y, int *yr)" is annoyingly asymmetric (for working with vectors where the elements are roughly equivalent), but otherwise the same as "pass two pointers".
The "return a 64 bit value" style "long long foo(int x,int y)" is just terrible and offers no advantage over the struct-return (IIRC GCC implements oversized types similarly to structs, unless the ABI says otherwise)
You have answered your own question. The function accepts - and returns - a "cartesian coordinate", not arbitrary values.
In most languages, you would define a "cartesian coordinate" type.
It is not "required", it is a syntactic fix for a language deficiency. But it breaks part of the implicit contract that a computer language - any language - has with the programmer. In this case, the implicit contract defined by having "functions" at all.
To see why, consider a function call like:
x = f1(a, f2(b,c), f3(d,e), f4(f)).
Does f1 take 4 parameters? You can no longer tell at a glance, because you don't know how many values the other functions return. You can't even tell which parameter f3 supplies. It could be the third, the fourth, the third and fourth, or the fourth and the fifth. You need to look up the definition of every preceding function in the call to figure this out. And if you ever change the number of values a function returns, you will need to go searching for every use of that function, to see what you may break. The problem gets worse if your language allows variadic functions, because the compiler will not be able to help you.
Also, are you really gaining anything? How do you use just one of the results of a function that returns two values (a common requirement - think of q,r=div(a,b) where in most cases you only want one of q or r). So do you have 3 separate functions? - i.e. one that returns just q, one that returns just r, and one that returns both? Or do you add a selector function that selects which result you want? e.g. q = select(div(a,b), 1), r = select(div(a,b),2)
Note that I am assuming that a function that returns two values can be used in place of two parameters. If it cannot, then that breaks the implicit function contract in a different way, which is that a function result can be used anywhere where the type of value it returns can appear. Essentially, we have introduced two non-interchangeable types of functions.
Either way, it may look pretty at first glance, but in fact it gets ugly fast
Then you can call it essentially by doing so, for example:
Note that rotate() stores the results in x_r, y_r. Their addresses are passed to the function, that then changes their values. In the function definition, the function takes two int vars x and y, which are the initial cartesian coordinates that have as values 90 and 60 respectively. It also accepts two int pointers *xr and *yr that are to be supplied with addresses of the variables x_r, y_r, where you store the results.
If you don't need to preserve the original coordinates after rotation, you can simplify the declaration:
And the example call as well:
The values are also fed via the same variables in which the results are stored. However, this is over-simplified, and a bit far from ideal, IMHO.
kind regards, Samuel Lourenço
which would then need to be packed and unpacked by both calling and called functions.
I think "most languages" is a difficult claim to make or prove, and it probably isn't worth the effort.
"x" and "y" are neither more nor less ugly than "p.x" and "p.y", but whether they make more or less sense depends very much on your previous programming experience.
Defining a struct for a point with elements x & y, and making the function accept and return these structs as parameters is essentially the same thing as defining a "Cartesian coordinate" type, yet it isn't even what was being proposed.
Why pass "arbitrary values" x & y into the function and return a struct and have to unpack it to "arbitrary values"?
when it should really be more like:
Consistency in implementation is more beautiful to me than any particular implementation.
I did really like the idea of multiple return parameters, but never thought of the implication of using the return values as parameter.
Being more on the spin side as C I just assumed longs and that they then replace n parameter. But parsing that might be a pain, not just for the compiler but for the guy reading the code. I have that problem sometimes, 'what the hell was this guy thinking' when reading code, sadly even with code I wrote by myself.
(a,b):=qrot(c,d) is easy to read and understand.
x = f1(a, f2(b,c), f3(d,e), f4(f)). as in his example not so much.
Maybe the solution is to allow structs and types in spin?
Implicit long if not declared to be compatible with old Spin, else done like types in fastspin basic?
Floating point support in Spin would make sense, having cordic and such, fastspin has the support already for Basic and C, it would just need to define a syntax for spin and get chip to use that too.
just. What do I know how much work that is, I just assume that @ersmith and @RossH can do magic.
I personally would prefer a RETURN A,B compared to defining return parameters as output in the function description.
Mike
After all, if f2() returned a struct containing multiple values, or a pointer to a memory block containing multiple values, you still can't really tell how many parameters f1() actually takes, without examining the code of f1().
I don't think it's a valid assumption.
In your example there is some sense to using only quotient or remainder, or both; but for the rotation there are far fewer examples where that makes sense.
If f2() above actually took "b" as input, and "c" was a pointer to a secondary output, how can a casual glance inform the programmer what f1() will do with the result stored in c?
C is not immune to getting ugly fast either....
Edit: clause in italics added.
|
http://forums.parallax.com/discussion/comment/1472417/
|
CC-MAIN-2021-10
|
en
|
refinedweb
|
If you are getting started with Python, it can be a bit confusing to understand what a lambda is. Let's see if I can clarify a few things straight away.
A lambda is also called an anonymous function and that’s because lambdas don’t have a name. To define a lambda in Python you use the keyword lambda followed by one or more arguments, a colon (:) and a single expression.
We will start with a simple example of lambda function to get used to its syntax and then we will look at how a Python lambda function fits different scenarios.
To practice all the examples we will use the Python interactive shell.
Let’s get started!
How to Use a Lambda in Python
Let’s start with the syntax of a lambda function.
A lambda function starts with the lambda keyword followed by a list of comma separated arguments. The next element is a colon (:) followed by a single expression.
lambda <argument(s)> : <expression>
As you can see a lambda function can be defined in one line.
Let’s have a look at a very simple lambda that multiplies the number x (argument) by 2:
lambda x : 2*x
Here’s what happens if I define this lambda in the Python shell:
>>> lambda x : 2*x <function <lambda> at 0x101451cb0>
I get back a function object. Interestingly when I define a lambda I don’t need a return statement as part of the expression.
What happens if I include the return statement in the expression?
>>> lambda x : return 2*x
  File "<stdin>", line 1
    lambda x : return 2*x
               ^
SyntaxError: invalid syntax
We receive a syntax error. So, no need to include return in a lambda.
How to Call a Lambda Function in Python
We have seen how to define a lambda, but how can we call it?
Firstly we will do it without assigning the function object to a variable. To do that we just need to use parentheses.
(lambda x : 2*x)(2)
We will surround the lambda expression with parentheses followed by parentheses surrounding the arguments we want to pass to the lambda.
This is the output when we run it:
>>> (lambda x : 2*x)(2) 4
Sweet!
We also have another option. We can assign the function object returned by the lambda function to a variable, and then call the function using the variable name.
>>> multiply = lambda x : 2*x
>>> multiply(2)
4
I feel this kind of goes against the idea of not giving a name to a lambda, but it was worth knowing…
Before continuing reading this article make sure you try all the examples we have seen so far to get familiar with lambdas.
I still remember the first time I started reading about lambdas, I was a bit confused. So don’t worry if you feel the same right now 🙂
Passing Multiple Arguments to a Lambda Function
In the previous sections we have seen how to define and execute a lambda function.
We have also seen that a lambda can have one or more arguments, let’s see an example with two arguments.
Create a lambda that multiplies the arguments x and y:
lambda x, y : x*y
As you can see, the two arguments are separated by a comma.
>>> (lambda x, y : x*y)(2,3) 6
As expected the output returns the correct number (2*3).
When a lambda is defined and called immediately with parentheses, as in the example above, it acts as an IIFE (Immediately Invoked Function Expression). It's basically a way to say that the lambda function is executed immediately, as soon as it's defined.
Difference Between a Lambda Function and a Regular Function
Before continuing looking at how we can use lambdas in our Python programs, it’s important to see how a regular Python function and a lambda relate to each other.
Let’s take our previous example:
lambda x, y : x*y
We can also write it as a regular function using the def keyword:
def multiply(x, y): return x*y
You notice immediately three differences compared to the lambda form:
- When using the def keyword we have to specify a name for our function.
- The two arguments are surrounded by parentheses.
- We return the result of the function using the return statement.
Assigning our lambda function to a variable is optional (as mentioned previously):
multiply_lambda = lambda x, y : x*y
Let’s compare the objects for these two functions:
>>> def multiply(x, y):
...     return x*y
...
>>> multiply_lambda = lambda x, y : x*y
>>> multiply
<function multiply at 0x101451d40>
>>> multiply_lambda
<function <lambda> at 0x1014227a0>
Here we can see a difference: the function defined using the def keyword is identified by the name “multiply” while the lambda function is identified by a generic <lambda> label.
And let’s see what is returned by the type() function when applied to both functions:
>>> type(multiply) <class 'function'> >>> type(multiply_lambda) <class 'function'>
So, the type of the two functions is the same.
Can I use If Else in a Python Lambda?
I wonder if I can use an if else statement in a lambda function…
lambda x: x if x > 2 else 2*x
This lambda should return x if x is greater than 2 otherwise it should return x multiplied by 2.
Firstly, let’s confirm if its syntax is correct…
>>> lambda x: x if x > 2 else 2*x <function <lambda> at 0x101451dd0>
No errors so far…let’s test our function:
>>> (lambda x: x if x > 2 else 2*x)(1)
2
>>> (lambda x: x if x > 2 else 2*x)(2)
4
>>> (lambda x: x if x > 2 else 2*x)(3)
3
It’s working well…
…at the same time you can see that our code can become more difficult to read if we make the lambda expression more and more complex.
As mentioned at the beginning of this tutorial: a lambda function can only have a single expression. This makes it applicable to a limited number of use cases compared to a regular function.
Also remember…
You cannot have multiple statements in a lambda expression.
How to Replace a For Loop with Lambda and Map
In this section we will see how lambdas can be very powerful when applied to iterables like Python lists.
Let’s begin with a standard Python for loop that iterates through all the elements of a list of strings and creates a new list in which all the elements are uppercase.
countries = ['Italy', 'United Kingdom', 'Germany']
countries_uc = []

for country in countries:
    countries_uc.append(country.upper())
Here is the output:
>>> countries = ['Italy', 'United Kingdom', 'Germany']
>>> countries_uc = []
>>>
>>> for country in countries:
...     countries_uc.append(country.upper())
...
>>> print(countries_uc)
['ITALY', 'UNITED KINGDOM', 'GERMANY']
Now we will write the same code but with a lambda. To do that we will also use a Python built-in function called map that has the following syntax:
map(function, iterable, ...)
The map function takes another function as first argument and then a list of iterables. In this specific example we only have one iterable, the countries list.
Have you ever seen a function that takes another function as argument before?
A function that takes another function as argument is called an Higher Order Function.
It might sound complicated, this example will help you understand how it works.
So, what does the map function do?
The map function returns an iterable that is the result of the function passed as first argument applied to every element of the iterable.
In our scenario the function that we will pass as first argument will be a lambda function that converts its argument into uppercase format. As iterable we will pass our list.
map(lambda x: x.upper(), countries)
Shall we try to execute it?
>>> map(lambda x: x.upper(), countries) <map object at 0x101477890>
We get back a map object. How can we get a list back instead?
We can cast the map object to a list…
>>> list(map(lambda x: x.upper(), countries)) ['ITALY', 'UNITED KINGDOM', 'GERMANY']
It’s obvious how using map and lambda makes this code a lot more concise compared to the one where we have use the for loop.
Use Lambda Functions with a Dictionary
I want to try to use a lambda function to extract a specific field from a list of dictionaries.
This is something that can be applied in many scenarios.
Here is my list of dictionaries:
people = [{'firstname':'John', 'lastname':'Ross'}, {'firstname':'Mark', 'lastname':'Green'}]
Once again I can use the map built-in function together with a lambda function.
The lambda function takes one dictionary as argument and returns the value of the firstname key.
lambda x : x['firstname']
The full map expression is:
firstnames = list(map(lambda x : x['firstname'], people))
Let’s run it:
>>> firstnames = list(map(lambda x : x['firstname'], people)) >>> print(firstnames) ['John', 'Mark']
Very powerful!
Passing a Lambda to the Filter Built-in Function
Another Python built-in function that you can use together with lambdas is the filter function.
Below you can see its syntax that requires a function and a single iterable:
filter(function, iterable)
The idea here is to create an expression that given a list returns a new list whose elements match a specific condition defined by a lambda function.
For example, given a list of numbers I want to return a list that only includes the negative ones.
Here is the lambda function we will use:
lambda x : x < 0
Let’s try to execute this lambda passing a couple of numbers to it so it’s clear what the lambda returns.
>>> (lambda x : x < 0)(-1) True >>> (lambda x : x < 0)(3) False
Our lambda returns a boolean:
- True if the argument is negative.
- False if the argument is positive.
Now, let’s apply this lambda to a filter function:
>>> numbers = [1, 3, -1, -4, -5, -35, 67]
>>> negative_numbers = list(filter(lambda x : x < 0, numbers))
>>> print(negative_numbers)
[-1, -4, -5, -35]
We get back the result expected, a list that contains all the negative numbers.
Can you see the difference compared to the map function?
The filter function returns a list that contains a subset of the elements in the initial list.
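To make the difference concrete, here is a quick side-by-side sketch using the same lambda and the same list as above:

>>> numbers = [1, 3, -1, -4, -5, -35, 67]
>>> list(map(lambda x: x < 0, numbers))     # map: one result per element
[False, False, True, True, True, True, False]
>>> list(filter(lambda x: x < 0, numbers))  # filter: only the matching elements
[-1, -4, -5, -35]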
How Can Reduce and Lambda Be Used with a List
Another common Python built-in function is the reduce function that belongs to the functools module.
reduce(function, iterable[, initializer])
In this example we will ignore the initialiser, you can find more details about it here.
What does the reduce function do?
Given a list of values:
[v1, v2, ..., vn]
It applies the function passed as argument, to the first two elements of the iterable. The result is:
[func(v1,v2), v3, ..., vn]
Then it applies the function to the result of the previous iteration and the next element in the list:
[func(func(v1,v2),v3), v4, ..., vn]
This process continues left to right until the last element in the list is reached. The final result is a single number.
To understand it in practice, we will apply a simple lambda that calculates the sum of two numbers to a list of numbers:
>>> from functools import reduce
>>> reduce(lambda x,y: x+y, [3, 7, 10, 12, 5])
37
Here is how the result is calculated:
((((3+7)+10)+12)+5)
Does it make sense?
Let’s see if we can also use the reduce function to concatenate strings in a list:
>>> reduce(lambda x,y: x + ' ' + y, ['This', 'is', 'a', 'tutorial', 'about', 'Python', 'lambdas']) 'This is a tutorial about Python lambdas'
It works!
Lambda Functions Applied to a Class
Considering that lambdas can be used to replace regular Python functions, can we use lambdas as class methods?
Let’s find out!
I will define a class called Gorilla that contains a constructor and the run method that prints a message:
class Gorilla:
    def __init__(self, name, age, weight):
        self.name = name
        self.age = age
        self.weight = weight

    def run(self):
        print('{} starts running!'.format(self.name))
Then I create an instance of this class called Spartacus and execute the run method on it:
Spartacus = Gorilla('Spartacus', 35, 150)
Spartacus.run()
The output is:
Spartacus starts running!
Now, let’s replace the run method with a lambda function:
run = lambda self: print('{} starts running!'.format(self.name))
In the same way we have done in one of the sections above we assign the function object returned by the lambda to the variable run.
Notice also that:
- We have removed the def keyword because we have replaced the regular function with a lambda.
- The argument of the lambda is the instance of the class self.
Execute the run method again on the instance of the Gorilla class…
…you will see that the output message is exactly the same.
This shows that we can use lambdas as class methods!
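Putting the pieces from above together, a minimal version of the class with the lambda-based method could look like this:

class Gorilla:
    def __init__(self, name, age, weight):
        self.name = name
        self.age = age
        self.weight = weight

    # run is now a lambda assigned to a class attribute instead of a def
    run = lambda self: print('{} starts running!'.format(self.name))

Spartacus = Gorilla('Spartacus', 35, 150)
Spartacus.run()  # prints: Spartacus starts running!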
It's up to you to choose which one you prefer depending on what makes your code easy to maintain and to understand.
Using Lambda with the Sorted Function
The sorted built-in function returns a sorted list from an iterable.
Let’s see a simple example, we will sort a list that contains the names of some planets:
>>> planets = ['saturn', 'earth', 'mars', 'jupiter'] >>> sorted(planets) ['earth', 'jupiter', 'mars', 'saturn']
As you can see the sorted function orders the list alphabetically.
Now, let’s say we want to order the list based on a different criteria, for example the length of each word.
To do that we can use the additional parameter key that allows to provide a function that is applied to each element before making any comparison.
>>> sorted(planets, key=len) ['mars', 'earth', 'saturn', 'jupiter']
In this case we have used the len() built-in function, that’s why the planets are sorted from the shortest to the longest.
So, where do lambdas fit in all this?
Lambdas are functions and because of this they can be used with the key parameter.
For example, let’s say I want to sort my list based on the third letter of each planet.
Here is how we do it…
>>> sorted(planets, key=lambda p: p[2]) ['jupiter', 'earth', 'mars', 'saturn']
And what if I want to sort a list of dictionaries based on the value of a specific attribute?
>>> people = [{'firstname':'John', 'lastname':'Ross'}, {'firstname':'Mark', 'lastname':'Green'}]
>>> sorted(people, key=lambda x: x['lastname'])
[{'firstname': 'Mark', 'lastname': 'Green'}, {'firstname': 'John', 'lastname': 'Ross'}]
In this example we have sorted the list of dictionaries based on the value of the lastname key.
Give it a try!
Python Lambda and Error Handling
In the section in which we have looked at the difference between lambdas and regular functions, we have seen the following:
>>> multiply
<function multiply at 0x101451d40>
>>> multiply_lambda
<function <lambda> at 0x1014227a0>
Where multiply was a regular function and multiply_lambda was a lambda function.
As you can see the function object for a regular function is identified with a name, while the lambda function object is identified by a generic <lambda> name.
This also makes error handling a bit more tricky with lambda functions because Python tracebacks don’t include the name of the function in which an error occurs.
Let’s create a regular function and pass to it arguments that would cause the Python interpreter to raise an exception:
def calculate_sum(x, y):
    return x+y

print(calculate_sum(5, 'Not_a_number'))
When I run this code in the Python shell I get the following error:
>>> def calculate_sum(x, y):
...     return x+y
...
>>> print(calculate_sum(5, 'Not_a_number'))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 2, in calculate_sum
TypeError: unsupported operand type(s) for +: 'int' and 'str'
From the traceback we can clearly see that the error occurs at line 2 of the calculate_sum function.
Now, let’s replace this function with a lambda:
calculate_sum = lambda x, y: x+y

print(calculate_sum(5, 'Not_a_number'))
The output is:
>>> calculate_sum = lambda x,y: x+y
>>> print(calculate_sum(5, 'Not_a_number'))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 1, in <lambda>
TypeError: unsupported operand type(s) for +: 'int' and 'str'
The type of exception and the error message are the same, but this time the traceback tells us that there was an error at line 1 of the function <lambda>.
Not very useful!
Imagine if you had to find the right line among 10,000 lines of code.
Here is another reason for using regular functions instead of lambda functions when possible.
Passing a Variable List of Arguments to a Python Lambda
In this section we will see how to provide a variable list of arguments to a Python lambda.
To pass a variable number of arguments to a lambda we can use *args in the same way we do with a regular function:
(lambda *args: max(args))(5, 3, 4, 10, 24)
When we run it we get the maximum between the arguments passed to the lambda:
>>> (lambda *args: max(args))(5, 3, 4, 10, 24) 24
We don’t necessarily have to use the keyword args. What’s important is the * before args that in Python represents a variable number of arguments.
Let’s confirm if that’s the case by replacing args with numbers:
>>> (lambda *numbers: max(numbers))(5, 3, 4, 10, 24) 24
Still working!
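Something not shown above, but worth knowing: lambdas also accept default values and keyword arguments, just like regular functions. A small sketch (greet is just an illustrative name):

>>> greet = lambda name, greeting="Hello": "{} {}!".format(greeting, name)
>>> greet("Jessa")
'Hello Jessa!'
>>> greet("Jessa", greeting="Hi")
'Hi Jessa!'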
More Examples of Lambda Functions
Before completing this tutorial let’s have a look at few more examples of lambdas.
These examples should give you some more ideas if you want to use lambdas in your Python programs.
Given a list of Linux commands return only the ones that start with the letter ‘c’:
>>> commands = ['ls', 'cat', 'find', 'echo', 'top', 'curl']
>>> list(filter(lambda cmd: cmd.startswith('c'), commands))
['cat', 'curl']
From a comma separated string with spaces return a list that contains each word in the string without spaces:
>>> weekdays = 'monday, tuesday, wednesday, thursday, friday, saturday, sunday'
>>> list(map(lambda word: word.strip(), weekdays.split(',')))
['monday', 'tuesday', 'wednesday', 'thursday', 'friday', 'saturday', 'sunday']
Generate a list of numbers with the Python range function and return the numbers greater than four:
>>> list(filter(lambda x: x > 4, range(15))) [5, 6, 7, 8, 9, 10, 11, 12, 13, 14]
Conclusion
In this tutorial we have seen what a Python lambda is, how to define it and execute it.
We went through examples with one or more arguments and we have also seen how a lambda returns a function object (without the need of a return statement).
Now you know that a lambda is also called an anonymous function because when you define it you don’t bind it to a name.
Also, analysing the difference between regular functions and lambda functions in Python has helped us understand better how lambdas works.
It’s very common to use lambda functions when they are needed only once in your code. If you need a function that gets called multiple times in your codebase using regular functions is a better approach to avoid code duplication.
Always remember how important is to write clean code, code that anyone can quickly understand in case of bugs that need to be fixed quickly in the future.
Now you have a choice between lambdas and regular functions, make the right one! 🙂
I’m a Tech Lead, Software Engineer and Programming Coach. I want to help you in your journey to become a Super Developer!
|
https://codefather.tech/blog/what-is-lambda-python/
|
CC-MAIN-2021-10
|
en
|
refinedweb
|
Changing Theme of a Tkinter GUI
Hello everyone, in this tutorial, we will learn how to change the theme of a Tkinter GUI. We can create a GUI application using Tkinter, but we may not know how to replace those boring traditional widgets with something that looks more attractive to the user. Tkinter does not give us external theme support, so we will be using a Python library named ttkthemes which includes many themes for our application. This library supports Python version 2.7 or more.
Let’s start by installing ttkthemes in our Python environment.
Installing ttkthemes
We can install ttkthemes with the command below.
pip install ttkthemes
We can also install via Git using
python3 -m pip install git+
Before start Coding, we recommend that you get used to the basics of Tkinter. Refer to these tutorials.
Introduction to Tkinter module in Python
Tkinter pack() , grid() Method In Python
All set guys, Let us change that default theme.
Change Theme with ttkthemes – Tkinter GUI
We assume that you have prior knowledge of basic imports while making a Tkinter GUI and will describe the new things that we will be doing in our code.
import tkinter as tk
import tkinter.ttk as ttk
from ttkthemes import ThemedStyle
We have imported ThemedStyle from ttkthemes which supports the external themes provided by this package and sets those themes to the Tk instance of our GUI.
app = tk.Tk()
app.geometry("200x400")
app.title("Changing Themes")

# Setting Theme
style = ThemedStyle(app)
style.set_theme("scidgrey")
In the code above we have created a Tk instance as ‘app’ and set the theme as ‘scidgrey’ which is provided by the ThemeStyle package.
Let us create some widgets using both tk(Default_Themed) and ttk(External_Themed) and see the difference between them.
# Button Widgets
Def_Btn = tk.Button(app, text='Default Button')
Def_Btn.pack()

Themed_Btn = ttk.Button(app, text='Themed button')
Themed_Btn.pack()

# Scrollbar Widgets
Def_Scrollbar = tk.Scrollbar(app)
Def_Scrollbar.pack(side='right', fill='y')

Themed_Scrollbar = ttk.Scrollbar(app, orient='horizontal')
Themed_Scrollbar.pack(side='top', fill='x')

# Entry Widgets
Def_Entry = tk.Entry(app)
Def_Entry.pack()

Themed_Entry = ttk.Entry(app)
Themed_Entry.pack()

app.mainloop()
List of themes in ttkthemes
- Aquativo
- Arc
- Clearlooks
- Equilux
- Keramic
- Plastik
- Radiance
- Scid themes
- Smog
There are many more themes in this Library, look at them here
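If you would rather discover the available theme names programmatically than rely on a hard-coded list, a small sketch is shown below. It uses theme_names(), which is inherited from ttk.Style (the class ThemedStyle extends); the exact set of names returned may vary between ttkthemes versions.

import tkinter as tk
from ttkthemes import ThemedStyle

root = tk.Tk()
style = ThemedStyle(root)

# print every theme the style object currently knows about
print(style.theme_names())

root.destroy()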
We hope you really enjoy this tutorial and if you have any doubt, feel free to leave a comment below.
Learn more with us:
Python program for login page using Tkinter package
Create a registration form in python using Tkinter package
|
https://www.codespeedy.com/changing-theme-of-a-tkinter-gui/
|
CC-MAIN-2021-10
|
en
|
refinedweb
|
Wrapper class in Python
In this tutorial, you are going to learn about the Wrapper class in Python with example. Continue following this article…
What is a Wrapper Class?
A Wrapper class is a class that wraps another class with extra logic, so that instances created later in the program go through that wrapper.
These classes are used to add additional features to the class without changing the original class.
An example of Wrapper Class with Python code example is given below:
def WrapperExample(A):
    class Wrapper:
        def __init__(self, y):
            self.wrap = A(y)

        def get_name(self):
            return self.wrap.name

    return Wrapper


@WrapperExample
class code:
    def __init__(self, z):
        self.name = z


y = code("Wrapper class")
print(y.get_name())
Output:
We will see the output will print the result you can see below:
Wrapper class
Program Explanation:
Now let’s see what we did in our program step by step.
First, we create one function named WrapperExample. Inside it, we create the class Wrapper with two methods, i.e. __init__ and get_name. The method __init__ is used to initialize the instance. Here A(y) creates an object of the wrapped class code. The decorator rebinds the class code to another class Wrapper that retains the original class in the enclosing scope and then creates and embeds an instance of the original class when it is called.
Next, we use a decorator, which is a design pattern in Python used to add additional functionality without changing the structure of a program. @WrapperExample is equivalent to code = WrapperExample(code), which is executed right after the class statement.
The function get_name returns the name attribute for the wrapped object and gives the output as “Wrapper class”.
This is an explanation about the Wrapper class in Python.
|
https://www.codespeedy.com/wrapper-class-in-python/
|
CC-MAIN-2021-10
|
en
|
refinedweb
|
On Mon, 8 Jul 2002, Nicola Ken Barozzi wrote:
> Can you keep me/us more informed on the actual state and proceedings?
As I said, the code is checked in ( almost a month ago ) and it seems to
work fine for what it does. The API and the 'story' are missing.
RuntimeConfigurable2 now has an internal hook ( to be replaced with a
real/cleaner API ). You register a property source with a 'namespace',
using a task for example - and after that ${ns:property} will be
replaced by getting the property value dynamically.
I'm looking at BSF, JXPath, jelly, JSP EL for ideas on the simplest
API to plug those. I'll probably write the tasks and adapters for
BSF and JXPath, and velocity/etc would be easy to add.
The second problem ( after API ) is how to integrate this with the
namespaces. ProjectHelper2 is using SAX2 and namespaces - and
Axis for example does a lot of interesting things using the namespace
URL. One intersting idea would be to use the namespace to locate the .jar
( or classloader ) and use the discovery mechanism ( META-INF/services )
to automatically get the tasks in that ns. But how would the namespace
play with the ${properties} ? No idea.
Costin
--
To unsubscribe, e-mail: <mailto:ant-dev-unsubscribe@jakarta.apache.org>
For additional commands, e-mail: <mailto:ant-dev-help@jakarta.apache.org>
|
http://mail-archives.eu.apache.org/mod_mbox/ant-dev/200207.mbox/%3CPine.LNX.4.44.0207081553530.1693-100000@costinm.sfo.covalent.net%3E
|
CC-MAIN-2021-10
|
en
|
refinedweb
|
C++
Background Information
C++ is a commonly used programming language.
Available Tools
The Caché C++ binding enables you to access Caché objects from within a C++ application. InterSystems provides several bindings:
Dynamic binding — Instead of using compiled C++ proxy classes, you can work with Caché classes dynamically, at runtime. This can be useful for writing applications or tools that deal with classes in general and do not depend on particular Caché classes.
Light binding — The Light C++ binding is ten to twenty times faster than the standard C++ binding.
For maximum flexibility, applications can use the Caché ODBC driver and the Caché C++ binding at the same time.
See Using C++ with Caché.
Availability: All namespaces.
|
https://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=ITECHREF_CPP
|
CC-MAIN-2021-10
|
en
|
refinedweb
|
NAME
network_namespaces - overview of Linux network namespaces
DESCRIPTION
Network namespaces provide isolation of the system resources associated with networking: network devices, IPv4 and IPv6 protocol stacks, IP routing tables, firewall rules, the /proc/net directory (which is a symbolic link to /proc/PID/net), the /sys/class/net directory, various files under /proc/sys/net, port numbers (sockets), and so on. In addition, network namespaces isolate the UNIX domain abstract socket namespace (see unix(7)).
A physical network device can live in exactly one network namespace. When a network namespace is freed (i.e., when the last process in the namespace terminates), its physical network devices are moved back to the initial network namespace (not to the parent of the process).
A virtual network (veth(4)) device pair provides a pipe-like abstraction that can be used to create tunnels between network namespaces, and can be used to create a bridge to a physical network device in another namespace. When a namespace is freed, the veth(4) devices that it contains are destroyed.
Use of network namespaces requires a kernel that is configured with the CONFIG_NET_NS option.
|
https://man.archlinux.org/man/network_namespaces.7.en
|
CC-MAIN-2021-10
|
en
|
refinedweb
|
In this post I will show a few methods of how to check the AWS S3 bucket size for a bucket named my-docs. You can change this name to any existing bucket name which you have access to.
S3 CLI
All you need is AWS CLI installed and configured.
aws s3 ls --summarize --human-readable --recursive s3://my-docs
It prints output like this one:
2019-03-07 12:11:24   69.7 KiB 2019/01/file1.pdf
2019-03-07 12:11:20  921.4 KiB 2019/01/file2.pdf
2019-03-07 12:11:16  130.9 KiB 2019/01/file3.pdf

Total Objects: 310
Total Size: 121.7 MiB
The output looks similar to the bash ls command. You can then cut the total size from the output, e.g.:
aws s3 ls --summarize --human-readable --recursive s3://my-docs \
| tail -n 1 \
| awk -F" " '{print $3}'
tail will get last line of the output
awk will tokenize line by space and print third token which is bucket size in MB.
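If you prefer Python over the shell, a rough boto3 sketch of the same idea is shown below (it assumes your AWS credentials are already configured; the bucket name my-docs is just the example used above):

import boto3

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")

total_bytes = 0
total_objects = 0
for page in paginator.paginate(Bucket="my-docs"):
    for obj in page.get("Contents", []):
        total_bytes += obj["Size"]
        total_objects += 1

print("Total Objects:", total_objects)
print("Total Size: %.1f MiB" % (total_bytes / 1024 / 1024))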
CloudWatch metric
aws cloudwatch get-metric-statistics --namespace AWS/S3 \
  --start-time 2019-07-07T23:00:00Z \
  --end-time 2019-10-31T23:00:00Z \
  --period 86400 \
  --statistics Sum \
  --region us-east-1 \
  --metric-name BucketSizeBytes \
  --dimensions Name=BucketName,Value="my-docs" Name=StorageType,Value=StandardStorage \
  --output text \
  | sort -k3 -n | tail -n 1 | cut -f 2-2
What is happening in the above command?
AWS CLI CloudWatch method get-metric-statistics prints metric data points from range start-time till end-time with period every 86400 sec (which is 24h; the period depends on the time frame, see docs for more info).
The bucket name is my-docs and we want output to be printed in plain text (we could as well choose json).
The output printed would look like:
DATAPOINTS 127633754.0 2019-10-17T23:00:00Z Bytes
DATAPOINTS 127633754.0 2019-08-13T23:00:00Z Bytes
DATAPOINTS 127633754.0 2019-07-07T23:00:00Z Bytes
DATAPOINTS 127633754.0 2019-10-03T23:00:00Z Bytes
The third column is a timestamp. Data is unordered and we are interested in the most current size of the bucket so we should sort output by this column:
sort -k3 -n will sort output by 3rd column.
Finally, we want to take second column which is bucket size in bytes.
tail -n 1 takes last line of the output
cut -f 2-2 will cut the line from 2nd to 2nd column, in other words it takes only the column we are interested in.
This method of fetching bucket size is error prone because data points are present only for time frames when data was actually changed on s3 so if you have not modified data through last month and you request metrics from this period you won't get any.
AWS S3 job (inventory)
This is a feature provided by AWS - an inventory report. It allows to configure a scheduled job which will save information about S3 bucket in a file in another bucket. The information, among others, can contain the size of objects in source bucket.
This instruction explains how to configure AWS S3 inventory manually on AWS console.
To wrap up, the first option looks like the easiest one from command line, but other options are worth to know too.
They may serve better for particular use case.
E.g. if you wanted to see how bucket size changed over time period the 2nd method would be more suitable, but if you'd like to get a report with bucket size on regular basis the third seems easier to implement. You could listen on new report object in the second bucket and trigger lambda function on object-created event to process the report (maybe notify user by email).
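As a rough illustration of that last idea, a minimal S3-event Lambda handler in Python might look like the sketch below (the notification/email part is left out; bucket and key names come from the standard S3 event payload):

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # iterate over the S3 object-created records in the event
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        report = s3.get_object(Bucket=bucket, Key=key)
        # process the inventory report here (e.g. parse it, notify someone)
        print("New inventory report:", bucket, key, report["ContentLength"])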
|
https://practicaldev-herokuapp-com.global.ssl.fastly.net/piczmar_0/getting-s3-bucket-size-different-ways-4n4o
|
CC-MAIN-2021-10
|
en
|
refinedweb
|
With the following payload signature..
import { Action } from 'redux'; import { IProfileData } from './model'; export const enum ProfileActionType { PROFILE = 'PROFILE', PROFILE_UI = 'PROFILE_UI', PROFILE_DATA = 'PROFILE_DATA', PROFILE_POST_DATA = 'PROFILE_POST_DATA' } export interface IProfileAction extends Action { type: ProfileActionType; payload: Partial<{ [ProfileActionType.PROFILE_UI]: React.ComponentClass; [ProfileActionType.PROFILE_DATA]: IProfileData; }>; }
..it's still possible to wrongly choose the correct payload:
And you cannot even pass the correct property as-is:
You must use the null-assertion operator, exclamation mark:
A better way would be is to separate the action data structures for UI and Data, and lump them in an action selector (ProfileAction):
export type ProfileAction = IProfileUIAction | IProfileDataAction; interface IProfileUIAction extends Action { type: ProfileActionType.PROFILE_UI; component: React.ComponentClass; } interface IProfileDataAction extends Action { type: ProfileActionType.PROFILE_DATA; data: IProfileData; }
With that, only the component property of ProfileAction will be available for ProfileActionType.PROFILE_UI:
Likewise with ProfileActionType.PROFILE_DATA, only the data property of ProfileAction will be available from ProfileActionType's intellisense.
And that is not just an intellisense feature, it's a language feature. So if you try to get the data property from ProfileAction when the condition (switch case ProfileActionType.PROFILE_UI) is in ProfileActionType.PROFILE_UI, it will not be possible. So awesome!
Likewise when the condition is in ProfileActionType.PROFILE_DATA, you cannot selection action's component property:
You can only choose data property from the action when the condition is in ProfileActionType.PROFILE_DATA:
TypeScript truly feels magical that it can infer the right data structure based on the condition. If you try to select the properties from action outside of switch condition, you can't select the component property neither the data property.
It's also a compiler error:
It's also a compiler error to create an object that does not match from the type's selector:
You can only create an object that matches the type: ProfileActionType.PROFILE_UI
|
https://www.anicehumble.com/2018/05/typescript-is-the-best-for-redux.html
|
CC-MAIN-2021-10
|
en
|
refinedweb
|
“how to get a user input in python” Code Answer’s
python input
python by SkelliBoi on Feb 21 2020
# The function 'input()' asks the user for input
myName = input()
how to get a user input in python
python by Crazy Crab on Jul 09 2020
a = input("what is your input")
|
https://www.codegrepper.com/code-examples/python/how+to+get+a+user+input+in+python
|
CC-MAIN-2021-10
|
en
|
refinedweb
|
So you might be aware of CSS Custom Properties that let you set a variable, such as a theme color, and then apply it to multiple classes like this:
:root { --theme: #777; } .alert { background: var(--theme); } .button { background: var(--theme); }
Well, I had seen this pattern so often that I thought Custom Properties could only be used for color values like rgba or hex – although that’s certainly not the case! After a little bit of experimentation and sleuthing around, I realized that it’s possible to use Custom Properties to set image paths, too.
Here’s an example of that in action:
:root { --errorIcon: url(error.svg) } .alert { background-image: var(--errorIcon); } .form-error { background-image: var(--errorIcon); }
Kinda neat, huh? I think this could be used to make an icon system where you could define a whole list of images in the :root and call it whenever you needed to. Or you could make it easier to theme certain classes based on their state or perhaps in a media query, as well. Remember, too, that custom properties can be overridden within an element:
:root { --alertIcon: url(alert-icon.svg) } .alert { background-image: var(--alertIcon); } .form-error { --alertIcon: url(alert-icon-error.svg); background-image: var(--alertIcon); }
And, considering that custom properties are selectable in JavaScript, think about the possibilities of swapping out images as well. I reckon this might be useful to know!
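As a tiny illustration of that idea (the file name below is made up), swapping every icon that references the property is a one-liner, and the current value can be read back through getComputedStyle:
// Swap the image used by every rule that references --errorIcon.
document.documentElement.style.setProperty('--errorIcon', 'url(warning-icon.svg)');

// Read the current value back.
const icon = getComputedStyle(document.documentElement).getPropertyValue('--errorIcon');
console.log(icon);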
Yep, they can store URL references alright.
And also much more. They were considered for CSS “mixins” too (the now abandoned @apply rule proposal), because they can store whole CSS rules, so you could write
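Something along these lines would do it (this block is my guess at the idea, not the commenter's exact code); the value parses, even though nothing consumes it without JavaScript:
:root {
  /* Syntactically valid: a custom property value may contain a whole
     block of declarations, which is what the @apply proposal relied on. */
  --button-theme: {
    background: #777;
    color: white;
    border-radius: 4px;
  };
}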
And you can do this right now and that CSS would be valid. Not that it’s of any use, alas… Not without JavaScript, at least.
And by the way, custom properties can also store strings or even JavaScript snippets…
You reckon correctly! Thanks for the tip
One gotcha is that the URL will be resolved relative to the file where you use the custom property, not relative to the file where you've defined them… That makes it a little bit tricky if you want to override custom properties in another file for e.g. theming
You can also do this:
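Judging from the notes that follow (a hand-crafted search icon made of a circle and a line, named colours, backslash line breaks), the idea is storing an inline SVG data URI in a custom property; the markup here is illustrative, not the commenter's original:
:root {
  /* Illustrative icon markup only. */
  --icon-search: url('data:image/svg+xml;utf8,\
    <svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24">\
      <title>search</title>\
      <circle cx="10" cy="10" r="7" fill="none" stroke="black" stroke-width="2"/>\
      <line x1="15" y1="15" x2="21" y2="21" stroke="black" stroke-width="2"/>\
    </svg>');
}

.search-button {
  background-image: var(--icon-search);
}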
Why bother?
Saves an extra download, enables you to keep all of your icons in one easy to read file.
Note that this only works if your SVGs are clean and hand crafted. In this example the search icon is just a circle and a line, if it was regurgitated from a 1990’s style desktop publishing program as an SVG it would have all kinds of cruft in there as well as the circle defined as scores of points, all specified to nine decimal places which is a bit over the top when you can just type circle with radius and stroke width.
You can also do a filter on your icons, so if the above was set with white as the stroke then in the CSS you could push the brightness down to 0.1 to make it black.
The namespace is required for SVG in CSS, whereas it is not needed for inline SVG.
The backslash on each line keeps it legible and enables line breaks.
Any references to colours with a hex code need the hash character escaped as %23. Hence in this example there are named colours (black).
The title attribute can be used too, this helps identify your icons.
Can you keep the variables in a separate file for easy theming?
Absolutely — if you're using a preprocessor like Sass, then you can keep them in a partial and attach them to the :root or some other top-level element or parent component where they'll be used.
|
https://css-tricks.com/did-you-know-that-css-custom-properties-can-handle-images-too/
|
CC-MAIN-2021-10
|
en
|
refinedweb
|
directive
Creates a FormControl instance from a domain model and binds it to a form control element.
RadioControlValueAccessor
SelectControlValueAccessor
FormsModule
[ngModel]:not([formControlName]):not([formControl])
NgControl
name: string | number |
The FormControl instance.
It accepts a domain model as an optional Input. If you have a one-way binding to ngModel with [] syntax, changing the domain model's value in the component class sets the value in the view. If you have a two-way binding with [()] syntax (also known as 'banana-in-a-box syntax'), the value in the UI always syncs back to the domain model in your class.
To inspect the properties of the associated FormControl (like the validity state), export the directive into a local template variable using ngModel as the key (ex: #myVar="ngModel"). You can then access the control using the directive's control property. However, the most commonly used properties (like valid and dirty) also exist on the control for direct access. See a full list of properties directly available in AbstractControlDirective.
The following examples show a simple standalone control using ngModel:
import {Component} from '@angular/core'; @Component({ selector: 'example-app', template: ` <input [(ngModel)]="name" #ctrl="ngModel" required> <p>Value: {{ name }}</p> <p>Valid: {{ ctrl.valid }}</p> <button (click)="setValue()">Set value</button> `, }) export class SimpleNgModelComp { name: string = ''; setValue() { this.name = 'Nancy'; } }
When using the ngModel within <form> tags, you'll also need to supply a name attribute so that the control can be registered with the parent form under that name.
In the context of a parent form, it's often unnecessary to include one-way or two-way binding, as the parent form syncs the value for you. You access its properties by exporting it into a local template variable using ngForm such as (#f="ngForm"). Use the variable where needed on form submission.
If you do need to populate initial values into your form, using a one-way binding for ngModel tends to be sufficient as long as you use the exported form's value rather than the domain model's value on submit.
The following example shows controls using ngModel within a form:
import {Component} from '@angular/core'; import {NgForm} from '@angular/forms'; @Component({ selector: 'example-app', template: ` <form #f="ngForm" (ngSubmit)="onSubmit(f)" novalidate> <input name="first" ngModel required #first="ngModel"> <input name="last" ngModel> <button>Submit</button> </form> <p>First name value: {{ first.value }}</p> <p>First name valid: {{ first.valid }}</p> <p>Form value: {{ f.value | json }}</p> <p>Form valid: {{ f.valid }}</p> `, }) export class SimpleFormComp { onSubmit(f: NgForm) { console.log(f.value); // { first: '', last: '' } console.log(f.valid); // false } }
The following example shows you how to use a standalone ngModel control within a form. This controls the display of the form, but doesn't contain form data.
<form> <input name="login" ngModel placeholder="login"> <input type="checkbox" ngModel [ngModelOptions]="{standalone: true}"> Show more options? </form> <!-- form value: {login: ''} -->
Setting the name attribute through options
The following example shows you an alternate way to set the name attribute. Here, an attribute identified as name is used within a custom form control component. To still be able to specify the NgModel's name, you must specify it using the ngModelOptions input instead.
<form> <my-custom-form-control name="Nancy" ngModel [ngModelOptions]="{name: 'user'}"> </my-custom-form-control> </form> <!-- form value: {user: ''} -->
NgControl
abstract viewToModelUpdate(newValue: any): void
AbstractControlDirective
reset(value: any = undefined): void
hasError(errorCode: string, path?: string | (string | number)[]): boolean
getError(errorCode: string, path?: string | (string | number)[]): any
© 2010–2020 Google, Inc.
Licensed under the Creative Commons Attribution License 4.0.
|
https://docs.w3cub.com/angular~10/api/forms/ngmodel
|
CC-MAIN-2021-10
|
en
|
refinedweb
|
I'm again at work on some legacy code, and I'm stumped on why one bit is not working as I'd expect. Here is the code snippet:
def poly2xylist(layer): arcpy.CopyFeatures_management(layer, r"C:\Temp\workdammit.shp") layercopy = arcpy.CreateScratchName("xxxx", "", "shapefile") arcpy.MakeFeatureLayer_management(layer, "layercopy") howmany = (arcpy.GetCount_management("layercopy").getOutput(0)) arcpy.AddMessage("howmany is " + howmany)
When this function is called, howmany comes out as 0. Yet, if I look at my test output (workdammit) in ArcMap, I do have a polygon feature in there, in the attribute table. So where does it 'go'? I'm stumped, here.
I have not tried to implement your code yet, but here is where that function gets called:
where 's' is what gets passed to poly2xylist, above.
If you are just wanting to export each individual polygon into its own feature class, you could do something like this without even creating a function:
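A rough sketch of that approach (the paths, field handling and output names are my assumptions, not tested against your data) is to walk the attribute table and select each polygon by its object ID:
import arcpy

in_fc = r"C:\Temp\dissolved.shp"   # the dissolved polygons (hypothetical path)
out_dir = r"C:\Temp"
oid_field = arcpy.Describe(in_fc).OIDFieldName

with arcpy.da.SearchCursor(in_fc, [oid_field]) as rows:
    for i, (oid,) in enumerate(rows, start=1):
        where = '"{0}" = {1}'.format(oid_field, oid)
        out_fc = "{0}\\poly_{1}.shp".format(out_dir, i)
        # Write this single polygon out to its own shapefile
        arcpy.Select_analysis(in_fc, out_fc, where)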
This will create shapefiles with names like "poly_1.shp", "poly_2.shp", etc...
I left out the "howmany" variable because this will always be 1 if you are going through the attribute table by row.
Apologies; I ought to have been more clear. Given it's not 'my' code, I'm working out what exactly it does, myself!
The idea is to have a script that reads in a shapefile, not an mxd, though I could make a map document of the shapefile, certainly. At any rate, after the shapefile is read, a dissolve is performed. The purpose of the function poly2xylist is to determine, for each dissolved polygon, the extent points.
I believe this is in a loop in the event of more than one polygon in the dissolve result.
I only added the howmany variable to convince myself it was actually 0 and I was not 'seeing' the polygon I expected to. I was told this script worked in 9.2?
This piece was written as a function as it was intended to be part of a reusable library of functions.
Did that clarify some?
To clarify more- the purpose of knowing the extent of each dissolved polygon is to then determine a given number of random points that are contained within each polygon. The purpose is for coming up with sampling points for ecological study sites.
Enjoy,
Wayne
(oops, sorry Cyndy, you were posting while I was replying. I did get a kick out of your 'workdammit' shapefile name!)
|
https://community.esri.com/thread/74038-where-did-my-polygon-go
|
CC-MAIN-2018-22
|
en
|
refinedweb
|
RESTful JPA One-liners With Rapidoid
Learn how to implement RESTful services with simple JPA CRUD functionality using the web framework Rapidoid in this tutorial.
Let's see how easy it is to implement RESTful services with simple JPA CRUD functionality with Rapidoid, the almighty web framework.
Bootstrapping JPA and Implementing RESTful Services
import org.rapidoid.jpa.JPA; import org.rapidoid.setup.App; import org.rapidoid.setup.On; public class JPADemo { public static void main(String[] args) { App.run(args).jpa(); // bootstrap JPA On.get("/books").json(() -> JPA.of(Book.class).all()); On.get("/books/{id}").json((Long id) -> JPA.get(Book.class, id)); On.post("/books").json((Book b) -> JPA.insert(b)); On.put("/books/{id}").json((Book b) -> JPA.update(b)); } }
Of course, we also need some JPA entity (which is over-simplified for this demonstration):
import javax.persistence.Entity; import javax.persistence.GeneratedValue; import javax.persistence.Id; @Entity public class Book { @Id @GeneratedValue public Long id; public String title; public int year; }
For a quick start, the rapidoid-quick module should be used. It contains the Rapidoid HTTP server and web framework, Hibernate and other goodies:
<dependency> <groupId>org.rapidoid</groupId> <artifactId>rapidoid-quick</artifactId> <version>5.4.6</version> </dependency>
That's it! The application is ready to run!
Testing the API
Let's quickly test the API, by sending some HTTP requests and printing the responses.
First, let's insert a new book:
Map < String, ? > book1 = U.map("title", "Java Book", "year", 2016); Self.post("/books").data(book1).print(); // {"id":1,"title":"Java Book","year":2016}
Then, let's update the same book:
Map < String, ? > java9book = U.map("title", "Java 9 Book", "year", 2017); Self.put("/books/1").data(java9book).print(); // {"id":1,"title":"Java 9 Book","year":2017}
And the book is still there:
Self.get("/books").print(); // [{"id":1,"title":"Java 9 Book","year":2017}]
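The same endpoints can also be exercised from the command line; assuming the application listens on Rapidoid's usual default port 8888 (adjust if yours is configured differently):
curl -X POST http://localhost:8888/books -H 'Content-Type: application/json' -d '{"title":"Java Book","year":2016}'
curl http://localhost:8888/books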
Under the Hood
As soon as the application is started, Rapidoid performs several important steps:
Scan the classpath for JPA entities,
Bootstrap Hibernate and the built-in JPA service,
Register the JPA entities with Hibernate,
Register the lambda handlers for the specified REST endpoints,
Start the built-in high-performance HTTP server.
Rapidoid's built-in JPA service is doing all the heavy lifting when working with JPA:
Providing high-level API for CRUD JPA operations and more,
Creating and managing EntityManager instances,
Managing JPA transactions.
Conclusion
Rapidoid makes working with JPA super easy. With a few lines of code, we were able to implement a few JPA-based RESTful services.
Some important and related aspects haven't been covered: bean validation, integration tests, transaction management, etc. They remain as topics for another article...
|
https://dzone.com/articles/restful-jpa-one-liners-with-rapidoid
|
CC-MAIN-2018-22
|
en
|
refinedweb
|
#include <stdio.h>
int main() {
char *mytext;
mytext = "Help";
if (mytext == "Help") {
printf("This should raise a warning");
}
return 0;
}
Trying to compile this very simple C program, which at first sight many would probably say is correct, will result in this:
$ gcc test1.c -Wall
test1.c: In function ‘main’:
test1.c:6:14: warning: comparison with string literal results in unspecified behavior
So, even though we had such an easy program, line 6 seems to be an issue for us:
if (mytext == "Help") {
The problem is that this compares a char pointer with a string literal, i.e. it compares addresses rather than string contents. The proper fix is to include <string.h> and use strcmp(), checking whether its return value is 0.
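A minimal corrected version, comparing string contents with strcmp() instead of comparing pointer values (my sketch of the fix the warning is pointing us towards):
#include <stdio.h>
#include <string.h>

int main() {
    const char *mytext = "Help";

    /* strcmp() returns 0 when both strings have the same contents. */
    if (strcmp(mytext, "Help") == 0) {
        printf("No warning, and the comparison actually checks the contents");
    }
    return 0;
}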
Very useful! Thanks.
Hi
Excellent, look forward to more of these 🙂
Thank you; such a series will be a big help.
The wrong example could be
#include <stdio.h>
int main() {
const char mytext[] = "Help";
if (mytext == "Help") {
printf("This should raise a warning");
}
return 0;
}
To avoid making Ulrich Drepper cry 😉 section 2.4.1
But sure, the series is good to have. And is badly needed for some esoteric topics as strict aliasing.
Note that strcmp is easily misused and will certainly lead to bugs. I suggest you define a macro like
#define streq(a,b) (strcmp((a),(b)) == 0)
…or an inline function, as macros are pretty evil 😉
Keep going!
Sure.. but: do we want to have our packager army introduce the most sophisticated and complex code into any upstream project?
The target audience for his series is mostly packagers. Some of them are coders, many are not, some would like to learn, some don’t care and give up.
It’s nice to point out nice nifty tricks. But honestly, I doubt we would achieve a lot of packagers taking care of the errors, if it goes too far.
Yeah years after this is still helpful 🙂
Thanks for the post.
|
http://dominique.leuenberger.net/blog/2011/03/how-to-fix-brp-and-rpmlint-warnings-today-expression-compares-a-char-pointer-with-a-string-literal/
|
CC-MAIN-2018-22
|
en
|
refinedweb
|
The Java virtual machines from Sun and IBM put the FPU into 64-bit mode rather
than Linux's default 80-bit mode. For a little background on this, see:
When in this mode, a bunch of weird rendering issues appear in text of GTK+ 2.8
applications. The easiest way to see this is to watch the underlined characters
in menus. The GTK+ application below reproduces the problem when using a font
of "Arial 9" at 96-DPI. This bug should also occur under other OSs that use
64-bit precision by default, such as FreeBSD, on other architectures such as
PPC, and probably also when compiling cairo using -mfp-math=sse. However, I
have not verified any of those configurations.
#include <string.h>
#include <gtk/gtk.h>
#include <fpu_control.h>
int main (int argc, char **argv)
{
GtkWidget *window, *menu, *vbox, *item, *bar;
fpu_control_t fpu_control;
_FPU_GETCW (fpu_control);
fpu_control &= ~_FPU_EXTENDED;
fpu_control |= _FPU_DOUBLE;
_FPU_SETCW (fpu_control);
gtk_init (&argc, &argv);
window = gtk_window_new (GTK_WINDOW_TOPLEVEL);
g_signal_connect (G_OBJECT (window), "destroy",
G_CALLBACK (gtk_main_quit), NULL);
vbox = gtk_vbox_new (0, FALSE);
gtk_container_add (GTK_CONTAINER (window), vbox);
bar = gtk_menu_bar_new ();
gtk_box_pack_start (GTK_BOX (vbox), bar, FALSE, FALSE, 0);
item = gtk_menu_item_new_with_mnemonic ("_File");
gtk_menu_shell_insert (GTK_MENU_SHELL (bar), item, 0);
menu = gtk_menu_new ();
gtk_menu_item_set_submenu (GTK_MENU_ITEM (item), menu);
item = gtk_menu_item_new_with_mnemonic ("Open _Type...");
gtk_container_add (GTK_CONTAINER (menu), item);
item = gtk_menu_item_new_with_mnemonic ("Open Reso_urce...");
gtk_container_add (GTK_CONTAINER (menu), item);
item = gtk_menu_item_new_with_mnemonic ("Sho_w In");
gtk_container_add (GTK_CONTAINER (menu), item);
item = gtk_menu_item_new_with_mnemonic ("Ne_xt");
gtk_container_add (GTK_CONTAINER (menu), item);
item = gtk_menu_item_new_with_mnemonic ("Pre_vious");
gtk_container_add (GTK_CONTAINER (menu), item);
gtk_widget_show_all (window);
gtk_main ();
return 0;
}
This problem is not reproducable under GTK+ 2.6 and it smells a lot like a cairo
problem.
Created attachment 3949 [details]
bad.png
Created attachment 3950 [details]
good.png
Billy, is it the bug reported here:
I hope it is. Anyway, can you check the patch attached in there?
You reproduce this only on 86_64?
I'll try out the patch. Note that this is on x86 not x86-64.
See also downstream bug:.
In particular, the screenshot attached. If this is really the problem, then I
feel this should be upgraded to major (feel free to disagree <g>).
I've tried the patch in
against GTK+ 2.8.8, using Cairo 1.0.2. I still see the problem described in.
Can you think of a workaround? We would rather not ship Eclipse 3.2 with this
still broken....
This bug hasn't even been tracked down to Pango or Cairo ... how could
we suggest a workaround? I'd suggest that actually debugging the problem
would help move the process along...
(In reply to comment #6)
> This bug hasn't even been tracked down to Pango or Cairo ... how could
> we suggest a workaround? I'd suggest that actually debugging the problem
> would help move the process along...
I think either you misinterpreted the tone of my last post, or I'm
misinterpreting the tone of this one. If I offended you, I'm sorry.
My point was just that Eclipse is likely to need to write a workaround. We
don't understand the internals of pango or cairo well, and would appreciate
help. (At this point distributions have started shipping on broken versions
of pango/cairo.)
If you would like help debugging, then we're happy to lend some assistance.
Behdad suggested a patch, which I've tried for him. Unfortunately, it didn't
work. It might be helpful if you read, as it contains a little
more detail about the effects we're seeing and things we've tried.
I didn't mean to sound offended :-), but I was just trying to indicate
that (as usual in free software), indicating that a bug is high priority
for you usually doesn't produce too many results, especially if it is
very hard to reproduce.
I hadn't followed the link to the bug, but it appears glancing at it,
that you guys have tracked it down to GTK+. If so, then you need to
close this bug report, and open one against GTK+.
(Really, it would seem very easy to take that investigation one step
further and find the bug in that function. It might just be a
incorrect rounding problem, say; but first step is to get the bug
report filed against the right place.)
Created attachment 4658 [details] [review]
gdk patch
Sorry I didn't feel like creating an account on Eclipse bugzilla. Can you test
this Gtk+ patch and see if it's a fix?
I didn't include the change to gtklabel; it seemed like it was included in
error. The change to gdkpango.c doesn't seem to have any effect. Does the
small test program given in comment #0 work for you?
That test case is too subtle for me to verify. With my (rather huge) fonts, I
don't see anything bad.
If you have a C test program for the problem exposed in the Eclipse bug, I can
test it.
(In reply to comment #11)
> That test case is too subtle for me to verify. With my (rather huge) fonts,
> I don't see anything bad.
It is definitely subtle. The Eclipse text selection issue is more jarring
because it's one pixel of whitespace appearing throughout a text selection.
But this only occurs in one of our custom widgets. It also seems to occur
regardless of font.
I'm not sure what your set-up is like, but with a JRE
() and Eclipse
(),
you could see the problem for yourself.
Otherwise, I'm happy to keep trying patches, and generally helping out any way
I can.
No, I cannot test that really.
You said that the bug is in gdk_pango_layout_get_clip_region. Can't you just
play with that function and see what's wrong? My gdkpango patch does something
like that. You can at least report to us that "the text extents pango reports
are these and these, and after rounding they become this with extended format,
but that with double" so at least we know whether it's a pango bug, or gtk+...
And other people can help if you open a bug against Gtk+... It's not obvious
that it's a Cairo bug at all.
Created attachment 5210 [details]
test case
This test case simulates a full selection text editor where the selection start
in the middle of a word and spams all the way to the end.
It uses clipping to draw a word that is half selected (to preserve arabic
ligatures for example).
It works fine with older version of pango (1.6.0), it fails on pango 1.10.2
when the FPU mode is double. It works on pango 1.10.2 when FPU mode is extended
(which is the default).
Please, let me know if you need help to reproduce the problem.
Created attachment 5211 [details]
failing scenario
Created attachment 5212 [details]
working scenario
In the failing scenario you can see the first selection box is short by one
pixel in the height and width, this is because gdk_pango_layout_get_clip_region
returns a region that is smaller.
Note: to reproduce the problem you MUST have the right text, font, pango
version, and fpu mode. I'm pretty sure this is a rounding problem and if you
change anything you can get lucky and everything works....
(In reply to comment #18)
>...
I'm a downstream maintainer of Eclipse for Gentoo. Finally got annoyed by the
highlighting problem to start looking into it, and came across this bug.
I tested a pango cvs snapshot from yesterday, and the Eclipse highlighting
problem goes away. Awesome.
Obviously, we missed the GNOME 2.14.1 release, but do you think we can see a
1.10.2 release before the next GNOME release?
Yes, this will be in the next version of GNOME.
Behdad Esfahbod:
Thanks for fixing this. But I need to support older version of gnome. So my
question is:
Is there any workaround I can use in my code (besides play the FPU mode) to
fix this ?
None that I know of.
|
https://bugs.freedesktop.org/show_bug.cgi?id=5200
|
CC-MAIN-2018-22
|
en
|
refinedweb
|
Source Class
Definition
Important
This API is not CLS-compliant.
Represents a source file in the language service and controls parsing operations on that source.
public ref class Source : IDisposable, Microsoft::VisualStudio::TextManager::Interop::IVsHiddenTextClient, Microsoft::VisualStudio::TextManager::Interop::IVsTextLinesEvents, Microsoft::VisualStudio::TextManager::Interop::IVsUserDataEvents
[System.CLSCompliant(false)] public class Source : IDisposable, Microsoft.VisualStudio.TextManager.Interop.IVsHiddenTextClient, Microsoft.VisualStudio.TextManager.Interop.IVsTextLinesEvents, Microsoft.VisualStudio.TextManager.Interop.IVsUserDataEvents
type Source = class interface IDisposable interface IVsTextLinesEvents interface IVsHiddenTextClient interface IVsUserDataEvents
Public Class Source Implements IDisposable, IVsHiddenTextClient, IVsTextLinesEvents, IVsUserDataEvents
Remarks
Notes to Inheritors
Derive from this class and instantiate your derived class in CreateSource(IVsTextLines).
Notes to Callers
This class is instantiated by a call to the CreateSource(IVsTextLines) method. This is done when the CodeWindowManager object is instantiated (the Source object is passed to the CodeWindowManager constructor). A Colorizer object can be instantiated and passed to the Source class's constructor.
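As a rough illustration of that flow (the constructor arguments below follow the managed package framework pattern and should be treated as an assumption, not as verbatim API reference), a language service might plug in a derived Source like this:
// Sketch only; other abstract members of LanguageService are omitted for brevity.
class MySource : Source
{
    public MySource(LanguageService service, IVsTextLines textLines, Colorizer colorizer)
        : base(service, textLines, colorizer)
    {
    }
}

class MyLanguageService : LanguageService
{
    public override Source CreateSource(IVsTextLines buffer)
    {
        // Build a Colorizer from this service's scanner and hand it to the Source.
        return new MySource(this, buffer, new Colorizer(this, buffer, GetScanner(buffer)));
    }
}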
|
https://docs.microsoft.com/en-us/dotnet/api/microsoft.visualstudio.package.source?redirectedfrom=MSDN&view=visualstudiosdk-2017
|
CC-MAIN-2018-22
|
en
|
refinedweb
|
There comes a time in the life of almost any C++ programmer, where one of the various sleep functions raises its head. Most of the time the problem boils down to some kind of polling algorithm, for example waiting for a resource and wanting to let other processes work in the meantime1.
While it is not very accurate in general, predicting what happens with a sleep that takes a hundred milliseconds or more, is usually fairly simple. This post will concern itself with the extreme low values, primarily zero and the lowest non-zero value the specific sleep function will accept.
Intuitively, a sleep of zero time means that the currently running thread of execution allows the scheduler the chance to schedule some other thread that actually may have better work to do - like release the resource it is waiting for. This means that, for a system with low load, this sleep should usually take about the time of a context switch.
When choosing the smallest non-zero time, we can argue that the result should not be much different, but if both versions would adhere to expectations, this article would be pretty darn useless...
Setup
The Windows and Linux experiments were conducted on a dual Intel Xeon X5680 system providing a whole bunch of cores. The OS X experiments were conducted on a 2.8 GHz Intel Core i7 "Macbook Pro (Retina, 13-inch, Late 2013)" providing 4 logical cores.
Everything was compiled for x64 and configured to represent a typical release build. The total system CPU load was usually in the range of 2-5%. All experiments were repeated at least 20 times in an interleaved fashion and 99.9% confidence intervals are given for each one. Where not otherwise noted, results are normalized to one execution of the sleep function.
All operating systems were "lived in", without any intentional changes to system clock resolution or similar mechanisms. Hopefully this represents the typical use case better than a virgin system fresh out of the box. Similarly, the system was not sent into a benchmark mode where as many programs as possible are disabled. For example, they continuously played music and had a browser pointed open with an editor in which I was writing this article.
The test program used to give the ground truth is:
#define SLEEP(x) static_cast<void>(x)
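A minimal harness in that spirit (a sketch, not the author's exact program) times 3 000 invocations of the macro with a steady clock:
#include <chrono>
#include <iostream>

#define SLEEP(x) static_cast<void>(x)

int main() {
    using clock = std::chrono::steady_clock;
    constexpr int iterations = 3000;

    const auto start = clock::now();
    for (int i = 0; i < iterations; ++i) {
        SLEEP(0);
    }
    const auto stop = clock::now();

    std::cout << std::chrono::duration_cast<std::chrono::nanoseconds>(stop - start).count()
              << " ns for " << iterations << " iterations\n";
}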
The modifications for the individual sleep functions simply added any required headers and replaced the definition of the
SLEEP macro with a version that invokes the appropriate sleep function instead. For example, the version relying on the C++11 sleep facilities is:
#include <thread> #define SLEEP(x) ::std::this_thread::sleep_for(::std::chrono::nanoseconds(x))
Be aware that the standard mandates that ::std::this_thread::sleep_for may block execution longer than intended, but not shorter. The standard also suggests that this function use a steady clock, which is the reason why the benchmark code does not use a high-resolution clock.
Windows 10
All code for Windows 10 was compiled by Visual Studio 2015, with Visual C++ 19.00.23026.
For this OS, we will use two platform-specific sleep functions in addition to ::std::this_thread::sleep_for: Sleep and SleepEx (with its second parameter set to FALSE). Both functions are described to basically behave the same in this test: When given 0, they will yield execution without sleeping, and when given 1 they will take any time up to one system clock tick.
Since WINAPI functions only take arguments with millisecond resolution, ::std::this_thread::sleep_for will be performed in two variations: once with a nanosecond argument and once with a millisecond argument.
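The WINAPI variants of the macro would presumably look something like this (my reconstruction, following the pattern described above):
#include <windows.h>

// Milliseconds are passed straight to the WINAPI call.
#define SLEEP(x) ::Sleep(static_cast<DWORD>(x))

// Alertable variant, with the second parameter set to FALSE:
// #define SLEEP(x) ::SleepEx(static_cast<DWORD>(x), FALSE)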
The target system had a system clock resolution of:
ClockRes v2.0 - View the system clock resolution Copyright (C) 2009 Mark Russinovich SysInternals - Maximum timer interval: 15.625 ms Minimum timer interval: 0.500 ms Current timer interval: 1.001 ms
With a ground truth of less than one nanosecond per iteration (980 ± 61 nanoseconds per 3 000), we will first look at the cases where the sleep functions were explicitly asked to perform a zero duration sleep:
::std::this_thread::sleep_for with 0 nanoseconds: 130 ± 1 ns
::std::this_thread::sleep_for with 0 milliseconds: 132 ± 5 ns
Sleep: 64 ± 1 ns
SleepEx: 69 ± 11 ns
As expected, there is a certain cost for yielding execution, clocking in at less than 150 ns per sleep. It should also not come as a big surprise, that the C++ standard library function has a higher overhead than the direct WINAPI calls.
Now the results for the minimal non-zero argument:
::std::this_thread::sleep_for with 1 nanosecond: 1 535 253 ± 19 313 ns
::std::this_thread::sleep_for with 1 millisecond: 2 000 969 ± 194 ns
Sleep: 2 000 949 ± 135 ns
SleepEx: 2 000 911 ± 134 ns
All functions targeting a single millisecond yield the same result, hitting 2 milliseconds instead of one.
I was surprised by the result of ::std::this_thread::sleep_for when given a 1 nanosecond argument, as it only takes ¾ of the time that either native solution requires for its smallest argument. It should be noted however, that both relative and absolute error are larger though2.
Concluding: Out of these alternatives, ::std::this_thread::sleep_for performs best in general, as its interface alleviates much of the pain associated with the older APIs. Still, Sleep/SleepEx offer a better performance when only yielding execution.
Linux
The operating system used was an Arch Linux identifying its kernel release as 4.1.6-1-ARCH. All code was compiled using g++ version 5.2.0.
For this operating system, we will discuss three different native methods in addition to ::std::this_thread::sleep_for. The obvious choice is nanosleep3; additionally we will use the timeout of pselect4 and the timerfd facility. The timerfd functionality was tested in three distinct configurations: Recreating the timerfd every call, reusing one timerfd but letting it only fire once, and finally by preparing the timerfd with an interval timer in advance. As all these timer APIs have nanosecond resolution, the chosen inputs will be 0 and 1 nanoseconds. Additionally, sched_yield is evaluated as a 0 ns sleep.
This operating system exhibits a ground truth of less than one nanosecond per iteration (604 ± 44 nanoseconds per 3 000).5
For the first set of benchmarks, in which the effect with a zero argument is evaluated, the timerfd family of timers will not be present, as their API makes this usage impossible6:
::std::this_thread::sleep_for: 0 ± 1 ns per iteration (612 ± 35 ns per 3000)
nanosleep: 498 577 ± 427 ns
pselect: 136 ± 7 ns
sched_yield: 164 ± 8 ns
Right off the bat: ::std::this_thread::sleep_for does not require statistically significantly more time than the ground truth - and definitely not enough for a system call. It would seem as if this were completely handled in user-space, thus not actually yielding execution at all.
Interestingly, pselect performs slightly better than sched_yield, which may be due to better optimized code, dumb luck, or because it does not actually yield execution - after all it is not primarily intended to yield execution, but to wait upon an event.7
Finally, nanosleep performs significantly worse than sched_yield, probably making it the wrong tool for yielding execution.
Going on, here are the results for a 1 nanosecond sleep:
::std::this_thread::sleep_for: 498 628 ± 263 ns
nanosleep: 498 693 ± 353 ns
pselect: 498 796 ± 398 ns
timerfd recreating: 4 819 ± 182 ns
timerfd reusing: 3 273 ± 255 ns
timerfd interval: 2 783 ± 163 ns
It seems that ::std::this_thread::sleep_for, nanosleep and pselect are provided by the same underlying mechanism - which is outperformed by several orders of magnitude by the timerfd API. It can also be noticed that nanosleep seems to treat a 0 ns sleep the same as a 1 ns sleep, unlike the Windows sleep functions that explicitly treat this as a yield only.
There is no real surprise in the relative performance of the timerfd variants themselves: The most general usage case is slowest (although still blazingly fast), with the reuse of the file descriptor saving a lot of work, and the switch to intervals making it faster yet, although it also becomes rather inflexible.
At this point it should be noted that the actual sleeping on the timerfd is done via
read, meaning it is not guaranteed to yield execution, especially in the interval case where the file descriptor may already be ready when
read is invoked. Still, for this benchmark, I was able to verify with GNU Time 1.7 that about 3 000 context switches do take place during the execution of the interval timerfd variant.
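To make the timerfd usage concrete, a rough sketch of the "reusing" configuration could look like the following (error handling omitted; the exact benchmark code may differ):

// Sketch of a reused one-shot timerfd sleep, as described above; illustrative only.
#include <sys/timerfd.h>
#include <unistd.h>
#include <cstdint>

int main() {
    int fd = timerfd_create(CLOCK_MONOTONIC, 0);     // created once, reused below

    for (int i = 0; i < 3000; ++i) {
        itimerspec spec = {};
        spec.it_value.tv_sec = 0;
        spec.it_value.tv_nsec = 1;                   // arm a one-shot timer for 1 ns
        // it_interval stays zero, so the timer fires only once per arming
        timerfd_settime(fd, 0, &spec, nullptr);

        uint64_t expirations = 0;
        read(fd, &expirations, sizeof(expirations)); // blocks until the timer fires
    }
    close(fd);
    return 0;
}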
Concluding the Linux analysis: To yield execution, it seems safest to use
sched_yield, which performs slightly worse than the
pselect alternative. To perform short sleeps, the use of timerfd timers is far superior to all other variants, as a timerfd with minimal time returns two orders of magnitude quicker than
nanosleep with any time.
OS X
The exact OS X version used for this test was 10.10.5, as El Capitan was not yet available at the time of writing. Keep in mind that this test was run on different hardware, which must be taken into account when comparing it to the Linux and Windows tests.
The test suite was fairly similar to the Linux one, but the timerfd suite had to be removed as that particular facility is not available on OS X.
This operating system also exhibits a ground truth of less than one nanosecond per iteration (477 ± 27 ns per 3 000).
Beginning with the zero-duration sleeps:
::std::this_thread::sleep_for: 4 ± 1 ns (10 680 ± 670 ns per 3000)
nanosleep: 1 086 ± 32 ns
pselect: 412 ± 13 ns
sched_yield: 180 ± 35 ns
Again, we see a conspicuously low value for
::std::this_thread::sleep_for, suggesting that OS X does not actually perform a sleep here. Maybe the most surprising result is how well both
nanosleep and
pselect perform, compared to
sched_yield.
Now the numbers for a 1 nanosecond sleep:
::std::this_thread::sleep_for: 13 809 ± 186 ns
nanosleep: 14 831 ± 234 ns
pselect: 416 ± 12 ns
For this test, all of the methods used leave those available on the other platforms far in the dust. In fact, only Linux's timerfd facilities manage to come close – and they are still beaten by the OS X
pselect by almost an order of magnitude. Additionally, unlike on Linux, great performance is available for all tested methods, including
nanosleep, which is after all the obvious choice in C style code and
::std::this_thread::sleep_for, which is the obvious choice for C++ style code.
Summing up the OS X results, it is obvious that this operating system has all others beat when it comes to short sleeps. While
nanosleep performs somewhat worse than
pselect, its purpose is more obvious and it can be easily used to continue sleeping in the presence of interrupts.
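As an illustration of that last point, continuing a sleep across interrupts with nanosleep is a small, well-known loop (a sketch, not code from the benchmark):

// Sketch: keep sleeping across EINTR by restarting nanosleep with the remaining time.
#include <time.h>
#include <errno.h>

void sleep_uninterrupted(timespec request) {
    timespec remaining;
    while (nanosleep(&request, &remaining) == -1 && errno == EINTR) {
        request = remaining;   // continue with whatever time is left
    }
}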
Conclusion
Interestingly, the results were mixed for Windows and Linux: Windows 10 seems to bring primitives to the table that perform very well when only yielding execution, but lack in resolution when actually sleeping. Linux, on the other hand, provides the timerfd API, which allows extremely short sleeps when a sleep is actually requested. However, the clear winner of this article is OS X, handily beating both alternatives in every single category.
The test program, all results and the script used to analyze them can be downloaded here.
Footnotes:
In most cases a blocking wait should be preferred. Would life not be great if we had the pleasure of always being easily able to do things the right way? ↩
The absolute error of the 1 millisecond sleeps is about 1.0 ms, while the 1 nanosecond sleep is off by about 1.5 ms. The relative error differs by roughly 6 orders of magnitude. ↩
sleep has second resolution and usleep is deprecated. ↩
If you are wondering why the heck I am analyzing
pselect of all possible functions sporting a timeout, I stumbled over an answer on Stack Overflow that hinted it might be worth evaluating. ↩
Interestingly this is only about ⅔ of the ground truth for Windows 10, possibly due to more aggressive optimization by g++ versus Visual C++. ↩
When setting the time to zero, it disables the timerfd completely, meaning that waiting on it will take forever. ↩
I had to run these specific benchmarks significantly more often than the rest to get the confidence intervals small enough to not overlap. ↩
Source: https://gha.st/short-sleeps/
I understand this has been tweeted all over tarnation and
discussed on Reddit, but a quick Google search of LtU seems to
show that it hasn't been mentioned here. Which is a shame. So,
here goes:
Types
Are Anti-Modular
by Gilad
Bracha.
A nice followup on some of the theses that Gilad put forth in
FLOSS Weekly 159:
Newspeak where he was interviewed by
Randal
Schwartz.
A couple of quotes from the interview:
Learning is brain damage. I've spent the last 18 to 20 years
unlearning the things I learned in my Ph.D. studies.
Also:
[The fact that Newspeak is not mainstream] is
actually a competitive advantage.
Not
exactly a new sentiment but still… Overall, the FLOSS interview with Bracha
was almost as good as
the one with Ingalls.
Ha! IIRC, I asked about this very issue just a few months back here on LTU.
Specifically, I believe I asked about potential transitive resolution of imported types used in argument and return types in a module's exported terms (e.g., functions).
One wants to "keep it very simple," and in reality it's so much easier said than done.
- Scott
One of the main points of functors and structural signature matching (aka structural subtyping) in the ML module system is that one does not need to necessarily agree upon interfaces completely up front. I can provide a module of signature S1, you as a client can abstract yourself (using a functor) over a module of signature S2, and we write our modules completely independently. Then, if S1 matches S2, voila! If not, you can write a coercion module after the fact. How the absence of types would make this kind of programming any more modular is completely beyond me.
So, at least if you program in a civilized language with a proper structural module system, Bracha's argument doesn't hold water AFAICS.
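A loose illustration of the coercion idea in C++ terms (not ML, and not taken from the comment above; all names here are made up): the client is written against an assumed shape, the provider ships a different one, and a small adapter written after the fact links them together.

// Hypothetical C++ analogy of writing a coercion/adapter after the fact; not ML code.
#include <iostream>

struct ProviderS1 {                  // written independently by one party
    int fetch() const { return 42; }
};

template <typename S2>               // the client abstracts over "some module"
int client(const S2& m) {            // it only assumes that m.get() exists
    return m.get() + 1;
}

struct Coercion {                    // adapter written later to bridge the mismatch
    const ProviderS1& inner;
    int get() const { return inner.fetch(); }
};

int main() {
    ProviderS1 p;
    std::cout << client(Coercion{p}) << '\n';   // prints 43
}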
You can code against a set of private interfaces describing the features you want from the types you use. Then the code in that module is only checked at compile time against its own contents. (In Go, the table mapping the methods of an interface to those of a type is computed at runtime using reflection.)
"You cannot typecheck modules in isolation [...] because types capture cross-module knowledge" is wrong because typechecking within modules can be separated from typechecking/linkage between modules.
Just a comment: Go interfaces give you a certain degree of structural typing, but it is comparably weak. Also, while lookup tables are generally constructed at runtime, there is no reflection involved (IIRC, Go deliberately does not have any form of reflection). The construction is fully determined by static types and environments, and only parameterized over other dynamic lookup tables. Somewhat similar to the translation of type classes in Haskell.
The language seems to have some amount of reflection,
I would like to think that Gilad knows about functors and structural matching, and that his criticism goes beyond the technical definition of separate compilation to include informal semantics that can't be captured by the type system.
How can types inhibit informal semantics that they don't capture? If anything, the inability to capture assumptions about inter-module dependencies would be an argument that existing type systems are too *weak*, not too strong.
Moreover, the argument that someone should know about X has no bearing whatsoever on whether their argument is refuted by X.
My point was I don't think Gilad's point was just about types and separate/independent compilation as this problem is pretty much solved. In this case, I'm questioning our understanding of the thesis, not the argument of its maker.
Obviously, I agree with Derek. :)
Moreover, I think that the other part of Bracha's argument -- namely, that type inference as an alternative to explicit signatures would leak implementation details -- is equally misleading. Inference will certainly expose certain details in general. But it won't do so any more than an untyped language. In fact, you can generally make more observations in an untyped language, so by Bracha's own line of argument, untyped languages are even less modular!
Bracha is making an extremely strong claim here, as I point out in the comments there. He thinks that all static information is unmodular, because you have to agree on it ahead of time. This includes names of exports.
Basically, he wants everything, even reference, to be a message send, but he phrased it in a way to provoke more discussion.
If it includes names of exports, why doesn't it include names of messages?
...or the format of the message payload?
Seriously, you have to assume something (= agree on it ahead of time), otherwise you cannot do anything. For this purpose, a minimal module signature, a structural object type, a message protocol, or whatever else you prefer to come up with, are pretty much isomorphic implementations of "something". And not checking that assumption early on doesn't make it less of an assumption.
Because that allows "independent compilation", which is the definition that Gilad's taking of "modular". Not that I think this makes sense, but there you go.
I still don't see why naming exports (or any other similar feature) would necessarily prohibit the kind of independent compilation he desires.
If your module can import names from another module, then to compile your module, the compiler must know what those names are. That requires either the compiler to have previously looked at the other module (not independent compilation) or some shared interface file. Gilad declares that interfaces also violate independent compilation, leaving you with nothing.
module Blah
import foo, bar, baz
...
end
I can certainly write a separate compiler for languages with this kind of module, without having to know about other modules, since Blah is declaring what its own imports are. Of course, you will later need a linker to resolve Blah's names to stuff, but ML functors and mixin systems (eg, Derek's RTG and PLT's units) have supported this kind of stuff for ages.
What am I missing?
I've found most of this discussion vague because I don't know what is meant by compile -- to what? Not full compilation seems to be an assumption, because some sort of linker will be left (in most comments). Likewise, basic optimizations like stack allocation seem to be assumed unnecessary as well. The question, as I understand it so far, leads to solutions that are worse than useless.
(Edit: this wasn't meant at Sam, but more at a meta level of the type of back-and-forth the vague formulation is leading to).
I'm not sure what you're confused about here. Separate compilation is a well-understood notion. You compile modules of a program separately, and a module that has some dependencies on unknown other modules will essentially be compiled as a function. Then, when you want to form a full executable program, you have to produce some additional code to link the already-compiled modules (essentially by function application). There are other possibilities, but that's a common one.
But it's worth noting that one need not care about separate compilation per se in order to enjoy the benefits of types and modularity. They are useful for robust program structuring even if you only care about compiling whole programs.
What must be accomplished by the compiler and what can be accepted as a link-time / load-time action? For example, compiling without IPO should be the 1% case rather than the 99% one, yet separate compilation of modules prohibits this. If I accept IPO (including across module boundaries) as a necessary part of the compiler, separate compilation is impossible (unless you have ridiculous import statements). If you can get something to compile separately, by my definition, it's relatively useless.
We can push these optimizations into load / link / jit time (and many, like tuning, really do need to be there). Considering there's no problem there -- static separate compilation just builds ASTs etc. -- it sounds like we're not really talking about compilers but type checkers / verifiers. There, I agree with Gilad's comments about type systems being either too weak or cumbersome for true switching of modules without testing. My opinion doesn't really matter: the point is that a different problem is being worried about than how to compile.
If the goal isn't correctness but fast compilation, a simple solution is to use an interpreter. If you want your cake and to eat it too (fast code with fast compilation), there's a heck of a body of literature on how to write fast compilers that isn't used in practice (and much of the modern JS JIT stuff is actually about avoiding this trade off). E.g., parallel, SIMD, and incremental evaluation at a fine-grained level. I'm increasingly a fan of PGO/tuned approaches, and in those, the set of problems is way different from what's being discussed here.
My opinion doesn't really matter: the point is that a different problem is being worried about than how to compile.
This is exactly what I thought. We are getting hung up on a detail that really isn't that important. I spent my early grad school years working on modularity and separate compilation and found that it was easy to do even for an OO language like Java. So it works, but it was kind of meh: separate compilation wasn't really the problem with modules, the problem was more about high-level semantics that couldn't be captured by a type.
If the goal isn't correctness but fast compilation, a simple solution is to use an interpreter.
I'm partial to incremental compilation and live programming myself, but again is this the problem people really care about? Or is there something else about developer efficiency that would still act as a bottleneck?
I'm partial to incremental compilation and live programming myself, but again is this the problem people really care about?
The usability of these approaches for software in the large is questionable. I'm obviously biased for the GUI domain (more for coordination than performance reasons), but, even then, I acknowledge that traditional approaches have heavy usability barriers when considering the target audience of designers. My ideal would probably be more like PBD on top of a tangible change-propagation layer. The stuff on languages for incremental algorithms has been very questionable; I haven't seen anything compelling to me outside of the GUI and graphics domains.
However... The problem of a fast compiler itself is pretty important. E.g., in the mobile space, there's a big demand for a fast optimizing compiler for CSS/DOM/JavaScript code. The current dominance of low-level languages in mobile systems seems symptomatic of our failure here. Some funny solutions are possible, like Opera Mini's and SkyFire's attempts at offloading (SkyFire's is definitely one to watch, and we're seeing it elsewhere like in OnLive), but once we take power and connectivity into consideration for mobile, local computation solutions are still desirable, and heterogeneity likely forces our hand for local compilation.
The particular approach of getting a fast compiler by specifying it in a declaratively incremental/parallel language and code generating the compiler from it has been compelling enough for me to base my thesis on it. I'm specifying the CSS language and code generating a layout engine / compiler with all sorts of optimizations too hard to do by hand (can email you about all that, including some new GUI language we're doing on top that you might like). I've definitely taken the high road here -- the status quo for optimizing compilers is to roll your own for anything above lexing/parsing.
There is quite a big difference between compilation during development and compilation when the product is deployed, so I think that they should be considered separately.
The only reason we are in the mess today of transferring source JavaScript code over the wire rather than machine code or optimized byte code is that it is convenient and we must think the compilation technology is good enough. If the problems become too great, there is always better (but less convenient) solutions to consider.
The stuff on languages for incremental algorithms has been very questionable; I haven't seen anything compelling to me outside of the GUI and graphics domains.
This sounds right but the field is still young, I wouldn't give up yet. I had a lot of fun turning the Scala compiler into a true incremental compiler via cheap dynamic dependency tracking and re-execution, so I believe there are still many more pragmatic solutions to explore, possibly at a very non-invasive language level.
The particular approach of getting a fast compiler by specifying it in a declaratively incremental/parallel language and code generating the compiler from it has been compelling enough for me to base my thesis on it.
Generating a compiler from a specification is a sort of holy grail of PL. Many of us have attempted it and most (or all?) of us have failed at it; I wish you luck :). CSS sounds like a more manageable problem, perhaps. I would love to hear about the details, and perhaps you could publish something about it at PLDI next year; it's in my neighborhood.
This sounds right but the field is still young, I wouldn't give up yet
I haven't :) Revisions is one particular newcomer that increases usability.
More intriguing, I think we have a lot of unexpected pay-offs ahead. For example, I've been playing with the idea of optimal parallel speculative execution by using incremental repair for the CSS stuff (not really essential for our domain, but we need to shove stuff like that on top of the real work if we do want to do something like PLDI :)) It's a useful tool to have in your bag!
stuff on languages for incremental algorithms has been very questionable; I haven't seen anything compelling to me outside of the GUI and graphics domains
There is plenty of value for incremental algorithms (and incremental compilation) in live data-fusion and control systems, too.
The question is more about whether to use a language-based solution.
Another place where it makes sense is the coming problem of cache vs. recompute for dealing with slow memory.
Suppose a language requires us to express non-incremental algorithms in an incremental manner, forcing our hand at any non real-time calculation (such as ad-hoc loops or recursion). In each step (or instant), the incremental process provides a view of the result.
A non-incremental algorithm can be modeled in such a language; it just presents a 'Maybe' type for the intermediate view, with the final answer or nothing. This wouldn't be much easier than providing a useful intermediate answer.
I believe this would allow us to more effectively express a lot of issues - such as job control, timeouts, concurrency, CPU resource scheduling, and priority. In state-of-the-art languages, most of these issues are handled implicitly and badly.
I'm not convinced there's a future case for languages that DON'T enforce a model of incremental computation. I question whether there will be any value for such languages in the future.
What you want to do is to break the "function computation/evaluation as a black box" model of usual programming languages, and choose finer-grained semantics reflecting "low-level" considerations you have (computation time, memory consumption, scheduling...).
The current general approach¹ is to layer a finer-grained model where such "less extensional" objects are defined on top of the previous paradigm: explicit datatypes with explicit "evaluation" computations, encoding those finer-grained aspects you're interested in.
Functions are a meta-level tool of abstraction and definitional convenience; all "serious matter" should be handled by explicit data types.
Your claim amounts to saying that *everyone* should drop the black-box model and move to a given finer-grained model you're more interested in for your application domain. That is, you hope that your specific domain will become the de facto universal model in the future. I'm not convinced, but the future will tell.
¹: Haskell Monads and the Scala grant approach of defining DSLs on top of Scala by reification are both instances of this approach. It's maybe questionable for Haskell Monads, as they may also be seen as a way to translate higher concepts *into* functions, seen as the basic construction of computation. So maybe it's only some Haskell monads, or a particular style of use of monad.
PS: agreed, programmers already reason non-extensionally about functions and programs, with consideration of complexity, runtime, memory consumption, etc. This is however mostly informal, not part of the language semantics (and eg. the compiler can break most of this unspecified reasoning).
I'm very interested in secure, robust composition and extension of systems. My current model, RDP, is easily extensible - agents can discover each other and interact through shared registries (both public and private) thus supporting plug-ins and introducing new services. I favor object capability security, which is very effective as a basis for 'black box' services.
So I'm not breaking any black boxes.
But I am interested in improving on the traditional request-wait-response protocol typically associated with function evaluation and composition. Our choice of communication and composition protocols (message passing, REST, streaming and dataflows, etc.) affects which rules each process must obey, and how we integrate a new component or extension with the rest of the system. In an incremental model, for example, we will always have an incremental request (for each step, possibly with incremental data) and an incremental response (with each intermediate view).
And I'd also force many of today's broken black boxes to be more obvious about it. For example, you can't hide delay or divergence in an open system. It's unsafe to even try. The fact that our algorithms aren't incremental today doesn't mean they happen instantaneously; it simply means that we use hackish mechanisms such as timeouts and 'kill -9' to stop a request that is taking too long.
Black boxes in physical reality cannot subvert physics. This comes as a surprise to many programmers, but: black boxes in virtual realities cannot subvert physics, either. We can have black boxes, so long as they all obey certain rules.
What you write is fine in Gilad's perspective, and that's basically exactly what Newspeak, his language, does, except you'd write it this way:
module Blah(foo, bar, baz) ... end
This makes it clear that we've just written a function, which we'll link later.
If what you want to do is write something like this:
module Blah
import foo
x + 1 // x comes from foo
end
Then you need static information about foo ahead of time. This is the sort of thing you can do with the ML module system, or with Units, or with any number of other systems.
Another way to think about the perspective that Gilad's taking (which, again, I disagree with) is that every variable reference is really a message send, and static knowledge about variables is trying to rule out MessageNotUnderstood, which he doesn't want to try to do.
So if we removed the open Foo construct from ML's module language, then it would be Gilad-ically acceptable? I could definitely live with that, since open is my least favorite feature of ML. :)
I am not quite on board with treating modules as functions, though. Mixin linking seems very natural to me. That is, if you have a module M which imports an interface I and exports J, and a module M' which imports J and exports I, then you should should be able to put M and M' together to get an I and a J (perhaps up to some well-foundedness constraints).
Stuff like this is a PITA in ML, but is a reasonable thing to want.
No, open is a red herring. If you have a structure Foo with members x and y, then Foo.z is a static error. It is that static error that Gilad objects to, because it requires knowledge of Foo when compiling the references.
Yeah, and my point is that you don't have to agree on static information "ahead of time". I can write whatever code I want, you can write your code assuming whatever "static information" you want about my code, and if they turn out not to link up immediately (e.g. because we named our methods differently), then big whoop: we can write coercion code "later in time" to convert the names of stuff in my exports to the names of stuff in your imports. I fail to see what the problem is that he's complaining about.
Seems a bit of a fallacious argument.
You could, for instance, allow compiling a module A that references another module B without checking B's types, and defer the typechecking to runtime when dynamically linking the two modules. Would that be modular?
And what about languages that autocompile from sources?
Dependencies are not modular in general, whatever the type system. Since you need to know the API, it means that you need to have at least an agreement on method names, and maybe on types as well.
I don't think I quite know all the relevant jargon here, but I'm remembering an old programming language where there was a "Morphic" datatype which was part of the language, and thinking it may be a counterexample.
Morphics supported allocation, destruction, the addition or deletion of attributes, mutation of or reference to the value of an attribute, listing of the current set of attribute names, and "locking and unlocking" (when a Morphic was locked, addition or deletion of attributes, or mutation of or reference to attributes declared private, would fail. Morphics could only be unlocked in the same compilation unit that had most recently locked them).
There were no user-defined types at all. In practice, we defined what we called types procedurally by creating a Morphic, adding relevant named attributes to it, and locking it. Routines were first-class values, so an attribute of a Morphic could have a "routine" or "method" as its value (the last two distinguished because a "method" was a closure that contained a self pointer and so had a way to refer to the particular Morphic it was carried by, and a "routine" did not).
Independent compilation was easy, because "Morphic" was sufficient as a signature for linking. A signal value for a runtime error would be returned if you queried or attempted to mutate an attribute that a particular Morphic lacked at that particular instant.
If I were updating it, I think I'd add the idea of interfaces and invariants to allow some more useful automated checking; But there could be a lot of different Morphics from different sources that all satisfied the particular set of interfaces and invariants required by a particular module.
Are the "Interfaces and Invariants" what he means by "structural typing?"
Does repeating that something has to satisfy the "iteration" interface in a module that actually uses iteration over it count as repeating unnecessary information in many places?
Or is it the need for a context in which the interface name "iteration" can be resolved that he's against?
Also, many of the interfaces (though not the invariants) could be inferred. It seems that routines to check invariants couldn't be part of the object; that would be meaningless because if something were the wrong type it would carry the wrong invariant-checking routines -- so they would be checking a different set of invariants than the set required by the particular module at hand.
Would the invariant-checking routines (for example, a routine that checks to make sure that the polar and rectangular coordinates of a complex number refer to the same point) be what he considers to be redundant or repetitive information? Even if it were included in a module that wouldn't work correctly unless it were true, and not included in a module that refers to those attributes (say to build a density model of a region in complex space represented by a sample of complex numbers in a list) and which would work correctly regardless?
Or is checking invariants the kind of operation he considers to be leaking implementation information?
Curious....
at least in the sense Gilad Bracha seems to be using anti-modular. You can defer putting all the pieces of a program together until later and later but at some point, before you can run a program, you do have to put them together. You could probably even go as far as saying that meaning is anti-modular in this sense. Which is why most people use a more pragmatic definition of modular involving well-defined contracts between modules.
Source: http://lambda-the-ultimate.org/node/4298
Scala nugget: pattern matching and lists
def importResource(name:String, resource:Resource):Unit = {
  log.debug("importing " + name + " into " + root)
  val path = pathOf(name)
  createResource(path.toList)(resource)
}

private def createResource(nodes:List[String])(resource:Resource) = {
  val directory = nodes.dropRight(1).foldLeft(root)((directory, name) => {
    if(name.equals("")) {
      directory
    } else {
      val directoryOption:Option[AbstractDirectory] = directory.getDirectory(name)
      directoryOption.getOrElse({
        val subDir = directory.createDirectory(name) match {
          case result:AbstractDirectory => result
        }
        directory.add(subDir)
        subDir
      })
    }
  })
  directory.createIfNewer(nodes.last, _:Resource)
}
If it's not crystal clear to you what the code does, here is an overview:
- The first method takes a name and a resource
- The name is split up into its path elements
- The path is passed to the createResource method, which will create all directories that do not yet exist on the way to the resource and finally return a function that takes a resource as input and creates it in the already given directory.
A big issue I have with the code is the createResource method. It is simply very hard to name. createAllDirectoriesAndReturnResourceCreatingMethod would better illustrate how smelly that method really is. I have an idea on how to refactor this with Scala's pattern matching. This is the new version (after some red/green bar cycles):
def importResource(name:String, resource:Resource):Unit = {
  log.debug("importing " + name + " into " + root)
  val path = pathOf(name)
  createResource(path.toList, root, resource)
}

private def createResource(nodes:List[String], directory:AbstractDirectory, resource:Resource) {
  nodes match {
    case head :: Nil => directory.createIfNewer(head, resource)
    case "" :: tail => createResource(tail, directory, resource)
    case head :: tail => {
      val subDir = directory.getDirectory(head).getOrElse {
        val result = directory.createDirectory(head) match {
          case dir:AbstractDirectory => dir
        }
        directory.add(result)
        result
      }
      createResource(tail, subDir, resource)
    }
    case nil =>
  }
}
The first thing that strikes me about the second version is the imbalance between the different cases. The head :: tail case is not exactly a one-liner… but it should be. This smells like feature envy. We could ask the directory to getOrCreateDirectory and it would read much better. But that is another refactoring. First let's go through this one:
One thing I did not get at first in Scala is how you can iterate through lists using pattern matching. In order to get how that works, it is important to realise one thing about Scala's lists:
List("a", "b", "c", "d")
is equivalent to
"a" :: ("b" :: ("c" :: ("d" :: Nil)))
which, due to the right associativity of ::, is equivalent to:
"a" :: "b" :: "c" :: "d" :: Nil
Or in plain English: lists are not flat structures, lists are recursive structures. What that means in practical terms is that given any element in the list, it is very easy to split the list at that element into head (the current element) and tail (the rest of the elements).
For instance, given the element "b" above, head will be "b" and tail will be "c" :: "d" :: Nil.
Ok, not that hard. Now to the cool part: The :: operator is a case class which in short means that it can be used in pattern matching:
"a" :: "b" :: "c" :: Nil match { case h :: t => println(h + " -> " + t); … }
will assign "a" to the variable h and "b" :: "c" :: Nil to the variable t
We can now understand the cases above. In pseudocode:
private def createResource(nodes:List[String], directory:AbstractDirectory, resource:Resource) {
  nodes match {
    case head :: Nil =>  // the last element of the list, create the resource using head as a name
    case "" :: tail =>   // special case, an empty directory name. Skip to the next name:
                         // createResource(tail, directory, resource)
    case head :: tail => // head will be a directory name that we use to get or create the
                         // nextDirectory that is used with tail to call ourselves recursively:
                         // createResource(tail, nextDirectory, resource)
    case nil =>          // no more elements to traverse
  }
}
I’m far from done with the refactoring. I want to get rid of the special case of “” to begin with and as mentioned above remove some feature-envy but I think that the cases are more readable than the original code. At least in terms of where the problems lie in terms of special cases and bloated cases. Of course, it’s just an opinion and I reserve the right to change my mind tomorrow :)
If you write:
createResource(path.toList.filter(_ != ""), root, resource)
Then you can get rid of the ("" :: tail) case in createResource.
Jorge Ortiz
October 7, 2008 at 8:37 am
import java.io.File
var nodes = List("foo", "bar", "baz")
var root = new File("/")
var dir = (List(root) /: nodes) ((fs, n) => { if (n == "") fs else new File(fs.head, n) :: fs }) head
if (dir.mkdirs()) createResourceUnder(dir, resource)
Julian Morrison
October 7, 2008 at 10:46 am
Jorge: Thanks for the pointer. I think that it moves the problem rather than fixes it but I agree that it is better to deal with it earlier than as a special case so it is indeed a better solution.
I expect the "" case to disappear entirely when I fix other crappy code elsewhere. I need to get to the bottom of that.
johlrogge
October 7, 2008 at 11:44 am
Julian: Now we’re talking! :) A bit out of my comfort-zone at the moment but I will give it try and see how it goes to use your ideas in my code.
To me it seems like a form of the first example minus a poor abstraction of a filesystem, and also not addressing the fact that the last element is the name of the resource?
I must confess that I'm still a bit uneasy with one-letter variable names etc. While making the code compact, it is harder for me to follow.
I find that I have to use more of my "brain stack" when reading a one-liner like yours rather than writing it one step at a time… (though the elegance of code like yours impresses me).
I have similar issues with /: instead of foldLeft. I realize that this is probably a matter of what you're used to and that I may grow into it. An advantage of /: compared to foldLeft is that it is more recognizable in a formula, while foldLeft has to be read… and after all you do have to understand what foldLeft does anyway; it's not self-explanatory to a Java guy like me with no background in FP.
I love that I get suggestions like these, keep them coming! Reflection amplifies learning!
johlrogge
October 7, 2008 at 12:12 pm
I’m really cheating by using a lot of the Haskell conventions, and a dirty Scala trick of using head without braces or a dot (it’s not meant to be line-wrapped!). Also, I goofed by putting var when I meant val.
Anyway, the Haskell conventions here are single character variables (they actually make functional code easier to follow – variables are glue, watch the functions), and pluralized variable names (here: fs) indicating the tail of a list (so you often find consing or pattern matching that looks like x :: xs).
So we’re folding a list of File out of a list of String nodes (going forward through the nodes and building the result list by consing on the front – so the end result looks reversed) so as to wrap each dir level (starting from “/” or wherever else) in the two argument constructor of File, using the head of the accumulator list as the parent dir (it was what we built last iteration for the previous node). Then when our list is built (it’s not wasteful, there is only one of each File and they point to each other) you snip off the top one, and it’s the whole directory hierarchy. Then just ask it to mkdir itself.
The (as /: bs)((as, b)=>…) notation in Scala is a visual hint of what it does. The “as” are the beginning of a list to be folded out of “bs”, which will be done by starting with the provided “as” and then using the earlier “as” and the current “b”. Some people complain it’s terse, but I personally find the syntactic sugar more instructive than just the words “fold left”.
Julian Morrison
October 8, 2008 at 12:56 am
Also, the mixing up of creating the resource and the directory is half of what makes your imperative code hard (otherwise could be barely longer than the FP code). It’s simpler to create the directory *then* create the resource in it ;-)
Julian Morrison
October 8, 2008 at 1:06 am
Thanks for the clarification, I actually was able to figure out how the code works but it was out of my comfort zone :) The reason the directories are mixed with the resource name is that it was easier to use the original method (way back), which basically looked like:
import("/adirectory/afile.bin", inputStream)
Now the code is evolving towards:
import("/adirectory", namedResource)
And you're right, indeed mixing directories and resources was the hard part. The original Java code was more object oriented and used a tell-don't-ask style that I managed to mess up totally when converting to Scala due to my lack of tools in the language (not that the language does not have tools, just that I have not been able to pick them up).
Yesterday I moved some of the ugliness into the directory class and the code now looks like this:
private def createResource(path:List[String],
directory:AbstractDirectory,
resource:Resource) {
path match {
case head :: Nil => directory.createIfNewer(head, resource)
case "" :: tail => createResource(tail, directory, resource)
case head :: tail => createResource(tail, directory.getOrElseCreateDirectory(head), resource)
case nil =>
}
}
Which is a bit better, but if, as you said, the named resource and the directory path were separate, pretty much all of this would go away (and it would be more intuitive to use).
I think this is a cool thing about Scala: even if I code Java in Scala, the ugliness that was hidden in the redundant Java explicit type soup really comes out and stares me in the face. It's like a mist is clearing (and I'm not always happy about what I find).
I see your point about /: \: foldLeft does seem to work better as an operator. (After all, what does "foldLeft" mean anyway :)) but it makes the code look mighty strange to us Java folks :)
Thanks for your comment!
johlrogge
October 8, 2008 at 6:06 am
Source: https://johlrogge.wordpress.com/2008/10/06/scala-nugget-pattern-matching-and-lists/
The class writes logs to the system EventLog, a database, a text file, or any combination of the three, depending on how it’s configured. Let’s dive in and see what’s needed to make this work.
Global.asax
I wanted this class to be usable in any web application, regardless of where that app was deployed. However, I may not always be able to write to the EventLog, or maybe I won’t have a database for an application. I may not have file permissions to write to a file, either. Instead of commenting out code, I used boolean constants to turn on and off each logging routine. These constants are defined and set in the Global file like this:
public static bool LogErrorToDatabase = true;
public static bool LogErrorToFile = false;
public static bool LogErrorToEventLog = false;
The Custom Error Class
The following class is attached for download and license-free usage: error-custom-class-code. Since you can view it in its entirety there, I’ll only post the most relevant text here to save space.
To access the EventLog, you need to use the System.Diagnostics namespace by placing a using statement at the top of the class. We also need the System.IO one for writing a text file:
using System.Diagnostics;
using System.IO;
We’re also accepting 4 parameters. First is the Exception followed by the screen and event or method where the error occurred. Finally we’re saving the user’s name. These can be changed or added to in order to fit your own business needs.
public void LogError(Exception ex, string sScreen, string sEvent, string sUsername)
{
//code discussed below
}
Logging the Error
Inside the LogError method, we’re going to provide three separate functions: one each for logging to the EventLog, the database, and to a file. Since any combination of the three may be desired, we’re evaluating the global variables we set earlier.
Writing to the EventLog
if (Global.LogErrorToEventLog == true)
{
    // Write the error to the event log.
    string sAppName = sScreen + " :: " + sEvent;
    EventLog oEventLog = new EventLog();
    if (!EventLog.SourceExists(sAppName))
    {
        EventLog.CreateEventSource(sAppName, "Ellefson Consulting Log");
    }
    string sMessage = ex.ToString();
    // log the entry
    oEventLog.Source = sAppName;
    oEventLog.WriteEntry(sMessage, EventLogEntryType.Error);
}
Looking at the code, we can see that we’re combining the sScreen and sEvent parameters into the sAppName variable that will be used in the EventLog. We then use this to see if the EventSource is registered in the computer’s registry and if not, register it. This is not something you otherwise need to worry about, but this check must be done to avoid an error in the event the source is not registered.
The log we want to write to will be called "Ellefson Consulting Log", which will be created if it doesn't exist. The rest is self-explanatory.
When you go to Computer Management and view the Event Log, this is what you will see:
Writing to a Database
After the EventLog code, we have another block that writes to a database, which is SQL Server 2005 in this case. Here I’m using the SQLHelper file to encapsulate my data access object usage, but any data connection code will work. In my database is a stored procedure called “uspLogError”, which accepts the same parameters as the other code here, writing the results to a table.
if (Global.LogErrorToDatabase == true)
{
    int iErrorID = Convert.ToInt32(SqlHelper.ExecuteScalar(Global.ConnectionString, "uspLogError", ex.ToString(), sScreen, sEvent, sUsername));
}
Writing to a File
Finally, the code writes to a text file. In this sample, the file is located in the C directory but you may want to put it somewhere else. The “true” option indicates we want to append to the file if it exists so we don’t lose the record of past errors. Without this the file would be overwritten. I’ve separated the parameters onto their own lines to make the text file more readable.
if (Global.LogErrorToFile == true)
{
    // create a writer and open the file
    TextWriter tw = new StreamWriter("C:\\ErrorLog.txt", true);
    // write a line of text to the file
    tw.WriteLine("Error: " + ex.ToString());
    tw.WriteLine("Error location: " + sScreen + sEvent);
    tw.WriteLine("Username: " + sUsername);
    tw.WriteLine("Application: " + Global.ApplicationName);
    // close the stream
    tw.Close();
}
Using the Class
At the top of my example .aspx page, I need a using statement to reference the custom class I’ve created. Since my application’s namespace is EllefsonConsulting and I’ve created my class in the Business Logic Layer folder I set up, my using statement looks like this:
using EllefsonConsulting.BLL;
Now I need to create an object from my class to be used throughout my .aspx page. This is done just above the Page_Load event that should already be part of your page by default:
Errors cError = new Errors();
The last step is to use the cError object in my try-catch block:
try
{
//some code
}
catch (Exception ex)
{
//some code to handle the error
//then log the error
cError.LogError(ex, Page.ToString(), "grdNews_SelectedIndexChanged", Convert.ToString(Session["Username"]));
}
Let’s look at this final line item by item. First we are refering to the cError object and using its LogError method. Then we are passing in all the needed parameters. First is the Exception “ex”, so we know what went wrong.
Next is the Page title. In this case, the page is my "news.aspx" page, which is what will be written to the log. This lets me know where the error occurred. By using the Page object we don't have to keep changing this parameter for every method call.
Since the page isn’t enough, I also want to know what method caused the error, so I’ve copied and pasted the event handler title. In this case, that’s grdNews_SelectedIndexChanged. This will need to be changed for each method.
Finally, I want to know which user experienced the error in case I need to follow up with them and learn exactly what they were trying to do when it ocurred. Here I am copying their name out of a session variable.
Conclusion
With code reuse being one of the big advantages of object-oriented programming, having all your error logging handled by an identical routine is smart, consistent, and saves time while also giving you flexibility in how you process the logs. This class can be altered to provide other information as you see fit, but it should give you a good start on logging your errors with only one line of code per method. Happy coding!
Source: https://ellefsonconsulting.wordpress.com/2009/03/19/custom-error-logging-class-in-c/
The News Sun (Kendallville, Indiana), Saturday, November 9, 2013
GOOD MORNING
Shipshewana to host Holiday Light Parade
SHIPSHEWANA — The holiday season kicks off Saturday in Shipshewana with the town's fifth annual Holiday Light Parade.
EN board approves personnel changes
BY DENNIS NARTKER dnartker@kpcmedia.com
KENDALLVILLE — East Noble school board members approved the school district’s personnel changes Wednesday night. School trustees voted 3-1 to approve a list of resignations, with trustee Barb Babcock opposed. Trustees John Wicker and Dexter Lutter and board president Dan Beall voted to approve the administration’s recommendation. Trustees Steve Pyle, Carol
Schellenberg and Dr. David Holliday were absent. Four of the seven trustees constitute a quorum, and a vote is official if a majority of the trustees present approve or disapprove. These resignations were approved: Jennifer Duerk as functional life skills teacher at Wayne Center Elementary effective Nov. 18; Elaine Taulbee as an instructional assistant at East Noble High School effective Nov. 8; Joanne Mazzola as food service worker at Wayne Center
Elementary effective Nov. 8; and Ryan Slone as winter percussion director at East Noble High School effective Oct. 8. Babcock said after the meeting it’s not fair to the school district when a teacher leaves after agreeing to a contract. The district must now find and hire a new teacher after the school year has started. She realizes teachers can resign during the school year for a variety of reasons and the school district has no recourse but to let them go. She looks at the situation
from the school district’s side, and how difficult it is sometimes to find a replacement. It’s a problem that won’t go away, said Babcock. Trustees approved without comment the termination of Steven Koons as sports and fitness instructor at North Side Elementary effective Nov. 4, and the reassignment of food service assistant Beth Neuhaus from Avilla Elementary to Wayne Center Elementary. Jeffery Devers was hired as SEE CHANGES, PAGE A6
Hiring turns out OK
DENNIS NARTKER
Cast members for East Noble Middle School’s production of “Crumpled Classics” are, from left, front row: Tim Tew, Johnathon Clifton, Erin Bloom, Bailey Zehr and Nicole Brunsonn; back row, Savannah Harper, Bailey Wilbur, Mattie
Fitzharris, Madelyn Summers, Keely Savage, Daylyn Aumsbaugh, Savannah Myers, Abby Vorndran and Lexie Ley. Cast members not shown are: Brynna Crow, Kayla Garcia, Kylie Handshoe and Karlie Miller.
ENMS transporting classics into contemporary settings BY DENNIS NARTKER dnartker@kpcmedia.com
KENDALLVILLE — When a cast of East Noble Middle School thespians “modernize” classics such as “Romeo and Juliet,” “Sherlock Holmes,” and “Frankenstein” it can only lead to hilarious results. East Noble Middle School will present “Crumpled Classics” Nov. 15 and 16 at 7 p.m. in the middle school auditorium. Tickets will cost $3 at the door. Retired East Noble librarian and longtime area amateur actress Jo Drudge is directing the show. Drudge also coordinates
the annual Gaslight Playhouse Children’s Theatre Summer Workshop. Asked how many years she has been directing children’s theatre in Kendallville, Drudge laughed and said she doesn’t keep an exact count, but it’s more than 30 years. Drudge is assisted by high school students Michael Johnston and Jocelyn Hutchins. “Crumpled Classics” involves making “Romeo and Juliet,” “Frankenstein,” “Phantom of the Opera,” “Sherlock Holmes” and the “Legend of King Arthur” relevant to today’s audiences. While the true authors of the
classics may be rolling in their graves, the audience will be laughing in their seats, according to Drudge. They will see Romeo and Juliet meet in a fast-food restaurant and Frankie Stein try to assemble the perfect prom date. A theatrical agent will become a monster, and a lazy teen will become a king. The 90-minute production uses a minimum of props and costumes with simple, representable sets for each story, according to Drudge. The show is produced by special arrangement with Pioneer Drama Services, Inc. and playwright Craig Sodaro.
Powerful typhoon slams Philippines
MANILA, Philippines (AP) — One 235 kph (147 mph) with gusts of 275 kph (170 mph) when it made landfall. By those measurements, Haiyan would be comparable to a strong SEE TYPHOON, PAGE A6
AP photo: A house is engulfed by the storm surge brought about by powerful typhoon Haiyan that hit Legazpi city, Albay province Friday, about 325 miles south of Manila, Philippines.
Dow Corning to help address community needs KENDALLVILLE — The Dow Corning Foundation is teaming with the Noble County Community Foundation to establish a community needs fund designated for the Kendallville area. “This is an excellent opportunity for our employees to be
engaged in funding decisions in the community and for us to broaden our understanding of where we can make the most impact,” said Janice Worden, Kendallville site manager. “Our employees are looking forward to working with Noble County Community Foundation represen-
tatives to make a difference in the community.” Dow Corning Foundation said its mission is to: • improve scientific literacy by increasing access to math, science and technology education at the pre-university level; • improve vitality and quality
of life in communities where Dow Corning employees work and reside; and • increase the utilization of sustainable, innovative technologies to benefit society. “This is the first time we have partnered with a community SEE NEEDS, PAGE A6
Public Meetings •
SATURDAY, NOVEMBER 9, 2013
Kendallville Park Board meets at 6:30 p.m. at the Youth Center, 211 Iddings St.
Tuesday, Nov. 12
Monday, Nov. 11
Noble County Board of Commissioners meets at 8:30 a.m. in the Commissioners Room of the Noble County Courthouse.
Noble County Drainage Board meets at 1:30 p.m. in the Commissioners Room of the Noble County Courthouse.
Albion Town Council meets at 6 p.m. in the Council Meeting Room of the Albion Municipal Building.
Rome City Town Council meets at 6:30 p.m. in Town Hall.
Kendallville Board of Public Works meets at 8:30 a.m. in City Hall.
Kendallville Public Library Board of Trustees meets at 7 p.m. at the Limberlost Public Library in Rome City.
DENNIS NARTKER
Citizen Academy graduate Kendallville Mayor Suzanne Handshoe recognized Vanessa Olsen of Kendallville at Tuesday night’s City Council meeting for graduating from the Citizen Academy, a free 10-week course for citizens to learn about Kendallville city government. Four people started the course, but Olsen was the only one to complete it. The was the third year the mayor has held the academy. In a brief, emotional speech, Olsen praised the cooperation and courtesy she saw between city officials, department superintendents and employees and said Washington could learn from how well Kendallville’s municipal government operates.
Briefs •
Wednesday, Nov. 13
Noble County Council meets in special session at 8:30 a.m. in the Commissioners Room of the Noble County Courthouse.
Kendallville Redevelopment Commission meets at 8 a.m. in the clerk-treasurer's office conference room.
Thursday, Nov. 14
Noble County Public Library Board of Trustees meets at 6 p.m. in the central library in Albion.
East Noble School Corp. public meeting on the future of the East Noble Middle School building is at 6:30 p.m. at the middle school, 401 Diamond St., Kendallville.
Granada Drive reopens to traffic
KENDALLVILLE — The city of Kendallville announced Friday afternoon that Granada Drive has reopened to traffic from both the north and south ends, as well as from Pueblo Drive. City officials said connections from both Del Norte Drive and Cortez Drive onto Granada Drive will be paved Monday if weather permits.
Girl Scouts sending packages to soldiers LAGRANGE — LaGrange Girl Scouts will gather at LaGrange United Methodist
Church on Monday, Veterans Day, to put together a large shipment of care packages they are donating to soldiers stationed overseas. The girls have been gathering supplies for the last couple of weeks, collecting personal items such as razors, soap and wet wipes and other items such as writing paper, pens and fruit snacks. The Girl Scouts will start to arrive at the church, 209 W. Spring St., around 5 p.m. to start sorting items and creating the packages. The Scouts will continue to accept donations at the church until around 7 p.m., said Shay Owsley, a LaGrange Girl Scout troop leader. Any questions about items needed may be directed to Owsley at 237-1184.
Police step up visibility during holiday season
FROM STAFF REPORTS
FORT WAYNE — In an effort to make the upcoming Thanksgiving holiday travel period safer, the Indiana State Police will be joining approximately 250 other law enforcement agencies statewide in participating in the annual Safe Family Travel campaign, a news release said. Beginning Friday and running through Sunday, Dec. 1, the Indiana State Police will be conducting high-visibility enforcement efforts including sobriety checkpoints and saturation patrols targeting impaired drivers and unrestrained motorists.
BY MAUREEN GROPPE Lafayette Journal and Courier
WASHINGTON —.” Bayh, the first Indiana governor since 1830 to become a father while in
Sleigh Bells Craft Show: November 16, 2013, 8 a.m.-2 p.m., West Noble High School, 5094 N. US 33, Ligonier, IN. Cookie Walk, Face Painting & Pictures with Santa 11 a.m.-1 p.m. For more information, call Karena Wilkinson at 574-457-4348 or sleighbellscraftshow@hotmail.com
SCRATCH & DENT SALE: Need a mower? Now's the time to buy! While supplies last. Some equipment might have small dents and/or scratches. Full warranty on all equipment. 11400 N 350 W, Ligonier, IN • 260-593-2792
Independent Full Gospel Church, 1302 South Gonser St., Ashley, IN, is celebrating their 10 Year Anniversary at their present location.
and their passengers. To make the Thanksgiving holiday travel period safe, police say, observe the following safety rules: • If you are planning to travel make sure you are well rested, a fatigued driver is a dangerous driver • avoid tailgating and remember the two-second rule; • make sure everyone is buckled up; • put down the electronic devices and drive; • don’t drink and drive; and • move over and slow down for emergency and highway service vehicles.
Former Sen. Bayh’s twins turn 18
In 2012, alcohol-impaired driving in Indiana was linked to 150 fatalities (an increase from 140 fatalities in 2011) and 2,112 injuries. Alcohol-impaired collisions were less than 3 percent of all Indiana crashes, but accounted for 20.3 percent of Indiana's 779 traffic fatalities in 2012. Roughly six out of 10 fatalities in alcohol-impaired collisions were the impaired driver from 2008 to 2012. Approximately 80 percent of serious fatal and incapacitating injuries from alcohol-impaired collisions occurring during the 2008-2012 period were suffered by impaired drivers
PHOTO CONTRIBUTED
Susan, Beau, Nick and former Sen. Evan Bayh pose for a photo. The Bayh twins have grown up since their births on Nov. 8, 1995, while their father was Indiana’s governor. The boys will be heading to college next fall.
office,. Both are seniors at St. Albans college preparatory school in Washington, their father’s alma mater. Nick, an avid tennis player, hasn’t picked a college yet. “Fortunately for him,” Bayh said, “he’s smart like his mother.” Beau has been recruited to play lacrosse for Harvard — which his father jokingly referred to as the “IU of the East.” “Since Indiana and Purdue don’t have lacrosse teams, he had to go to
Harvard instead,” Bayh said. Only Harvard costs a little more than the $350 a semester that Bayh remembers paying as a freshman in Bloomington. When Bayh told Susan that Beau had sealed the deal with Harvard’s lacrosse coach and she would be a “Harvard mom,” Susan started to cry. When Bayh looked up Harvard’s tuition, he said, “then I started crying.” Finances aside, Bayh said he’s been getting “prematurely melancholy” about the prospect of seeing them leave home next August. But could their leaving home clear the way for a return to politics and a possible bid for governor, as some Democrats speculate? “My sons’ leaving home will clear the way for us to clean their rooms!” Bayh responded.
Sunday, November 10 The public is invited and encouraged to attend this special celebration, starting with the 10:30 AM worship service featuring “The Dotsons,” an anointed Southern Gospel Singing family. John and Yavonna Dotson have been singing together for over 22 years. They did their first recording project in October 1998. In February 1999 they were given the opportunity to open for the legendary group, The Kingsmen. Shortly after the concert the phone started ringing with inquiries about booking the Dotsons. They are now in their 11th year of traveling and singing for the Lord with a total of 6 recording projects. Immediately following the morning worship service, there will be a carry-in dinner in the church fellowship hall. There will also be an open house from 2 p.m.-4 p.m.
Everyone is welcome!
Funeral Home, 403 South Main Street, Kendallville. 347-1653. Serving Kendallville Since 1943.
Mobile Home Park & Autumn Storage, 900 Autumn Hills Dr., Lot 5A, Avilla. 897-3406.
LIGONIER TELEPHONE COMPANY: Internet Access • Touch Tone • PBX's • Call Waiting & Forwarding • Cellular • Direct TV • Key Systems • Long Distance Service. 414 S. Cavin • Ligonier. 894-7161.
OVERHEAD DOOR COMPANY OF THE NORTHERN LAKES. 260-593-3496 • 800-334-0861. Est. 1963.
Asphalt paving: Paving, Patching & Sealing • Professional Striping • Driveways & Parking Lots. Free Estimates. 30 Years in Asphalt Business. William Drerup 897-2121, Bryan Drerup 897-2375.
For a detailed listing of churches in your area, log on to kpcnews.com/churches. THE NEWS SUN will print the area church listings the first weekend of each month.
ENERGY-SAVING PROGRAMS FOR BUSINESSES AND SCHOOLS Saving energy is not just good business, it’s good for the community and makes a positive impact on your customers. Chances are you’ve already discovered the benefits of making changes to your lighting. Now you can learn even more ways to become energy efficient at ElectricIdeas.com from Indiana Michigan Power. You’ll find information about incentives, rebates, audits, and custom programs for energy efficient building improvements. Find the right energy-saving programs for your facility.
Visit ElectricIdeas.com today!
Privatize … By JOHN STOSSEL. … out of sheer generosity, that someone dies at just the … In Iran, it's legal to sell organs. It's the rare thing that Iran does right. … People … Later, …
Letter Policy • The News Sun welcomes letters to the Voice of the People column. All letters must be submitted with the author's signature, address and telephone number. The News Sun reserves the right to reject or edit letters on the basis of libel, poor taste or repetition. Mail or deliver letters to The News Sun, 102 N. Main St., P.O. Box 39, Kendallville, IN 46755. Letters may be emailed to: dkurtz@kpcmedia.com. Please do not send letters as attachments.
JOHN STOSSEL is host of "Stossel" on the Fox Business Network. He's the author of "Give Me a Break" and of "Myth, Lies, and Downright Stupidity." More information at johnstossel.com. To read features by other Creators Syndicate writers and cartoonists, visit creators.com.
Voice Of The People • Fee for yard sale is crossing a line To the editor: Kendallville’s history of budget woes for the last 10 years is a familiar story. Unlike the working class, who know how to tighten their belts in lean times, governing bodies never seem to figure it out. Instead of long-term solutions, they keep trying to fix compound fractures with Band-Aids. Kendallville has a unique way to make up for shortfalls. Just pass an ordinance
involving a tax, a fee or surcharge, call it what you will. It’s money coming out of taxpayer pockets. We’ve had a raise in transfer site tickets, sewage rates raised, drain line insurance, which is useless, multiple housing fees. The car show on Main Street used to be free; entries now pay $12. Farmers Market used to be free for vendors. Now they pay a fee. The Amish have not come back. Their latest brainstorm, pass an ordinance requiring a $5 fee to have a yard sale. That is crossing the line. I don’t know of any county law, state law or federal law
that prohibits a person from selling personal possessions on their own property, unless it’s contraband. Maybe they should have thought twice about all those hefty pay raises they gave city employees. My SSI will be a hefty $204 for the year. My one hope before I die is they will pass some ordinance taking my whole check, instead of sucking it out a piece at a time, like a leech. The community has a voice. Use it to stop garbage ordinances like this. Douglas Terry Kendallville
‘Hunting with Dad’ is a continuing tradition Another paper slips out of my dad’s collection of poems, notes and books that he left me. It is a yellowed piece of paper with a typewritten poem. My dad signed the poem Jan. 30, 1982. The poem, “Hunting with my Dad,” fell out at the right time as we now live in the middle of deer hunting season. “At the top of the pines, the wind would moan I love you here, in this my home, Don’t go away, stay with me lad When I went hunting with my Dad.” I get the call to go over to Aaron’s garage. Jonah just got his first deer with his cross-bow. My immediate reaction is a cold feeling in my gut; one of fear or pride, I am not sure. I hop on my bike and head over to their garage. Jonah is waiting for me. The cold feeling goes away as I see his face. It isn’t so much that he is thrilled or happy. He is proud as he says, “Nannie, now I am a provider for my family.” Jonah is 9. His dad and his Uncle Adam are in the garage working on the deer. Sorry if this is a bit too much for your breakfast table, but this is the way it is done. First they bleed the deer and then they skin the deer. They teach Jonah the ways of preparing the deer. They teach him the ways of the land. “Sometimes the leaves were dry and crispy And the haze would lay so low and wispy I remember as a little lad When I went hunting with my Dad.” My dad and his three brothers hunted with their dad from the time they could
hold a shotgun. It was there they learned respect for life, for the beauty of the land, and for the bond between father and son. When I lived on the farm my three boys hunted with their dad. Opening day was such an event at our house. No one could sleep the night before because of the excitement. I got up early to deep fry homemade doughnuts.
LOU ANN HOMAN-SAYLOR
I made dozens of these doughnuts for all the guys that showed up at the farmhouse. I dipped them in sugar and placed them on the table, and they disappeared as quickly as I made them. The boys actually made maps showing where each would be hunting. There was "Doc's Woods" and "Squirrel Woods." With mounds of hunting gear gone from the house I would sweep up the kitchen floor and put a giant pot of chili on the wood stove, waiting for the reports. One by one each would show up with stories of the buck that got away, or they would come for help to drag the deer out of the woods. Luckily I never had to do that! "On my right there'd go Brother Jim To my left, Jerry, Keith and him Always walking where the marsh was bad When we were hunting with my Dad." I watch my sons pass this tradition on to Jonah. Matthew already shot his
first turkey. Someday he will bring home the deer meat as well. These children are taught to use all parts of the animal. Maybe they will tan the hide or use the antlers for door knobs or chandeliers. I also know that in a few days after the carcass hangs, Karen’s kitchen will become the processing center where each boy will help grind and place the meat in bags for winter. Nothing will be left except a few bones which will be taken out to the farm for the coyotes. I do not hunt, but I understand their passion. I know there is something wonderful in passing this on down to son or daughter. From the outside I have watched these traditions pass down from father to son. The remnants of the bonding remain. Even with the farm gone from my own life, I am still part of this hunting tradition. I keep the coffee going and provide a place for stories at my kitchen table. What is better on cold autumn nights than telling these stories? Hopefully Matthew and Jonah will grasp these traditions and pass them on to different children where the land is still strong and they become the teachers. “Now my sigh is dim and blurry But I still see the wood fire, cheer For I remember and feel glad For going hunting with my Dad.”.
What Others Say •. The Post and Courier Charleston, S.C
Deaths & Funerals •
Pierrette Biancardi
ANGOLA — Pierrette A. Biancardi, 81, of Angola, Indiana, passed away Wednesday, November 6, 2013, at the Select Specialty Hospital of Fort Wayne, Indiana. She retired from Cameron Memorial Hospital where she was a Registered Nurse. She was born on October 11, 1932, in Chicago, Illinois, to Henry Biancardi and Gladys (White) Gougeon. She married Ferdinand Biancardi on October 26, 1954, in Cook County, Illinois. Pierrette led a life of service. After completing her nursing degree in her birthplace of Chicago, she dedicated over 40 years of her life to a career in the field. Within her lengthy tenure, Pierrette became a figure of mentorship and guidance within the Cameron Hospital community. A respected and instrumental figure within the OBGYN unit of Cameron, she invested her worldly life into bringing new life into the world. In retirement, Pierrette spent her time amongst community and family. She was a staple figure at card nights at the Lion's Club, favoring Bridge, and spent much of her time enjoying the company of other senior players. As the matriarch of the Biancardi family, her home played center stage during holidays and birthdays. She was notorious within her family for doting heavily on her grandchildren and preparing weeks worth of food for a single event. Pierrette is survived by her three sons and a daughter-in-law: Henry Biancardi of Fort Wayne, Indiana; Phil and Joan Biancardi of Angola, Indiana; and Dan Biancardi of Angola, Indiana. She is also survived by eight grandchildren: Matthew Biancardi, Brian Biancardi, Joseph Biancardi, Rosemarie Biancardi, Samantha Biancardi, Bianca Biancardi, Frederick Biancardi and Kaydee Biancardi. She was preceded in death by her parents; her husband, Ferdinand Biancardi, in August of 1975; her son, Fred Biancardi Sr. on February 16, 2013; her brothers, Roland and Richard Gougeon; and her sister, Constance White. Services will be at 10 a.m. Tuesday, November 12, 2013, at St. Anthony of Padua Catholic Church, Angola, Indiana, with one hour of visitation prior to the service at the church. Father Fred Pasche will be officiating. Burial will be at Highland Park Cemetery, Fort Wayne, Indiana, at 2 p.m. Tuesday, November 12, 2013. Visitation will also be on Monday, November 11, 2013, from 3-6 p.m., with a 6 p.m. prayer service at the Weicht Funeral Home, Angola, Indiana. Memorials may be made to the family in care of Henry Biancardi. You may sign the guestbook at … com.
George Bell Jr.
AUBURN — George W. Bell Jr., 68, passed away Thursday, November 7, 2013, at Parkview Regional Medical Center in Fort Wayne. He was born October 10, 1945, in Fort Wayne. His parents were George W. Bell Sr. and Verna (Franks) Bell. George worked for over 30 years for Dana/Eaton Corporation in Auburn before retiring in 2001. He was a member of the Moose Lodge of Auburn. His life was his family, horses, hunting and loved watching Western movies. Surviving are two sons and two daughters, Jeff Bell of Attica, Jerry Bell of Angola, Kimberly (Eric) Bell of Avilla and Amy Alday of Fort Wayne; nine grandchildren; and one great-grandchild. He was preceded in death by his parents and a great-granddaughter, Addison. Services are at 4 p.m. Monday, November 11, 2013, at Feller and Clark Funeral Home, 1860 Center St., Auburn, Ind., with the Rev. Bob Bell and Pastor Jerry Weller officiating. Burial is in Fairfield Cemetery, Corunna, Ind. Calling is two hours prior to the service Monday from 2 to 4 p.m. at the funeral home. To send condolences visit … com.
Danny Wilcox
ANGOLA — Danny C. Wilcox, 61, of Angola died Thursday, Nov. 7, 2013, at St. Joseph Hospital in Fort Wayne. Funeral arrangements are pending at Beams Funeral Home in Fremont.
Kathleen Jackson
Sandra Campbell
PLEASANT LAKE — Kathleen Louise "Kate" Jackson, 60, died Thursday, November 7, 2013, at her home in Pleasant Lake, Indiana. She was a special education teacher at the Prairie Heights Elementary School for over 30 years. She was a member of the Land of Lakes Lions Club, past president of the Hamilton Lions Club and past District Governor out of District B of the Indiana Lions Club. Kate was born March 19, 1953, in Angola, Indiana, to Robert David and Margaret Marian (Harris) Jackson. She is survived by her brother and sister-in-law, John C. and Kathy Jackson of Pleasant Lake, Indiana; her nieces and nephews, Lisa and Aaron Starkey, Christy and Brad Mills and Mitch and Miranda Jackson; and her great-nieces, Lauren Mills and Alaina Mills. She was preceded in death by her parents. Services will be at 11 a.m. Wednesday, November 13, 2013, at the Pleasant Lake United Methodist Church with Pastor John Boyanowski officiating. Burial will be in the Pleasant Lake Cemetery. Visitation will be from 4-8 p.m. at the Pleasant Lake United Methodist Church, with a 7:30 p.m. Lions Club service. In lieu of flowers, Kate's request was to make memorial donations to the Steuben County Humane Society, Angola, Indiana. Weicht Funeral Home in Angola is in charge of arrangements. You may sign the guestbook at www.weichtfh.com.
KENDALLVILLE — Sandra Lea Campbell, 73, of Angola died Wednesday, Nov. 6, 2013, at her home in Steuben County. Mrs. Campbell had been employed in the past at Foundations in Albion. She was a member of Lake Gage Congregational Church in Angola. She was born in Kendallville on April 12, 1940, to George and Constance (Browand) Hampshire. Her husband Forrest Campbell preceded her in death. Surviving are a daughter, Tammy and Terry Danning of Angola; a grandson; a brother, R.D. and Jane Hampshire of Eagle Island near Rome City; and many nieces and nephews. She was also preceded in death by a sister, Arcille Fiandt Workman, and a brother, Robert Hampshire. Funeral services will be Monday at 1 p.m. at Hite Funeral Home in Kendallville, with visitation from 11 a.m. until the service begins. Officiating the funeral service will be Steve Altman. Burial will be at Lake View Cemetery in Kendallville. Send a condolence to the family at … home.com.
Mark Staulters FORT WAYNE — Mark W. Staulters, 51, of Fort Wayne died Thursday, Nov. 7, 2013, in Fort Wayne. Funeral arrangements are pending at Beams Funeral Home in Fremont.
Steven Pierce KENDALLVILLE — Steven Pierce, 58, of Kendallville died Friday, Nov. 9, 2013, at Parkview Regional Medical Center in Fort Wayne. Funeral arrangements are pending at Hite Funeral Home in Kendallville.
Munson Baughman
AUBURN — Munson M. Baughman, 81, of Auburn died Tuesday, November 5, 2013, at Betz Nursing Home in Auburn. Munson was born Jan. 15, 1932, in DeKalb County to Eugene and Ruth (Berry) Baughman. He was a 1950 graduate of St. Joe High School. He served during the Korean Conflict in the 3rd Infantry Division of the United States Army where he received a Bronze Star. He married Evelyn L. Diederich on Aug. 15, 1954, in the Zion Lutheran Church in Garrett, and she passed away Oct. 31, 2002. Mr. Baughman worked for the Dana Corp Spicer Clutch Division in Auburn, retiring after more than 33 years of service. He was a member of the Orland American Legion and was an avid fisherman, bowler, gardener, and loved fine dining in the Auburn area. Surviving are a son and daughter-in-law, Gary M. and Carrie Ann Baughman of Fremont; a daughter, Catherine A. Baughman-Clark of Virginia; three grandsons, Blake Baughman of Auburn, Andrew Baughman of Virginia, and Stephen Clark of Virginia; two great-grandchildren, Gracelynn Baughman and Hunter J. Clark, both of Virginia; three brothers and sisters-in-law, Donald Baughman of Auburn, Jordan Wayne and Mary Lou Baughman of Butler, and Arthur and Carolyn Baughman of Camden, Mich.; four sisters and a brother-in-law, Mary Warfield of Garrett, Virginia Aschleman of Auburn, Jane and Hollis Bales of Auburn and Charlotte Rogers of Kendallville; and a sister-in-law, Wilma Baughman of Garrett. He was preceded in death by his parents; wife; a brother, Robert Baughman; and two sisters, Arlene Beard and Josephine Sowles. Services will be at 11 a.m. today, Saturday, Nov. 9, at Feller and Clark Funeral Home, 1860 Center St., Auburn, with Pastor Roger Strong officiating. Burial will take place in Woodlawn Cemetery in Auburn, with military graveside services being conducted by the U.S. Army and the Auburn American Legion. Visitation was from 3 to 7 p.m. Friday at the funeral home. Memorials may be directed to the Auburn American Legion or the Wounded Warrior Project. To send condolences, visit … com.
Young Family Funeral Home
Fackler Monument: "Over 400 monuments inside our showroom." Custom Monuments. 411 W. Main St., Montpelier, OH 43543. 800-272-5588 • 260-927-5357. Hours: Mon.-Fri. 9-5. facklermonument.com
Report stirs new confusion in Arafat death RAMALLAH, West Bank (AP) —.”
Lotteries • INDIANAPOLIS (AP) — The following lottery numbers were drawn Friday. Indiana: Midday 9-9-8 and 7-7-2-0. Evening: 5-4-5 and 0-7-1-0. Cash 5: 6-12-13-18-24. Mix and Match: 3-5-27-3039. Quick Draw: 4-9-15-19-27-38-39-41-42-44-50-53-5456-59-64-67-69-71-79. Poker Lotto: Ace of Spades, King of Hearts, Ace of Diamonds, Queen of Diamonds, 4 of Hearts. Mega Millions 41-42-51-57-65. Mega Ball: 7. Megaplier: 2. Michigan: Midday 2-9-4 and 4-4-1-9. Daily 0-8-0 and 2-1-2-0, Fantasy 5: 05-06-14-29-32, Keno: 10-11-16-1721-24-25-30-32-33-35-38-39-40-49-52-54-58-61-72-78-79. Poker Lotto: 5 of Diamonds, 8 of Diamonds, 5 of Hearts, 6 of Spades, 9 of Spades. Ohio: Midday 7-8-2, 7-9-5-0 and 0-2-3-3-5. Evening 3-8-1, 4-4-6-7 and 1-0-5-9-3. Rolling Cash 5: 03-15-21-3739. Illinois: Hit or Miss Morning 01-04-05-07-08-09-11-1314-15-21-23, GLN : 2; Midday 5-6-0.
Wall Street Glance • BY THE ASSOCIATED PRESS
Friday’s Close: Dow Jones Industrials High: 15,764.29 Low: 15,579.35 Close: 15,761.78 Change: +167.80 Other Indexes Standard&Poors 500 Index: 1770.61 +23.46 NYSE Index: 10,032.13 +107.76 Nasdaq Composite Index: 3919.23 +61.90 NYSE MKT Composite:
2422.98 +19.73 Russell 2000 Index: 1099.97 +20.88 Wilshire 5000 TotalMkt: 18,798.63 +249.84 Volume NYSE consolidated volume: 3,704,839,368 Total number of issues traded: 3,174 Issues higher in price: 1,778 Issues lower in price: 1,325 Issues unchanged: 71
Briefs •
Area Activities •
Painting class offered KENDALLVILLE — Professional artist Carl Mosher will instruct a Kendallville Park and Recreation Department scenic painting class on Thursday, Nov. 21, at 6 p.m. at the Youth Center, 211 Iddings St. Students will paint "Pheasant" using ink and oil paint. The $20 fee includes all supplies and is payable at the Youth Center park office prior to the class.
Trip tickets remain KENDALLVILLE — Tickets are still available for the Dec. 5 trip to the Round Barn Theatre in Nappanee for a production of ‘‘The Wizard of Oz.’’ The Noble County Council on Aging is sponsoring the activity. The price of the trip is $27 which includes an all-you-can-eat family-style dinner and the play. The van ride there would be for a donation. Call 347-4226 and ask for Joyce for information.
Today
Holiday Bazaar: Annual event. New Life Tabernacle, 609 Patty Lane, Kendallville. 9. Yu-Gi-Oh: Stop in for the sanctioned Yu-Gi-Oh Tournament and battle your buddies. There is a $2 tournament fee that should be paid at the door, or you can pay a $5 fee and receive a pack of cards. Cossy ID cards are suggested. Prizes will be
(10 min. from I-69)
Hotel Reservations
349-9003
FDIC MEMBER
Hess Team, PC 260-347-4640, 877-347-4640 toll free Anita’s cell: 260-349-8850 Tim’s cell: 260-349-8851
Tim & Anita Hess GRI, CRS, ABR
Visit Our Website At www.hesshometeam.com
Owner Dan Brown
Visit Our Website:
Sunday, Nov. 10
Veterans Day Programs: Area veterans and their guests, current military personnel and the public invited to East Noble High School gym on Garden Street for student-led program. World War II vet will be guest speaker. 8:15 a.m.; Francis Vinyard VFW Post 2749 and American Legion Post 86 will combine for a public program at 11 a.m. at VFW Post 2749, 127 Veterans Way. Guest speaker will be U.S. Army veteran of Mark Mendenhall. Luncheon served following program. North Side Elementary School on Harding Street will have its program at 1:45 p.m. U.S. War Dogs Association representative will bring a military dog and give an address. Public invited. St. John Lutheran School will present ‘‘Remembering Our
Holiday Bingo: Annual fundraiser for Delta Theta Tau Sorority. Prizes are Longaberger and Vera Bradley items. Lunch available. For tickets call Christy at 347-5464 or Deanna at 854-2275. Kendallville Eagles, U.S. 6 West, Kendallville. 11:30 to na.org. Club Recovery, 1110 E. Dowling St., Kendallville. 12:30 p.m.
Bingo: Bingo games. Warm ups at 12:30 p.m. and games at 1:30 p.m. Sponsored by the Sylvan Lake Improvement Association. Rome City Bingo Hall, S.R. 9, Rome City. 12:30 p.m. DivorceCare: 13-week program with videos, discussion and support for separated or divorced. For more information, call 347-0056. Trinity Church United Methodist, 229 S. State St., Kendallville. 5:30 p.m.
Monday, Nov. 11 Bingo: For senior citizens every Monday. Noble County Council on Aging, 111 Cedar St., Kendallville. Noon.
MISSION:
"To provide all member businesses with purpose-driven benefits to improve, grow and strengthen their business."
Please Welcome These NEW Kendallville Area Chamber Members! JANSEN LAW – General Law practice. Chris Jansen, Attorney-Owner. 228 S Main St., Kendallville. 260-599-4206.
Top 10 Member Benefits
6. FREE Use of Projection System & Screen 7. FREE Coupons 8. Political Advocate 9. Continuing Education 10. Member Directory with Hot Link to your Website
KENDALLVILLE AREA CHAMBER EVENTS FOR NOV./DEC.
EVERY TUESDAY • MORNING LEADS & REFERRALS GROUP – 8-9 a.m. at American Legion Post 86,
322 S. Main, Kendallville. Breakfast $3.00. Call the Chamber to register. Come network with other Chamber members, share your business highlights, bring your business cards & swap leads & referrals from the group!
EVERY WEDNESDAY • NOON LEADS & REFERRALS GROUP – Noon-1 p.m. at the Chamber. Network with other Chamber members, share business highlights, bring business cards, swap leads & referrals & bring your lunch.
NOVEMBER 9 • WINTER CRAFT BAZAAR – 10 a.m.-4 p.m. at Bridgeway Evangelical Church, 210 Brians Place, Kendallville. Come join us...open to the public! Crafts, hillbilly hotdogs, pumpkin rolls & more. Contact Heather for more information 349-1567 or goldengang7@hotmail.com
NOVEMBER 9 • LEGISLATIVE FORUM – 10 a.m.-noon at the Kendallville Public Library Rooms A & B. Sen. Susan Glick and Rep. Dave Ober will review the results of their respective summer work sessions and advise the attendees of the issues that will be addressed in the 2014 Legislative Session. They also want to hear from the public about any topics that they feel should be addressed in the session.
NOVEMBER 10 • ALL YOU CAN EAT BREAKFAST – 8-11 a.m. at the Kendallville American Legion Post 86, 322 S. Main St., Kendallville. All you can eat breakfast for $7.00. Bacon, sausage, eggs, biscuits and gravy, pancakes, hash browns, French toast, coffee or orange juice.
NOVEMBER 14 • PROFESSIONAL/BUSINESS WOMEN'S – 6-9 p.m. at the Kendallville Park & Recreation Dept., 122 Iddings St., Kendallville. Everyone is invited, but a dinner reservation ($6/person) must be made by calling 347-1144. Great and useful items for sale at our auction. Come shop for Christmas gifts. The Club is looking for new members and welcomes all inquiries about becoming a member.
NOVEMBER 16 • NOBLE COUNTY TURKEY TROT - Registration 8-8:30 a.m.; race 9:00 a.m. Pre-registration to receive T-shirt has passed. $15 pre-registration w/o T-shirt. $20 day of race. Checks payable to "Noble County Community Foundation" with "P.U.L.S.E. Turkey Trot" in memo line. All registration forms & payment MUST be received at the Noble County Community Foundation by 4:30 p.m. November 1st to receive a T-shirt. Proceeds benefit the P.U.L.S.E. Endowment in memory of Dave Knopp Fund for scholarships. Register online at … com
NOVEMBER 16 • HOLIDAY CRAFT & BAKE SALE – 10 a.m.-3 p.m. at American Legion Post 86, 322 S. Main St., Kendallville. Concessions will be available.
NOVEMBER 16 • HOLIDAY HOUSE WALK – 10 a.m.-3 p.m. at homes in Rome City/Sylvan Lake. Ticket $6 per person (children 12 & under free). Presented by Rome
City Chamber of Commerce. Tickets may be purchased at the Limberlost Library, Specialty House, Rome City Town Hall & Gene Stratton-Porter. See the Artisan Market at the Town Hall in Rome City from 9 a.m. - 3 p.m. or our Facebook page.
NOVEMBER 23 • SAVE THE STRAND 5K – 8-10 a.m. @ Bixler Lake Lions Pavilion. Registration 8 a.m.; race 9 a.m. $20 pre-registration w/T-shirt; $15 w/o T-shirt; $20 day of race. Checks payable to "Noble County Community Foundation" with "Save the Strand" on memo line to Teela Gibson, 111 S. Progress Dr. E., Kendallville. Must be received by Nov. 15th to guarantee T-shirt. Proceeds benefit "Strand Theatre: Keep the Lights On Campaign". Registration forms available at the Kendallville Chamber, 122 S. Main St.
NOVEMBER 23 • KENDALLVILLE CHRISTMAS WALK – 5:30-9:30 p.m. at Floral Hall (Fairgrounds) and five area homes. Tickets $8 in advance or $10 day of walk. Toy drop off at Floral Hall and first home on walk. Proceeds benefit Christmas Bureau. Tickets available at the Kendallville Chamber, Park Dept & Campbell & Fetter Bank.
NOVEMBER 23 • FESTIVAL OF TREES – 6-9 p.m. at the Kendallville Event Center. Kick off your holiday season surrounded by decorated trees at the 16th Annual Festival of Trees Open House and Evening Gala to support Parkview Noble Home Health and Hospice. Contact Jane Roush jane.roush@parkview.com for sponsorship.
DECEMBER 2 • FAMILIES FOR FREEDOM CHRISTMAS PARTY – 6:30 p.m.-8 p.m. at the Rome City American Legion Post, Kelly St., Rome City. Open to the public. Bring a new or gently used item for raffle, along w/two dozen cookies. Enjoy refreshments & be part of group photo included in Christmas cards to local, active military.
DECEMBER 6-8 AND 13-15 • WINDMILL WINTER WONDERLAND - 5:30-8:30 p.m. each night at the Mid-America Windmill Museum, 732 S. Allen Chapel Rd. Admission $3 per person; free for children under 12. Lighted Christmas displays, crafters, music, warm food & Santa giving every child under 12 a gift bag full of goodies.
DECEMBER 7 • KENDALLVILLE CHRISTMAS PARADE – 1 p.m. in Downtown Kendallville.
DECEMBER 7 • THE COMEDY AND MAGIC OF JIM MCGEE PRESENTS: MAGIC ON MAIN "KEEP THE LIGHTS ON" – 4 p.m. at American Legion Post 86, 322 S. Main St., Kendallville. Comedian Magician Bill Reader. All proceeds benefit Strand Theatre. Tickets $10 each or family of 4-$25 may be purchased @ Strand Theatre, Kendallville Chamber, & American Legion Post 86. Tune in to WAWK for chance to win tickets.
DECEMBER 8 • FRIGID FREEDOM 5K – 2 p.m. at Bixler Lake Park. Registration $15 w/T-shirt, or day of race w/o T-shirt $20 at Kendallville Public Library beginning at 12:45. Proceeds benefit Families for Freedom, support group for active military from Noble & LaGrange Counties. Like us on Facebook!
MARK THESE DATES ON YOUR CALENDAR AND WATCH FOR MORE INFORMATION ON UPCOMING EVENTS NEXT MONTH.
401 Sawyer Road Kendallville 347-8700 1-888-737-9311
Veterans,’’ All veterans and active duty service members invited. Christ-centered praise and worship assembly led by David Britton. St. John Lutheran Church and School, 301 S. Oak St., Kendallville. 1:30 p.m. Zumba Class: Free Zumba classes at Presence Sacred Heart Home in Avilla run from 6:30-7:25 p.m. each Monday and Thursday..
KENDALLVILLE CHAMBER
The Kendallville Chamber would like to thank the perimeter advertisers on this page who help publish this monthly Chamber feature page. Space is available. If you would like to feature your business on this page, please contact the Kendallville Area Chamber of Commerce or KPC Media Group Inc.
LOCATION:
US 6 West Kendallville
347-2254
2003 E. Dowling St., Kendallville
260-347-5263
Call Ben Helmkamp Today
Holiday Bazaar: Crosspointe Family Church of the Nazarene will be having their annual holiday bazaar. They will have a variety of crafts to choose from, gifts, food, and door prizes. For further details contact Natalie Buhro 347-4249 or dbuhro@ligtel.com. CrossPointe Family Church, 205 HighPointe Crossing, Kendallville. 10 a.m.
Legislative Forum: State Sen. Susan Glick, R-LaGrange, and state Rep. David Ober, R-Albion, will be featured. Kendallville Public Library, 221 S. Park Ave., Kendallville. 10 a.m.
U.S. 6 East Kendallville, IN
FARMERS & MERCHANTS BANK
given to the top three players! Kendallville Public Library, 221 S. Park Ave., Kendallville. 10 a.m. 343-2010
2702 Cobblestone Lane Kendallville, IN 46755 260-349-1550
Kendallville Auburn Albion Angola Ligonier Goshen Warsaw Sensible Banking for Sensible Lives
MEMBER FDIC
©2013 Campbell & Fetter Bank
W. Ohio Street, Kendallville, IN
kraft.com
“We’re in your hometown�
CHEVROLET • BUICK • GMC
260-347-1400 U.S. 6,Kendallville Service Hours: Mon.-Fri. 7:30 a.m.-5 p.m.
CHANGES: Teacher evaluation process explained FROM PAGE A1
seventh-grade boys basketball coach at East Noble Middle School, and trustees granted East Noble High School functional life skills teacher Kimberly Luke Scherer six weeks of maternity leave beginning Jan. 17. In other business, the board: • heard Assistant Superintendent Becca Lamon explain the teacher evaluation process and documentation. Building principals conduct at least three observations of their teachers during a school year, rating them on their skills using a standardized list of competencies. Subjectivity virtually has been eliminated, and teachers are apprised of their evaluations throughout the process, according to Lamon. • heard a report on the introduction of a pilot learning management system called Canvas for teachers in grades 7-12. A group of 20 teachers were trained on the system last summer, and they have trained about 75 percent of East Noble's teachers for grades 7-12. It is described as a "one-stop shopping place" for lesson planning, test-taking, sharing of multimedia information, online sites and communication between teachers, students and parents. The system prepares the school district for online courses.
Windy and warmer today with some sunshine. High of 57 and tonight's low will be 37. Sunny and cooler Sunday with daytime highs in the upper 40s. Overnight temperatures will be in the low 30s. Monday and Tuesday conditions are expected to be cloudy and rainy. Sunrise Sunday 7:23 a.m. Sunset Sunday 5:27 p.m.
Friday's Statistics: Local HI 45 LO 37 PRC. tr. Fort Wayne HI 46 LO 38 PRC. tr. South Bend HI 45 LO 41 PRC. tr. Indianapolis HI 50 LO 38 PRC. 0.
Forecast for Saturday, Nov. 9 (City/Region high | low temps): Chicago 57 | 45; South Bend 55 | 41; Fort Wayne 54 | 37; Lafayette 57 | 37; Indianapolis 59 | 41; Terre Haute 59 | 37; Evansville 63 | 39; Louisville 61 | 37. © 2013 Wunderground.com
Today's drawing by: Kyle Lepper. Submit your weather drawings to: Weather Drawings, Editorial Dept., P.O. Box 39, Kendallville, IN 46755.
Obama issues apology to those who’ve lost coverage.
NEEDS: Company has already helped local groups FROM PAGE A1
foundation in Indiana,” said Kathryn Spence, director of the Dow Corning Foundation. “The foundation has primarily funded projects and equipment directly to local organizations. This will enable us to broaden our reach and provide additional support to the communities where our employees live and work.” Past recipients of funding from the Dow Corning Foundation include the Kendallville Fire Department, Junior Achievement,
Drug Free Noble County, Camp Invention, Boomerang Backpacks and Common Grace. “An easy way to think about a donor-advised fund is like a charitable savings account. A donor or corporation contributes to the fund and then has an opportunity to recommend grants to their favorite charity,” said Linda Speakman Yerick, executive director of the Noble County Community Foundation. She added, “For Dow
Corning, whose corporate offices are not in Kendallville, they can participate using local employees to determine and evaluate the needs in their community and have the ability to make grants. We are pleased to have Dow Corning as a partner.” To apply for Dow Corning Donor Advised Funds, contact the Noble County Community Foundation at noblecountycf. org. For information on the Dow Corning Foundation, visit dowcorning.com/foundation.
TYPHOON: Storm causes landslides, destroys homes FROM PAGE A1.
Real Estate
DOWNTOWN AUBURN
Commercial property on 1/2 city block between 6th & 7th Streets and on the west side of Jackson Street. (AS24DEK)
Call Arden Schrader 800-451-2709. SEE "LISTINGS"
SchraderAuction.com
NOW is the time to buy or sell!
Contact your realtor today!
KEY LOCATOR: A > Allen • N > Noble • W > Whitley • S > Steuben • K > Kosciusko • L > LaGrange • M > Michigan • E > Elkhart • O > Ohio • D > DeKalb
200 S. Britton, Garrett
Modern meets tradition in this lovely updated home. A complete renovation from top to bottom. Cozy enclosed porch for a morning coffee or to catch up with an old friend. Natural light graces the beautiful dining room, the hardwood floors remind you that you are in a well-built home from a time gone by. The kitchen was a complete transformation down to the studs. 3 BR, 2 BA. $69,900. MLS#676032.
260-347-4206
8845 E. Circle Drive, Kendallville
Hey, are you looking for a great home with a man cave? This is it! Three bedrooms, 1-1/2 baths with many recent updates throughout including new roof and windows. Large eat-in kitchen with oak cabinets and all appliances stay. Great backyard with wood privacy fence, patio, above-ground pool for summer fun. Fabulous 3-car garage, 14x25 1-car, plus 32x36 2-car (all attached!). Large two-car is insulated, 10’ ceilings, storage above, and heated with a non-combustible overhead gas heater. $128,500. MLS#201316828.
260-349-8850
W 204 E. Lisle St., Kendallville
Check this cute bungalow out situated on a large corner lot. Full basement and 1-car detached garage. Living room and dining area open concept. Enjoy the summer nights as you relax on the covered porch. Move-in ready. MLS#676002. $64,900.
260-347-5176 Terri Deming
503 E. Diamond, Kendallville
Feel right at home when you step into this 4 bedroom, 1.5 bath. Beautiful updated kitchen that features breakfast bar, new flooring and stainless steel appliances to stay. Home also features large living room with hardwood floors, new carpet upstairs and on the stairs, natural woodwork plus much more. Large balcony deck off 2 of the bedrooms upstairs. MLS#676210. $113,500.
260-347-5176 Terri Deming
204 N. Park Ave., Kendallville
Lots of room here for the whole family! Inviting living room with a bay window and open to the den (with lots of windows for light!) and formal dining room. Main floor bedroom. Newly remodeled bath. Large kitchen and laundry/mud room with a walk-in pantry. Hardwood floors throughout most of the main floor. Two large bedrooms upstairs with extra room off one that could be a walk-in closet, sitting room or a good place for a 2nd bath. $69,100. MLS#201317110.
260-349-8850 The Hess Team
508 N. MAIN ST., KENDALLVILLE
The Hess Team
1104 Town Street, Kendallville Affordable living in the middle of everything! Updated 3 bedroom, 2 bath, 1 block from East Noble School, 2 blocks to YMCA and 4 blocks to Bixler Lake and park. $44,900. MLS#675921.
260-349-8850
The Hess Team
Open Homes
OPEN SUN. 2-4 PM
2013 Cortland Lane, Kendallville
Beautifully appointed villa in Orchard Place. Open concept. Large great room with 12’ ceilings, fireplace, built-in bookshelves and large array of windows to the patio and backyard. Kitchen with custom maple cabinets, all appliances, breakfast bar and dining area. Front bedroom with vaulted ceilings, master suite with a full bath and walk-in closet. Many more extras added when constructed, some of which include over-sized garage, wide entry doors to bedrooms for wheelchair accessibility, pocket doors! $172,500. MLS#531407.
Michelle Eggering
Totally updated, self-contained, simple & easy to maintain & afford, immaculate, handicap accessible abode! Large 66’x165 lot allows potential to add on to home or build a garage! Open concept from kitchen & living room! Roomy bath & walk-in closet in bedroom! MLS#9005904 $59,900.
DIRECTIONS: US 6 to Main St., south 3 blocks to property on east side. Park in back off alley via Grove St.
Hosted By: Dep Hornberger
260-312-4882
Knights denied sectional title
Bishop Dwenger finishes with 221 rush yards in 33-13 win
BY JUSTIN PENLAND japenland@hotmail.com
FRIDAY'S GAMES: Toronto 2, New Jersey 1 (SO); Winnipeg 5, Nashville 0
Area Events •
TODAY
COLLEGE WRESTLING: Trine at Michigan State Open, 9 a.m.; at Muskegon (Mich.) Community College's Ben McMullen Open, 9:30 a.m.
COLLEGE FOOTBALL: Trine at Olivet, 1 p.m.
On The Air •
TODAY
LOCAL: East Noble Football Coaches Corner, 95.5 FM, 11 a.m. Indiana University Football vs Illinois, 95.5 FM, 2:30 p.m.
AUTO RACING: NASCAR, Nationwide Series, ServiceMaster 200, at Avondale, Ariz., ESPN2, 4 p.m.
COLLEGE FOOTBALL: Kansas St. at Texas Tech, ABC, noon. Auburn at Tennessee, ESPN, noon. Penn St. at Minnesota, ESPN2, noon. TCU at Iowa St., FSN, noon. Southern Cal at California, FOX, 3 p.m. Nebraska at Michigan or BYU at Wisconsin, ABC, 3:30 p.m. Mississippi St. at Texas A&M, CBS, 3:30 p.m. Nebraska at Michigan or BYU at Wisconsin, ESPN, 3:30 p.m. Tulsa at East Carolina, FSN, 3:45 p.m. Kansas at Oklahoma St., FS1, 4 p.m. Virginia Tech at Miami, ESPN, 7 p.m. Houston at UCF, ESPN2, 7 p.m. Texas at West Virginia, FOX, 7 p.m. LSU at Alabama, CBS, 8 p.m. Notre Dame at Pittsburgh, ABC, 8:07 p.m. UCLA at Arizona, ESPN, 10 p.m. Fresno St. at Wyoming, ESPN2, 10:15 p.m.
GOLF: PGA Tour, The McGladrey Classic, third round, at St. Simons Island, Ga., TGC, 1 p.m.
SOCCER: Premier League, West Bromwich at Chelsea, NBCSN, 9:55 a.m. Premier League, West Ham at Norwich, NBC, 12:30 p.m. MLS, playoffs, conference championships, leg 1, teams TBD, NBC, 2:30 p.m.
FORT WAYNE — East Noble’s football season concluded in an uncharacteristic fashion Friday against Bishop Dwenger. The Saints fired out of the gates and scored 10 points in the opening 3 1/2 minutes en route to a 33-13 victory over the Knights at the University of Saint Francis’ Bishop John D’Arcy Stadium. Following an East Noble three-and-out in the opening series, Dwenger (9-3) marched 63 yards in approximately 1:30 to set up Tyler Tippman’s 2-yard rushing score. Dwenger finished the game with 383 total yards, including 221 on the ground. The Saints (9-3) host New Haven (11-1) in regional action on Friday. “They hit a few big plays and ran the ball on us early. Everyone knows we are not the dynamic team that can come back from a big deficit,” East Noble coach Luke Amstutz said. “We needed (a big play) for us to win this game. With our style of offense and defense, we needed this to be a 21-14 game.” In order to keep it close, East Noble needed to run the ball with some success. Bishop Dwenger took notes from last week’s Leo-EN game. It stacked the box right away, taking away the Knights’ option attack with
Hoosiers roll in season opener
AP
Indiana forward Troy Williams (5) defends Chicago State Cougars guard Nate Duhon (32) during an NCAA college basketball game in Bloomington Friday.
BLOOMINGTON, Ind. (AP) — The Hoosiers followed the rules Friday night. They attacked the basket, drew fouls and made free throws. Defensively, they blocked shots and avoided fouls. And, of course, they won another season opener. Jeremy Hollowell scored a career-high 16 points and had four blocks, and Noah Vonleh added 11 points, 14 rebounds and three blocks in his college debut, leading Indiana past outmanned Chicago State 100-72. It went just the way coach Tom Crean drew it up. "We knew they were going to get up and press us," Hollowell said. "With the new rules, we wanted to take advantage of it and attack the basket and get fouled. I think we did a good job of that." The Hoosiers (1-0) did all of that and more. They blocked 13 shots, breaking the Assembly Hall record set in 1999 against San Francisco and falling one short of the overall record set at Penn State in 2000. They made 45 of 55 free throws, breaking the school record for made free throws (43) first set against Michigan in 1943 and tied against Ohio State in 1997. They outrebounded Chicago State (0-1) 62-36, and had six players score in double figures. They won their 16th consecutive season opener and their 29th consecutive home opener. And the loudest roar from the crowd might have come when Jeff Howard put in a layup with 11 seconds to go, giving ticket-holders some free food at a nearby restaurant and Indiana its first 100-point game in its season opener since Murray State in November 1992.
SEE HOOSIERS, PAGE B2
George leads Pacers past Raptors
INDIANAPOLIS (AP) — … The Pacers improved to 6-0 for the first time since the 1970-71, when the club played in the ABA. They rallied from a halftime deficit for the fifth time this season. … the sixth consecutive opponent Indiana has held to 40 percent shooting or worse. Indiana overcame 16 turnovers to shoot 46.2 percent. George made an arcing 3 over a leaping Landry Fields at the third-quarter buzzer and clinched his fists in celebration before sprinting to Indiana's bench and slapping hands with teammates. George bounced back from a five-point first half to outscore the Raptors 17-13 in the period. He shot 5 of 9 from the field and made all five free throws. The Pacers led 72-59 entering the fourth quarter. Gay carried the Raptors to a 46-44 halftime lead, scoring 22 points. No teammate scored more than six in the half.
Brandon Mable and quarterback Bryce Wolfe. Mable, who finished with 146 yards and a score, had 37 rushing yards in the first quarter and no run over seven yards in that span. With the box flooded with Saints, Wolfe went to the sky to find Nathan Ogle twice on an 11-play, 52-yard drive midway through the first quarter. The first snag Ogle pulled down was for 14 yards, and four plays later, he grabbed another for 12. Ogle led all receivers with his two receptions as Wolfe recorded 62 yards through the air on 7-of-18 passing with an interception. "We wanted to be able to run the ball. We ran the ball later on and we started to climb back," Amstutz said. "We felt like we did some things well, but a few mistakes killed us. Give credit to Dwenger, it took advantage of missed opportunities." East Noble's defense settled down after the Saints scored late in the first to shut out the "home" team through the latter part of the first and into the second quarter. The "Roughneck" crew scored the team's first touchdown after halftime, taking a Mike Fiacable interception 36 yards the other way. Dylan Jordan snagged the Fiacable pass across the middle and jetted down the Dwenger sideline. The interception looked
SEE KNIGHTS, PAGE B2
JAMES FISHER
East Noble running back Brandon Mable looks to gain yardage in Friday's sectional football contest with Fort Wayne Bishop Dwenger.
Max Platt Ford: Family Owned Since 1973, Celebrating Our 40th Anniversary. 561 S. Main • Kendallville • 347-3153. Max Platt, Jeff Platt, Robin Haines, Patrick Snow, David Dressler. NEW 2014 Ford Focus SE (Auto, Air, Cruise Control, Power Windows & Locks, AM/FM, CD/MP3, Sirius Sat. Radio, Sync) and NEW 2013 F-150 4x4 Super Crew XLT (5.0L V8, 6-Speed Auto, Power Seat, XLT Plus Pkg., Chrome Package, Rear Camera, Sliding Rear Window, Trailer Brake Controller, Convenience Package): 0%* for 60 mos. or up to $2,000* / $7,250* customer cash. *Ford Credit, WAC. Coming Fall 2013! Used vehicles: 2012 Ford Focus SE 4-dr. hatchback; 2010 Ford Taurus SEL, only 13,000 miles; 2008 Ford Edge SEL, leather, sunroof; 2011 Ford F-150 XLT Super Cab; 2011 Ford Escape Limited, leather, sunroof; 2008 GMC Yukon Denali, sunroof, leather, chrome wheels; 2007 Ford Expedition Eddie Bauer 4x4; 2012 Ford Fusion SEL, leather, sunroof. Advertised prices include $8,995, $15,899, $17,900, $18,900, $21,900, $22,900, $23,900 and $24,900.
Boilermakers escape Northern Kentucky Hungry Hoosiers WEST LAFAYETTE, Ind. (AP) — Purdue coach Matt Painter believes the Northern Kentucky Norse deserved to win. And they almost did. But Purdue’s Ronnie Johnson made sure it didn’t happen. Johnson, who had 18 points and five assists, scored the go-ahead free throws with 13 seconds left and the Boilermakers beat Northern Kentucky 77-76 in the season opener for both teams on Friday night. “I felt they deserved to win just out of being quicker to the basketball and having a little bit more energy than us,” Painter said. “But also just the way they shot the basketball and the times they made 3’s. They answered every call. It’s unfortunate for them.” The Boilermakers (1-0) never found a way to lead the lead, let alone pull away from the Norse (0-1). Purdue scored five straight points to close out the game. “Just being poised at the end and taking good shots,” Johnson said. “Not forcing anything. Peck hit a nice shot to put us in a good spot.” After Northern Kentucky’s Tyler White hit a 3-pointer to put the Norse ahead 76-72 with 58 seconds remaining, Erick Peck — who finished with 11 points and nine rebounds — nailed a 3-pointer in front of the Purdue bench to put the Boilermakers back
within a point. Jordan Jackson, who led the Norse with 24 points and eight rebounds, went to the line with the chance to extend the lead again. But he missed Northern Kentucky’s only free throws of the night. Then Johnson went to the line on the other end of the court and hit two free throws to capture the win. “It was a tough thing for him to step up and miss those two at the end,” Norse coach Dave Bezold said. “But we’re not in that position without him, if he doesn’t get to the line and create what he did.” The Norse are in just their second season of being a NCAA Division I level program. Against Purdue, they played like a more experienced Division I team. Purdue is just the second Big Ten team the Norse has faced. They lost to Ohio State last season. “Just playing them doesn’t put you on the map, it just means you’re on their schedule,” Bezold said. “You’ve got to win some games. It’s something that we’ve got to do as a program.” When Purdue would take a lead, Jackson would drive to the basket for a lay-up. When the Boilermakers stepped in Jackson’s way, Jack Flournoy or Todd Johnson would hit a 3-pointer, keeping Purdue from gaining any momentum.
HOOSIERS: IU started two frosh, two sophomores against Cougars FROM PAGE B1
It was a solid start for a team that replaced four 1,000-point scorers with a starting lineup of two freshmen, two sophomores and senior Will Sheehey. "Our guys showed a lot of the upside that's there, a lot of the athleticism," Crean said. "… but especially now with the rules the way they are." This one will be hard to top. Though Clarke Rosenberg led the Cougars with 27 points, only one of his teammates reached double figures. Eddie Denard finished with 10 on a night Chicago State shot a dismal 25.9 percent from the field and was just 8 of 36 on 3-pointers. It was good enough to impress Chicago State coach Tracy Dildy. "That's a really good, athletic team, which we knew coming in," Dildy said. "They changed a lot of our guy's shots just with their length, and I've been telling people this is going to be a team that's going to be in that hunt for that Big Ten (title) because they're not going to do anything but get better and get better and get better." Crean certainly hopes so. The perfectionist challenged his players to
MARTINS, Downtown Garrett, 115 N. Randolph St. • (260) 357-4290. SUMORZ Wednesday • 10 p.m.-2 a.m. • no cover. Karaoke Saturday • 10 p.m.-2 a.m. • no cover. Jukebox Thursday • 9 p.m.-1 a.m. • no cover. Open Sundays Noon-3:30 a.m. ***Sunday Drink Specials***
be even more aggressive contesting shots, forcing turnovers and taking care of the ball — three areas the Hoosiers did not fare as well Friday. Indiana committed 19 turnovers and forced 10. But even he acknowledged it was a good start. Without Cody Zeller to patrol the middle, Indiana repeatedly attacked the basket with zeal, hoping to score points or draw fouls. They did both. The Hoosiers finally started to pull away midway through the first half with a 10-2 spurt and followed that with an 8-2 run that gave Indiana a 39-21 lead with 5:09 left in the first half. Chicago State answered with seven straight points, all from Rosenberg — one of its few good stretches of the night. “I wouldn’t change the experience (in Assembly Hall), but I would change the performance,” Dildy said. “We really just wanted to come and put on a good showing for the fans.” Instead, the Hoosiers turned it into a rout. Kevin “Yogi” Ferrell scored the final four points in an 8-0 run to close the first half, and Indiana started the second half on an 8-1 surge that made it 55-29 with 16:02 left in the game. Chicago State never got closer than 19 again and the only real question for the fans was whether they would hit 100 points. Ferrell scored 11 points, Sheehey and freshman Devin Davis each had 10 points and nine rebounds and freshman Troy Williams finished with 13 points.
ready for Iowa
AP
Northern Kentucky’s Todd Johnson, left, goes around Purdue’s Terone Johnson during an NCAA college basketball game Friday in West Lafayette.
The Norse hit 13 shots from 3-point range. Todd Johnson, who scored all 12 of his points from behind the arc, hit a wide-open 3 with 12:59 left in the half to give the Norse a 16-13 lead, then hit another to make it 19-13. With 9:23 left in the half, Johnson scored a 3-pointer to give the Norse a 22-13 lead. Flournoy scored all 12 of his points from 3-point range, too, including a shot to give the Norse a 73-70 lead before Ronnie Johnson drove to the basket to put Purdue within a
point again, 73-72. “To come in to a Big Ten school and win you have to have some people who are special,” Painter said. “I thought Jordan Jackson was special. We simply couldn’t keep him in front of us. I thought Todd Johnson’s energy set the tone. He’s a kid that shot 20 percent from the three last year and he comes out and goes 3-for-3 right away. And then Flournoy goes 4-for-4, stretches the defense, is a big guy. I thought those three guys were special.”
INDIANAPOLIS (AP) — Indiana is hungry. It’s been a month since the Hoosiers last won a game. It’s been six years since they last qualified for a bowl. And after last weekend’s bungled finish against Minnesota, players and coaches are eager to make amends. The next quest begins Saturday. “In our world, there is a lot of football to play, a lot of things we can accomplish,” Hoosiers coach Kevin Wilson said. “Two weeks in a row we played a pretty good team, got them into the fourth quarter, haven’t been able to get over the hump. We’re getting close. We’re in those games and our deal is we’ve got to keep fighting and pushing to knock that thing down.” The Hoosiers’ next chance comes against Illinois (3-5, 0-4 Big Ten), which looks like it’s stolen Wilson’s playbook. Both teams throw first. Both offenses score points by the dozens. Both defenses give up more than 32 points per game and both coaches are trying to get their programs bowl-eligible. The winner on Saturday will end a losing streak — Indiana has lost three straight, Illinois has lost 18 consecutive conference
games — and move within two wins of that magical sixth win. The question, of course, is which team is better positioned: Indiana (3-5, 1-3), seeking a breakthrough November victory, or the Fighting Illini, who are trying to prove they can just win a Big Ten game. “When we get down there, we’ve got to all be able to step up and have each other’s back and make plays for each other,” Illinois quarterback Nathan Scheelhaase said, referring to the Illini’s red-zone proficiency. “It might not always be pretty down there, but one way or the other we’ve all got to be able to step up and make plays.” Meanwhile, Indiana is simply trying to recapture the excitement swirling around the program when the season started — just in time to prove the so-called experts wrong about their postseason hopes. “We’re at that part of the race where you can keep pushing or stop, and we’ve come too far to stop pushing,” Wilson said. “We’re down to two at home, and we need another good crowd to get some energy in the stands for the guys this week.”
Notre Dame faces another test at Pitt PITTSBURGH (AP) —.”
Iowa tries to end scoring funk against Purdue INDIANAPOLIS (AP) — touchdown in their last eight quarters of regulation. Saturday.”. And with four games remaining, Hazell is running out of time and options. On Tuesday, Hazell said he doesn’t anticipate making many, if any, personnel moves the rest of this season. He’s just hoping that leads to improvement rather than more of the same.
KNIGHTS: East Noble ran for 167 yards, tallied 15 first downs FROM PAGE B1
sideline. The interception looked like the big play the Knights needed. However, they could not find a rhythm for the rest of the game. The Knights tallied 167 yards on the ground and 15 first downs, but punted six times. “We have lived by the way our defense has played all year. That was a huge play that kind of sparked some life in us, but it didn’t spark as much as we needed,” Amstutz said. “It gave us an opportunity, but we were short in manufacturing the big plays.” East Noble (9-3) loses a large group of seniors, which helped lead the team to the most victories since the 2004 season when the Knights went 10-1. “I want them to know that it doesn’t end here. You may have played your last football game, but what you have become and accomplished will lead to other great things in life,” Amstutz said.
Bishop Dwenger 33, East Noble 13 East Noble 0 0 6 7— 13 Bishop Dwenger 16 0 7 10— 33 Scoring Summary First Quarter BD —Tyler Tippmann 2 run (Trey Casaburo kick), 9:49 BD — Casaburo 23 field goal, 8:29 BD — Gabriel Espinoza 46 pass by Mike Fiacable (Casaburo kick), 1:55 Third Quarter EN — Dylan Jordan 36 yard interception return (2-point failed), 11:14 BD — Tippmann 5 run (Casaburo kick), 3:43 Fourth Quarter EN — Mable 1 run (Jared Teders kick), 8:44 BD — Casaburo 33 field goal, 5:09 BD — Ryan Cinadr 22 run, (Casaburo kick) 3:40 Team Statistics EN BD First Downs 15 19 Rushes-yards 42-167 45-221 Comp-Att-INT 7-22-1 7-12-0 Passing Yards 62 162 Total plays-yards 64-229 57-383 Penalties-yards 1-15 4-47 Punts-average 6-39 3-26 Fumbles-lost 4-1 1-1 INDIVIDUAL STATISTICS RUSHING: EN — Mable 29-146, TD; Bryce Wolfe 10-15; Tyler Leazier 3-6. BD — Tippmann 14-70, 2 TD; Fiacable 9-61; Ryan Cinadr 14-55; John Kelty 1-16; Espinoza 2-10; Andrew Gabet 3-5. PASSING: EN — Wolfe 7-18, 62 yards, INT; Bret Sible 0-4. BD — Fiacable 7-12, 162 yards, TD. RECEIVING: EN — Nathan Ogle 2-26; Matt Strowmatt 1-17; Jacob Brown 1-11; Leazier 1-7; Mable 1-2; Grey Fox 1-(-1). BD — Espinoza 2-68, TD; Ryan Watercutter 3-53; Cinadr 1-10; Gus Schrader 1-31.
JAMES FISHER
East Noble quarterback Bryce Wolfe looks to pass during Friday’s sectional football game against Fort Wayne Bishop Dwenger.
SCOREBOARD •
Prep Football Regionals CLASS 6A Penn 33, Lake Central 6 Carmel 38, Carroll (Ft. Wayne) 7 Warren Central 24, Indpls Pike 21 Center Grove 56, Southport 14 Sectional Finals CLASS 5A Sectional 9 Mishawaka 24, Munster 17 Sectional 10 Concord 34, Elkhart Central 0 Sectional 11 Westfield 45, McCutcheon 21 Sectional 12 Ft. Wayne Snider 17, Ft. Wayne North 14, OT Sectional 13 Indpls Cathedral 56, Anderson 13 Sectional 14 Whiteland 41, Floyd Central 20 Sectional 15 Bloomington North 24, Bloomington South 21 Sectional 16 Terre Haute North 42, Ev. North 7 CLASS 4A Sectional 18 New Prairie 28, S. Bend St. Joseph’s 6 Sectional 19 Ft. Wayne Dwenger 33, E. Noble 13 Sectional 20 New Haven 37, Norwell 7 Sectional 21 New Palestine 33, Mt. Vernon (Fortville) 0 Sectional 22 Indpls Chatard 28, Indpls Roncalli 8 Sectional 23 Columbus East 42, Shelbyville 7 CLASS 3A Sectional 25 Andrean 42, Glenn 0 Sectional 26 Jimtown 42, Twin Lakes 21 Sectional 27 Ft. Wayne Concordia 42, Ft. Wayne Luers 21 Sectional 29 Indpls Brebeuf 42, Tri-West 21 Sectional 30 Guerin Catholic 24, Indian Creek 20 Sectional 31 Brownstown 62, Charlestown 6 CLASS 2A Sectional 34 Bremen 20, Woodlan 13 Sectional 35 Tipton 37, Delphi 21 Sectional 36 Oak Hill 35, Alexandria 14 Sectional 37 Indpls Ritter 35, Speedway 10 Sectional 38 Indpls Scecina 46, Shenandoah 14 Sectional 39 Paoli 21, Triton Central 14 Sectional 40 Southridge 21, Ev. Mater Dei 19 CLASS A Sectional 41 Winamac 33, W. Central 7 Sectional 42 Pioneer 32, Frontier 0 Sectional 43 S. Adams 40, Southwood 39, OT Sectional 44 Tri-Central 32, Clinton Prairie 0 Sectional 45 Eastern Hancock 57, Northeastern 36 Sectional 46 S. Putnam 42, Indpls Lutheran 28 Sectional 47 Fountain Central 48, Attica 12 Sectional 48 Linton 42, Perry Central 9
National Football League 01 6 0 .333 230 287 2 7 0 .222 220 279 West W L T Pct PF PA Seattle 8 1 0 .889 232 149 San Francisco 6 2 0 .750 218 145 Arizona 4 4 0 .500 160 174 St. Louis 3 6 0 .333 186 226 Thursday’s Game Minnesota 34, Washington 27 Sunday’s.
National Hockey League EASTERN CONFERENCE Atlantic Division GP W L OT Pts GF GA Tampa Bay 15 11 4 0 22 51 37 Toronto 16 11 5 0 22 50 37 Detroit 17 9 5 3 21 43 45 Boston 15 9 5 1 19 42 29 Montreal 17 8 8 1 17 44 38 Ottawa 16 6 6 4 16 50 49 Florida 16 3 9 4 10 32 57 Buffalo 18 3 14 1 7 31 55 Metropolitan Division GP W L OT Pts GF GA Pittsburgh 16 11 5 0 22 49 38 Washington 16 9 7 0 18 53 44 N.Y. Rangers 16 8 8 0 16 35 43 Carolina 16 6 7 3 15 30 45 N.Y. Islanders 16 6 7 3 15 47 51 New Jersey 16 4 7 5 13 30 44 Columbus 15 5 10 0 10 36 44 Philadelphia 15 4 10 1 9 22 42 WESTERN CONFERENCE Central Division GP W L OT Pts GF GA Colorado 14 12 2 0 24 46 25 Chicago 16 10 2 4 24 56 43 St. Louis 14 10 2 2 22 50 33 Minnesota 17 9 4 4 22 45 38 Nashville 16 8 6 2 18 37 49 Dallas 16 8 6 2 18 44 47
Winnipeg 18 7 9 2 16 45 51 Pacific Division GP W LOT Pts GF GA Anaheim 17 13 3 1 27 57 42 San Jose 16 10 2 4 24 59 36 Phoenix 17 11 4 2 24 56 53 Vancouver 18 11 5 2 24 52 46 Los Angeles 16 10 6 0 20 45 40 Calgary 16 6 8 2 14 45 57 Edmonton 17 4 11 2 10 42 66 NOTE: Two points for a win, one point for overtime loss. Thursday’s Games’s Games Toronto 2, New Jersey 1, SO Winnipeg 5, Nashville 0 Calgary at Colorado, late Buffalo at Anaheim, late. Sunday’s.
NBA EASTERN CONFERENCE Atlantic Division W L Pct GB Philadelphia 4 2 .667 — New York 2 3 .400 1½ Brooklyn 2 3 .400 1½ Toronto 2 4 .333 2 Boston 2 4 .333 2 Southeast Division W L Pct GB Miami 4 2 .667 — Charlotte 3 3 .500 1 Orlando 3 3 .500 1 Atlanta 2 3 .400 1½ Washington 2 3 .400 1½ Central Division W L Pct GB Indiana 6 0 1.000 — Milwaukee 2 2 .500 3 Detroit 2 3 .400 3½ Chicago 2 3 .400 3½ Cleveland 2 4 .333 4 WESTERN CONFERENCE Southwest Division W L Pct GB San Antonio 5 1 .833 — Houston 4 2 .667 1 New Orleans 3 3 .500 2 Dallas 3 3 .500 2 Memphis 2 3 .400 2½ Northwest Division W L Pct GB Oklahoma City 4 1 .800 — Minnesota 4 2 .667 ½ Portland 2 2 .500 1½ Denver 1 3 .250 2½ Utah 0 6 .000 4½ Pacific Division W L Pct GB Golden State 4 2 .667 — Phoenix 3 2 .600 ½ L.A. Clippers 3 3 .500 1 L.A. Lakers 3 4 .429 1½ Sacramento 1 3 .250 2 Thursday’s Games Miami 102, L.A. Clippers 97 Denver 109, Atlanta 107 L.A. Lakers 99, Houston 98, late Sacramento at Portland, late. Sunday’s Games San Antonio at New York, 12 p.m. Washington at Oklahoma City, 7 p.m. New Orleans at Phoenix, 8 p.m. Minnesota at L.A. Lakers, 9:30 p.m.
Major League Soccer Playoff Glance at Houston, 2:30 p.m. Leg 2 — Saturday, Nov. 23: Houston at Sporting KC, 7:30 p.m. Western Conference Leg 1 — Sunday, Nov. 10: Portland at Real Salt Lake, 9 p.m. Leg 2 — Sunday, Nov. 24: Real Salt Lake at Portland, 9 p.m. MLS CUP Saturday, Dec. 7: at higher seed, 4 p.m.. Points Leaders Through Nov. 3 1. Jimmie Johnson 2,342. 2. Matt.. 21. Marcos Ambrose 836. 22. Juan Pablo Montoya 830. 23. Denny Hamlin 689. 24. Casey Mears 686. 25. Danica Patrick 611. 26. David Gilliland 610. 27. David Ragan 608. 28. Mark Martin 595. 29. Tony Stewart 594. 30. Dave Blaney 506. 31. Travis Kvapil 486. 32. David Reutimann 447. 33. J.J. Yeley 445. 34. A J Allmendinger 402. 35. Bobby Labonte 390. 36. David Stremme 362. 37. Michael McDowell 197. 38. Timmy Hill 180.. Money Leaders Through Nov. 37 21. Marcos Ambrose $4,481,304 22. David Ragan $4,101,988 23. Denny Hamlin $3,949,874 24. Casey Mears $3,944,179 25. Mark Martin $3,850,419 26. Jeff Burton $3,764,013 27. Tony Stewart $3,710,624 28. David Gilliland $3,654,686 29. Travis Kvapil $3,644,897 30. Danica Patrick $3,375,030 31. David Reutimann $3,296,100 32. Dave Blaney $3,283,919 33. J.J. Yeley $3,071,053 34. Bobby Labonte $2,928,477 35. Josh Wise $2,853,241 36. Landon Cassill $2,672,706 37. Joe Nemechek $2,652,458 38. Michael McDowell $2,497,398 39. David Stremme $2,306,964 40. A J Allmendinger $1,946,387 41. Brian Vickers $1,866,055 42. Timmy Hill $1,558,678 43. Austin Dillon $1,533,233 44. Trevor Bayne $1,316,064 45. Scott Speed $1,113,344 46. Regan Smith $1,019,772 47. Mike Bliss $814,433 48. Ken Schrader $749,047 49. Terry Labonte $718,975 50. Michael Waltrip $694,209 (Matt Kenseth)’ Southern 500 (Matt Kenseth) May 18 — x-Sprint Showdown (Jamie McMurray) May 18 — x-NASCAR Sprint All-Star Race (Jimmie Johnson) May 26 — Coca-Cola 600 (Kevin Harvick) June 2 — FedEx 400 benefiting Autism Speaks (Tony Stewart) June 9 — Party in the Poconos 400 presented by Walmart (Jimmie Johnson) June 16 — Quicken Loans 400 (Greg Biffle) June 23 — Toyota/Save Mart 350 (Martin Truex Jr.) June 30 — Quaker State 400 (Matt Kenseth) July 6 — Coke Zero 400 powered by Coca-Cola (Jimmie Johnson) July 14 — Camping World RV Sales 301 (Brian Vickers) July 28 — Crown Royal Presents The Samuel Deeds 400 at The Brickyard ) Sept. 1 — AdvoCare 500 at Atlanta (Kyle Busch) Sept. 7 — Federated Auto Parts 400 (Carl Edwards) Sept. 15 — GEICO 400 (Matt Kenseth) Sept. 22 — Sylvania 300 (Matt Kenseth) Sept. 29 — AAA 400 (Jimmie Johnson) Oct. 6 — Hollywood Casino 400 (Kevin Harvick)
Avondale, Ariz. Nov. 17 — Ford EcoBoost 400, Homestead, Fla. x-non-points race Rookie Standings Through Nov. 3 1. Ricky Stenhouse Jr., 216 2. Danica Patrick, 195 3. Timmy Hill, 160
NASCAR Nationwide Points Leaders Through Nov. 2 1. Austin Dillon, 1,107. 2. Sam Hornish Jr., 1,101. 3. Regan Smith, 1,053. 4. Elliott Sadler, 1,026. 5. Justin Allgaier, 1,022. 6. Brian Scott, 1,010. 7. Trevor Bayne, 1,009. 8. Brian Vickers, 970. 9. Kyle Larson, 945. 10. Parker Kligerman, 924. 11. Alex Bowman, 851. 12. Nelson Piquet Jr., 801. 13. Mike Bliss, 780. 14. Travis Pastrana, 702. 15. Michael Annett, 639. 16. Jeremy Clements, 606. 17. Mike Wallace, 574. 18. Reed Sorenson, 524. 19. Joe Nemechek, 481. 20. Eric McClure, 465. 21. Brad Sweet, 391. 22. Cole Whitt, 391. 23. Johanna Long, 391. 24. Landon Cassill, 348. 25. Kevin Swindell, 323. 26. Jeffrey Earnhardt, 315. 27. Blake Koch, 310. 28. Jeff Green, 246. 29. Dexter Stacey, 245. 30. Jamie Dick, 236. 31. Joey Gase, 227. 32. Robert Richardson Jr., 222. 33. Josh Wise, 207. 34. Chris Buescher, 199. 35. Hal Martin, 186. 36. Kenny Wallace, 155. 37. Kevin Lepage, 148. 38. Juan Carlos Blum, 140. 39. Jason White, 138. 40. Kyle Fowler, 119. 41. Drew Herring, 118. 42. Carl Long, 115. 43. Ryan Reed, 111. 44. Mike Harmon, 106. 45. Ken Butler, 99. 46. T.J. Bell, 89. 47. Max Papis, 81. 48. Harrison Rhodes, 78. 49. Daryl Harr, 78. 50. Danny Efland, 78. Money Leaders Through Nov. 2 1. Sam Hornish Jr., $1,116,882 2. Austin Dillon, $1,086,449 3. Kyle Busch, $1,047,215 4. Elliott Sadler, $909,392 5. Regan Smith, $867,673 6. Trevor Bayne, $860,962 7. Brian Vickers, $856,177 8. Kyle Larson, $850,438 9. Justin Allgaier, $833,365 10. Brian Scott, $820,888 11. Parker Kligerman, $784,526 12. Alex Bowman, $766,507 13. Nelson Piquet Jr., $711,557 14. Travis Pastrana, $696,737 15. Mike Bliss, $694,872 16. Mike Wallace, $657,081 17. Jeremy Clements, $634,572 18. Brad Keselowski, $628,485 19. Reed Sorenson, $601,657 20. Joe Nemechek, $571,322 21. Eric McClure, $570,337 22. Michael Annett, $530,324 23. Blake Koch, $461,608 24. Joey Logano, $461,210 25. Jeff Green, $450,370 26. Matt Kenseth, $438,232 27. Landon Cassill, $434,363 28. Johanna Long, $427,802 29. Jeffrey Earnhardt, $361,389 30. Brad Sweet, $337,655 31. Robert Richardson Jr., $336,976 32. Josh Wise, $315,172 33. Joey Gase, $307,710 34. Jamie Dick, $295,127 35. Cole Whitt, $292,888 36. Hal Martin, $284,238 37. Dexter Stacey, $283,177 38. Kevin Harvick, $281,560 39. Kasey Kahne, $260,130 40. Kevin Swindell, $242,268 41. Juan Carlos Blum, $232,966 42. Jason White, $224,408 43. J.J. Yeley, $201,344 44. Kevin Lepage, $184,136 45. Ty Dillon, $178,995 46. Carl Long, $177,080 47. Mike Harmon, $174,179 48. Dale Earnhardt Jr., $154,750 49. Ken Butler, $150,229 50. Kyle Fowler, $145,223 Recent Schedule-Winners) Sept. 6 — Virginia 529 College Savings 250 (Brad Keselowski) Sept. 14 — Dollar General 300 powered by Coca-Cola (Kyle Busch) Sept. 21 — Kentucky 300 (Ryan Blaney) Sept., Avondale, Ariz. Nov. 16 — Ford EcoBoost 300, Homestead, Fla.
ATP World Tour ATP Finals Results Friday At O2 Arena London Purse: $6 million (Tour Final) Surface: Hard-Indoor Round Robin Singles Group A Stanislas Wawrinka (7), Switzerland, def. David Ferrer (3), Spain, 6-7 (3), 6-4, 6-1. Rafael Nadal (1), Spain, def. Tomas Berdych (5), Czech Republic, 6-4, 1-6, 6-3. Standings: Nadal 3-0 (6-1); Wawrinka, 2-1 (4-4); Berdych, 1-2 (4-4); Ferrer, 0-3 (1-6). Group B Standings: Djokovic, 2-0 (4-2); Federer, 1-1 (3-2); del Potro, 1-1 (3-3); Gasquet, 0-2 (1-4). Doubles Group A Standings: Dodig-Melo, 2-0 (4-2); Fyrstenberg-Matkowski, 1-1 (3-2); Bryan-Bryan, 1-1 (3-3); Qureshi-Rojer, 0-2 (1-4). Group B Marcel Granollers and Marc Lopez (3), Spain, def. Leander Paes, India, and Radek Stepanek (7), Czech Republic, 4-6, 7-6 (5), 10-8. Alexander Peya, Austria, and Bruno Soares (2), Brazil, def. David Marrero and Fernando Verdasco (6), Spain, 6-3, 7-5. Standings: Peya-Soares, 2-1 (5-3); Marrero-Verdasco, 2-1 (4-2); Granollers-Lopez, 1-2 (3-5); Paes-Stepanek, 1-2 (3-5).
College Football Top 25 Schedule All Times EST Saturday, Nov. 9 No. 1 Alabama vs. No. 10 LSU, 8 p.m. No. 3 Florida State at Wake Forest, Noon No. 7 Auburn at Tennessee, Noon No. 9 Missouri at Kentucky, Noon No. AP Top 25 Poll The Top 25 teams in The Associated Press college football poll, with firstplace votes in parentheses, records through Nov. 2, total points based on 25 points for a first-place vote through one point for a 25th-place vote, and previous ranking: Record Pts Pv 1. Alabama (52) 8-0 1,491 1 2. Oregon (2) 8-0 1,418 2 3. Florida St. (6) 8-0 1,409 3 4. Ohio St. 9-0 1,315 4 5. Baylor 7-0 1,234 5 6. Stanford 7-1 1,214 6 7. Auburn 8-1 1,082 8 8. Clemson 8-1 1,059 9 9. Missouri 8-1 956 10 10. LSU 7-2 863 11 11. Texas A&M 7-2 861 12 12. Oklahoma 7-1 816 13 13. South Carolina 7-2 769 14 14. Miami 7-1 737 7 15. Oklahoma St. 7-1 662 18 16. UCLA 6-2 515 17 17. Fresno St. 8-0 493 16 18. Michigan St. 8-1 478 24 19. UCF 6-1 472 19 20. Louisville 7-1 385 20 21. Wisconsin 6-2 342 22 22. N. Illinois 9-0 322 21 23. Arizona St. 6-2 197 25 24. Notre Dame 7-2 164 NR 25. Texas Tech 7-2 102 15 Others receiving votes: Texas 34, Georgia 32, BYU 28, Mississippi 17, Houston 9, Minnesota 7, Michigan 6, Washington 6, Ball St. 4, Duke . Harris Top 25 Poll The Top 25 teams in the Harris Interactive College Football Poll, with firstplace votes in parentheses, records through Nov. 2, total points based on 25 points for a first-place vote through one point for a 25th-place vote and previous ranking: Record Pts Pv 1. Alabama (95) 8-0 2,613 1 2. Oregon (8) 8-0 2,491 2 3. Florida State (2) 8-0 2,444 3 4. Ohio State 9-0 2,317 4 5. Baylor 7-0 2,167 5 6. Stanford 7-1 2,102 6 7. Clemson 8-1 1,890 8 8. Missouri 8-1 1,725 9 9. Auburn 8-1 1,672 11 10. Oklahoma 7-1 1,572 10 11. LSU 7-2 1,467 12 12. Texas A&M 7-2 1,426 13 13. Miami (FL) 7-1 1,344 7 14. Oklahoma State 7-1 1,315 15 15. South Carolina 7-2 1,175 17 16. Louisville 7-1 1,013 16 17. Fresno State 8-0 989 18 18. Michigan State 8-1 789 23 19. UCLA 6-2 768 19 20. Northern Illinois 9-0 727 20 21. Central Florida 6-1 567 22 22. Wisconsin 6-2 450 24 23. Texas Tech 7-2 409 14 24. Arizona State 6-2 255 25 25..
Transactions BASEBALL National League NEW YORK METS — signed RHP Joel Carreno and INF/OF Anthony Seratelli to minor league contracts. ANAHEIM DUCKS — Assigned G Igor Bobkov and D Stefan Wang from Norfolk (AHL) to Utah (ECHL). DALLAS STARS — Recalled D Aaron Rome from Texas (AHL). Loaned D Kevin Connauton to Texas for a conditioning assignment. DETROIT RED WINGS — Recalled C Luke Glendening and D Xavier Ouellet from Grand Rapids (AHL). Assigned D Adam Almquist to Grand Rapids. EDMONTON OILERS — Traded D Ladislav Smid and G Olivier Roy to the Calgary Flames for C Roman Horak and G Laurent Brossoit. FLORIDA PANTHERS — Fired coach Kevin Dineen and assistant coaches Gord Murphy and Craig Ramsey. Named Peter Horachek interim coach and Brian Skrudland and John Madden assistant coaches. MONTREAL CANADIENS — Assigned D Greg Pateryn to Hamilton (AHL). OTTAWA SENATORS — Reassigned G Nathan Lawson to Binghamton (AHL). WASHINGTON CAPITALS — Signed LW Jason Chimera to a two-year contract extension. American Hockey League SAN ANTONIO RAMPAGE — Named Tom Rowe coach of San Antonio (AHL). ECHL ECHL — Suspended Ontario D Adrian Van de Mosselaer four games, Elmira D Mathieu Gagnon indefinitely and fined them, and Fort Wayne F Kaleigh Schrock, undisclosed amounts for their actions during recent games. Central Hockey League RAPID CITY RUSH — Signed D Sean Erickson. HORSE RACING THOROUGHBRED AFTERCARE ALLIANCE — Named James Hastie exective director. COLLEGE NCAA — Suspended Rutgers men’s basketball F Junior Etou six games for accepting impermissible benefits from a third party from overseas.
SPORTS BRIEFS • Notre Dame’s Biedscheid to sit out 2013-14 season SOUTH BEND (AP) — Not.
Notre Dame to honor Phelps 40 years after upset SOUTH BEND (AP) —.
Golden Gophers suspend Walker for six games MIN.
Heat’s Chalmers fined $15,000 NEW YORK (AP) —.
Buccaneers place RB Martin on injured reserve T.
Central Michigan routs Division III Manchester MOUNT PLEASANT, Mich. (AP) — John Simons scored 27 points and Central Michigan overwhelmed Manchester 101-49 on Friday in the Chippewas’ season opener. Braylon Rayson scored 12 points and Austin Keel 11 as the Chippewas emptied the bench with twelve players scoring and eleven snaring rebounds. The Spartans committed 22 turnovers, 17 on steals. Central Michigan outrebounded Manchester 52-28. Manchester took the early lead and held it until 15:32 left in first half. That is when Rayshawn Simmons made two free throws for a 10-9 CMU lead, and the Chippewas pulled away from there. The Spartans were led by Brady Dolezal with 18 points. He made 7 of 12 shots but his team only shot 33.9 percent to CMU’s 48.6, led by Simons’ 10-of-13 effort. The Chippewas shot 20 more free throws than Manchester, making 22 of 28. The game was an exhibition for Manchester, a Division III school from Indiana.
AGRIBUSINESS •
Fall fruit care: Mulching strawberries, storing apples. ELYSIA RODGERS. Fall is one of my favorite times of the year. You get to enjoy the coolness of the weather after withering all summer; crops, vegetables and fruits are being harvested and enjoyed, and it is a last-minute rush to get all of those outdoor projects done before the cold of winter. Rosie Lerner, Purdue Horticulture Specialist, offers this advice on putting your strawberries to bed and storing apples for winter.
•
Strawberries Perhaps the last garden chore of the season is tucking in the strawberry planting for winter. Strawberry plants have already set their buds for next spring’s flowers, and the crop can be lost unless you protect them from harsh winter conditions. degrees.
Storing apples Most apple trees are bearing above-average loads this year,ars — temperatures between 27.8 and 29.4 F, depending on the cultivar, and frozen fruit will deteriorate rapidly. Straw-lined pits, buried tiles and other storage methods are at the mercy of the weather and may give satisfactory results some years, but may be a loss during others. ELYSIA RODGERS is the agriculture and natural resources director for the Purdue University Cooperative Extension Service in DeKalb County.
PURDUE NEWS SERVICE
Soybean virus spotted Soybean vein necrosis virus, shown above,. Common diseases such as brown spot, downy mildew and sudden death syndrome can all be mistaken for SVNV, so it’s important for growers to know the difference.
...
Covering All Of Your Acres See us for all your farm lending needs including operating, machinery, and real estate.
AP
Farmers in many parts of the country found that adequate rain and cooler
temperatures at pollination time produced exceptional results for corn.
Record corn crop predicted But prices expected to be lowest since 2010,
Family works to save old Indiana grain elevator GASTON (AP) — Decades after it was a bustling business near downtown Gaston, an old grain elevator still stands at one end of Main Street, a crumbling landmark from the rural town’s agricultural past. Now Joe Clock is “a farmer on a mission,” hoping to see the empty wood-and-aluminum structure revived rather than torn down. Clock, his wife, Linda and daughter, Candy Clock, have been cutting back trees that had grown up around the buildings, and are trying to track down grant funds that might help to pay for renovation, The Star Press reported. The Clocks don’t own the property, which was purchased in a tax sale several years ago; they’re just helping clean it up. As someone who used to haul grain to the elevator back when it was in business, however, Joe Clock believes strongly that it should be preserved as a piece of local history, and said he has urged the current owner not to tear it down. Noting that another grain elevator closer to the Cardinal Greenway in Gaston was torn down years ago, Joe Clock suggests that this one could be preserved — and possibly even returned to some use — in part to give trail users something they could stop by when they arrive at the Gaston trailhead. The Main Street elevator and adjoining structures —
AP
The Clock family hopes to save this grain elevator. They want to restore the buildings which were built in the 1950s.
including a rusting metal Quonset hut — were built in 1953 and 1957, according to records in the Delaware County assessor’s office. It was still a working elevator when former owner Charles Kirtley owned it, from 1972 until 1983; Kirtley told The Star Press in 2005 that he closed the business when too many farmers weren’t able to pay their feed bills anymore. Now the interior of the elevator is littered with graffiti and trash, and lower portions of the wavy aluminum siding on the exterior are missing where people have stolen it to sell, Clock said. He can still identify where various chutes and levers once were — or where remains still hang from the wooden beams overhead — and he notes the top of the elevator would
Grain futures reported lower on Chicago board CHICAGO — Grain futures were mostly lower Thursday on the Chicago Board of Trade. Wheat for December delivery fell .25 cent to $6.53 a bushel; December corn fell .75 cent to $4.2050 a bushel; December oats were 4.50 cents lower at $3.39 a bushel; while January soybeans advanced 11.50 cents to $12.6650 a bushel.
Today’s KPC
WILD
bingo WIN # #
COVERALL
Howe Office
Waterloo & Woodburn Offices
260-562-1054
260-837-3080
J oe Walter
Stephanie Walter Dean Bassett
Dave Gurtner Jackie Freeman Larry Kummer Eric Aschleman.
$
500 #
Complete rules on back of card.
G
49
#
O
66
11-9
be a great vantage point for people to visit for the view if the building could be fixed up. J.P. Hall, Eastern Regional Office director for Indiana Landmarks, agreed that grain elevators and related structured are “absolutely important” elements of local agricultural history, and can be saved if there is enough local buy-in and support — as well as a clear plan for what function the restored structure would serve. Hall cited the repurposed grain elevator in historic downtown Farmland, now housing Old Mill Shoppes, as an example of successful rehab and reuse of an old grain elevator. The Gaston elevator’s location within blocks of the Greenway has some value, Hall said.
Storage centers are filling up fast LAFAYETTE . But Purdue agricultural economist Chris Hurt says that’s not the case this year. He says the state’s grain storage centers are filling up thanks to this year’s favorable weather. Hurt said Indiana’s corn crop is expected to be near a record-high 1 billion bushels this year. Soybean production is projected to be nearly normal at about 250 million bushels.
NATION • WORLD •
Briefs • Park officials try to track landslide ANCH. An estimated 30,000 yards of debris fell from 600 feet above the road. Some debris was as thick as 15 feet and the size of a small cabin, according to Capps. There were no reported casualties.
Protesters want justice for woman shot on front porch DEARBORN HEIGHTS, Mich. (AP) — said.. A vigil was held Wednesday at the home. About 50 people rallied Thursday outside the police department. The homeowner hasn’t been arrested or named by police.
Rickets making comeback in UK.
Does Chicago still have tallest tower?
AP
A girl looks down from “The Ledge,” will decide whether a design change affecting One World Trade Center’s needle disqualifies its hundreds of feet from being counted, which would deny the building the title of nation’s tallest..”
CBS admits being fooled by source for Benghazi story
AP
Lonna McKinley, of the National Museum of the U.S. Air Force, looks through the log for President John F. Kennedy’s Air Force One, rear, Friday, at the museum in Dayton, Ohio. The
blanket at center was used by President Kennedy on the plane, and the blanket at left was used by First Lady Jacqueline Kennedy on the plane.
JFK artifacts on display DAYTON, Ohio (AP) —..
NEW YORK (AP) —. In that story, which was stripped from the “60.
Mexican citizens take lives back from cartel.
COMICS • TV LISTINGS •
DUSTIN BY STEVE KELLEY & JEFF PARKER
Woman has bar set high in search for man
FOR BETTER OR FOR WORSE BY LYNN JOHNSTON
GARFIELD BY JIM DAVIS
BLONDIE BY YOUNG AND MARSHALL
•
process of dating a smooth and easy one. For others it’s complicated, but not impossible. I agree that the basis of strong relationships is friendship and compatibility..
SATURDAY 9, 2013 6:00
Change baby’s position to prevent flat spots during infancy. ASK DOCTOR K (Dr. Anthony Komaroff). It also makes their skulls sensitive to pressure, especially when that pressure is always in the same place. Flat spots don’t cause brain damage or affect brain function. They can, however, lead to teasing if the shape is very abnormal. To prevent flat spots, change the position of your baby’s head throughout the day: • Give your baby “tummy time” when he is awake and being watched. Do this for at least a few
Saturday prime-time TV listings grid, 6:00 p.m. to 10:30 p.m. (channel-by-channel program schedule).
On this date Nov. 9: • In 1872, fire destroyed nearly 800 buildings in Boston. • In 1938, Nazis looted and burned synagogues as well as Jewish-owned stores and houses in “Kristallnacht.” • In 1965, the great Northeast blackout occurred as a series of power failures lasting up to 13 1/2 hours left 30 million people in seven states and part of Canada without electricity.
THE BORN LOSER BY ART & CHIP SANSOM
Almanac •
DEAR DOCTOR
relationship won’t work if I can’t bring myself to be intimate with the person. In all my years of dating, I have been in love only twice. Any help would be appreciated. — LOST. (DEAR ABBY, Jeanne Phillips) DEAR LOST: I wish I had a magic lamp that would give you what you’re looking for in a puff of smoke, but I don’t. What I can offer is that you need to continue looking for someone who is as independent as you are, so you can find an attractive man whose needs are similar to yours. Some couples find the. DR. KOMAROFF is a physician and professor at Harvard Medical School. His website is AskDoctorK.com.
Crossword Puzzle •
Kerry warns gaps remain in nuclear talks with Iran GENEVA (AP) — U.S. Secretary of State John Kerry warned Friday of significant differences between Iran and six world powers trying to fashion a nuclear agreement, as he and three European foreign ministers tried. Kerry arrived from Tel Aviv after meeting with Israeli Prime Minister Benjamin Netanyahu during which Kerry.” He told reporters, “There is not an agreement at this point in time.”. The six powers negotiating with Tehran are considering a gradual rollback of sanctions that have crippled Iran’s economy. In exchange they demand initial curbs on Iran’s nuclear program, including a cap on enrichment to a level that can be turned quickly to weapons use. The six have discussed ending a freeze on up to $50 billion (37 billion euros) in overseas accounts and lifting restrictions on petrochemicals, gold and other precious metals.
AP
U.S. Secretary of State John Kerry shakes hands with European Union foreign policy chief Catherine Ashton before their meeting with Iranian Foreign Minister Mohammad Javad Zarif in Geneva Friday.
DeKalb, LaGrange, Noble and Steuben Counties
HOMES / RENTALS
To ensure the best response to your ad, take the time to make sure your ad is correct the first time it runs. Call us promptly to report any errors. We reserve
ADOPT: A bright future awaits the child that blesses my home. Active, creative, financially secure woman seeks to adopt a baby. Expenses Paid. Call Sarah 1-855-974-5658
Kendallville Bridgeway Evangelical Church 210 Brian’s Place, East of Rural King Sat. • 10-4 Lots of Craft Vendors, Pumkin Rolls, Fudge, Cookies, &other Baked goods.Hillbilly Hot Dogs
Adopt: Our hearts reach out to you. Loving cou ple seeks to adopt a newborn bundle of joy to complete our family and share our passions for cooking, travel & education. Please call Maria and John 888-988-5028 or johnandmariaadopt.com
JOBS EMPLOYMENT ■ ◆ ■ ◆ ■
Drivers
WEST NOBLE SCHOOL CORPORATION in Ligonier, IN is looking for substitute bus drivers. Training is included. Apply at: West Noble Transportation Office or call Kathy Hagen (260) 894-3191 ext. 5036
Factory seeking
REWARD!-Reward for safe return of 3 adult dogs missing 10/30/13. (2) Shih Tzus, (1) Yorkie. Garwick’s the Pet People. 419-795-5711. garwicksthepet people.com. (A)
BAZAARS Holiday Bazaar New Life Tabernacle 609 Patty Lane Kendallville Friday • 9 - 6 Sat. • 9 - 3 Call 260 347-8488
SEARCHING FOR THE LATEST NEWS?
CLICK ON
PFG Customized Distribution
Call (260)343-4336, or (260)316-4264
Auditor
NOTICES
Delivery Drivers
is adding Class A Drivers at the Kendallville Distribution Center. Scheduled dedicated regional teamroutes, four-day weekly delivery schedule. Guaranteed weekly pay & excellent benefits. EOE.)
Drivers
QUALITY AUDITOR
CLASSIFIED
full time and first shift. Must ensure high level customer service and communication skills. Must be able to correct quality issues and complaints. Must be able to analyze data, product specifications, formulate and document quality standards. Must be able to read blueprints and fill out SPC charts.
Don’t want the “treasure” you found while cleaning the attic? Make a clean sweep ... advertise your treasures in the Classifieds.
Please send resume and qualifications to:
EMPLOYMENT
EMPLOYMENT
■ ● ■ ● ■
■ ❐ ■ ❐ ■
■ ✦ ■ ✦ ■
General
Health
General
FWT, LLC. A leading manufacturer of utility & telecommunication towers for over 50 years.
Quality Auditor PO Box 241 Ashley, IN 46705
Minimum 3 years experience. Must be able to pass an AWS D1.1 certification.
FITTERS/LAYOUT Must be able to read blueprints & obtain AWS certification.
QUALITY ASSURANCE INSPECTORS Ultrasonic testing, mag particle testing & visual testing experience in weld inspection required.
■ ◆ ■ ◆ ■
Maintenance
We are hiring for the following positions:
Apply in person at: Kendallville Manor 1802 Dowling St. Kendallville, IN EOE
■ ❐ ■ ❐ ■
OR EMAIL RESUME TO hicksville@fwtllc.com OR FAX RESUME TO
■ ● ■ ● ■
•Slitter Operator
We are not a mill or foundry. Our working conditions are great. Benefits include: 401(K), Health, Dental, Disability, Life Insurance and Bonus opportunities! Pay will be commensurate with experience.
■■■■■■■■■■■■■ General
JOURNAL GAZETTE Routes Available In: Kendallville, Angola, & Wolcottville
We love kids who love the outdoors!
If you know a young person who has an outdoor story to tell, send the story and any photos to The Outdoor Page editor Amy Oberlin at amyo@kpcnews.net.
UP TO $1000/ MO.
Call 800-444-3303 Ext. 8234
Please include a daytime contact phone number.
■■■■■■■■■■■■■
photo EPRINTS
R
Hundreds of published and non-published photos available for purchase! ❊
❊
Magic Coil Products Attn: HR Dept. 4143 CR 61 Butler, IN 46721
■ ✦ ■ ✦ ■
Your connection to
❊
Go to:
kpcnews. mycapture.com
local and world news
kpcnews.com
Sudoku Puzzle Complete the grid so that every row, column and 3x3 box contains every digit from 1 to 9 inclusively.
Novae Corporation is a growing trailer manufacturer with locations in Markle, Columbia City, and North Manchester, Indiana. With our continued growth comes the need for additional qualified individuals at our Markle and North Manchester facilities in the following positions:
Mig Welders • Production welding experience of 2 or more years is required. • Ability to read blueprints, tape measures, and general knowledge of fabrication.
Assembly/Final Finish • Experience in construction and assembly. • Experience in wiring, decking, axle installation, metal hanging, and roofing. • Applicants must possess an eye for detail and strive to produce a quality product.
Shipping & Receiving • Forklift experience is preferred, but not required. • Ability to verify and keep records of incoming and outgoing shipments. • Prepare items (trailers and accessories) for shipment.
Paint
Automotive manufacturer in northeast Indiana has the following opening for a result-oriented Maintenance team member.
• General knowledge of preparation and painting process. • Know how to use powder guns and gauges. • Ability to determine paint flow, viscosity, and coating quality by performing visual inspections.
Must have extensive industrial electrical knowledge, mechanical aptitude, read/interpret electrical and electronic circuit diagrams and familiar with computers and programmable logic controllers.
All applicants must have the following: • High School diploma or GED. • Ability to pass pre-employment drug test. • Ability to lift 80 lbs. on a regular basis. • Proven dependability. • Excellent work and attendance history.
Experience with preventative maintenance programs and pneumatics. Must be able to work any shift. We offer a comprehensive benefit package including Medical, Dental, Vacation, 401K, Holidays and more.
Novae Corp. is an equal opportunity employer and maintains a Drug and Alcohol Free Workplace for all employees. Job offers are contingent upon successful completion of a pre-employment drug screen, and employees must maintain compliance with the policy for the duration of employment. No phone calls please! Applicants that have already applied are still being considered.
Qualified candidates should send their resume and salary requirements to:
Applications can be submitted in person at:
HUDSON INDUSTRIES ATTN: Human Resource Manager PO Box 426, Hudson, IN 46747 Jody.Blaskie@midwayproducts.com EOE
EMPLOYMENT
•General Labor
Please respond via:
APPLY IN PERSON AT 761 W. High Street Hicksville, OH 43526 419-542-1420
•Slitter Set-up / Helper •Crane Operator
• CNA’s • LPN’s • RN’s
Positions are for 1st and 3rd shifts and requires candidates to be able to pass a pre-employment physical and drug screen.
Steel Service Center needs employees and is WILLING TO TRAIN for the following 1st and 2nd shift positions: •Barcoding
WELDERS
419-542-0019
kpcnews.com
Health “Residents First.. Employees Always..”
EMPLOYMENT
One Novae Parkway, Markle, IN or
11870 N - 650 E, North Manchester, IN DIFFICULTY: 4 (of 5) 11-09
or Online: adnum=80212245
EMPLOYMENT
Instructor
◆ ❖ ◆ ❖ ◆
WELDING INSTRUCTOR
Maintenance EOE
At Trine University Now Hiring -
G&M Media Packaging is seeking a selfmotivated individual interested in working in a non-automotive environment to join our 2nd Shift maintenance team. The position will require you to have a proven background in trouble shooting automated equipment, carry out preventative and predictive maintenance programs and the ability to read prints and schematics when necessary to be able to trouble shoot electrical issues. A mechanical aptitude is a must, as it will be a very hands-on position. A background in metal stamping and tooling would definitely be a plus. The right individual must be willing to work overtime as needed and have demonstrated interpersonal skills and excellent attendance. You must be able to pass a drug screen and background check to be considered for the position. If you feel you fit the above qualifications and want to join a company that has competitive wages and excellent benefits, please reply via email to HR@gm-media packaging.com OR mail to: Human Resources P. O. Box 524 Bryan, Ohio 43506
◆ ❖ ◆ ❖ ◆ ■ ❏ ■
❏ ■
General
Pokagon State Park is now hiring for winter seasonal help. Positions available include toboggan workers, rental room attendants and laborers. Wages begin at $8.06/hour. Must be available weekends and during the Christmas school break. Must be able to lift 50 lbs repetitively, be 18 years of age or older, have reliable transportation to work and be able to work outside for extended periods of time. Interested applicants should contact the Park Office for further information at:
260-833-2012
AGRIBUSINESS • Every Saturday
The
Star
THE NEWS SUN THE
HERALD
REPUBLICAN Call 1-800-717-4679 today to begin home delivery!
Hamilton Lake
Ashley
MAINTENANCE TECH Bon Appetit Management Company
Help Wanted
read up on the latest trends, technology and predictions for the future of farming.
MOBILE HOMES FOR RENT
Pokagon SP is an Equal Opportunity Employer
■ ❏ ■ ✦ ✦ Office
✦
❏ ■ ✦
✦
All Positions Please call:
(260) 665-4811 to schedule an interview ❖ ❖ ❖ ❖ ❖ ❖ ❖ Security
Security Officer Positions (Angola, Butler & Auburn Areas) $8.50 - $10.00 Securitas Security Services, USA is now accepting applications for Security Officers. We have open positions available in Angola, Butler & Auburn, IN. Some essential functions of the job include, but not limited to: Access control, observe and report suspicious activity, interior and exterior patrols. Qualified applicants must be at least 18 years of age, have a high school diploma or GED and must be able to pass a drug screen and background investigation. PLEASE APPLY AT: SECURITASJOBS .COM 260 436-0930 EOE/M/F/D/V Drivers Driver Trainees Needed Now! Learn to drive for US Xpress! Earn $800+ per week! No experience needed! CDL-Trained and Job Ready in 15 days! 1-800-882-7364 General EQUIPMENT FABRICATOR WANTED--2 years equipment fabrication or maintenance experience required. MIG and TIG welding skills required. Tools will be required. Starting scale $14-$18 based on aptitude scores and ex perience. Great Work Hours and Benefit Package. Career position, located in Ft. Wayne, IN. Indoor work w/ overtime. 260-422-1671, ext. 106. (A)
PART TIME (Fill-In) RECEPTIONIST NEEDED Must have strong organizational skills & ability to multi-task and prioritize. Email resume to:
resume.angola@ yahoo.com ✦
✦
✦
✦
✦
Sudoku Answers 11-09
GOBBLE UP ON SAVINGS AT ASHLEYHUDSON APTS! $99 Move-In Special 1 BR Apartments available Water, Sewer, Trash pickup, & Satellite TV service included in rent! Rental assistance available to those who qualify. Eligibility requirements: 62 years of age or older, disabled any legal age may apply. Rent based on all sources of income and medical expenses.
FREE HEAT! AS THE TEMPERATURE GOES DOWN SO DOES OUR RENT
DEPOSITS START AT
$
99!
Thanksgiving Special Open House 2 Days Only Nov. 8th & 9th $200 off 2nd Month’s rent $0 Application Fee • Free Heat & Water • Pet Friendly Community
CALL TARA TODAY! NELSON ESTATES
STORAGE Auburn Inside winter RV storage $40.00 monthly. 260 920-4665 after 7pm%)
OPEN HOUSES
1815 Raleigh Ave., Kendallville 46755 nelsonestates@mrdapartments.com mrdapartments.com
BANKRUPTCY FREE CONSULTATION
$25.00 TO START Payment Plans, Chapter 13 No Money down. Filing fee not included. Sat. & Eve. Appts. Avail. Call
Collect: 260-424-0954 act as a debt relief agency under the BK code
Divorce • DUI • Criminal • Bankruptcy
HOME IMPROVEMENT
All Phase Remodeling and Handyman Service - No Job too Big or Small !!! Free Estimates Call Jeff 260-854-9071 Qualified & Insured Serving You Since 1990
General Practice KRUSE & KRUSE,PC
ROOFING/SIDING
260-925-0200 or 800-381-5883 A debt relief agency under the Bankruptcy Code.
FREE ESTIMATES
POLE BUILDINGS We Build Pole Barns and Garages. We also re-roof and re-side old barns, garages and houses. Call 260-632-5983. (A)
County Line Roofing Tear offs, wind damage & reroofs. Call (260)627-0017
CHILD CARE ALBION Child Care available in smoke-free home. Close to schools & factories. 1st shift & after school Availability (260)564-3230
HOMES FOR RENT Angola Pine Canyon Lake 4 BR, 3 1/2 BA 4077 Sq. Ft. • 1000 Sq. Ft. deck. • 382 ft. Lake front, Year round rental, non sports lake. Beautiful home! $1,350. (843)450-7810
up to $1000.00
Antique & Collectible Show National Guard Armory 130 West Cook Rd. Ft. Wayne, IN Sat. Nov 9 • 10-5 Sun. Nov 10 • 10-4 $2 Admission Free Parking
FURNITURE 2ND BEST FURNITURE Thurs & Fri 10-5, Sat 8-3 8451 N. S.R. 9 1 MILE N. OF 6 & 9 Brand NEW in plastic!
QUEEN PILLOWTOP MATTRESS SET Can deliver, $125. (260) 493-0805 Very nice dining room table, 6 chairs, custom pad, 2 leaves. $325. 260-495-4124
BUILDING MATERIALS
Angola-Crooked Lake $500 mo.+ Deposit, New Flooring/ No pets 432-1270/ 624-2878 Auburn Land contract, 3 BR garage, $500/mo. 260 615-2709 Kendallville 353 N. Main St. 3 BR $640/mo. + dep. & util. 318-5638 Waterloo Land contract, 3 BR garage, $450/mo. 260 615-2709
KPC
Contest
FARM/GARDEN
MOBILE HOMES FOR SALE Garrett MOBILE HOMES FOR AS LOW AS $550.00 A MONTH - LEASE TO OWN! WE HAVE 2 & 3 BR TO CHOOSE FROM. WE ALSO DO FINANCING. CALL KATT TODAY 260-357-3331
REALLY TRULY LOCAL...
KPC Phone Books Steuben, DeKalb, Noble/LaGrange
Poulen Chain Saw 14” works good, $25.00. Butler, (260) 760-0419
Junk Auto Buyer
ANTIQUES
260 349-2685
Kendallville OPEN HOUSE 230 E. RUSH ST. SUNDAY, NOV. 10 1-4 The character of an old home with modern updates. Original wood floors, NEW windows, carpet, stainless appliances, bathroom & siding. 1650 sq. ft. main floor laundry & master BR. 2 large BR up, corner fenced lot w/driveway. $98,500. 260 760-5056
8 - 1 gal. Glass Jugs. No chips or cracks. Clean, ready to use. $40.00. Call or text, (574) 535-3124
IVAN’S TOWING
All species of hard wood. Pay before starting. Walnut needed.
GARAGE SALES
BUSINESS & PROFESSIONAL
Cromwell Now Leasing Crown Pointe Villas Call (260) 856-2146 Handicap Accessible Equal Housing Opportunity “This institution is an equal opportunity provider, and employer.”
SETSER TRANSPORT AND TOWING
ATTENTION: Paying up to $530 for scrap cars. Call me 318-2571
TIMBER WANTED
Auburn $99 First Month 2BR-VERY NICE! SENIORS 50+ $465 No Smokers/ No Pets (260) 925-9525
MERCHANDISE UNDER $50
7' artificial Christmas tree w/standgreat condition $100 260-927-0221
WANTED TO BUY ASHLEY 506 South Union St. $500 before 11/10/13. $550 after • 668-4409
MERCHANDISE UNDER $50
MERCHANDISE
HR Quinton Fitness Treadmill/Club Track 510. Asking $350. text - 260 349-2793
Angola ONE BR APTS. $425/mo., Free Heat. 260-316-5659
AUTOMOTIVE/ SERVICES
USED TIRES Cash for Junk Cars! 701 Krueger St., K’ville. 260-318-5555
EXERCISE EQUIPMENT
Auburn
260-349-0996
Avilla 1 & 2 BR APTS $450-$550/ per month. Call 260-897-3188
AT YOUR SERVICE
Kendallville 1206 N Lima Rd. (SR3) Fri -Sat • 9-5 Household, Tools, (some old), Player Piano, DBL Garage door w/ track, Lawn Equip. & Lots of Miscellaneous!
Wolcottville 2 & 3 BR from $100/wk also LaOtto location. 574-202-2181
Ashley Hudson Apartments 830 W. State St. Ashley, In 46705 260-587-9171 For Hearing Impaired Only, Call TDD # 1-800-743-3333 This institution is an Equal Opportunity Provider & Employer.
NOW OFFERING WEEKLY RENTALS!
GARAGE SALES
Hamilton Lake 2 BR, updated, large kitchen & LR, one block to lake, nice park, others available. $450/mo. (260) 488-3163
HOMES
has an opening for
Welding Instructor
❖ ❖ ❖ ❖ ❖ ❖ Restaurants
RENTALS
Impact Institute
APARTMENT RENTAL
STUFF
EMPLOYMENT
APPLES & CIDER Mon.-Sat. • 9-5:30 Sun. • 11-5 GW Stroh Orchards Angola (260) 665-7607
(260) 238-4787
CARS 2008 Dodge Caliber 4 DR, White, Looks Brand New $6500 Call 897-3805 2007 Cadillac DTS 49,500 mi, good cond., white pearl, new brakes $13,500/OBO Call Bret @ 260 239-2705 2003 Chevy Blazer LS 4 x 4, Blk, V6, Fact. Mag Wheels, ABS, CD, No rust, Very Good Cond.. $4950 /obo (260) 349-1324 1998 Olds Achieva 136,000 miles, Exc. cond. $2100/ obo (260)316-5450 1 & ONLY PLACE TO CALL--to get rid of that junk car, truck or van!! Cash on the spot! Free towing. Call 260-745-8888. (A) Indiana Auto Auction, Inc.--Huge Repo Sale Thursday, Nov. 14th. Over 100 repossessed units for sale. Cash only. $500 deposit per person required. Register 8am-9:30am to bid. No public entry after 9:30am. (A)
TRUCKS 98 Ford F150XLT 4X4 4.6 V8, Miles 150,000, Auto/Air/Tilt/Cruise/ Pr.Windows/Locks Good Tires: $3900 Blakesley Auto Sales 260-460-7729
SUV’S
MOTORCYCLES 1997 Harley Davidson 1200 Sportster, 26k mi. $3,500/obo 260 668-0048
PETS/ANIMALS
MERCHANDISE UNDER $50
Coton de Tulear Puppies, Ready for Christmas, all white, 5 males. Call 260 668-2313
10 gal. Reptile Terrarium includes 2 lights, temp gauge & cover. $30.00 obo. Call or text, (260) 573-6851
FREE to good home: Kittens 12 weeks old, 1 Male, 1 Female , prefer to adopt together. (260) 349-9093
100 Firearm Publications. $20.00 for all. (260) 837-4775
SNOW EQUIPMENT Buhler Allied snowblower Model 6010 3 point hitch $1400.00 (260)337-5850
11 Boxes 20 ga. Slugs. $40.00 with belt (260) 349-3437 12’ Metal Single Person Tree Stand. $50.00. (260) 349-3437 13” RCA Color TV with Remote, $10.00. (260) 243-0383 1976 “Uncle Sam” Complete Set Bicentennial 7-Up Cans. $50.00. (260) 347-2291
WHEELS
EMPLOYMENT
AUTOMOTIVE/ SERVICES $ WANTED $ Junk Cars! Highest prices pd. Free pickup. 260-705-7610 705-7630
3 - 1 gal. Glass Jugs. 1 green, 2 brown, 1 brown has crack. Clean. $25.00. Call or text, (574) 535-3124 3 New 5”x5” Conabear Traps. $20.00. (260) 349-3437 4 ft. Christmas Tree in box & 2 boxes decorations & lights. $20.00. (260) 242-2689
Antique Single Bottom Plow. All metal except handles. $50.00. (260) 347-3388 Baby Bouncer Seat with netting, $5.00. Call or text, (260) 336-2109 Basket For Steps Very nice, clean. $15.00. (260) 927-5148 Casio Electric Piano. Model CTK-700. $50.00. Text for pic. (260) 573-9116 Chair. Good cond. Clean. No smoke, no dogs. Beige/gold with pattern. $45.00. (260) 349-1607 Collection of Cookbooks. All for $29.00 (260) 833-4232
Princess Diana Porcelain collectors doll, in box, $25.00. (260) 925-2579 Quilter Frame for hand quilting. $50.00. (260) 837-4775 Red Crushed Velvet, swivel, rocker chair. Good cond. $40.00. (260) 925-1125 Round Kerosene Heater. $40.00. (260) 837-4775 Round Table with 4 chairs, 2 leaves. Medium wood color. Call or text, (260) 336-2109 Sauder TV Entertainment Center with glass side shelves and drawers for CD/tapes. Opening for TV is 36wx24t. $50.00. (260) 349-2689
Colts Shower Curtain & Rug. Very nice, $25.00. (260) 927-5148
Set of Four Michelin Exalto A/S 205/50/R17 with good tread. $50.00. (260) 410-9600
Craftsman 1 1/2 h.p. Router with lite and 15 bit set. $35.00. (260) 833-2362
Sled with Ice Skates & Wreath attached. $25.00. (260) 347-0951
Craftsman 10” Mitre Chop Saw with 104 Tooth Blade, $45.00. (260) 833-2362 Craftsman 10” Variable Speed Band Saw, 3 blades, 2 sanding belts. $40.00. (260) 833-2362 Dark Brown Lined Trench Coat style. Size medium. Never worn. $10.00. (260) 414-2334 Desk with chair 41”lx31”hx18”d. Very nice, clean. $45.00. (260) 927-5148 Exercise Bicycle $15.00 (260) 925-2579 Exotic African Tree 4’ Very different, $15.00 (260) 927-1286 Extra large box material for crafts or quilts. $15.00. (260) 242-2689 Glass Top Electric Kitchen Range. Almond color, $45.00. (260) 854-2253 Glider Chair Bought from Vans in 2008. $45.00 (260) 927-1286 Homelite Electric Hedge Trimmer. Like new, $15.00. (260) 347-2291 Hot Point Refrigerator 18.5 cu. ft. Asking $40.00. (260) 833-1049 Larin 3 lb. Sausage Stuffer. 3 tubes in box. $30.00. (260) 349-3437 Like New Black Moby Wrap/Carrier $20.00. Call or text, (260) 336-2109 London Fog Winter Dress Coat, size 42. Gray, $25.00. Butler, (260) 760-0419 London Fog Winter Dress Coat, size 46. Tan, $25.00. Butler, (260) 760-0419 Longaberger Bread Basket. 1999 warm brown basket w/American Holly liner & protector. Great cond. $29.00. (260) 833-4232 Longaberger Sleigh Basket with liner & fabric. $25.00. (260) 347-0951 Maple Jenny Lind Crib No mattress, $20.00. (260) 833-2362 McCoy Kettle Jar & 3 matching dishes. $20 (260) 347-0951 Mens Slacks Size 38x30, 3 pair. $6.00. (260) 347-6881 Motorcycle Seats from a 2002 Honda Ace 750. Very good cond. $50.00. (260) 238-4285 Moving Picture Projector/Outside. 10 slides all season/holidays/nice for garage door, etc. $10.00. (260) 925-4570 Nice Oval Mirror on a wood stand. $40.00. (260) 761-3031 Office Desk Chair Good cond. $12.00 (260) 927-1286 Older Sewing Machine in cabinet. Works good, Fleetwood. $35.00. Butler, (260) 760-0419
4 Kasey Kahne pictures and coaster set. $50.00 obo. (260) 553-0709
Pair of 2675/65/18 Tires. Good shape, $50.00. (260) 768-9122
7 1/2 ft. Pre-lite Concord Fir pine Christmas tree. $35.00. (260) 318-4950
Poulan Pro Gas Blower/Vac. Brand new, used once. $50.00. (260) 665-5193
Sofa. Good cond. Clean. No smoke, no dogs. Beige/gold. $50.00. (260) 349-1607 Solid Oak Framed Cabinet & Shelves on casters.33”hx28”wx19”d $30.00. Fremont, (260) 243-0383 Solid Oak Framed Coffee Table with 2-sectioned tempered glass top. 4’Lx2’wx16”h. $40.00. Fremont, (260) 243-0383 St. Michaels Church Centennial Plate, $10.00. (260) 837-4775 Steel Toe Boots 9W Used little, w/Guards, black. $20.00 Butler, (260) 760-0419 Swivel Straight Christmas Tree Stand. $5.00. (260) 318-4950 TV Stand. Fits up to 52”. 2 shelves. $40.00. Wolcottville, (260) 854-9305 Twin Mattress $5.00. Fremont, (260) 243-0383 Very nice TV Cabinet with extra storage. Only $50.00. (260) 316-4606 W.W.II Wood Shipping Crate Box, $50.00. Text for pic. (260) 573-9116 Woman’s Black Leather 3/4 length coat. Size M. $20.00 cash only (260) 357-3753 Womans Brown Dansko shoe, Mary Jane style. Size 8 1/2-9. $35.00. (260) 318-4950 Wood Desk. 48x30, 2 drawers, removable shelves. $20.00. (260) 347-2291 Yard Swing Good cond., $50.00 (260) 243-8671.
“You’ve got news!” Every print subscription includes online access to
kpcnews.com
The News Sun is the daily newspaper serving Noble and LaGrange counties in northeast Indiana.
|
https://issuu.com/kpcmedia/docs/ns11-09-13?e=4200578/5558945
|
CC-MAIN-2018-22
|
en
|
refinedweb
|
Debug print trace macro
Far be it for me to encourage over-use of preprocessor macros, but the ability of the preprocessor to embed file, function and line information makes it useful for debug trace. The use of the standard built-in NDEBUG macro allows the code to be automatically excluded on release builds.
Note that the lack of a comma between `"%s::%s(%d)"` and `format` is deliberate. It prints a formatted string with source location prepended.
For many systems with critical timing it is often useful to also include a timestamp using a suitable system-wide clock source, but since that is system dependent I have omitted that option from the example. Note that where timing is critical you should ensure that your stdout implementation is buffered and non-blocking.
Support for variadic macros may not be universal, having been standardised only in C99.
#include <stdio.h>

#if defined NDEBUG
    #define TRACE( format, ... )
#else
    #define TRACE( format, ... ) printf( "%s::%s(%d) " format, \
                                         __FILE__, __FUNCTION__, __LINE__, __VA_ARGS__ )
#endif

// Example usage
void example()
{
    unsigned var = 0xdeadbeef ;
    TRACE( "var=0x%x", var ) ;
}

// Resultant output example:
// example.c::example(5) var=0xdeadbeef
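As noted above, a timestamp can be prepended where a suitable clock is available. A minimal sketch using the standard C clock() function (its resolution and epoch are implementation dependent, so treat this purely as an illustration):

#include <stdio.h>
#include <time.h>

// Same macro as above, with the current clock() value prepended to each trace line.
#define TRACE( format, ... ) printf( "[%lu] %s::%s(%d) " format,          \
                                     (unsigned long)clock(),              \
                                     __FILE__, __FUNCTION__, __LINE__,    \
                                     __VA_ARGS__ )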
There are no comments yet!
Add a Comment
|
https://www.embeddedrelated.com/showcode/320.php
|
CC-MAIN-2018-22
|
en
|
refinedweb
|
import "golang.org/x/text/encoding/charmap"
Package charmap provides simple character encodings such as IBM Code Page 437 and Windows 1252.
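For illustration only (this example is not part of the package documentation), decoding legacy Windows-1252 bytes to UTF-8 can be done like this:

package main

import (
	"fmt"

	"golang.org/x/text/encoding/charmap"
)

func main() {
	// "café" as encoded in Windows-1252: 'é' is the single byte 0xE9.
	raw := []byte{'c', 'a', 'f', 0xE9}

	// NewDecoder converts from the legacy charset to UTF-8.
	decoded, err := charmap.Windows1252.NewDecoder().Bytes(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(decoded)) // café
}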
var (
	// ISO8859_6E is the ISO 8859-6E encoding.
	ISO8859_6E encoding.Encoding = &iso8859_6E

	// ISO8859_6I is the ISO 8859-6I encoding.
	ISO8859_6I encoding.Encoding = &iso8859_6I

	// ISO8859_8E is the ISO 8859-8E encoding.
	ISO8859_8E encoding.Encoding = &iso8859_8E

	// ISO8859_8I is the ISO 8859-8I encoding.
	ISO8859_8I encoding.Encoding = &iso8859_8I
)
These encodings vary only in the way clients should interpret them. Their coded character set is identical and a single implementation can be shared.
All is a list of all defined encodings in this package.
CodePage437 is the IBM Code Page 437 encoding.
CodePage850 is the IBM Code Page 850 encoding.
CodePage852 is the IBM Code Page 852 encoding.
CodePage855 is the IBM Code Page 855 encoding.
CodePage858 is the Windows Code Page 858 encoding.
CodePage862 is the IBM Code Page 862 encoding.
CodePage866 is the IBM Code Page 866 encoding.
ISO8859_1 is the ISO 8859-1 encoding.
ISO8859_10 is the ISO 8859-10 encoding.
ISO8859_13 is the ISO 8859-13 encoding.
ISO8859_14 is the ISO 8859-14 encoding.
ISO8859_15 is the ISO 8859-15 encoding.
ISO8859_16 is the ISO 8859-16 encoding.
ISO8859_2 is the ISO 8859-2 encoding.
ISO8859_3 is the ISO 8859-3 encoding.
ISO8859_4 is the ISO 8859-4 encoding.
ISO8859_5 is the ISO 8859-5 encoding.
ISO8859_6 is the ISO 8859-6 encoding.
ISO8859_7 is the ISO 8859-7 encoding.
ISO8859_8 is the ISO 8859-8 encoding.
KOI8R is the KOI8-R encoding.
KOI8U is the KOI8-U encoding.
Macintosh is the Macintosh encoding.
MacintoshCyrillic is the Macintosh Cyrillic encoding.
Windows1250 is the Windows 1250 encoding.
Windows1251 is the Windows 1251 encoding.
Windows1252 is the Windows 1252 encoding.
Windows1253 is the Windows 1253 encoding.
Windows1254 is the Windows 1254 encoding.
Windows1255 is the Windows 1255 encoding.
Windows1256 is the Windows 1256 encoding.
Windows1257 is the Windows 1257 encoding.
Windows1258 is the Windows 1258 encoding.
Windows874 is the Windows 874 encoding.
XUserDefined is the X-User-Defined encoding.
It is defined at
Package charmap imports 5 packages and is imported by 46 packages. Updated 2016-04-21.
|
https://godoc.org/golang.org/x/text/encoding/charmap
|
CC-MAIN-2016-18
|
en
|
refinedweb
|
set FlowLayout margins
hi,
I tried to set a margin between all the 3 FormPanels in
GWT 1.5
GXT 1.0.1
hosted and browser mode
with this code :
Code:
public class Test extends LayoutContainer implements EntryPoint {

    /**
     * This is the entry point method.
     */
    public void onModuleLoad() {
        // layout
        FlowLayout layout = new FlowLayout(20);
        setLayout(layout);

        ContentPanel form1 = new FormPanel();
        form1.setHeading("form1");
        ContentPanel form2 = new FormPanel();
        form2.setHeading("form2");
        ContentPanel form3 = new FormPanel();
        form3.setHeading("form3");

        TextField field1 = new TextField();
        field1.setFieldLabel("field");
        TextField field2 = new TextField();
        field2.setFieldLabel("field");
        TextField field3 = new TextField();
        field3.setFieldLabel("field");

        form1.add(field1);
        form1.setCollapsible(true);
        form1.setAnimCollapse(false);
        form2.add(field2);
        form2.setCollapsible(true);
        form2.setAnimCollapse(false);
        form3.add(field3);
        form3.setCollapsible(true);
        form3.setAnimCollapse(false);

        add(form1);
        add(form2);
        add(form3);

        RootPanel.get().add(this);
    }
}

Code:

layout.setMargin(20);

Code:

add(form1, new FlowData(20));
but all my FormPanels are stuck together with no spacing
try
Code:
add(form1, new MarginData(20));
Last edited by gslender; 20 Jul 2008 at 2:32 PM. Reason: missing close brackett
This...
Code:
add(form1, new MarginData(10));
add(form2, new MarginData(20));
add(form3, new MarginData(30));
ps - I also removed the 20 out of new FlowLayout(20) - not that this impacted the render
produced this on Ubuntu 8.04 Hosted mode and FF3
I tested against
GWT 1.5
GXT 1.0
on Win 2003
hosted mode : work
ie6 : work
firefox 3 : doesn't work ...
same thing with GXT 1.0.1
Mmm, it seems that we still have some differences between browsers/OS. Darell, wouldn't it be helpful to declare the HTML 4 doctype instead of HTML 3 ? Look at
Thank zaccret and gslender for your help.
wrong doctype... I apologize
I think I will write my own applicationCreator for gxt.
That's not a bad idea.
Actually, it would be nice if Darell could integrate it in gxt bundle.
|
https://www.sencha.com/forum/showthread.php?41779-set-FlowLayout-margins
|
CC-MAIN-2016-18
|
en
|
refinedweb
|
A transaction is basically a logical unit of work consisting of activities that must all succeed or fail together and that must comply with the ACID principles. Moving money from one bank account to another is a simple example of a transaction. In this single transaction, two operations are performed: one account is debited (the amount is taken from it) and the other is credited (the amount is deposited to it).

Enabling transactions in Windows Communication Foundation is simple and straightforward, but the implementation can become difficult depending on the scenario. For example, implementing transactions in a distributed environment requires considerably more effort and planning.

Now, consider that we have already developed a WCF service and we need to enable transactions in it. We will use the steps below:

1. Add the System.Transactions namespace to the WCF service project.

2. Apply the TransactionFlow attribute to the operation in the service contract and set it to Mandatory. The available options for TransactionFlowOption are NotAllowed, Allowed, and Mandatory.
For example, our WCF service contract is as follows:

[TransactionFlow(TransactionFlowOption.Mandatory)]
void MyMethod();

3. In the service implementation, decorate the operation with the OperationBehavior attribute and set TransactionScopeRequired to true. If the operation completes without an unhandled exception, the transaction will be committed.

4. Enable transactions for the WCF binding being used. For example, in our configuration file the binding (such as wsHttpBinding) must have its transactionFlow attribute set to true.

5. A transaction from the client must be started (for example, with a TransactionScope) when the service operation is called.

There are a few more things regarding "Service Instancing and Sessions" that need to be considered while working with transactions, but this WCF tutorial is more focused on enabling transactions in a simple scenario. In my later WCF article on Windows Communication Foundation transactions on this blog, I'll discuss those concepts in more detail.
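To illustrate steps 4 and 5 (the original article's own configuration snippet was not preserved, so the binding name and proxy variable below are assumptions), the binding configuration and client call might look like this:

<!-- Hypothetical configuration fragment: enable transaction flow on the binding -->
<bindings>
  <wsHttpBinding>
    <binding name="txBinding" transactionFlow="true" />
  </wsHttpBinding>
</bindings>

And on the client side:

// Hypothetical client call: flow a transaction into the service (requires System.Transactions)
using (TransactionScope scope = new TransactionScope())
{
    client.MyMethod();   // 'client' is the generated WCF proxy (assumed name)
    scope.Complete();    // vote to commit; the transaction commits when the scope completes
}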
|
http://www.c-sharpcorner.com/UploadFile/81a718/simple-steps-to-enable-transactions-in-wcf/
|
CC-MAIN-2016-18
|
en
|
refinedweb
|
Walkthrough Writing An Orchard Module
This topic is obsolete. If you are just getting started with Orchard module development you should read the Getting Started with Modules course first. It will introduce you to building modules with Orchard using Visual Studio Community, a free edition of Visual Studio.
Orchard is designed with modular extensibility in mind. The current application contains a number of built-in modules by default, and our intent with writing these modules has been to validate the underlying CMS core as it is being developed - exploring such concepts as routable content items, their associated "parts" (eventually to be bolted on using metadata), UI composability of views from separate modules, and so on. While there are many CMS core concepts that remain unimplemented for now, there are still many things you can do with the current system. The module concept is rooted in ASP.NET MVC Areas (1,2) with the idea that module developers can opt-in to Orchard-specific functionality as needed. You can develop modules in-situ with the application as "Areas", using Visual Studio's MVC tools: Add Area, Add Controller, Add View, and so on (in VS2010). You can also develop modules as separate projects, to be packaged and shared with other users of Orchard CMS (the packaging story is still to be defined, along with marketplaces for sharing modules). This is how the Orchard source tree is currently organized. There is also a "release" build of Orchard that contains all the modules pre-built and ready to run (without source code), that you can extend using the VS tooling for MVC Areas - this can be downloaded from.
Let's take a walk through building an Orchard module as an MVC Area in VS. We'll start simple (Hello World), and gradually build up some interesting functionality using Orchard.
Installing Software Prerequisites
First, install these MVC and Orchard releases to your machine, along with Visual Studio or Visual Web Developer for code editing:
- Install VS2010 (Express or higher)
- Download and Install ASP.NET MVC 3
- Download and extract the latest "release" build from
- Double-click the csproj file in the release package to open it in VS
Getting Started: A Simple Hello World Module ("Area" in VS)
Our objective in this section is to build a very simple module that displays "Hello World" on the front-end using the applied Orchard theme. We'll also wire up the navigation menu to our module's routes.
Objectives:
- A simple custom area that renders "Hello World" on the app's front-end
- Views in the custom area that take advantage of the currently applied Orchard theme
- A menu item on the front-end for navigating to the custom area's view
Follow These Steps:
- Right-click the project node in VS Solution Explorer, and choose "Add > Area..."
- Type "Commerce" for the area name and click OK.
- Right-click the newly created "Commerce > Controllers" folder, and choose "Add > Controller..."
- Name the Controller "HomeController"
- Right-click on the "Index()" method name and choose "Add View..."
- Select the "Create a partial view" option and click Add
- Add the following HTML to the View page:
<p>Hello World</p>
Add the following namespace imports to the HomeController.cs file:
using Orchard.Themes;
using Orchard.UI.Navigation;
Add a [Themed] attribute to the HomeController class:
namespace Orchard.Web.Areas.Commerce.Controllers {
    [Themed]
    public class HomeController : Controller
Add another class to create a new Menu item:
public class MainMenu : INavigationProvider {
    public String MenuName { get { return "main"; } }

    public void GetNavigation(NavigationBuilder builder) {
        builder.Add(menu => menu.Add("Shop", "4", item => item
            .Action("Index", "Home", new { area = "Commerce" })));
    }
}
|
http://docs.orchardproject.net/Documentation/Walkthrough-Writing-An-Orchard-Module
|
CC-MAIN-2016-18
|
en
|
refinedweb
|
I've got a homework assignment from my teacher to write the shortest possible program that takes an integer i as input; then there are i integers on the input that are to be read, and the program must print the sum of the non-negative ones. The catch is that the program should be as short as possible, ideally under 100 characters (not including whitespace characters). I've got 108 characters; is there any way to make this one shorter? Code I've got so far:
#include <iostream>

int main()
{
    int i, j, k, l = 0;
    std::cin >> i;
    for (j = 0; j < i; j++)
    {
        std::cin >> k;
        if (k > 0)
            l += k;
    }
    std::cout << l;
}
Thanks in advance!
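One illustrative way to compress it (not from the original thread, and exact character counts depend on how whitespace is counted):

#include<iostream>
// Same logic: read i, then read i values, summing the positive ones.
int main(){int i,k,s=0;std::cin>>i;while(i--){std::cin>>k;if(k>0)s+=k;}std::cout<<s;}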
|
http://www.dreamincode.net/forums/topic/292579-shortening-this-code/page__p__1705801
|
CC-MAIN-2016-18
|
en
|
refinedweb
|
yatxmilter 0.1.1
Twisted protocol to handle Milter Connections in libmilter style

yatxmilter (yet another)
========================
_we recently changed the name of the project from **txmilter** to **yatxmilter** because there already was a [txmilter]() project under development with a different license and we didn't feel comfortable to use the name._
The milter protocol written in pure python as a twisted protocol, licensed under [GPLv2!](../master/LICENSE)
##wait, what?
The **yatxmilter** is a project that aims to bring the milter protocol to python using the [Twisted Matrix Framework]() It was inspired after people telling us to use [crochet]() and libmilter to achieve our goals. As we like to use things inside Twisted the way Twisted works, it was decided to create this project.
The goal of using **yatxmilter** is to provide a _faster_ response using Twisted's power of asynchronous calls instead of the threaded solution used by libmilter. **yatxmilter** is _really fast_ and can handle _lots_ of simultaneous connections. In pure python ;)
###how do I use it?
first, you'll have to install it using pip or from [PyPI]():
```
$ pip install yatxmilter
```
**yatxmilter** is designed to be as simple as possible, and as close as possible to libmilter (that you can check here:). Having that in mind, if you know how libmilter works you won't have any trouble
working with yatxmilter. Function calls are (_almost_) the same and the function names closely resemble libmilter's.
For example, take a close look to the code:
```
#!/usr/bin/python
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from yatxmilter.protocol import MilterProtocolFactory
from yatxmilter.defaults import MilterFactory
def main():
# we consider a "good pattern" to import reactor just when you'll use it,
# since we have other reactors in our codebase
from twisted.internet import reactor
reactor.listenTCP(1234, MilterProtocolFactory(
MilterFactory()
))
reactor.run()
if __name__ == '__main__':
main()
```
This is _just_ a Twisted protocol initialization, as any Twisted protocol will do ...
`MilterFactory` is just an empty factory that creates `Milter` objects. You should build your own factory to instantiate your milter objects. The `MilterProtocolFactory` is the one that does the magic for you, abstracting communication to expose you the `Milter` interface.
Your job is to extend `Milter`, override any method you need to work with _and_ the `xxfi_negotiate` method to exchange what signals your milter will support to build a `MilterFactory`.
As a plus, the **yatxmilter** can handle more than one milter plugin on the same connection, taking care of handling any signal different from continue and communicating with the MTA.
```
MTA yatxmilter your milter
___________________
| |
| MTA opens |
| connection |
|__________________|
| __________________
| | |
|_________> | Instantiate |
| all plugins |
|_________________|
___________________ |
| | |
| MTA starts | |
| negotiation | <_________|
|__________________|
| ____________________
| | |
|________________________________> | Send flags |
|___________________|
|
__________________ |
| | |
| Merge flags | <_________|
|_________________|
|
___________________ |
| | |
| Status filtering | |
| request | <_________|
|__________________|
|
| ____________________
| | |
|________________________________> | Process and reply |
|___________________|
|
__________________ |
| | |
| Wait all finish | |
| or first error | <_________|
|_________________|
|
V
__________________
| |
| Reply status |
|_________________|
|
V
_________________________________________________________________
| |
| Close connection |
|________________________________________________________________|
```
and that's all :)
#todo
* we still have no unit tests or code coverage, but you're welcome to push them if you want :)
* docs: wow, better examples and sphinx related docs are still pending;
* py3 was just completely ignored for now.
#license
**yatxmilter** code and docs are released under the [GPLv2](../master/LICENSE) license, from which is derived from the libmilter license - as we used its code as a base for ours and also as a form of gratitude.
- Author: Humantech Knowledge Management
- Maintainer: Jean Marcel Duvoisin Schmidt
- Keywords: milter,twisted,protocol
- License: GNU GPLv2
- Categories
- Development Status :: 3 - Alpha
- Environment :: Plugins
- Framework :: Twisted
- Intended Audience :: Developers
- Intended Audience :: System Administrators
- Intended Audience :: Telecommunications Industry
- License :: OSI Approved :: GNU General Public License v2 (GPLv2)
- Operating System :: OS Independent
- Programming Language :: Python :: 2.7
- Topic :: Communications :: Email :: Filters
- Topic :: Software Development :: Libraries :: Python Modules
- Topic :: System :: Networking :: Monitoring
- Package Index Owner: humantech
- DOAP record: yatxmilter-0.1.1.xml
|
https://pypi.python.org/pypi/yatxmilter
|
CC-MAIN-2016-18
|
en
|
refinedweb
|
There are many scenarios where it's useful to integrate a desktop application with the web browser. Given that most users spend the majority of their time today surfing the web in their browser of choice, it can make sense to provide some sort of integration with your desktop application. Often, this will be as simple as providing a way to export the current URL or a selected block of text to your application. For this article, I've created a very simple application that uses text to speech to speak out loud the currently selected block of text in your web browser.
Internet Explorer provides many hooks for integrating application logic into the browser, the most popular being support for adding custom toolbars. There are great articles explaining how to do this on Code Project, such as this Win32 article and this .NET article. You can also create toolbars for Firefox using their Chrome plug-in architecture which uses XML for the UI layout and JavaScript for the application logic (see CodeProject article).
What about the other browsers, Google Chrome, Safari, and Opera? Given there is no common plug-in architecture used by all browsers, you can see a huge development effort is required to provide a separate toolbar implementation for each browser.
It would be nice to be able to write one toolbar that could be used across all browsers. This is not possible today, but you can achieve almost the same effect using Bookmarklets.
A bookmarklet is special URL that will run a JavaScript application when clicked. The JavaScript will execute in the context of the current page. Like any other URL, it can be bookmarked and added to your Favourites menu or placed on a Favourites toolbar. Here is a simple example of a bookmarklet:
javascript:alert('Hello World.');
Here is a slightly more complex example that will display the currently selected text in a message box:
<a href="javascript:var q = ''; if (window.getSelection)
q = window.getSelection().toString(); else if (document.getSelection)
q = document.getSelection(); else if (document.selection)
q = document.selection.createRange().text; if(q.length > 0)
alert(q); else alert('You must select some text first');">Show Selected Text</a>
Select a block of text on the page and click the above link. The text will be shown in a message box.
Now, drag the bookmarklet and drop it on your Favourites toolbar (for IE, you need to right click, select Add to Favourites, and then create it in the Favourites Bar). Navigate to a new page, select a block of text, and click the Show Selected Text button. Once again, the selected text will be shown in a message box.
You can see the potential of bookmarklets. You can create a bookmarklet for each command of your application and display them on a web page. The user can then select the commands they want to use and add them to their Favourites (either the toolbar or menu).
The downside is it's slightly more effort to install than a single toolbar, but on the upside, it gives the user a lot of flexibility. They need only choose the commands they're interested in and can choose whether they want them accessible from a toolbar or menu.
From a developer's perspective, bookmarklets are great as they're supported by all the major browsers. The only thing you need to worry about is making sure your JavaScript code handles differences in browser implementations, something that is well documented and understood these days (although still a right pain).
Bookmarklets allow you to execute an arbitrary block of JavaScript code at the click of a button, but how do you use this to communicate with a desktop application?
The answer is to build a web server into your desktop application and issue commands from your JavaScript code using HTTP requests.
Now, before you baulk at the idea of building a web server into your application, it's actually very simple. You don't need a complete web server implementation. You just need to be able to process simple HTTP GET requests. A basic implementation is as follows.
The .NET framework 2.0 has an HttpListener class and associated HttpListenerRequest and HttpListenerResponse classes that allow you to implement the above in a few lines of code.
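To make the idea concrete, here is a rough, hypothetical sketch of such a listener (the class name, port and query handling are illustrative only; the actual BrowserSpeak implementation is described later in the article):

// Hypothetical sketch of a minimal command listener, not the BrowserSpeak source.
using System;
using System.Net;

class MiniCommandServer
{
    // A tiny placeholder GIF returned for every request so the browser Image load succeeds.
    static readonly byte[] DummyGif = Convert.FromBase64String(
        "R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7");

    static void Main()
    {
        var listener = new HttpListener();
        listener.Prefixes.Add("http://localhost:60024/");
        listener.Start();
        while (true)
        {
            HttpListenerContext context = listener.GetContext();

            // The command name is the first path segment,
            // e.g. http://localhost:60024/speaktext/dummy.gif?text=...
            string command = context.Request.Url.Segments.Length > 1
                ? context.Request.Url.Segments[1].TrimEnd('/')
                : string.Empty;
            string text = context.Request.QueryString["text"];
            Console.WriteLine("Command: {0}, text: {1}", command, text);

            // Always answer with the 1x1 GIF so the Image request completes cleanly.
            context.Response.ContentType = "image/gif";
            context.Response.OutputStream.Write(DummyGif, 0, DummyGif.Length);
            context.Response.OutputStream.Close();
        }
    }
}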
On the browser, you need a way of issuing HTTP requests from your JavaScript code. There are a number of ways of issuing HTTP requests from JavaScript.
The simplest is to write a new URL into the document.location property. This will cause the browser to navigate to the new location. However, this is not what we want. We don't want to direct the user to a new page when they click one of our bookmarklets. Instead, we just want to issue a command to our application while remaining on the same page.
This sounds like a job for AJAX and the HttpXmlRequest. AJAX provides a convenient means of issuing requests to a web server in the background without affecting the current page. However, there is one important restriction placed on the HttpXmlRequest called the same domain origin policy. Browsers restrict HttpXmlRequests to the same domain as that used to serve the current page. For example, if you are viewing a page from codeproject.com, you can only issue HttpXmlRequests to codeproject.com. A request to another domain (e.g., google.com) will be blocked by the browser. This is an important security measure that ensures malicious scripts cannot send information to a completely different server behind the scenes without your knowledge.
This restriction means that we cannot use a HttpXmlRequest to communicate with our desktop application. Remember that JavaScript bookmarklets are executed in the context of the current page. We need to be able to send a request from any domain (e.g. codeproject.com) to our desktop application which will be in the localhost domain.
In order to overcome this problem, I turned to Google for inspiration. Google needs to be able to do precisely this in order to gather analytics information for a site. If you're not familiar with Google analytics, it can be used to gather a multitude of information about the visitors to your web site, such as the number of visitors, where they came from, and the pages on your site they visit. All this information is collected, transferred back to Google, and appears in various reports in your analytics account.
To add tracking to your site, you simply add a call to the Google analytics JavaScript at the bottom of every page of your site.
Whenever a visitor lands on your page, the JavaScript runs and the visitor details are sent back to your Google analytics account.
The question is how does Google do this? Surely, they can't use an HttpXmlRequest as it would break the same domain origin policy? They don't. Instead, they use what can only be described as a very clever technique.
The JavaScript Image class is a very simple class that can be used to asynchronously load an image. To request an image, you simply set the source property to the URL of the image. If the image loads successfully, the onload() method is called. If an error occurs, the onerror() method is called. Unlike the HttpXmlRequest, there is no same domain origin policy. The source image can be located on any server. It doesn't need to be hosted on the same site as the current page.
We can use this behaviour to send arbitrary requests to our desktop application (or any domain for that matter) if we realize the source URL can contain any information, including a querystring. The only requirement is that it returns an image. Here is an example URL:
<a href=""></a>
We can easily map this URL to the following command in our application.
public void speaktext(string text);
In order to ensure the request completes without error, a 1x1 pixel GIF image is returned. This image is never actually shown to the user. A tiny image is used to minimize the number of bytes being transmitted.
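Putting the pieces together, the browser side of a single command can be as small as the following simplified sketch (the full bookmarklet listed later adds cache busting, chunking and selection handling):

// Simplified sketch: issue one command by requesting the dummy image.
var beacon = new Image(1, 1);
beacon.onerror = function() { alert("The desktop application is not running."); };
beacon.src = "http://localhost:60024/speaktext/dummy.gif?text=" + escape("Hello World");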
The most important point to realize is all communication is one way, from the browser to the desktop application. There is no way of sending information from the desktop application back to the browser. However, for many applications, this is not a problem.
Google uses the JavaScript Image technique to send visitor information to your pages (hosted on yourdomain.com) back to your Google analytics account (hosted on google.com).
You need to be aware that URLs have a maximum length that varies from browser to browser (around 2K). This restricts the amount of information you can send in a single request. If you need to send a large amount of information, you'll need to break it up into smaller chunks and send multiple requests. The sample application, BrowserSpeak, uses this technique to speak arbitrarily large blocks of text.
JavaScript will automatically encode a URL you pass to Image.src as UTF-8. However, when passing arbitrary text as part of a URL, you will need to escape the '&' and '=' characters. These characters are used to delimit the name/value pairs (or arguments) that are passed in the querystring portion of the URL. This can be done using the JavaScript escape() function.
Web browsers will cache images (as well as many other resources) locally to avoid making multiple requests back to the server for the same resource. This behaviour is disastrous for our application. The first command will make it through to our desktop application, and the browser will cache dummy.gif locally. Subsequent requests will never reach our desktop application as they can be satisfied from the local cache.
There are a couple of solutions to this problem. One answer is to set the cache expiry directives in the HTTP response to instruct the browser never to cache the result.
The other approach, which is used for the BrowserSpeak application, is to ensure every request has a unique URL. This is done by appending a timestamp containing the current date and time. For example:
var request = "http://" + server + "/" +
command + "/dummy.gif" + args +
"×tamp=" + new Date().getTime();
It's now time to put all this theory into practice and create a sample application that has some real world use.
BrowserSpeak is a C# application that will speak the text on a web page out loud. It can be used when you are tired of reading large passages of text from the screen. It uses the System.Speech.Synthesis component found in the .NET Framework 3.0 for the text to speech functionality.
BrowserSpeak provides the following commands, available through its web interface and through its UI.
It also provides a BufferText command available from the web interface. This command is used to send a block of text from the web browser to the desktop application. It splits the text into 1500 byte chunks so it's not limited by the maximum size of a URL. It's used by the Speak Selected bookmarklet to transfer the selected text to the BrowserSpeak application prior to speaking.
BrowserSpeak uses the following bookmarklets (drag these onto your Favourites bar to use from your browser):
The Speak Selected command is the most complex and also the most interesting. It's listed below:
// A bookmarklet to send a speaktext command to the BrowserSpeak application.
var server = "localhost:60024";
// Change the port number for your app to something unique.
var maxreqlength = 1500;
// This is a conservative limit that should work with all browsers.
var selectedText = _getSelectedText();
if(selectedText)
{
_bufferText(escape(selectedText));
_speakText();
}
void 0;
// Return from bookmarklet, ensuring no result is displayed.
function _getSelectedText()
{
// Get the current text selection using
// a cross-browser compatible technique.
if (window.getSelection)
return window.getSelection().toString();
else if (document.getSelection)
return document.getSelection();
else if (document.selection)
return document.selection.createRange().text;
return null;
}
function _formatCommand(command, args)
{
// Add a timestamp to ensure the URL is always unique and hence
// will never be cached by the browser.
return "http://" + server + "/" + command +
"/dummy.gif" + args +
"×tamp=" + new Date().getTime();
}
function _speakText()
{
var image = new Image(1,1);
image.onerror = function() { _showerror(); };
image.src = _formatCommand("speaktext", "?source=" + document.URL);
}
function _bufferText(text)
{
var clearExisting = "true";
var reqs = Math.floor((text.length + maxreqlength - 1) / maxreqlength);
for(var i = 0; i < reqs; i++)
{
var start = i * maxreqlength;
var end = Math.min(text.length, start + maxreqlength);
var image = new Image(1,1);
image.onerror = function() { _showerror(); };
image.src = _formatCommand("buffertext",
"?totalreqs=" + reqs + "&req=" + (i + 1) +
"&text=" + text.substring(start, end) +
"&clear=" + clearExisting);
clearExisting = "false";
}
}
function _showerror()
{
// Display the most likely reason for an error
alert("BrowserSpeak is not running. You must start BrowserSpeak first.");
}
Most of the code is self-explanatory. However, it's important to explain the behaviour of the _bufferText() loop. If the text being sent is greater than 1500 bytes, then multiple requests will be made. Remember that as far as the browser is concerned, it's requesting an image. Modern browsers will issue multiple image requests in parallel. This will cause multiple buffertext commands to be issued in parallel. Not only that, it's quite possible the requests will arrive out of order at the BrowserSpeak desktop application. Therefore, every request includes the parameters req (the request number) and totalreqs (the total number of requests). This allows the BrowserSpeak application to reassemble the text into the correct order.
The code for a bookmarklet must be formatted into a single line. For very small applications, this is not a problem. However, when you start to develop larger, more complex applications, you will want to develop your code over multiple lines, with plenty of whitespace and comments. I've found that using a JavaScript minifier, in particular the free YUI Compressor, is a great way of turning a normal chunk of JavaScript into a single line suitable for use in a bookmarklet. Ideally, you'd add this step into your automated build process.
The main application-specific logic lives in the MainForm class. First, it starts an HttpCommandDispatcher instance in the constructor, responsible for receiving and dispatching HTTP commands (sent from the bookmarklets). The MainForm class then listens for various events and updates the UI to reflect its current state.
(The classes and events involved include the SpeechController, TextBuffer, BufferTextCommand, TextBox and the HttpCommandDispatcher.RequestReceived event.)
The HttpCommandDispatcher listens for HTTP requests using the HttpListener class found in System.Net. When a request is received, it extracts the command from the URL, looks up the appropriate HttpCommand, and calls the HttpCommand.Execute() method. It will also send a response with a dummy.gif image (this is preloaded and stored in a byte[] array).
A word about text encoding and extracting arguments from the URL. The HttpListenerRequest has a QueryString property that is a name/value collection containing the arguments received in the querystring portion of the URL. Unfortunately, I found you couldn't use this property as the argument values are not correctly decoded from their UTF-8 encoding. Instead, I parse the RawUrl property manually and call the HttpUtility.UrlDecode() method on each argument value. This correctly handles the UTF-8 encoded strings we receive from JavaScript.
You will probably recognize the Command pattern. You must derive a class from HttpCommand for every command you wish to make available through the HTTP interface. Each command must be added using the HttpCommandDispatcher.AddCommand() method.
An abstract TextCommand is provided for use by commands that need to receive large amounts of text from the browser (e.g. the SpeakTextCommand). A TextCommand will listen to the TextBuffer and call the abstract TextAvailable method whenever new text arrives. Derived classes need to override this method and execute their operation whenever this method is called. This handles the case where the command arrives from the browser before all the text the command operates on has arrived.
One additional piece of functionality that's provided but not actually used in the sample application is the ImageLocator class. This class will take an HTTP request for an image, look up the appropriate image from the application's resources, and return an image in the requested format. For example, you can view the icon used for the About button using the following URL:
<a href=""></a>
The classes described above live in the HttpServer namespace and are pretty much decoupled from the BrowserSpeak application. You should be able to lift these out and drop them into your own application without change.
If you try and run BrowserSpeak on Vista, you will get an Access Denied exception when you try to start the HttpListener. Vista doesn't let standard users register and listen for requests to a specific URL. You could run your application as Administrator, but a better approach is to grant all users permission to register and listen on your application's URL. That way, you can run your application as a standard user. You can do this using the netsh utility. To grant BrowserSpeak's URL user permissions, execute the following command from a command prompt running as Administrator.
netsh http add urlacl url= user=BUILTIN\Users listen=yes
This setting is persistent and will survive reboots. When you deploy your application, you should execute this command as part of your installer.
The text to speech functionality used by the application is found in the SpeechController class. Thanks to the functionality provided in the System.Speech.Synthesis namespace found in .NET 3.0, this class does almost nothing. It merely delegates through to an instance of the Microsoft SpeechSynthesizer class. If you wanted to remove the dependency on .NET 3.0, you could reimplement the SpeechController class and use COM-interop to access the native Microsoft Speech APIs (SAPI).
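A rough sketch of that delegation (class and method names here are illustrative, not the actual BrowserSpeak source):

using System.Speech.Synthesis;   // requires a reference to System.Speech (.NET 3.0+)

class SimpleSpeechController
{
    private readonly SpeechSynthesizer synthesizer = new SpeechSynthesizer();

    public void Speak(string text)
    {
        synthesizer.SpeakAsyncCancelAll(); // stop any speech already in progress
        synthesizer.SpeakAsync(text);      // speak the buffered text asynchronously
    }
}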
I hope I've demonstrated the power of bookmarklets in this article and given you some ideas on how to provide useful integration between the web browser and a desktop application.
There are a couple of things I couldn't get working to my satisfaction using this technique:
<link rel="shortcut icon" href="speaktext.ico" type="image/vnd.microsoft.icon"/>
Unfortunately, the browsers don't seem to use the favicon for bookmarklets.
I've made use of this technique in the latest version of my commercial text to speech application, Text2Go. I've also used a variation of this technique to add a menu of JavaScript bookmarklets in Internet Explorer 8's Accelerator preview.
|
http://www.codeproject.com/Articles/36517/Communicating-from-the-Browser-to-a-Desktop-Applic?fid=1540842&df=90&mpp=10&sort=Position&spc=None&tid=3075078
|
CC-MAIN-2016-18
|
en
|
refinedweb
|
Ingo Molnar wrote on Thu, 19. 02. 2009 at 13:47 +0100:
> * Petr Tesarik <ptesarik@suse.cz> wrote:
>
> > Ingo Molnar wrote on Thu, 19. 02. 2009 at 13:22 +0100:
> > > * Petr Tesarik <ptesarik@suse.cz> wrote:
> > > > Ingo Molnar wrote on Thu, 19. 02. 2009 at 13:10 +0100:
> > > > > * Petr Tesarik <ptesarik@suse.cz> wrote:
> > > > > > So, the only method I could invent was using gas macros. It
> > > > > > works but is quite ugly, because it relies on the actual
> > > > > > assembler instruction which is generated by the compiler. Now,
> > > > > > AFAIK gcc has always translated "for(;;)" into a jump to self,
> > > > > > and that with any conceivable compiler options, but I don't
> > > > > > know anything about Intel cc.
> > > > > >
> > > > > > +static inline __noreturn void discarded_jmp(void)
> > > > > > +{
> > > > > > +	asm volatile(".macro jmp target\n"
> > > > > > +		     "\t.purgem jmp\n"
> > > > > > +		     ".endm\n");
> > > > > > +	for (;;) ;
> > > > > > +}
> > > > >
> > > > > hm, that's very fragile.
> > > > >
> > > > > Why not just:
> > > > >
> > > > > static inline __noreturn void x86_u2d(void)
> > > > > {
> > > > > 	asm volatile("u2d\n");
> > > > > }
> > > > >
> > > > > If GCC emits a bogus warning about _that_, then it's a bug in
> > > > > the compiler that should be fixed.
> > > >
> > > > I wouldn't call it a bug. The compiler has no idea about what
> > > > the inline assembly actualy does. So it cannot recognize that
> > > > the ud2 instruction does not return (which BTW might not even
> > > > be the case, depending on the implementation of the Invalid
> > > > Opcode exception).
> > >
> > > No, i'm not talking about the inline assembly.
> > >
> > > I'm talking about the x86_u2d() _inline function_, which has
> > > the __noreturn attribute.
> > >
> > > Shouldnt that be enough to tell the compiler that it ... wont
> > > return?
> >
> > Nope, that's not how it works.
> >
> > You _may_ specify a noreturn attribute to any function (and
> > GCC will honour it AFAICS), but if GCC _thinks_ that the
> > function does return, it will issue the above-mentioned
> > warning:
> >
> > /usr/src/linux-2.6/arch/x86/include/asm/bug.h:10: warning: 'noreturn' function does return
> >
> > And that's what your function will do. :-(
> >
> > Yes, I also thinks that this behaviour is counter-intuitive.
> > Besides, I haven't found a gcc switch to turn this warning
> > off, which would be my next recommendation, since the GCC
> > heuristics is broken, of course.
>
> so GCC should be fixed and improved here, on several levels.

Agree.

But it takes some time, even if we start pushing right now. What's your
suggestion for the meantime? Keep the dummy jmp? And in case anybody is
concerned about saving every byte in the text section, they can apply my
dirty patch?

Actually, this doesn't sound too bad.

Petr Tesarik
|
https://lkml.org/lkml/2009/2/19/155
|
CC-MAIN-2016-18
|
en
|
refinedweb
|
XForms Future Features
From W3C XForms Group Wiki (Public)
XForms Future Features
Features that could make it into 2.0
- MIP functions
- User-defined model item properties
- bind function
- Initial sending of MIP events
- New UI events (added by Erik)
- Context information for UI events (added by Erik)
- Value changes upon instance replacement
- Variables in XPath
- AVTs
Moved from Future_Goals
- Better expression of default trigger
- Better DOM interface to expose all actions and to query states of UI controls
- AJAX programmer APIs (same as above?)
- submissions that put and get binary resources
- Ability to suppress RRRR on submission completion and/or initialization
- Lazy authoring improvement: empty instance exists even if no refs
- repeat and relevance/enabled/disabled ()
- submission to a "new" window
- XML Events 2
- Model improvements (optional, nested, external/src, exceptions)
- UI event notification improvements
- Revisit multiple instance delete issue
- Setting event contextinfo as part of dispatch
- Better upload control (better control of metadata, instant upload submission to server)
- Better support for submission and model patterns for multipage web apps
- Simplified repeat patterns
- Refactor dependencies to detect automatic rebuild conditions and reduce circular dependencies
- Consolidate recalculate and revalidate
- Structural calculates (like declarative insert/delete)
- XForms for HTML
Moved from XForms_Future_Features
- Multiple step undo (e.g. undo to last state)
- Templates of shadow data
- Co-related quantities
- multiple schemas targeting same namespace (e.g. allow inline schema to augment external schema; similar to having the internal one import the external one).
- View and Edit and Submit binary data
- Specific actions instead of general insert
- Allow IRIs for external schema locations
- General useability of IRIs
- Upcoming RFC for Human Readable Resource Identifiers
- Add XML 1.1 support for i18n
- Add ITS Rule file
- Wiki Markup for XForms (Creation of interactive content)
- Drive UI (presentation) properties from data and calculates
- xforms-value-changed context info for node changed
- Annotations - standardized ways to add complex content to a textarea
- Fix select1 so that it can allow multiple items with the same value to be selected simultaneously as well as one that restricts to only one item that matches.
Componentization
- Nested models for convenient grouping and composition (Strawman - Charlie and Mark)
- Form apps as components
- Duplicate Form App modularization
- Inter-model data exchange
- XBL integration (Erik's note: I would use the more general term "Components" to describe the high-level feature. We may or may not use XBL to support components.)
Model
- Better expression of conditional default values
- Generalized Constraints in Document
- Includes A document() function
- Should include a here() function
- Structural Constraints
- International mail adresses
View
- Repeat pattern (Strawman - Steven)
- Wizard pattern
- Default trigger (Strawman - John)
- UI Controls that can manipulate XML subtrees, which includes
- Binding to optional elements Micah Dubinko request
- Arbitrary Attachment Capabilities, consisting of
- Multiple cases active for some kind of UI switch (like a switch-many)
- Should supercede: Indicating whether something is wrong with non-selected cases and non-relevant groups
- Sorting a view of data
- Consider data formatting in UI controls
- Supercedes "Let @value override single node binding on output"
Controller
- Dynamic add and delete of form controls, binds, bindings instances, schema
- Close Form - Close Document
- XForms Timer (capability available in 1.1 using delay on dispatch)
Submission
- Cancel submission
- Track submission progress
- Need binary data support for submission data and submission result
- Submission some kind of better filtering than relevance that affects only submission and not the UI
- HTTP Authentication
- Session mgmt/Large cookies/Save and Reload
- Better control of headers on instance src
Foundations
- Need ability to conditionally cancel events (integrate XML Events 2.0)
XForms 3.0 Modularization
This includes new modules and refinement of some existing modules from XForms 1.2. New modules include the XML Signatures Module and the SCXML module. Refinements include updates to the Model module to allow pluggable schema engines and updates to the Instance module to allow pluggable reference engines and possibly data formats.
- The XForms message module MarkB and Steven
- message, help, hint, alert
- The XForms instance data module Charlie, Uli and John
- Include insert/delete/setvalue and "get (string) value"
- Parse, serialize
- Pluggable reference engine
- Includes Add support for multiple expression languages
- Supercedes Alignment with XPath 2.0
- May include pluggable data format, e.g. JSON data
- Data Properties module Uli and John
- MIP values/inheritance, overall validity, user-defined
- Model module (understands bind module, deferred updates, etc.)
- Bind Module (constraint, type, calculate, relevance, readonly, p3ptype, user-defined)
- Validation Module (includes pluggable schema engines)
- May include Schema attr on instance
- May include Validation attr on instance to control strict/lax/skip
- XForms actions module (include user-defined actions, deferred update)
- The XForms submission module
- User Interface module (combines Container and Atomic form control modules)
- Container Form Controls module (group, switch, repeat, user-defined)
- Input Form Controls module (input, textarea, select1, ..., user-defined)
- Output Form Control module
- The XForms label module
- UI binding module
- XML Signature module
- SCXML module
- May include or rely on user-defined actions
|
http://www.w3.org/MarkUp/Forms/wiki/XForms_Future_Features
|
CC-MAIN-2016-18
|
en
|
refinedweb
|
The DNS Update protocol (RFC 2136) integrates DNS with DHCP. The latter two protocols are complementary; DHCP centralizes and automates IP address allocation, while DNS automatically records the association between assigned addresses and hostnames. When you use DHCP with DNS update, this configures a host automatically for network access whenever it attaches to the IP network. You can locate and reach the host using its unique DNS hostname. Mobile hosts, for example, can move freely without user or administrator intervention.
This chapter explains how to use DNS update with Cisco Network Registrar servers, and its special relevance to Windows client systems.
DNS Update Process
Special DNS Update Considerations
DNS Update for DHCPv6
Creating DNS Update Configurations
Creating DNS Update Maps
Configuring Access Control Lists and Transaction Security
Configuring DNS Update Policies
Confirming Dynamic Records
Scavenging Dynamic Records
Troubleshooting DNS Update
Configuring DNS Update for Windows Clients
To configure DNS updates, you must:
1.
Create a DNS update configuration for a forward or reverse zone or both. See the "Creating DNS Update Configurations" section.
2.
Use this DNS update configuration in either of two ways:
–
Specify the DNS update configuration on a named, embedded, or default DHCP policy. See the "Creating and Applying DHCP Policies" section on page 21-3.
–
Define a DNS update map to autoconfigure a single DNS update relationship between a Cisco Network Registrar DHCP server or failover pair and a DNS server or High-Availability (HA) pair. Specify the update configuration in the DNS update map. See the "Creating DNS Update Maps" section.
3.
Optionally define access control lists (ACLs) or transaction signatures (TSIGs) for the DNS update. See the "Configuring Access Control Lists and Transaction Security" section.
4.
Optionally create one or more DNS update policies based on these ACLs or TSIGs and apply them to the zones. See the "Configuring DNS Update Policies" section.
5.
Adjust the DNS update configuration for Windows clients, if necessary; for example, for dual zone updates. See the "Configuring DNS Update for Windows Clients" section.
6.
Configure DHCP clients to supply hostnames or request that Cisco Network Registrar generate them.
7.
Reload the DHCP and DNS servers, if necessary based on the edit mode.
Consider these two issues when configuring DNS updates:
•
For security purposes, the Cisco Network Registrar DNS update process does not modify or delete a name an administrator manually enters in the DNS database.
•
If you enable DNS update for large deployments, and you are not using HA DNS (see Chapter 18, "Configuring High-Availability DNS Servers"), divide primary DNS and DHCP servers across multiple clusters. DNS update generates an additional load on the servers.
Cisco Network Registrar currently supports DHCPv6 DNS update over IPv4 only. For DHCPv6, DNS update applies to nontemporary stateful addresses only, not delegated prefixes.
DNS update for DHCPv6 involves AAAA and PTR RR mappings for leases. Cisco Network Registrar 7.2 supports server- or extension-synthesizing fully qualified domain names and the DHCPv6 client-fqdn option (39).
Because Cisco Network Registrar is compliant with RFCs 4701, 4703, and 4704, it supports the DHCID resource record (RR). All RFC-4703-compliant updaters can generate DHCID RRs and result in data that is a hash of the client identifier (DUID) and the FQDN (per RFC 4701). Nevertheless, you can use AAAA and DHCID RRs in update policy rules.
DNS update processing for DHCPv6 is similar to that for DHCPv4 except that a single FQDN can have more than one lease, resulting in multiple AAAA and PTR RRs for a single client. The multiple AAAA RRs can be under the same name or a different name; however, PTR RRs are always under a different name, based on the lease address. RFC-4703-compliant updaters use the DHCID RR to avoid collisions among multiple clients.
Note
Because DHCPv4 uses TXT RRs and DHCPv6 uses DHCID RRs for DNS update, to avoid conflicts, dual-stack clients cannot use single forward FQDNs. These conflicts primarily apply to client-requested names and not generated names, which are generally unique. To avoid these conflicts, use different zones for the DHCPv4 and DHCPv6 names.
Note
If the DNS server is down and the DHCP server can not complete the DNS updates to remove RRs added for a DHCPv6 lease, the lease continues to exist in the AVAILABLE state. Only the same client reuses the lease.
DHCPv6 Upgrade Considerations
Generating Synthetic Names in DHCPv6
Determining Reverse Zones for DNS Updates
Using the Client FQDN
If you use any policy configured prior to Cisco Network Registrar 7.2 that references a DNS update object for DHCPv6 processing (see the "DHCPv6 Policy Hierarchy" section on page 26-9), after the upgrade, the server begins queuing DNS updates to the specified DNS server or servers. This means that DNS updates might automatically (and unexpectedly) start for DHCPv6 leases.
If clients do not supply hostnames, DHCPv6 includes a synthetic name generator. Because a DHCPv6 client can have multiple leases, Cisco Network Registrar uses a different mechanism than that for DHCPv4 to generate unique hostnames. The v6-synthetic-name-generator attribute for the DNS update configuration allows appending a generated name to the synthetic-name-stem based on the:
•
Hash of the client DHCP Unique Identifier (DUID) value (the preset value).
•
Raw client DUID value (as a hex string with no separators).
•
CableLabs cablelabs-17 option device-id suboption value (as a hex string with no separators, or the hash of the client DUID if not found).
•
CableLabs cablelabs-17 option cm-mac-address suboption value (as a hex string with no separators, or the hash of the client DUID if not found).
See the "Creating DNS Update Configurations" section for how to create a DNS update configuration with synthetic name generation.
In the CLI, an example of this setting is:
nrcmd> dhcp-dns-update example-update-config set v6-synthetic-name-generator=hashed-duid
The DNS update configuration uses the prefix length value in the specified reverse-zone-prefix-length attribute to generate a reverse zone in the ip6.arpa domain. You do not need to specify the full reverse zone, because you can synthesize it by using the ip6.arpa domain. You set this attribute for the reverse DNS update configuration (see the "Creating DNS Update Configurations" section). Here are some rules for reverse-zone-prefix-length:
•
Use a multiple of 4 for the value, because ip6.arpa zones are on 4-bit boundaries. If not a multiple of 4, the value is rounded up to the next multiple of 4.
•
The maximum value is 124, because specifying 128 would create a zone name without any possible hostnames contained therein.
•
A value of 0 means none of the bits are used for the zone name, hence ip6.arpa is used.
•
If you omit the value from the DNS update configuration, the server uses the value from the prefix or, as a last resort, the prefix length derived from the address value of the prefix (see the "Configuring Prefixes" section on page 26-18).
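For example, with reverse-zone-prefix-length=32 and a prefix of 2001:db8::/32, the synthesized reverse zone would be 8.b.d.0.1.0.0.2.ip6.arpa.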
Note that to synthesize the reverse zone name, the synthesize-reverse-zone attribute must remain enabled for the DHCP server. Thus, the order in which a reverse zone name is synthesized for DHCPv6 is:
1.
Use the full reverse-zone-name in the reverse DNS update configuration.
2.
Base it on the ip6.arpa zone from the reverse-zone-prefix-length in the reverse DNS update configuration.
3.
Base it on the ip6.arpa zone from the reverse-zone-prefix-length in the prefix definition.
4.
Base it on the ip6.arpa zone from the prefix length for the address in the prefix definition.
In the CLI, an example of setting the reverse zone prefix length is:
nrcmd> dhcp-dns-update example-update-config set reverse-zone-prefix-length=32
To create a reverse zone for a prefix in the web UI, the List/Add Prefixes page includes a Create Reverse Zone button for each prefix. (See the "Creating and Editing Prefixes" section on page 26-24.)
The CLI also provides the prefix name createReverseZone [-range] command to create a reverse zone for a prefix (from its address or range value). Delete the reverse zone by using prefix name deleteReverseZone [-range].
You can also create a reverse zone from a DHCPv4 subnet or DHCPv6 prefix by entering the subnet or prefix value when directly configuring the reverse zone. See the "Adding Primary Reverse Zones" section on page 15-12 for details.
The existing DHCP server use-client-fqdn attribute controls whether the server pays attention to the DHCPv6 client FQDN option in the request. The rules that the server uses to determine which name to return when multiple names exist for a client are in the following order of preference:
1.
The server FQDN that uses the client requested FQDN if it is in use for any lease (even if not considered to be in DNS).
2.
The FQDN with the longest valid lifetime considered to be in DNS.
3.
The FQDN with the longest valid lifetime that is not yet considered to be in DNS.
A DNS update configuration defines the DHCP server framework for DNS updates to a DNS server or HA DNS server pair. It determines if you want to generate forward or reverse zone DNS updates (or both). It optionally sets TSIG keys for the transaction, attributes to control the style of autogenerated hostnames, and the specific forward or reverse zone to be updated. You must specify a DNS update configuration for each unique server relationship.
For example, if all updates from the DHCP server are directed to a single DNS server, you can create a single DNS update configuration that is set on the server default policy. To assign each group of clients in a client-class to a corresponding forward zone, set the forward zone name for each in a more specific client-class policy.
Step 1
From the DHCP menu, choose DNS Updates to open the List/Add DNS Update Configurations page.
Step 2
Click Add DNS Update Configuration to open the Add DNS Update Configuration page.
Step 3
Enter a name for the update configuration in the name attribute field.
Step 4
Click the appropriate dynamic-dns setting:
•
update-none—Do not update forward or reverse zones.
•
update-all—Update forward and reverse zones (the default value).
•
update-fwd-only—Update forward zones only.
•
update-reverse-only—Update reverse zones only.
Step 5
Set the other attributes appropriately:
a.
If necessary, enable synthesize-name and set the synthetic-name-stem value.
You can set the stem of the default hostname to use if clients do not supply hostnames, by using synthetic-name-stem. For DHCPv4, enable the synthesize-name attribute to trigger the DHCP server to synthesize unique names for clients based on the value of the synthetic-name-stem. The resulting name is the name stem appended with the hyphenated IP address. For example, if you specify a synthetic-name-stem of host for address 192.168.50.1 in the example.com domain, and enable the synthesize-name attribute, the resulting hostname is host-192-168-50-1.example.com. The preset value for the synthetic name stem is dhcp.
The synthetic-name-stem must:
•
Be a relative name without a trailing dot.
•
Include alphanumeric values and hyphens (-) only. Space characters and underscores become hyphens and other characters are removed.
•
Include no leading or trailing hyphen characters.
•
Have DNS hostnames of no more than 63 characters per label and 255 characters in their entirety. The algorithm uses the configured forward zone name to determine the number of available characters for the hostname, and truncates the end of the last label if necessary.
For DHCPv6, see the "Generating Synthetic Names in DHCPv6" section.
b.
Set forward-zone-name to the forward zone, if updating forward zones. Note that the policy forward-zone-name takes precedence over the one set in the DNS update configuration.
For DHCPv6, the server ignores the client and client-class policies when searching for a forward-zone-name value in the policy hierarchy. The search for a forward zone name begins with the prefix embedded policy.
c.
For DHCPv4, set reverse-zone-name to the reverse (in-addr.arpa) zone to be updated with PTR and TXT records. If unset and the DHCP server synthesize-reverse-zone attribute is enabled, the server synthesizes a reverse zone name based on the address of each lease, scope subnet number, and DNS update configuration (or scope) dns-host-bytes attribute value.
The dns-host-bytes value controls the split between the host and zone parts of the reverse zone name. The value sets the number of bytes from the lease IP address to use for the hostname; the remaining bytes are used for the in-addr.arpa zone name. A value of 1 means use just one byte for the host part of the domain and the other three from the domain name (reversed). A value of 4 means use all four bytes for the host part of the address, thus using just the in-addr.arpa part of the domain. If unset, the server synthesizes an appropriate value based on the scope subnet size, or if the reverse-zone-name is defined, calculates the host bytes from this name.
For DHCPv6, see the "Determining Reverse Zones for DNS Updates" section.
d.
Set server-addr to the IP address of the primary DNS server for the forward zone (or reverse zone if updating reverse zones only).
e.
Set server-key and backup-server-key if you are using a TSIG key to process all DNS updates (see the "Transaction Security" section).
f.
Set backup-server-addr to the IP address of the backup DNS server, if HA DNS is configured.
g.
If necessary, enable or disable update-dns-first (preset value disabled) or update-dns-for-bootp (preset value enabled). The update-dns-first setting controls whether DHCP updates DNS before granting a lease. Enabling this attribute is not recommended.
Step 6
At the regional level, you can also push update configurations to the local clusters, or pull them from the replica database on the List/Add DNS Update Configurations page.
Step 7
Click Add DNS Update Configuration.
Step 8
To specify this DNS update configuration on a policy, see the "Creating and Applying DHCP Policies" section on page 21-3.
Use dhcp-dns-update name create. For example:
nrcmd> dhcp-dns-update example-update-config create
Set the dynamic-dns attribute to its appropriate value (update-none, update-all, update-fwd-only, or update-reverse-only). For example:
nrcmd> dhcp-dns-update example-update-config set dynamic-dns=update-all
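The remaining attributes from Step 5 can presumably be set with the same set syntax; the zone name and addresses below are illustrative placeholders only. For example:
nrcmd> dhcp-dns-update example-update-config set forward-zone-name=example.com
nrcmd> dhcp-dns-update example-update-config set server-addr=192.168.1.2
nrcmd> dhcp-dns-update example-update-config set backup-server-addr=192.168.1.3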
DNS Update Process
Special DNS Update Considerations
DNS Update for DHCPv6
A DNS update map facilitates configuring DNS updates so that the update properties are synchronized between HA DNS server pairs or DHCP failover server pairs, based on an update configuration, so as to reduce redundant data entry. The update map applies to all the primary zones that the DNS pairs service, or all the scopes that the DHCP pairs service. You must specify a policy for the update map. To use this function, you must be an administrator assigned the server-management subrole of the dns-management or central-dns-management role, and the dhcp-management role (for update configurations).
Step 1
From the DNS menu, choose Update Maps to open the List/Add DNS Update Maps page.
Step 2
Click Add DNS Update Map to open the Add DNS Update Map page.
Step 3
Enter a name for the update map in the Name field.
Step 4
Enter the DNS update configuration from the previous section in the dns-config field.
Step 5
Set the kind of policy selection you want for the dhcp-policy-selector attribute. The choices are:
•
use-named-policy—Use the named policy set for the dhcp-named-policy attribute (the preset value).
•
use-client-class-embedded-policy—Use the embedded policy from the client-class set for the dhcp-client-class attribute.
•
use-scope-embedded-policy—Use the embedded policy from the scope.
Step 6
If using update ACLs (see the "Configuring Access Control Lists and Transaction Security" section) or DNS update policies (see the "Configuring DNS Update Policies" section), set either the dns-update-acl or dns-update-policy-list attribute. Either value can be one or more addresses separated by commas. The dns-update-acl takes precedence over the dns-update-policy-list.
If you omit both values, a simple update ACL is constructed whereby only the specified DHCP servers or failover pair can perform updates, along with any server-key value set in the update configuration specified for the dns-config attribute.
Step 7
Click Add DNS Update Map.
Step 8
At the regional level, you can also push update maps to the local clusters, or pull them from the replica database on the List/Add DNS Update Maps page.
Specify the name, cluster of the DHCP and DNS servers (or DHCP failover or HA DNS server pair), and the DNS update configuration when you create the update map, using dns-update-map name create dhcp-cluster dns-cluster dns-config. For example:
nrcmd> dns-update-map example-update-map create Example-cluster Boston-cluster
example-update-config
Set the dhcp-policy-selector attribute value to use-named-policy, use-client-class-embedded-policy, or use-scope-embedded-policy. If using the use-named-policy value, also set the dhcp-named-policy attribute value. For example:
nrcmd> dns-update-map example-update-map set dhcp-policy-selector=use-named-policy
nrcmd> dns-update-map example-update-map set dhcp-named-policy=example-policy
ACLs are authorization lists, while transaction signatures (TSIG) is an authentication mechanism:
•
ACLs enable the server to allow or disallow the request or action defined in a packet.
•
TSIG ensures that DNS messages come from a trusted source and are not tampered with.
For each DNS query, update, or zone transfer that is to be secured, you must set up an ACL to provide permission control. TSIG processing is performed only on messages that contain TSIG information. A message that does not contain, or is stripped of, this information bypasses the authentication process.
For a totally secure solution, messages should be authorized by the same authentication key. For example, if the DHCP server is configured to use TSIG for DNS updates and the same TSIG key is included in the ACL for the zones to be updated, then any packet that does not contain TSIG information fails the authorization step. This secures the update transactions and ensures that messages are both authenticated and authorized before making zone changes.
ACLs and TSIG play a role in setting up DNS update policies for the server or zones, as described in the "Configuring DNS Update Policies" section.
Access Control Lists
Configuring Zones for Access Control Lists
Transaction Security
You assign ACLs on the DNS server or zone level. ACLs can include one or more of these elements:
•
IP address—In dotted decimal notation; for example, 192.168.1.2.
•
Network address—In dotted decimal and slash notation; for example, 192.168.0.0/24. In this example, only hosts on that network can update the DNS server.
•
Another ACL—Must be predefined. You cannot delete an ACL that is embedded in another one until you remove the embedded relationship. You should not delete an ACL until all references to that ACL are deleted.
•
Transaction Signature (TSIG) key—The value must be in the form key value, with the keyword key followed by the secret value. To accommodate space characters, the entire list must be enclosed in double quotes. For TSIG keys, see the "Transaction Security" section.
You assign each ACL a unique name. However, the following ACL names have special meanings and you cannot use them for regular ACL names:
•
any—Anyone can perform a certain action
•
none—No one can perform a certain action
•
localhost—Any of the local host addresses can perform a certain action
•
localnets—Any of the local networks can perform a certain action
Note the following:
•
If an ACL is not configured, any is assumed.
•
If an ACL is configured, at least one clause must allow traffic.
•
The negation operator (!) disallows traffic for the object it precedes, but it does not intrinsically allow anything else unless you also explicitly specify it. For example, to disallow traffic for the IP address 192.168.50.0 only, use !192.168.50.0, any.
Click DNS, then ACLs to open the List/Add Access Control Lists page. Add an ACL name and match list. Note that a key value pair should not be in quotes. At the regional level, you can additionally pull replica ACLs or push ACLs to local clusters.
Use acl name create match-list, which takes a name and one or more ACL elements. The ACL list is comma-separated, with double quotes surrounding it if there is a space character. The CLI does not provide the pull/push function.
For example, the following commands create three ACLs. The first is a key with a value, the second is for a network, and the third points to the first ACL. Including an exclamation point (!) before a value negates that value, so that you can exclude it in a series of values:
nrcmd> acl sec-acl create "key h-a.h-b.example.com."
nrcmd> acl dyn-update-acl create "!192.168.2.13,192.168.2.0/24"
nrcmd> acl main-acl create sec-acl
To configure ACLs for the DNS server or zones, set up a DNS update policy, then define this update policy for the zone (see the "Configuring DNS Update Policies" section).
Transaction Signature (TSIG) RRs enable the DNS server to authenticate each message that it receives, containing a TSIG. Communication between servers is not encrypted but it becomes authenticated, which allows validation of the authenticity of the data and the source of the packet.
When you configure the Cisco Network Registrar DHCP server to use TSIG for DNS updates, the server appends a TSIG RR to the messages. Part of the TSIG record is a message authentication code.
When the DNS server receives a message, it looks for the TSIG record. If it finds one, it first verifies that the key name in it is one of the keys it recognizes. It then verifies that the time stamp in the update is reasonable (to help fight against traffic replay attacks). Finally, the server looks up the key shared secret that was sent in the packet and calculates its own authentication code. If the resulting calculated authentication code matches the one included in the packet, then the contents are considered to be authentic.
Creating TSIG Keys
Generating Keys
Considerations for Managing Keys
Adding Supporting TSIG Attributes
Note
If you want to enable key authentication for Address-to-User Lookup (ATUL) support, you must also define a key identifier (id attribute value). See the "Setting DHCP Forwarding" section on page 23-24.
From the Administration menu or the DNS menu, choose Keys, to open the List/Add Encryption Keys page.
For a description of the Algorithm, Security Type, Time Skew, Key ID, and Secret values, see Table 28-1. See also the "Considerations for Managing Keys" section.
To edit a TSIG key, click its name on the List/Add Encryption Keys page to open the Edit Encryption Key page.
At the regional level, you can additionally pull replica keys, or push keys to local clusters.
Use key name create secret. Provide a name for the key (in domain name format; for example, hosta-hostb-example.com.) and a minimum of the shared secret as a base-64 encoded string (see Table 28-1 for a description of the optional time skew attribute). An example in the CLI would be:
nrcmd> key hosta-hostb-example.com. create secret-string
It is recommended that you use the Cisco Network Registrar cnr_keygen utility to generate TSIG keys so that you can add them, or import them using import keys.
Execute the cnr_keygen key generator utility from a DOS prompt, or a Solaris or Linux shell:
•
On Windows, the utility is in the install-path\bin folder.
•
On Solaris and Linux, the utility is in the install-path/usrbin directory.
An example of its usage (on Solaris and Linux) is:
> /opt/nwreg2/local/usrbin/cnr_keygen -n a.b.example.com. -a hmac-md5 -t TSIG -b 16
-s 300
key "a.b.example.com." {
algorithm hmac-md5;
secret "xGVCsFZ0/6e0N97HGF50eg==";
# cnr-time-skew 300;
# cnr-security-type TSIG;
};
The only required input is the key name. The options are described in Table 28-1.
The resulting secret is base64-encoded as a random string.
You can also redirect the output to a file if you use the right-arrow (>) or double-right-arrow (>>) indicators at the end of the command line. The > writes or overwrites a given file, while the >> appends to an existing file. For example:
> /opt/nwreg2/local/usrbin/cnr_keygen -n example.com > keyfile.txt
> /opt/nwreg2/local/usrbin/cnr_keygen -n example.com >> addtokeyfile.txt
You can then import the key file into Cisco Network Registrar using the CLI to generate the keys in the file. The key import can generate as many keys as it finds in the import file. The path to the file should be fully qualified. For example:
nrcmd> import keys keydir/keyfile.txt
If you generate your own keys, you must enter them as a base64-encoded string (See RFC 4648 for more information on base64 encoding). This means that the only characters allowed are those in the base64 alphabet and the equals sign (=) as pad character. Entering a nonbase64-encoded string results in an error message.
Here are some other suggestions:
•
Do not add or modify keys using batch commands.
•
Change shared secrets frequently; every two months is recommended. Note that Cisco Network Registrar does not explicitly enforce this.
•
The shared secret length should be at least as long as the keyed message digest (HMAC-MD5 is 16 bytes). Note that Cisco Network Registrar does not explicitly enforce this and only checks that the shared secret is a valid base64-encoded string, but it is the policy recommended by RFC 2845.
To add TSIG support for a DNS update configuration (see the "Creating DNS Update Configurations" section), set these attributes:
•
server-key
•
backup-server-key
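For example, assuming the key created in the "Creating TSIG Keys" section and the same attribute set syntax shown earlier (the key name here is only an illustration):
nrcmd> dhcp-dns-update example-update-config set server-key=hosta-hostb-example.com.
nrcmd> dhcp-dns-update example-update-config set backup-server-key=hosta-hostb-example.com.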
DNS update policies provide a mechanism for managing update authorization at the RR level. Using update policies, you can grant or deny DNS updates based on rules that are based on ACLs as well as RR names and types. ACLs are described in the "Access Control Lists" section.
Compatibility with Previous Cisco Network Registrar Releases
Creating and Editing Update Policies
Defining and Applying Rules for Update Policies
Previous Cisco Network Registrar releases used static RRs that administrators entered, but that DNS updates could not modify. This distinction between static and dynamic RRs no longer exists. RRs can now be marked as protected or unprotected (see the "Protecting Resource Record Sets" section on page 16-3). Administrators creating or modifying RRs can now specify whether RRs should be protected. A DNS update cannot modify a protected RR set, even if an RR of the given type does not yet exist in the set.
Note
Previous releases allowed DNS updates only to A, TXT, PTR, CNAME and SRV records. This was changed to allow updates to all but SOA and NS records in unprotected name sets. To remain compatible with a previous release, use an update policy to limit RR updates.
Creating an update policy initially involves creating a name for it.
Step 1
From the DNS menu, choose Update Policies to open the List DNS Update Policies page.
Step 2
Click Add Policy to open the Add DNS Update Policy page.
Step 3
Enter a name for the update policy.
Step 4
Proceed to the "Defining and Applying Rules for Update Policies" section.
Use update-policy name create; for example:
nrcmd> update-policy policy1 create
DNS update policies are effective only if you define rules for each that grant or deny updates for certain RRs based on an ACL. If no rule is satisfied, the default (last implicit) rule is to deny all updates ("deny any wildcard * *").
Defining Rules for Named Update Policies
Applying Update Policies to Zones
Defining rules for named update policies involves a series of Grant and Deny statements.
Step 1
Create an update policy, as described in the "Creating and Editing Update Policies" section, or edit it.
Step 2
On the Add DNS Update Policies or Edit DNS Update Policy page:
a.
Enter an optional value in the Index field.
b.
Click Grant to grant the rule, or Deny to deny the rule.
c.
Enter an access control list in the ACL List field.
d.
Choose a keyword from the Keyword drop-down list.
e.
Enter a value based on the keyword in the Value field. This can be a RR or subdomain name, or, if the wildcard keyword is used, it can contain wildcards (see Table 28-2).
f.
Enter one or more RR types, separated by commas, in the RR Types field, or use * for "all RRs." You can use negated values, which are values prefixed by an exclamation point; for example, !PTR.
g.
Click Add Policy.
Step 3
At the regional level, you can also push update policies to the local clusters, or pull them from the replica database on the List DNS Update Policies page.
Step 4
To edit an update policy, click the name of the update policy on the List DNS Update Policies page to open the Edit DNS Update Policy page, make changes to the fields, then click Edit Policy.
Create or edit an update policy (see the "Creating and Editing Update Policies" section), then use update-policy name rules add rule, with rule being the rule text. (See Table 28-2 for the rule wildcard values.) For example:
nrcmd> update-policy policy1 rules add "grant 192.168.50.101 name host1 A,TXT" 0
The rule is enclosed in quotes. To parse the rule syntax for the example:
•
grant—Action that the server should take, either grant or deny.
•
192.168.50.101—The ACL, in this case an IP address. The ACL can be one of the following:
–
Name—ACL created by name, as described in the "Access Control Lists" section.
–
IP address, as in the example.
–
Network address, including mask; for example, 192.168.50.0/24.
–
TSIG key—Transaction signature key, in the form key=key (as described in the "Transaction Security" section).
–
One of the reserved words:
any—Any ACL
none—No ACL
localhost—Any local host addresses
localnets—Any local network address
You can negate the ACL value by preceding it with an exclamation point (!).
•
name—Keyword, or type of check to perform on the RR, which can be one of the following:
–
name—Name of the RR, requiring a name value.
–
subdomain—Name of the RR or the subdomain with any of its RRs, requiring a name or subdomain value.
–
wildcard—Name of the RR, using a wildcard value (see Table 28-2).
•
host1—Value based on the keyword, in this case the RR named host1. This can also be a subdomain name or, if the wildcard keyword is used, can contain wildcards (see Table 28-2).
•
A,TXT—RR types, each separated by a comma. This can be a list of any of the RR types described in Appendix A, "Resource Records." You can negate each record type value by preceding it with an exclamation point (!).
•
Note that if this or any assigned rule is not satisfied, the default is to deny all RR updates.
Tacked onto the end of the rule, outside the quotes, is an index number, in the example, 0. The index numbers start at 0. If there are multiple rules for an update policy, the index serves to add the rule in a specific order, such that lower numbered indexes have priority in the list. If a rule does not include an index, it is placed at the end of the list. Thus, a rule always has an index, whether or not it is explicitly defined. You also specify the index number in case you need to remove the rule.
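As an illustrative sketch that combines the pieces described above (the TSIG key, subdomain, and RR types here are hypothetical), a second rule could be added at index 1 so that it is evaluated after the first:
nrcmd> update-policy policy1 rules add "grant key=hosta-hostb-example.com. subdomain w2k.example.com A,SRV" 1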
To replace a rule, use update-policy name delete, then recreate the update policy. To edit a rule, use update-policy name rules remove index, where index is the explicitly defined or system-defined index number (remembering that the index numbering starts at 0), then recreate the rule. To remove the second rule in the previous example, enter:
nrcmd> update-policy policy1 rules remove 1
After creating an update policy, you can apply it to a zone (forward and reverse) or zone template.
Step 1
From the DNS menu, choose Forward Zones to open the List/Add Zones page.
Step 2
Click the name of the zone to open the Edit Zone page.
Tip
You can also perform this function for zone templates on the Edit Zone Template page, and primary reverse zones on the Edit Primary Reverse Zone page (see Chapter 15, "Managing Zones.").
Step 3
Enter the name or (comma-separated) names of one or more of the existing named update policies in the update-policy-list attribute field.
Note
The server processes the update-acl before it processes the update-policy-list.
Step 4
Click Modify Zone.
Use zone name set update-policy-list, equating the update-policy-list attribute with a quoted list of comma-separated update policies, as defined in the "Creating and Editing Update Policies" section. For example:
nrcmd> zone example.com set update-policy-list="policy1,policy2"
The Cisco Network Registrar DHCP server stores all pending DNS update data on disk. If the DHCP server cannot communicate with a DNS server, it periodically tests for re-established communication and submits all pending updates. This test typically occurs every 40 seconds.
Click DNS, then Forward Zones. Click the View icon in the RRs column to open the List/Add DNS Server RRs for Zone page.
Use zone name listRR dns.
Microsoft Windows DNS clients that get DHCP leases can update (refresh) their Address (A) records directly with the DNS server. Because many of these clients are mobile laptops that are not permanently connected, some A records may become obsolete over time. The Windows DNS server scavenges and purges these primary zone records periodically. Cisco Network Registrar provides a similar feature that you can use to periodically purge stale records.
Scavenging is normally disabled by default, but you should enable it for zones that exclusively contain Windows clients. Zones are configured with no-refresh and refresh intervals. A record expires once it ages past its initial creation date plus these two intervals. Figure 28-1 shows the intervals in the scavenging time line.
Figure 28-1 Address Record Scavenging Time Line Intervals
The Cisco Network Registrar process is:
1.
When the client updates the DNS server with a new A record, this record gets a timestamp, or if the client refreshes its A record, this may update the timestamp ("Record is created or refreshed").
2.
During a no-refresh interval (a default value of seven days), if the client keeps sending the same record without an address change, this does not update the record timestamp.
3.
Once the record ages past the no-refresh interval, it enters the refresh interval (also a default value of seven days), during which time DNS updates refresh the timestamp and put the record back into the no-refresh interval.
4.
A record that ages past the refresh interval is available for scavenging when it reaches the scavenge interval.
Note
Only unprotected RRs are scavenged. To keep RRs from being scavenged, set them to protected. However, top-of-zone (@) RRs, even if unprotected, are not scavenged.
The following zone attributes affect scavenging:
•
scvg-interval—Period during which the DNS server checks for stale records in a zone. The value can range from one hour to 365 days. You can also set this for the server (the default value is one week), although the zone setting overrides it.
•
scvg-no-refresh-interval—Interval during which actions, such as dynamic or prerequisite-only DNS updates, do not update the record timestamp. The value can range from one hour to 365 days. The zone setting overrides the server setting (the default value is one week).
•
scvg-refresh-interval—Interval during which DNS updates increment the record timestamp. After both the no-refresh and refresh intervals expire, the record is a candidate for scavenging. The value can range from one hour to 365 days. The zone setting overrides the server setting (the default value is one week).
•
scvg-ignore-restart-interval—Ensures that the server does not reset the scavenging time with every server restart. Within this interval, Cisco Network Registrar ignores the duration between a server down instance and a restart, which is usually fairly short.
The value can range from two hours to one day. With any value longer than that set, Cisco Network Registrar recalculates the scavenging period to allow for record updates that cannot take place while the server is stopped. The zone setting overrides the server setting (the default value is 2 hours).
Enable scavenging only for zones where a Cisco Network Registrar DNS server receives updates exclusively from Windows clients (or those known to do automatic periodic DNS updates). Set the attributes listed above. The Cisco Network Registrar scavenging manager starts at server startup. It reports records purged through scavenging to the changeset database. Cisco Network Registrar also notifies secondary zones by way of zone transfers of any records scavenged from the primary zone. In cases where you create a zone that has scavenging disabled (the records do not have a timestamp) and then subsequently enable it, Cisco Network Registrar uses a proxy timestamp as a default timestamp for each record.
You can monitor scavenging activity using one or more of the log settings scavenge, scavenge-details, ddns-refreshes, and ddns-refreshes-details.
On the Manage DNS Server page, click the Run icon in the Commands column to open the DNS Server Commands page (see Figure 7-1 on page 7-2). On this page, click the Run icon next to Scavenge all zones.
To scavenge a particular forward or reverse zone only, go to the Zone Commands for Zone page, which is available by clicking the Run icon on the List/Add Zones page or List/Add Reverse Zones page. Click the Run icon again next to Scavenge zone on the Zone Commands for Zone page. To find out the next time scavenging is scheduled for the zone, click the Run icon next to Get scavenge start time.
Use dns scavenge for all zones that have scavenging enabled, or zone name scavenge for a specific zone that has it enabled. Use the getScavengeStartTime action on a zone to find out the next time scavenging is scheduled to start.
You can use a standard DNS tool such as dig and nslookup to query the server for RRs. The tool can be valuable in determining whether dynamically generated RRs are present. For example:
$ nslookup
default Server: server2.example.com
Address: 192.168.1.2
> leasehost1.example.com
Server: server2.example.com
Address: 192.168.1.100
> set type=ptr
> 192.168.1.100
Server: server2.example.com
Address: 192.168.1.100
100.40.168.192.in-addr.arpa name = leasehost1.example.com
40.168.192.in-addr.arpa nameserver = server2.example.com
You can monitor DNS updates on the DNS server by setting the log-settings attribute to ddns, or show even more details by setting it to ddns-details.
The Windows operating system relies heavily on DNS and, to a lesser extent, DHCP. This reliance requires careful preparation on the part of network administrators prior to wide-scale Windows deployments. Windows clients can add entries for themselves into DNS by directly updating forward zones with their address (A) record. They cannot update reverse zones with their pointer (PTR) records.
Client DNS Updates
Dual Zone Updates for Windows Clients
DNS Update Settings in Windows Clients
Windows Client Settings in DHCP Servers
SRV Records and DNS Updates
Issues Related to Windows Environments
Frequently Asked Questions About Windows Integration
It is not recommended that clients be allowed to update DNS directly.
For a Windows client to send address record updates to the DNS server, two conditions must apply:
•
The Windows client must have the Register this connection's addresses in DNS box checked on the DNS tab of its TCP/IP control panel settings.
•
The DHCP policy must enable direct updating (Cisco Network Registrar policies do so by default).
The Windows client notifies the DHCP server of its intention to update the A record to the DNS server by sending the client-fqdn DHCP option (81) in a DHCPREQUEST packet. By indicating the fully qualified domain name (FQDN), the option states unambiguously the client location in the domain namespace. Along with the FQDN itself, the client or server can send one of these possible flags in the client-fqdn option:
•
0—Client should register its A record directly with the DNS server, and the DHCP server registers the PTR record (done through the policy allow-client-a-record-update attribute being enabled).
•
1—Client wants the DHCP server to register its A and PTR records with the DNS server.
•
3—DHCP server registers the A and PTR records with the DNS server regardless of the client request (done through the policy allow-client-a-record-update attribute being disabled, which is the default value). Only the DHCP server can set this flag.
The DHCP server returns its own client-fqdn response to the client in a DHCPACK based on whether DNS update is enabled. However, if the 0 flag is set (the allow-client-a-record-update attribute is enabled for the policy), enabling or disabling DNS update is irrelevant, because the client can still send its updates to DNS servers. See Table 28-3 for the actions taken based on how various properties are set.
A Windows DHCP server can set the client-fqdn option to ignore the client request. To enable this behavior in Cisco Network Registrar, create a policy for Windows clients and disable the allow-client-a-record-update attribute for this policy.
The following attributes are enabled by default in Cisco Network Registrar:
•
Server use-client-fqdn—The server uses the client-fqdn value on incoming packets and does not examine the host-name. The DHCP server ignores all characters after the first dot in the domain name value, because it determines the domain from the defined scope for that client. Disable use-client-fqdn only if you do not want the server to determine hostnames from client-fqdn, possibly because the client is sending unexpected characters.
•
Server use-client-fqdn-first—The server examines client-fqdn on incoming packets from the client before examining the host-name option (12). If client-fqdn contains a hostname, the server uses it. If the server does not find the option, it uses the host-name value. If use-client-fqdn-first is disabled, the server prefers the host-name value over client-fqdn.
•
Server use-client-fqdn-if-asked—The server returns the client-fqdn value in the outgoing packets if the client requests it. For example, the client might want to know the status of DNS activity, and hence request that the DHCP server should present the client-fqdn value.
•
Policy allow-client-a-record-update—The client can update its A record directly with the DNS server, as long as the client sets the client-fqdn flag to 0 (requesting direct updating). Otherwise, the server updates the A record based on other configuration properties.
The hostnames returned to client requests vary depending on these settings (see Table 28-4).
Windows DHCP clients might be part of a DHCP deployment where they have A records in two DNS zones. In this case, the DHCP server returns the client-fqdn so that the client can request a dual zone update. To enable a dual zone update, enable the policy attribute allow-dual-zone-dns-update.
The DHCP client sends the 0 flag in client-fqdn and the DHCP server returns the 0 flag so that the client can update the DNS server with the A record in its main zone. However, the DHCP server also directly sends an A record update based on the client secondary zone on behalf of the client. If both allow-client-a-record-update and the allow-dual-zone-dns-update are enabled, allowing the dual zone update takes precedence so that the server can update the secondary zone A record.
The Windows client can set advanced properties to enable sending the client-fqdn option.
Step 1
On the Windows client, go to the Control Panel and open the TCP/IP Settings dialog box.
Step 2
Click the Advanced tab.
Step 3
Click the DNS tab.
Step 4
To have the client send the client-fqdn option in its request, leave the Register this connection's addresses in DNS box checked. This indicates that the client wants to do the A record update.
You can apply a relevant policy to a scope that includes the Windows clients, and enable DNS updates for the scope.
Step 1
Create a policy for the scope that includes the Windows clients. For example:
a.
Create a policy named policywin2k.
b.
Create a win2k scope with the subnet 192.168.1.0/24 and policywin2k as the policy. Add an address range of 192.168.1.10 through 192.168.1.100.
Step 2
Set the scope attribute dynamic-dns to update-all, update-fwd-only, or update-rev-only.
Step 3
Set the zone name, server address (for A records), reverse zone name, and reverse server address (for PTR records), as described in the "Creating DNS Update Configurations" section.
Step 4
If you want the client to update its A records at the DNS server, enable the policy attribute allow-client-a-record-update (this is the preset value). There are a few caveats to this:
•
If allow-client-a-record-update is enabled and the client sends the client-fqdn with the update bit enabled, the host-name and client-fqdn returned to the client match the client client-fqdn. (However, if the override-client-fqdn is also enabled on the server, the hostname and FQDN returned to the client are generated by the configured hostname and policy domain name.)
•
If, instead, the client does not send the client-fqdn with the update bit enabled, the server does the A record update, and the host-name and client-fqdn (if requested) returned to the client match the name used for the DNS update.
•
If allow-client-a-record-update is disabled, the server does the A record updates, and the host-name and client-fqdn (with the update bit disabled) values returned to the client match the name used for the DNS update.
•
If allow-dual-zone-dns-update is enabled, the DHCP server always does the A record updates. (See the "Dual Zone Updates for Windows Clients" section.)
•
If use-dns-update-prereqs is enabled (the preset value) for the DHCP server or DNS update configuration and update-dns-first is disabled (the preset value) for the update configuration, the hostname and client-fqdn returned to the client are not guaranteed to match the DNS update, because of delayed name disambiguation. However, the lease data will be updated with the new names.
According to RFC 2136, update prerequisites determine the action the primary master DNS server takes based on whether an RR set or name record should or should not exist. Disable use-dns-update-prereqs only under rare circumstances.
Step 5
Reload the DHCP server.
Windows relies heavily on the DNS protocol for advertising services to the network. Table 28-5 describes how Windows handles service location (SRV) DNS RRs and DNS updates.
You can configure the Cisco Network Registrar DNS server so that Windows domain controllers can dynamically register their services in DNS and, thereby, advertise themselves to the network. Because this process occurs through RFC-compliant DNS updates, you do not need to do anything out of the ordinary in Cisco Network Registrar.
To configure Cisco Network Registrar to accept these dynamic SRV record updates:
Step 1
Determine the IP addresses of the devices in the network that need to advertise services through DNS.
Step 2
If they do not exist, create the appropriate forward and reverse zones for the Windows domains.
Step 3
Enable DNS updates for the forward and reverse zones.
Step 4
Set up a DNS update policy to define the IP addresses of the hosts to which you want to restrict accepting DNS updates (see the "Configuring DNS Update Policies" section). These are usually the DHCP servers and any Windows domain controllers. (The Windows domain controllers should have static IP addresses.)
If it is impractical or impossible to enter the list of all the IP addresses from which a DNS server must accept updates, you can configure Cisco Network Registrar to accept updates from a range of addresses, although Cisco does not recommend this configuration.
Step 5
Reload the DNS and DHCP servers.
Table 28-6 describes the issues concerning interoperability between Windows and Cisco Network Registrar. The information in this table is intended to inform you of possible problems before you encounter them in the field. For some frequently asked questions about Windows interoperability, see the "Frequently Asked Questions About Windows Integration" section.
Example 28-1 Output Showing Invisible Dynamically Created RRs
Dynamic Resource Records
_ldap._tcp.test-lab._sites 600 IN SRV 0 100 389 CNR-MKT-1.w2k.example.com.
_ldap._tcp.test-lab._sites.gc._msdcs 600 IN SRV 0 100 3268 CNR-MKT-1.w2k.example.com.
_kerberos._tcp.test-lab._sites.dc._msdcs 600 IN SRV 0 100 88 CNR-MKT-1.w2k.example.com.
_ldap._tcp.test-lab._sites.dc._msdcs 600 IN SRV 0 100 389 CNR-MKT-1.w2k.example.com.
_ldap._tcp 600 IN SRV 0 100 389 CNR-MKT-1.w2k.example.com.
_kerberos._tcp.test-lab._sites 600 IN SRV 0 100 88 CNR-MKT-1.w2k.example.com.
_ldap._tcp.pdc._msdcs 600 IN SRV 0 100 389 CNR-MKT-1.w2k.example.com.
_ldap._tcp.gc._msdcs 600 IN SRV 0 100 3268 CNR-MKT-1.w2k.example.com.
_ldap._tcp.1ca176bc-86bf-46f1-8a0f-235ab891bcd2.domains._msdcs 600 IN SRV 0 100 389
CNR-MKT-1.w2k.example.com.
e5b0e667-27c8-44f7-bd76-6b8385c74bd7._msdcs 600 IN CNAME CNR-MKT-1.w2k.example.com.
_kerberos._tcp.dc._msdcs 600 IN SRV 0 100 88 CNR-MKT-1.w2k.example.com.
_ldap._tcp.dc._msdcs 600 IN SRV 0 100 389 CNR-MKT-1.w2k.example.com.
_kerberos._tcp 600 IN SRV 0 100 88 CNR-MKT-1.w2k.example.com.
_gc._tcp 600 IN SRV 0 100 3268 CNR-MKT-1.w2k.example.com.
_kerberos._udp 600 IN SRV 0 100 88 CNR-MKT-1.w2k.example.com.
_kpasswd._tcp 600 IN SRV 0 100 464 CNR-MKT-1.w2k.example.com.
_kpasswd._udp 600 IN SRV 0 100 464 CNR-MKT-1.w2k.example.com.
gc._msdcs 600 IN A 10.100.200.2
_gc._tcp.test-lab._sites 600 IN SRV 0 100 3268 CNR-MKT-1.w2k.example.com.
These questions are frequently asked about integrating Cisco Network Registrar DNS services with Windows:
Q.
What happens if both Windows clients and the DHCP server are allowed to update the same zone? Can this create the potential for stale DNS records being left in a zone? If so, what can be done about it?
A.
The recommendation is not to allow Windows clients to update their zones. Instead, the DHCP server should manage all the client dynamic RR records. When configured to perform DNS updates, the DHCP server accurately manages all the RRs associated with the clients that it served leases to. In contrast, Windows client machines blindly send a daily DNS update to the server, and when removed from the network, leave a stale DNS entry behind.
Any zone being updated by DNS update clients should have DNS scavenging enabled to shorten the longevity of stale RRs left by transient Windows clients. If the DHCP server and Windows clients are both updating the same zone, three things are required in Cisco Network Registrar:
a.
Enable scavenging for the zone.
b.
Configure the DHCP server to refresh its DNS update entries as each client renews its lease. By default, Cisco Network Registrar does not update the DNS record again between its creation and its final deletion. A DNS update record that Cisco Network Registrar creates lives from the start of the lease until the lease expires. You can change this behavior using a DHCP server (or DNS update configuration) attribute, force-dns-updates. For example:
nrcmd> dhcp enable force-dns-updates
100 Ok
force-dns-updates=true
c.
If scavenging is enabled on a particular zone, then the lease time associated with clients that the DHCP server updates that zone on behalf of must be less than the sum of the no-refresh-interval and refresh-interval scavenging settings. Both of these settings default to seven days. You can set the lease time to 14 days or less if you do not change these default values.
Q.
What needs to be done to integrate a Windows domain with a pre-existing DNS domain naming structure if it was decided not to have overlapping DNS and Windows domains? For example, if there is a pre-existing DNS domain called example.com and a Windows domain is created that is called w2k.example.com, what needs to be done to integrate the Windows domain with the DNS domain?
A.
In the example, a tree in the Windows domain forest would have a root of w2k.example.com. There would be a DNS domain named example.com. This DNS domain would be represented by a zone named example.com. There may be additional DNS subdomains represented in this zone, but no subdomains are ever delegated out of this zone into their own zones. All the subdomains will always reside in the example.com. zone.
Q.
In this case, how are DNS updates from the domain controllers dealt with?
A.
To deal with the SRV record updates from the Windows domain controllers, limit DNS updates to the example.com. zone to the domain controllers by IP address only. (Later, you will also add the IP address of the DHCP server to the list.) Enable scavenging on the zone. The controllers will update SRV and A records for the w2k.example.com subdomain in the example.com zone. There is no special configuration required to deal with the A record update from each domain controller, because an A record for w2k.example.com does not conflict with the SOA, NS, or any other static record in the example.com zone.
The example.com zone then might include these records:
example.com. 43200 SOA ns.example.com. hostmaster.example.com. (
98011312 ;serial
3600 ;refresh
3600 ;retry
3600000 ;expire
43200 ) ;minimum
example.com. 86400 NS ns.example.com
ns.example.com. 86400 A 10.0.0.10
_ldap._tcp.w2k.example.com. IN SRV 0 0 389 dc1.w2k.example.com
w2k.example.com 86400 A 10.0.0.25
...
Q.
In this case, how are zone updates from individual Windows client machines dealt with?
A.
In this scenario, the clients could potentially try to update the example.com. zone with updates to the w2k.example.com domain. To prevent this, limit DNS updates to the example.com. zone to the IP address of the DHCP server, excluding any other device on the network. Security by IP is not the most ideal solution, as it would not prevent a malicious attack from a spoofed IP address source. You can secure updates from the DHCP server by configuring TSIG between the DHCP server and the DNS server.
Q.
Is scavenging required in this case?
A.
No. Updates are only accepted from the domain controllers and the DHCP server. The DHCP server accurately maintains the life cycle of the records that it adds, so they do not require scavenging. You can manage the domain controller dynamic entries manually by using the Cisco Network Registrar single-record dynamic RR removal feature.
Q.
What needs to be done to integrate a Windows domain that shares its namespace with a DNS domain? For example, if there is a pre-existing DNS zone called example.com and a Windows Active Directory domain called example.com needs to be deployed, how can it be done?
A.
In this example, a tree in the Windows domain forest would have a root of example.com. There is a pre-existing domain that is also named example.com that is represented by a zone named example.com.
Q.
In this case, how are DNS updates from the domain controllers dealt with?
A.
To deal with the SRV record updates, create subzones for:
_tcp.example.com.
_sites.example.com.
_msdcs.example.com.
_udp.example.com.
Limit DNS updates to those zones to the domain controllers by IP address only. Enable scavenging on these zones.
To deal with the A record update from each domain controller, enable a DNS server attribute, simulate-zone-top-dynupdate.
nrcmd> dns enable simulate-zone-top-dynupdate
It is not required, but if desired, manually add an A record for the domain controllers to the example.com zone.
Q.
In this case, how are zone updates from individual Windows client machines dealt with?
A.
In this scenario, the clients could potentially try to update the example.com zone. To prevent this, limit DNS updates to the zone to the IP address of the DHCP server, excluding other devices on the network. Security by IP is not the most ideal solution, as it would not prevent a malicious attack from a spoofed source. Updates from the DHCP server are more secure when TSIG is configured between the DHCP server and the DNS server.
Q.
Has scavenging been addressed in this case?
A.
Yes. The _tcp.example.com, _sites.example.com, _msdcs.example.com, and _udp.example.com subzones accept updates only from the domain controllers, and scavenging was turned on for these zones. The example.com zone accepts DNS updates only from the DHCP server.
|
http://www.cisco.com/c/en/us/td/docs/net_mgmt/network_registrar/7-2/user/guide/cnr72book/UG27_DDN.html
|
CC-MAIN-2016-18
|
en
|
refinedweb
|
You have to ->import the symbols into your library:
use warnings;
use strict;
package MyStandardModules;
use Time::HiRes ();
package main;
Time::HiRes->import('gettimeofday');
print gettimeofday();
I suggest either calling the function/method qualified with the namespace (ala my $time = Time::HiRes::gettimeofday()) or use()ing the module in the place you need the symbol imported.
Could you explain your aversion to use()ing the module in the package where the function is needed?
In reply to Re: Export again
by trwww
in thread Export again
by Cag.
|
http://www.perlmonks.org/?parent=889482;node_id=3333
|
CC-MAIN-2016-18
|
en
|
refinedweb
|
You can either launch it from python via sublime.run_command or you can create a key binding or create a menu entry. There's a tutorial linked from the root documentation page going into more details.
I've reviewed those resources, but I still cannot figure it out. I don't have a view with which to work, so I'm confused as to how I can invoke the plugin from the Python console built into Sublime. The tutorial shows running the command as follows:
view.run_command('hello')
I've also tried using:
sublime.run_command('hello')
I've tried creating a key binding in my user file:
{ "keys": "super+shift+h"], "command": "hello" }
]
All I'm trying to do at the moment is to just print something to the console from a command that doesn't use the view:
import sublime, sublimeplugin

class HelloCommand(sublime_plugin.ApplicationCommand):
    def run(self, args):
        print "Hello"
Ultimately, I'd like to make a plugin that will prompt me for a URL that will go fetch that file based on specific parameters like drive mappings. But to get there, I need to understand how to do basic stuff like this.
It is sublime_plugin.
Also check the Console (Ctrl- ' (apostrophe)) for any error message.
Bah! I should have known it was some mundane typo.
Thanks!
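For the URL-prompting plugin mentioned earlier in the thread, a minimal sketch along the following lines could be a starting point. This is an untested outline for Sublime Text 2; the command name and the fetch/insert logic are placeholders to be replaced with the real drive-mapping rules:

import urllib2
import sublime, sublime_plugin

class FetchUrlCommand(sublime_plugin.WindowCommand):
    def run(self):
        # Ask for a URL in the input panel at the bottom of the window.
        self.window.show_input_panel("URL:", "http://", self.on_done, None, None)

    def on_done(self, url):
        # Placeholder behaviour: fetch the content and dump it into a new buffer.
        # (A real plugin would probably do this on a background thread.)
        data = urllib2.urlopen(url).read()
        view = self.window.new_file()
        edit = view.begin_edit()
        view.insert(edit, 0, data)
        view.end_edit(edit)

Saved under Packages/User, the class name FetchUrlCommand maps to the command name fetch_url, so it could be bound with { "keys": ["super+shift+u"], "command": "fetch_url" }.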
|
https://forum.sublimetext.com/t/how-are-applicationcommands-invoked-and-other-questions/7736/4
|
CC-MAIN-2016-18
|
en
|
refinedweb
|
I have a question regarding the best practices for adding new nodes to an existing cluster.
From reading the following wiki: -- I understand
that when creating a brand new cluster -- we can use the following to calculate the initial
token for each node to achieve balance in the ring:
def tokens(nodes):
    for i in range(1, nodes + 1):
        print (i * (2 ** 127 - 1) / nodes)
My question is on the best practice for adding new nodes to an existing cluster. There is
a recommendation in the wiki which is to basically to compute new tokens for every node and
assign them manually using the nodetool command. We're planning on running either 16GB or
32GB heaps on each of our nodes, so token re-assignment for each node in the cluster sounds
like a very expensive operation especially in situations where we're adding new nodes to handle
scaling issues w/ the existing cluster.
I'm a bit of a noob to cassandra, so wanted to see how others are currently coping w/ this.
One option can be to grow the cluster in powers of 2 and use bootstrapping w/ automatic
token generation. Is this an option that people are using? (but this gets exponentially expensive
when you already have a large # of nodes)
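A quick way to see why power-of-2 growth is attractive with the formula above: the sketch below (integer division assumed; it is not from the wiki) shows that when a cluster doubles from 4 to 8 nodes, every existing token is still present in the new layout, so only the new nodes need to claim the bisecting tokens.

def tokens(nodes):
    return [i * (2 ** 127 - 1) // nodes for i in range(1, nodes + 1)]

old = set(tokens(4))
for t in tokens(8):
    if t in old:
        print("existing node keeps token", t)
    else:
        print("new node takes token", t)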
Does anyone know why cassandra doesn't use virtual tokens (e.g. one node token - creating
256 virtual node tokens in the ring)? This way adding new nodes to an existing cluster will
significantly mitigate the unbalance issue in the ring.
Thanks
gkim
|
http://mail-archives.apache.org/mod_mbox/cassandra-user/201010.mbox/%3C1288115100.81515077@192.168.2.228%3E
|
CC-MAIN-2016-18
|
en
|
refinedweb
|
Write a program that asks the user for their name and age then if they are between 0 and 21 says no drinking for you, 21-65 hi ho hi ho its off to work you go, then 65 and over enjoy your retirement while the money lasts. Turn in the .java file of a zip of the entire project. Make sure to comment your code.
Well, I have most of the code taken care of, but there is one problem that I have spent the last hour racking my brain trying to figure out, to no avail.
Here is my code:
package project_1;

import java.util.Scanner;

public class Project_1 {

    public static void main(String[] args) {
        String name;
        int age;
        Scanner data = new Scanner(System.in);
        System.out.print("Hello, please enter your name: ");
        name = data.next();
        System.out.print("\nThank you, now how old are you? ");
        age = data.nextInt();
        data.close();
        if (age < 21);
        System.out.println("\nSorry " +name+ " but no drinking for you.");
        if (age >=21 && age <= 65);
        System.out.println("Hi ho hi ho its off to work you go!");
Going by logic, 22 is greater than 21, but yet anytime I type 22 or higher I still get the "no drinking for you" message. Also when I type anything less than 21 I get the "Hi ho" message from the second if statement. I don't understand why. Help please? Thanks
Btw, this program is written with Eclipse.
This post has been edited by Goggz56: 06 October 2013 - 05:51 PM
|
http://www.dreamincode.net/forums/topic/330919-equality-statement-isnt-working-as-expected-please-help/
|
CC-MAIN-2016-18
|
en
|
refinedweb
|
Recently while I was studying a C++ tutorial, I wrote this piece of code:
#include <iostream>
#include <fstream>
#include <string>

using namespace std;

void main()
{
    string text;
    ifstream myFile ("data.txt");

    while (!myFile.eof())
    {
        getline(myFile, text);
        cout << text << endl;
    }

    cin.get();
}
Now the problem is I really don't understand this statement:
while (!myFile.eof())
Please explain me ! I am just a beginner, so the more lucid the explanation, the better !
If you feel there's something I need to learn beforehand/any enhancement to the code, please convey it to me ! I am ready to learn. Also how to check if the file already exists or not. Also please explain the ! operator.
Thanks
This post has been edited by Jeet.in: 26 June 2011 - 12:54 PM
|
http://www.dreamincode.net/forums/topic/237104-file-read-fstream-queries/page__p__1372860
|
CC-MAIN-2016-18
|
en
|
refinedweb
|
note by mstone

NICE summary.. bravo!

> I suppose that means then that a closure is a way of exposing a scoped variable without bringing it across the abstraction barrier.

Sort of.. A closure contains a variable whose binding was set in a specific evaluation context. The variable retains that binding, even after it leaves the context where the binding was defined. A closure makes an entity accessible outside the scope where it was defined, so I'd say it exposes the entity, not the variable per se.

> I'm still confused about what a summary/value is though. Could you provide a concrete example?

Sure.. the string "hello, world.\n" is a value. We can put it into a script:

"hello, world.\n";

but we can't do anything with it. It just sits there. Even if we use it by passing it to a function:

print "hello, world.\n";

it's still inaccessible to the rest of the script. We can't do anything else with "the value we just printed". If we want to use the same value in more than one place, we need to put that value inside an entity, then give that entity a name:

sub string { return ("hello, world.\n"); }
print string();

The name 'string' is an identifier, the function it's bound to is an entity, and the string "hello, world.\n" is a value.

> Also, does object permanence mean "the same entity" in terms of memory location or content?

'Memory location' is closest, but that carries all sorts of baggage specific to a given runtime environment.. it can't be a hardware location, because virtual memory systems shuffle things all over the place; it can't be a logical address, because garbage collectors tend to move things around; yadda yadda yadda.

Object permanence means that we can assume the name "string" will stay bound to the function that returns "hello, world.\n" until we explicitly re-bind it. As a counter-example, imagine a buggy compiler that sometimes confuses identifiers of the same length. The script:

sub f1 { return ("hello, world.\n"); }
sub pi { return (3.14159); }
for (1..10) { print f1(); }

could print any combination of "hello, world.\n"s and "3.14159"s. That would be a violation of object permanence, and would make the compiler useless as anything more than a random number generator.

Object permanence is one of those concepts that's so obvious, and so easy to take for granted, that thinking about it takes work. It's like trying to see air: the whole concept seems meaningless.. unless you're an astronomer, or live in L.A.
|
http://www.perlmonks.org/index.pl?displaytype=xml;node_id=222972
|
CC-MAIN-2016-18
|
en
|
refinedweb
|
Locked out at 2am
October 31, 2009 5 Comments
How it all started
It all started after a night out back in March 2009. I’d been out for a few post-work drinks and got a taxi home. When I tried to pay the driver, he asked if I had anything smaller than the awkward 50 euro note I was offering. I remembered that I had some smaller notes inside my apartment so I ran in to get them. I returned, paid the driver and turned to head back in to my apartment. It was then that I realised I’d left my keys inside and was locked out of the building.
I was less than absolutely delighted at this development as it was 2am on a Tuesday evening/Wednesday morning and I had to be up for work at ~7am. However getting annoyed wasn’t going to help so I decided to try waiting for someone else to enter/exit the building. In those silent minutes it occurred to me how useful it would be if I had a remote control for my apartment’s intercom. The intercom can open the building door at the press of a switch. There was nothing I could do at the time but I liked the idea and decided that I’d look into it the next day. I thought it might be worth writing a little about where these thoughts eventually lead me.
A GSM based remote control
I decided that a mobile phone controlled remote control would be the best option. This would have the advantage that I would not need to carry round any new hardware: I’d just need to have my phone with me. Also by using the GSM network I would be able to let people into the building even if I was not there.
A disclaimer
The above plan was all very well but the trouble was that the only electronics I knew was the little I remembered from my school days (over 10 years ago now). When I started this project I just knew what voltage, current, resistors and capacitors were and I had a vague understanding of diodes and transistors. That was it so please bear this in mind if I seem to have done any silly stuff below!
How the intercom works
The same intercom with the front cover removed
From the user’s point of view, the intercom has three components:
A push-switch which can be used to open the door to the building (seen on the top right of the housing in the picture). A speaker which will ring when the number of my apartment is dialled at the door to the building. A handset which can be used to converse with a person who has just caused my speaker to ring. [This component was irrelevant for my purposes.]
In theory, for this project I should just be interested in the switch but I found that the switch will not cause the door to open unless the speaker has been rung in the last minute or so. This means that for the remote control to work, the user has to ring my apartment on the building intercom system and then send a message via the GSM network to cause the switch to be pressed.
Version 1
There ended up being two versions of this project. I’ll talk in more detail about the second, current (final?) version below but there are a few details that were unique to version 1 which I think are worth mentioning. I’ll just sketch things here.
I was very keen that whatever hardware I ended up adding to my intercom would not alter the look of the hall in my apartment. I thus decided that anything I was going to use would have to fit inside the intercom enclosure. Also, after a few tests I discovered that only a few milliwatts of power were available in the circuits powering the intercom so whatever I was going to add there would almost certainly have to be battery powered. I wasn’t so keen on using batteries until I had the idea that since the speaker for the intercom has to be rung anyway for the switch to work, if I could have a device which was off (and so drawing no current from the batteries) except for a short period after the speaker was rung then I could expect acceptable battery life. Furthermore I had the idea that I could put a small low power short range RF tranceiver in the intercom which would receive commands from a GSM module elsewhere in my apartment. This way the GSM module could be plugged in and always on and very little power would be needed in the intercom, plus it’s easy to get very small RF transceivers that would easily fit inside the intercom enclosure. I decided on this as a solution and bought the following items (amongst others) from Sparkfun:
1x Telit GM862 GPS module
1x Sparkfun GM862 USB evaluation board
2x Maxstream XBee series 2 module (actually the link is to series 2.5 as series 2 seems to be gone)
2x Sparkfun XBee explorer USB board
The GM862 GPS module is superb. My favourite feature of it is that it has a python interpreter on it! You can write simple python programs on your PC, upload them to the module and then have them control the module. This obviates the need for another microcontroller telling the GM862 module what to do. It is easy to send and receive text messages and phone calls. The built in GPS module (though of no use to me in this project) is also very easy to use and although I have not yet done it, I believe it is easy to open GPRS connections using the module. The module also has a host of useful GPIO pins which are very useful and a serial port (it has many other features too!).
The XBee modules are also great (though Maxstream’s decision to remove IO line passing in the XBee series 2 modules is disappointing). They have two modes: transparent where they pair up and behave like a wire-less serial connection and API mode where the user communicates with the modules via their serial ports and can send simple packets back and forth between the modules to determine and set their state, including reading and writing values on their IO pins.
I used the XBees in API mode for this version of the project. I connected an opto-isolator up to one of the digital IO pins on the XBee module in the intercom so that I could trigger the switch for the intercom using the XBee. (In fact I discovered that I needed to use a Darlington coupled opto-isolator for enough current to trigger the intercom switch.)
I discovered that an XBee series 2 module (running the ZB 2.5 end device firmware) can be put in pin sleep mode where it consumes < 1uA of current and furthermore that if pin 20 (the commissioning button pin) is grounded once, the module will wake up for 30 seconds and broadcast a Node Identification Indicator packet (API identifier value: 0x95) to the network letting other nodes know it is awake. This feature was perfect and saved me having to manually create my own micro-power timed wake-up circuits for the XBee in the intercom. However I discovered that sometimes this packet did not arrive and so I jumpered pin 20 to pin 19 which I set as a digital input. I then configured the intercom XBee to transmit a packet when the state of DIO19 changed. I found that these Data Sample Rx Indicator packets (API identifier value: 0x92) were always transmitted and so a packet was always sent out to the network when the intercom XBee was woken up using pin 20.
All I had to do as regards the intercom was then to arrange for pin 20 to be grounded when the speaker was rung. This was easily accomplished using a bridge rectifier, a resistor and another opto-isolator. The schematic of the simple circuit I used appears in version 2 below. Here are some pictures of the XBee for the intercom mounted on some Veroboard together with the simple circuits to trigger the intercom switch and to wake up when the speaker is rung:
XBee module for intercom (underside)
XBee module for intercom (front side)
As a result of this, all I had to do was to have the GM862 module plugged in elsewhere in my apartment, connected (serially) to another XBee waiting to hear the packets indicating that the speaker had been rung. If the speaker was rung, the module checked if it had received a text message recently from the correct person, containing the correct password. If so, the GM862 module used its XBee to send an API packet to the XBee in the intercom to cause it to trigger the Darlington opto-isolator and trigger the intercom switch. A Remote AT Command Request packet (API identifier value: 0x17) was used to do this. A simple python script was running on the GM862 module controlling all of this.
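To make that sequencing concrete, here is a rough sketch (in ordinary Python, not the author's Telit script, which appears at the end of this post for version 2) of the decision logic the GSM-side program has to implement; all names and the 5-minute unlock window are assumptions for illustration only.

# Hypothetical sketch of the version-1 control flow on the GSM side.
# The helper names and the unlock window are assumptions, not the real code.
import time

UNLOCK_WINDOW = 300        # seconds: how recently an authorised text must have arrived
last_authorised_text = 0.0 # updated when a valid SMS is parsed

def on_authorised_text():
    """Call this when an SMS from the right sender with the right password arrives."""
    global last_authorised_text
    last_authorised_text = time.time()

def send_remote_at_packet():
    """Placeholder for building and sending the XBee Remote AT Command (0x17) frame
    that sets the digital output driving the Darlington opto-isolator,
    i.e. 'presses' the intercom switch."""
    pass

def on_speaker_wake_packet():
    """Call this when the intercom XBee wakes (speaker rung, pin 20 grounded)
    and its Data Sample Rx Indicator (0x92) packet arrives."""
    if time.time() - last_authorised_text < UNLOCK_WINDOW:
        send_remote_at_packet()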
Here’s a (pretty blurry!) picture of the GSM module inside its enclosure. You can see the GSM aerial, the power connector and the USB port on Sparkfun’s GM862 USB evaluation board.
So that’s a brief sketch of version 1. I’ve omitted many details and smoothed over quite a few bumps but that was the general set up.
Version 2
Running the cable
After I had version 1 working for a while, a friend of mine came round and commented that I could avoid having any batteries if I was willing to run a cable behind the wall in the hall from the intercom and have it come out in the hot press of my apartment.
I decided this was worth the effort and set about carrying it out. The idea was to plug the GSM module into a plug socket in the hot press and connect it to the intercom using the cable I would run from the hot press to the intercom. No wireless modules and no batteries would be necessary. So I set about running the cable.
I’d never really done any DIY but I was careful to think things through and was very satisfied with the results I got. I decided to use 8 core CAT5e cable and mount an RJ45 socket in the hot press which the CAT5e cable would connect to various junctions in the intercom circuit board. In fact I just need 4 of the 8 wires in the CAT5e cable (2 for the switch, 2 for the speaker) but I thought it would be worth having extra wires for any potential future features.
I don’t know how people normally do cabling jobs like this but I decided to use magnets to run the cable behind the wall. I bought a **VERY** powerful F4335 neodymium rare earth magnet from Magnet Expert (which they allege can exert a force of ~1200N in ideal circumstances!). The internal walls in my apartment are made of 1.25cm plasterboard (drywall if you’re from the US). I also placed a small magnet tied to some string into the wall using the hole that the existing cable for the intercom enters by. I then guided this small magnet and string behind the wall using my super magnet. It worked like a dream! There were various complications involving support beams, turning a corner and a door frame but I studied the geometry carefully and also had bought a cheap but sufficient snake scope so in the end I managed to succeed.
Once I had succeeded in running the cable from the intercom to the hot press, I mounted the RJ45 socket and used Polyfilla (multipurpose followed by fine surface) to cover my tracks where I had had to make a few holes in the hall wall. Here are a few pictures I took at various stages of this wiring job:
This rj45 socket inside the intercom is wired up to the speaker and switch. The CAT5e cable plugs in to connect it to the rj45 socket in the hot press.
An exit hole in the hall for the string-with-magnet which was necessary because of support beams behind the hall plasterboard.
The string is run across the support beams. The exit hole in the previous picture is visible on the right and the edge of a beam is visible on the right edge of the large hole on the left.
A rectangular piece of plasterboard is cut out of the wall surrounding the above holes and a piece of wood screwed in to the support beams. The CAT5e cable is running behind in a groove cut in the wood.
The first round of polyfilla covering the rectangular piece of wood.
After sanding and applying fine surface polyfilla and then painting
A view of the door frame which I needed to run the wire through taken using my snake scope from inside the hot press.
The same part of the door frame after I have succeeded in getting my drillbit to come through from the hall.
The string exiting in the hot press at last.
The hot press with rj45 socket mounted.
A view showing the intercom and hot press with new rj45 socket.
The SR latch
With this cabling done, it was now trivial to have the GSM module trigger the intercom switch upon receipt of an appropriate text message. However I thought it would be useful for the GSM module to have two functions: one where it triggers the switch immediately upon receipt of a text and one where it will trigger the switch as soon as it hears the speaker ring (provided an appropriate text message was received recently).
For this second function I needed the GSM module to hear the speaker. However there are no interrupts available on the GM862 pins so I would have to poll for the speaker. This would mean I could miss it if the module was busy doing something else and after noting that the module is particularly slow when it is waiting for the SIM to process a command, I realised that I would be lucky if I ever caught the speaker. I decided that the solution would be to have the speaker set an SR latch which the GSM module would poll and reset. I was keen to get this working quickly so I built a latch by hand using the transistors and resistors already available to me in my parts box. By the time I was building this I was less ignorant about electronics and so understood the importance of ensuring transistor saturation. I thus calculated my resistor values carefully and made sure that the current gain I was getting was safe even if the transistor beta $h_{fe}$ was at least a factor of 10 worse than the data-sheet claimed it could be. See the schematic for the circuit I used, the circuit to tap the speaker current is also shown.
So the latch solved the problem of catching the speaker ring. A lucky thing here which saved me having to solder yet more transistors onto my Veroboard was that the GM862 module has an open collector output pin GPO2, ideal for resetting the latch (a CMOS GPIO pin would not work here as it would always be driving the circuit) and also a transistor buffered input pin GPI1, ideal for reading the state of the latch (a CMOS GPIO pin would not work here as it would draw too much current).
Triggering the switch
It is probably also worth mentioning that in this version, although I continued to use an opto-isolator to tap the speaker current (as seen in the above schematic), I decided to use a relay instead of the opto-isolator to trigger the switch. I did this because I once found the Darlington opto-isolator in version 1 of this project in an odd state. This reminded me that I really don’t know what circuitry the intercom switch is connected to (the wires just disappear into the wall, they don’t go elsewhere on the intercom PCB) so it would be best to pass current through a relay than through a phototransistor. As it turned out I had some (rather expensive) relays whose contacts were made of gold (Panasonic microwave ARE1303 relays) so as long as I trigger the relay reliably I can expect the switch current to pass through the contacts safely. Again, I refer the reader to the schematic for the details.
Using the XBees
As is evident, I no longer needed the XBee modules for this version of the project since I had overcome the wireless problem by, well, using wires! However it occurred to me that it would be pretty useful if I could reprogram/debug the GSM module remotely so I set my XBees both in transparent mode, connected the serial port of one up to the serial port of the GSM module and the other up to my PC. Because the logic levels of the GSM module’s serial port are 2.8V and the XBee modules claim that they will run on anything from 2.1V – 3.4V, and because I only have a short range problem I decided to run the XBee on 2.8V. (See the schematic for the voltage regulation circuit I constructed; in fact it generates 2.75V (if we neglect the very tiny current flowing from the adjust pin of the voltage regulator) but this is close enough. As elsewhere the exact components used reflect what was available to me from my parts box.)
I then programmed the python script running the GSM module so that it would quit the python interpreter if it received the string ‘quit’ on its serial port and also so that it would send debug information out its serial port which I can listen to wirelessly on my PC. It has been extremely useful to be able to reprogram the GSM module without physical access and this works reliably. The XBee which plugs into my PC USB port is also rather neat:
So that’s it really! I found this a very enjoyable, educational and satisfying project. I include the python scripts running on the GSM module below (with passwords and phone numbers removed!). There are still lots of small changes and improvements I can think of but I’m happy to say this works well in its current form. Here’s a picture of the complete Veroboard containing the circuits discussed above (on the right of the image we can also see the edge of Sparkfun’s USB evaluation board for the GM862 module):
main.py
import SER, MOD, sys

class SerWriter:
    def write(self, s):
        t = '%d ' % MOD.secCounter()
        for c in s:
            if c == '\r' or c == '\n':
                t = t + ' '
            else:
                t = t + c
        SER.send(t + '\r\n')

SER.set_speed('9600', '8N1')
sys.stdout = sys.stderr = SerWriter()
print 'Serial line set up'

"""
I have two python files and the below import at the bottom of this script for two reasons:
1. The below script will be compiled to a python object file and not recompiled each time we run
   this script. This saves time on startup because Telit's python compiler is VERY slow.
2. It is almost impossible to debug compile errors without getting the output of stderr and the
   setup I have here allows me to see them.
"""
import sentry
sentry.py
import MDM, MOD, GPIO, SER, sys

def get_till_empty(get):
    # Reads from stream determined by calling get till nothing left.
    # Might alter this to be able to handle case when get takes arguments.
    # (Then could use it for the SER and MDM receive functions too.)
    s = ''
    while 1:
        t = get()
        if t == '':  # TODO: Robustify this termination condition.
            break
        s = s + t
        MOD.sleep(5)  # Not sure if this helps but no harm.
    return s

def parse_text_messages(s):
    l = []
    lines = s.split('\r')
    i = 0
    while 1:
        # StopIteration exception not supported in Python 1.5.2+ so have to do it old fashioned way.
        line = lines[i].strip()
        if line[:5] == '+CMGL':
            # TODO: Insert IndexError exception handling below.
            fields = line.split(',')
            # TODO: Handle quotes properly. Date/time mangled a bit by this.
            #       Also find out what fields[3] is (empty in all my examples).
            msg_id = int(fields[0][6:])
            sender = fields[2]  # TODO: Fix up formatting issues like leading + and quotes.
            time_and_date = fields[4] + ' ' + fields[5]  # TODO: Format this into more useful structure.
            i = i + 1
            body = lines[i].strip()
            l.append((msg_id, sender, body, time_and_date))
        i = i + 1
        if i >= len(lines):
            # OK to check this here since ''.split('\r') == [''], i.e. length 1 list.
            break
    return l

def open_door():
    sys.stdout.write('Welcome home!')
    GPIO.setIOvalue(UNLOCK_PIN, 1)
    MOD.sleep(DELAY)
    GPIO.setIOvalue(UNLOCK_PIN, 0)
    global last_open_time
    last_open_time = MOD.secCounter()
    """
    Note that the open signal for the door also triggers the speaker so this speaker sound
    has to be finished by the time we get to the below, where we try to latch the speaker
    state low again. If it isn't, the latch will just revert to the speaker high state and
    we could end up sending the open signal on each iteration of the outer loop until
    open_on_speaker_timeout is up which would be pretty silly. The delays in place are
    enough though I think.
    """
    sys.stdout.write('Setting BELL low')
    set_speaker_low()

def set_speaker_low():
    GPIO.setIOvalue(BELL_WRITE_PIN, 1)
    MOD.sleep(DELAY)
    GPIO.setIOvalue(BELL_WRITE_PIN, 0)

def report_state(destination, message):
    MDM.send('AT+CMGS=%s,145\r' % destination, DELAY)
    MOD.sleep(DELAY)
    response = get_till_empty(MDM.read)[-4:]
    if response == '\r\n> ':
        sys.stdout.write('Texting %s to %s' % (message, destination))
        MDM.send(message + chr(0x1A), DELAY)
    else:
        sys.stderr.write('Unexpected response %s when texting %s to %s' % (response, message, destination))

DELAY = 20  # Tenths of a second.
SIM_PIN = xxxx
BELL_READ_PIN = 1   # Important to use pin 1 here as need extra transistor buffer so don't draw too much current from latch circuit.
BELL_WRITE_PIN = 2  # Important to use pin 2 here as need open collector output for my latch to work.
UNLOCK_PIN = 10
UNLOCKING_KEYS = {'"+35386xxxxxx"': 'xxxxxx'}  # I could store this dictionary in the SIM address book rather than the code.
MAX_UNLOCK_TIME = 300  # 5 minutes.

# Unlock SIM (and consequently register with network).
MDM.send('AT+CPIN=%d\r' % SIM_PIN, 0)
while MDM.receive(DELAY).find('READY') == -1:
    # TODO: Perform better match than just find('READY').
    MDM.send('AT+CPIN?\r', 0)
    MOD.sleep(DELAY)
sys.stdout.write('Module ready')

# Set our message format to text (not PDU).
MDM.send('AT+CMGF=1\r', 0)
MOD.sleep(DELAY)  # Just in case module needs a moment to digest this command.

MDM.send('AT+CMGD=1,4\r', DELAY)  # Clear out all text messages before we start.
MOD.sleep(DELAY * 2)

GPIO.setIOdir(BELL_READ_PIN, 0, 0)
GPIO.setIOdir(BELL_WRITE_PIN, 0, 1)
GPIO.setIOdir(UNLOCK_PIN, 0, 1)

last_open_on_speaker_text_time = 0
open_on_speaker_timeout = 0
last_open_time = 0
last_speaker_time = 0

sys.stdout.write('Entering main loop')
while 1:
    sys.stdout.write('Looping')
    MDM.send('AT+CMGL="ALL"\r', DELAY)  # DELAY is just a maximum wait here; in general we're quicker than this.
    for (msg_id, sender, body, time_and_date) in parse_text_messages(get_till_empty(MDM.read)):
        sys.stdout.write('Deleting message with id: %d' % msg_id)
        MDM.send('AT+CMGD=%d\r' % msg_id, DELAY)
        MOD.sleep(50)
        if sender in UNLOCKING_KEYS.keys() and body.startswith(UNLOCKING_KEYS[sender]):
            sys.stdout.write('Message %s, %s matched.' % (sender, body))
            body_fields = body.split()
            if len(body_fields) == 2:
                try:
                    open_on_speaker_timeout = min(MAX_UNLOCK_TIME, int(body_fields[1]))
                    last_open_on_speaker_text_time = MOD.secCounter()
                except ValueError:
                    sys.stderr.write('Unrecognised field "%s". Was expecting timeout in seconds.' % body_fields[1])
            else:
                open_door()
        elif sender in UNLOCKING_KEYS.keys() and body == 'state':
            report_state(sender,
                         'now=%d\n'
                         'last_speaker=%d\n'
                         'last_open=%d\n'
                         'last_open_on_speaker_time=%d\n'
                         'last_open_on_speaker_timeout=%d'
                         % (MOD.secCounter(), last_speaker_time, last_open_time,
                            last_open_on_speaker_text_time, open_on_speaker_timeout))
        else:
            sys.stdout.write('Ignoring unmatched message: %s, %s' % (sender, body))
    if GPIO.getIOvalue(BELL_READ_PIN) == 1:
        sys.stdout.write('BELL state high')
        last_speaker_time = MOD.secCounter()
        if MOD.secCounter() - last_open_on_speaker_text_time < open_on_speaker_timeout:
            open_door()
        set_speaker_low()  # It will in fact already be low if we just called open_door() but that's fine.
    else:
        sys.stdout.write('BELL state low')
    MOD.sleep(DELAY)
    if get_till_empty(SER.read).find('quit') != -1:
        sys.stdout.write('OK quitting python')
        break
Pingback: GSM enabled security door - Hack a Day
yer this is something that i’d do :]
Pingback: The Telly Terminator « Oliver Nash's Blog
Hi ,
You look jus like money to me. Started of to learn electronics , passed by our blog,, interest++ .
Pingback: GSM enabled security door « Tamil Affection
|
http://ocfnash.wordpress.com/2009/10/31/locked-out-at-2am/
|
CC-MAIN-2013-48
|
en
|
refinedweb
|
05 August 2008 05:04 [Source: ICIS news]
SINGAPORE (ICIS news)--China-based FibreChem Technologies reported late on Monday a 5% year-on-year rise in second quarter net profit to Hongkong dollars (HKD) 151.4m ($19.4m) from HKD 144.3m.
“The group had done reasonably well in the first half of 2008, amidst signs of a possibly slowing fibre industry in China,” Zhang said.
The company recorded a 10% growth in revenue to HKD 505.8m from HKD 459.6m driven by its new 10,000 tonne/year bi-component long fibre production line that commenced operations in June 2007.
FibreChem’s net profit was boosted by exchange gains driven by foreign currency loans in light of the appreciating Chinese Yuan that also cushioned rising operational and other expenses, Zhang said.
Selling and distribution expenses grew a significant 84% due to increases in the advertising budget and additional costs that were set aside for the establishment of new sales offices.
Looking forward, FibreChem intends to explore new fibre products whilst actively growing its microfibre leather business in anticipation of the weakening of the Chinese fibre industry.
“We are confident that the steady investment in marketing and the strategic development of our various business segments will generate sustainable growth for the group,” Zhang said.
($1 = HKD 7
|
http://www.icis.com/Articles/2008/08/05/9145243/fibrechem-reports-5-growth-in-net-profit.html
|
CC-MAIN-2013-48
|
en
|
refinedweb
|
You are instructed to email the weekly report at the beginning of the week.
if jpholiday.is_holiday(today) == True:
    sys.exit()
...
if today.weekday() == 0 or today.weekday() == 1 and jpholiday.is_holiday(today + datetime.timedelta(days=-1)) == True:
    # Execution
I think this is almost enough, but as it stands the report will not be delivered on a Wednesday or Thursday that comes right after consecutive holidays such as Golden Week.
The "Wednesday/Thursday after consecutive holidays" case could be handled by piling on more if statements,
or by preparing a list of the exceptional Wednesdays and Thursdays on which to run.
Is there any smarter way to do it?
If not, I think it is safer to say "deliver on Mondays regardless of holidays" or turn off the machine during consecutive holidays and manually run the program on exceptional days.
- Answer # 1
- Answer # 2
Answer using jpholiday.
import datetime
import jpholiday

def get_first_weekday_date(base_date):
    monday = base_date - datetime.timedelta(days=base_date.weekday())
    for i in range(7):
        dt = monday + datetime.timedelta(days=i)
        if jpholiday.is_holiday(dt):
            # print(f"skipped {dt} {jpholiday.is_holiday_name(dt)}")
            continue
        return dt
    return None  # If the whole week is holiday

today = datetime.date.today()
# today = datetime.date(2020, 5, 5)  # for testing
target_date = get_first_weekday_date(today)
print(target_date)
if target_date == today:
    print("Now is the time ♪")
- Answer # 3
At this point, it is faster to just make it yourself.
jpholiday seems to be maintained, but I don't get the impression that it is a commonly used library. Also, it seems that special holidays can be added, but the base holidays apparently cannot be deleted, so it may not be able to handle a case where the questioner's company decides something like "this year the substitute holiday next Monday will be a working day!"
Also, although it is troublesome at first, I think that making it like this has the following merits.
When you need to register next year's schedule or revise this year's schedule, registration is easy (you can register the company holiday list or your own days off for the year as they are, without worrying about the registration affecting anything else).
It is easy to extend in the future when another requirement comes along, such as "also submit a provisional version of the weekly report on weekends that are not holidays!"
Reading the definitions from an external file is application work, so please customize that part yourself. Since teratail is a place for programmers to solve problems, I aimed for "an answer that can be incorporated into the questioner's code and extended steadily" rather than "an answer you can use without ever looking at the code".
import datetime

class MyDateTime:
    # Prioritize upper set of days, ignore lower duplicated elements.
    # Of course, you must register the future calendars,
    # in order not to work on many days! :-)
    special_workdays = {}
    special_holidays = {
        '2020-1-2', '2020-1-3',
        '2020-8-11', '2020-8-12', '2020-8-13', '2020-8-14',
        '2020-12-29', '2020-12-30', '2020-12-31'
    }
    national_holidays = {
        '2020-1-1', '2020-1-13', '2020-2-11', '2020-2-23', '2020-2-24',
        '2020-3-20', '2020-4-29', '2020-5-3', '2020-5-4', '2020-5-5',
        '2020-5-6', '2020-7-23', '2020-7-24', '2020-8-10', '2020-9-21',
        '2020-9-22', '2020-11-3', '2020-11-23'
    }

    @classmethod
    def strptime(cls, s):
        tdatetime = datetime.datetime.strptime(s, '%Y-%m-%d')
        return datetime.date(tdatetime.year, tdatetime.month, tdatetime.day)

    @classmethod
    def is_workday(cls, day):
        if day in map(cls.strptime, cls.special_workdays):
            return True
        if day in map(cls.strptime, cls.special_holidays):
            return False
        if day in map(cls.strptime, cls.national_holidays):
            return False
        return not (day.weekday() == 5 or day.weekday() == 6)

    @classmethod
    def is_holiday(cls, day):
        return not cls.is_workday(day)

    @classmethod
    def is_sad_day(cls, day):  # the 1st workday of the week
        for i in range(day.weekday()):
            if cls.is_workday(day - datetime.timedelta(days=day.weekday() - i)):
                return False
        return cls.is_workday(day)

for day in ['2020-11-21', '2020-11-22', '2020-11-23', '2020-11-24', '2020-11-25']:
    print(day, '= sad day?', MyDateTime.is_sad_day(MyDateTime.strptime(day)))
# 2020-11-21 = sad day? False
# 2020-11-22 = sad day? False
# 2020-11-23 = sad day? False
# 2020-11-24 = sad day? True
# 2020-11-25 = sad day? False
Do you want to judge the "dawn" of consecutive holidays for e-mail newsletter (?) Delivery?
Is it one way to have "after the Golden Week holidays" as data more honestly?
Delivery conditions can be defined by the conditions of "Monday, not a holiday" or "Day after consecutive holidays".
|
https://www.tutorialfor.com/questions-323723.htm
|
CC-MAIN-2021-25
|
en
|
refinedweb
|
Question: java i have done the program but something needs to...
Question details
Java: I have done the program, but something needs to be fixed; please read the instructions below. Assume I have a file named "inputfile.txt" that contains the words: "The central processing unit is the computer's brain, it retrieves instructions from memory and executes them." I don't know how to make the words in the index unique (see the highlighted words below): because there are two "the"s in the sentence, my output is supposed to have one "the" instead of two. I also failed to do step 2. (My code is pasted at the bottom.)
Write a Java program called BookIndex to keep track of an index of words that appears in a text file. Although an index typically references the page number of a word in a text, we will reference the word number of the first occurrence of a word since your input file will probably be short (and not paginated). Your program will read the text from a text file and create the index. The index will consist of 2 parallel arrays, one of type String and one of type int. The words in the index must be unique! Never store the same word twice. After populating the arrays, your program should do the following:
1. Sort both arrays in alphabetical order according to the words. Each time a pair of words is swapped, the corresponding pair of numbers must be swapped.
2. Allow the user to lookup a number by entering a word. Also, allow the user to enter the start of a word (at least one character) and print all of the words in the index that begin with that prefix, and their numbers.
Implementation Details:
You may read one word at a time or read the entire file into one String and call the split method of the String class.
In both cases, you will need to trim the punctuation marks from the words before they are placed in the index. You may use the replaceAll method in the String class for this. Convert all strings to lowercase before placing in the index.
Remember to use .equals instead of == and compareTo instead of relational operators when comparing String objects.
You may use the sequential search for the lookup, but you can also program the binary search since the array of words is sorted.
import java.util.Scanner;
import java.io.*;
public class BookIndex1 {
public static void main(String [] args) throws IOException{
File myfile = new File("inputfile.txt");
if (!myfile.exists()){
System.out.println("The file hw8inputfile.txt is not found.");
System.exit(0); //this will terminate the program if the file does not exist.
}
final int SIZE =100; //assume the array has size of 100
Scanner input = new Scanner(myfile);
String[] words = new String[SIZE];
int[] index = new int[SIZE];
System.out.printf("%-12s%15s\n", "words", "Index");
System.out.println("---------------------------");
int size=0;
while(input.hasNext()){
words[size] = input.next();
size++;
}
for(int i=0; i<size; i++){
index[i]=i;
}
convertLowerCase(words, size); //convert all words to lowercase
removePun(words, size); //remove all punctuation
sortArrays(words, index, size); //sort array alphabetically
displayArrays(words, index, size);
searchWords(words, index, size); //allow user to type a keyword and find its index
}
public static void convertLowerCase(String[] words, int size){
for(int i=0; i<size; i++){
words[i]=words[i].toLowerCase();
}
}
public static void removePun(String[] words, int size){
for(int i=0; i<size; i++){
words[i]=words[i].replaceAll("[, . ? : ; !]", " ");
}
}
public static void sortArrays(String words[], int wordIndex[], int arraySize){
for(int i=0; i<arraySize; i++){
for(int j=0; j<arraySize-i-1; j++){
if(words[j].compareTo(words[j+1])>0){
String temp = words[j];
words[j]=words[j+1];
words[j+1]=temp;
int tempIndex=wordIndex[j];
wordIndex[j]=wordIndex[j+1];
wordIndex[j+1]=tempIndex;
}
}
}
}
public static void displayArrays(String words[], int[] wordIndex, int arraySize){
for(int i=0; i<arraySize; i++){
System.out.printf("%-12s%15d\n", words[i],wordIndex[i]);
}
}
public static void searchWords(String words[], int[] wordIndex, int arraySize){
//this is what I have tried in step 2: I set the while loop to 1000 to allow the user to enter the keyword 1000 times,
//are there any other ways to allow the user to search a keyword without limits?
// there is a minor bug in this program segment when I enter "them" which is index 15 in the array, it doesn't display anything. can you figure out what's wrong?
//Also, can you write some code to allow the user to enter the start of a word (at least one character) and print all of the words in the index that begin with that prefix, and their numbers?
Scanner sc = new Scanner(System.in);
int searching = 0;
while(searching<1000){
System.out.println("Enter keyword: ");
String word = sc.next();
int index;
for(int i=0; i<arraySize; i++){
if (words[i].equals(word)){
index=wordIndex[i];
System.out.println(word+" found at index "+index);
}
}
}
}
}
Solution by an expert tutor
|
https://homework.zookal.com/questions-and-answers/java-i-have-done-the-program-but-something-needs-to-814353672
|
CC-MAIN-2021-25
|
en
|
refinedweb
|
Hi
I'm starting to develop some geoprocessing tools in ArcGIS Pro using some third-party Python modules, but even with the simplest of tools it fails to run from Portal.
Sample code
import arcpy
import openpyxl
class Toolbox(object):
    def __init__(self):
        """Define the toolbox (the name of the toolbox is the name of the
        .pyt file)."""
        self.label = "Toolbox"
        self.alias = ""

        # List of tool classes associated with this toolbox
        self.tools = [Tool]


class Tool(object):
    def __init__(self):
        """Define the tool (tool name is the name of the class)."""
        self.label = "Tool"
        self.description = ""
        self.canRunInBackground = False

    def getParameterInfo(self):
        """Define parameter definitions"""
        param0 = arcpy.Parameter(
            displayName="Input Features",
            name="in_features",
            datatype="GPString",
            parameterType="Required",
            direction="Input")
        param0.filter.type = "ValueList"
        param0.filter.list = ['test1', 'test2']

        param1 = arcpy.Parameter(
            displayName="Label",
            name="LabelTest",
            datatype="GPString",
            parameterType="Optional",
            direction="Input"
        )

        params = [param0, param1]
        return params

    def isLicensed(self):
        """Set whether tool is licensed to execute."""
        return True

    def updateParameters(self, parameters):
        """Modify the values and properties of parameters before internal
        validation is performed. This method is called whenever a parameter
        has been changed."""
        parameters[1].value = parameters[0].value
        return

    def updateMessages(self, parameters):
        """Modify the messages created by internal validation for each tool
        parameter. This method is called after internal validation."""
        return

    def execute(self, parameters, messages):
        """The source code of the tool."""
        returnVal = parameters[0].value + ' TEST ' + parameters[1].value
        arcpy.AddWarning(returnVal)
        return
If I run this from Pro there's no problem, and I can publish it as a web tool without issue.
But when I try to run it from Portal it fails, unless I publish it without the import openpyxl line.
So how do I get Portal to use third-party Python modules like openpyxl?
Whatever environment Portal runs from doesn't have that module installed.
I know that, but how do I install those modules on the server, and which server?
My configuration is
Now that I'm developing in Python 3.x I would like for the server to run Python 3.x so I don't have any trouble with unicode characters ect.
You may want to Move your question to Python since the ArcGIS API for Python is actually a different environment, which uses Jupyter Notebook ( ArcGIS API for Python | ArcGIS for Developers )
re: getting installed on the Server....have you tried installing Pro on the server? I would think that would get Python 3 the environment setup with the correct paths, etc., but Dan is much more knowledgeable about that than I am, so I'll defer to him/others.
|
https://community.esri.com/t5/arcgis-api-for-python-questions/python-toolbox-from-pro-to-portal/td-p/835365
|
CC-MAIN-2021-25
|
en
|
refinedweb
|
Reverse bits of a given 32 bits unsigned integer.
Example
Input
43261596 (00000010100101000001111010011100)
Output
964176192 (00111001011110000010100101000000)
A 32-bit unsigned integer refers to a nonnegative number which can be represented with a string of 32 characters where each character can be either ‘0’ or ‘1’.
Algorithm
- for i in range 0 to 15
- If the ith bit from the beginning and from the end is not the same then flip it.
- Print the number in binary.
Explanation
- We swap the bits only when they are different because swapping the bits when they are the same does not change our final answer.
- To swap two different bits, we flip the individual bits using the XOR operator.
Implementation
C++ Program for Reverse Bits
#include <bits/stdc++.h>
using namespace std;

void Reverse(uint32_t n)
{
    for (int i = 0; i < 16; i++) {
        bool temp = (n & (1 << i));
        bool temp1 = (n & (1 << (31 - i)));
        if (temp1 != temp) {
            n ^= (1 << i);
            n ^= (1 << (31 - i));
        }
    }
    for (int i = 31; i >= 0; i--) {
        bool temp = (n & (1 << i));
        cout << temp;
    }
    cout << endl;
}

int main()
{
    uint32_t n;
    cin >> n;
    Reverse(n);
    return 0;
}
43261596
00111001011110000010100101000000
JAVA Program for Reverse Bits
import java.util.*;

public class Main {
    public static void Reverse(int n) {
        for (int i = 0; i < 16; i++) {
            int temp = (n & (1 << i));
            temp = temp == 0 ? 0 : 1;
            int temp1 = (n & (1 << (31 - i)));
            temp1 = temp1 == 0 ? 0 : 1;
            if (temp1 != temp) {
                n ^= (1 << i);
                n ^= (1 << (31 - i));
            }
        }
        for (int i = 31; i >= 0; i--) {
            int temp = (n & (1 << i));
            temp = temp == 0 ? 0 : 1;
            System.out.print(temp);
        }
    }

    public static void main(String[] args) {
        Scanner sc = new Scanner(System.in);
        int n = sc.nextInt();
        Reverse(n);
    }
}
3456785
10001000111111010010110000000000
Complexity Analysis for reverse bits
Time complexity
We traverse only 16 bits, so the time complexity is O(16), which is effectively O(1).
Space complexity
It is also O(1), as we only used 2 extra bool-type variables.
Variation of reverse bits
- In this question, we were asked to reverse the bits of the given unsigned integer, but the same question can be modified to: find the complement of the given unsigned integer.
Now, what is the complement of a given number?
A number obtained by toggling the state of each binary character in the given binary representation of integer is known as the complement of that number.
Hence to do that we can simply change all 0’s to 1 and all 1’s to 0 in the given binary string.
Example
If the given number is 20, then it’s binary representation will be 10100
Hence it’s complement will be 01011.
Implementation
#include<bits/stdc++.h>
using namespace std;

int main() {
    int n = 20;
    string ans;
    while (n) {
        if (n % 2 == 0) {
            ans = '1' + ans;
        }
        else {
            ans = '0' + ans;
        }
        n /= 2;
    }
    cout << "Complement of the given number is: " << ans;
}
Complement of the given number is: 01011
- Many questions can be framed on the binary representation of integers, like the addition of two binary strings. To solve such questions one should know the rules of binary addition.
Rules for addition: 0 + 0 = 0, 0 + 1 = 1, 1 + 0 = 1, and 1 + 1 = 0 with a carry of 1.
This table of rules is known as the truth table. It is used to find the output of binary bitwise operations like addition, subtraction, etc.
- More complex problems related to bitwise operations and binary representations come under bit manipulation, which is commonly asked in competitive programming.
But the basis of bit manipulation is how different data structures are represented in binary form.
|
https://www.tutorialcup.com/interview/string/reverse-bits.htm
|
CC-MAIN-2021-25
|
en
|
refinedweb
|
I am using AngularJS 1.8.2, Karma 2.0.0, and Jasmine 3.7.0. My tests are driving me nuts. I am trying to test that a function correctly handles different results from the same external function call. I have a test that runs perfectly fine in isolation if I select it with fit (or for that matter with ..
Category : karma-runner
It usually happens in Jenkins, and to me it seems that Karma is trying to launch testing while the generating bundle process is on its way to get completed, so I was wondering whether there is a way to make Karma to use an existing bundle, or generate a bundle first and then launch karma ..
Very similar to my last Karma upgrade issue in this post, I have all my unit test for a component failing after upgrading from v7 to v8. Once again due to a custom component from a shared library. So here I get the following.. NullInjectorError: StaticInjectorError(DynamicTestModule)[ProgressIndicatorComponent -> ElementRef]: StaticInjectorError(Platform: core)[ProgressIndicatorComponent -> ElementRef]: NullInjectorError: No provider ..
I’m trying to unit test a component that requires a Resolver, Router and ActivatedRoute as dependencies. I’ve tried to use the RouterTestingModule and mock my resolver to provide them in the testing module, but it seems to have some side effects on the creation of the component instance. Here is the code of my component: ..
Continuing my journey of updating our Angular projects from 7 to 8 (first step on the way to 12), I continue to have most problems in the unit tests. This time I am getting… Error: No component factory found for CalendarHeaderComponent. Did you add it to @NgModule.entryComponents? at noComponentFactoryError () at CodegenComponentFactoryResolver.push../node_modules/@angular/core/fesm5/core.js.CodegenComponentFactoryResolver.resolveComponentFactory () at CdkPortalOutlet.push../node_modules/@angular/cdk/esm5/portal.es5.js.CdkPortalOutlet.attachComponentPortal ..
@Input() public openDrawer: BehaviorSubject<{ open: boolean; assetConditionDetails: AssetConditionIdDetails[]; selectedAssets: SelectedAssets[]; }>; public ngOnInit(): void { this.openDrawer.subscribe((result) => { if (result) { this.showLoader = result.open; this.isDrawerActive = result.open; this.selectedAssets = result.selectedAssets; this.assetConditionDetails = result.assetConditionDetails; } }); } can someone please tell me how to write a unit test case for this ..? this is what I ..
import { HttpClientModule } from ‘@angular/common/http’; import { CUSTOM_ELEMENTS_SCHEMA } from ‘@angular/core’; import { NO_ERRORS_SCHEMA } from ‘@angular/core’; import { async, ComponentFixture, TestBed } from ‘@angular/core/testing’; import { RouterModule, Routes } from ‘@angular/router’; import { DrawerService} from ‘../services/create-work-order-confirmation-drawer.service’; import { HttpClientTestingModule, HttpTestingController} from ‘@angular/common/http/testing’; import { ConfirmationDrawerComponent } from ‘./create-work-order-confirmation-drawer.component’; import { FormsModule } from ..
I am getting "Can’t bind to ‘formGroup’ since it isn’t a known property of ‘form’" error (it(‘should create’, () => {} ) test for every component using formGroup in my app. The app is working fine. I have imported FormsModule & ReactiveFormsModule in every module. <form [formGroup]="uploadDocumentFormGroup"> export class UploadFormComponent implements OnInit { public uploadDocumentFormGroup: ..
|
https://angularquestions.com/category/karma-runner/
|
CC-MAIN-2021-25
|
en
|
refinedweb
|
7.13: Drawing “Press a key” Text to the Screen
def drawPressKeyMsg():
    pressKeySurf = BASICFONT.render('Press a key to play.', True, DARKGRAY)
    pressKeyRect = pressKeySurf.get_rect()
    pressKeyRect.topleft = (WINDOWWIDTH - 200, WINDOWHEIGHT - 30)
    DISPLAYSURF.blit(pressKeySurf, pressKeyRect)
While the start screen animation is playing or the game over screen is being shown, there will be some small text in the bottom right corner that says "Press a key to play." Rather than have the code typed out in both showStartScreen() and showGameOverScreen(), we put it in this separate function and simply call the function from showStartScreen() and showGameOverScreen().
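As a rough illustration (a sketch, not the book's code), the helper ends up being called once per frame from those screen loops, something like the following; the checkForKeyPress() helper written here and the DISPLAYSURF/FPSCLOCK/FPS/BGCOLOR globals are assumptions standing in for the game's usual setup.

# A rough usage sketch (not the book's code): redraw each frame, overlay the
# prompt, and return once any key is released. The globals DISPLAYSURF,
# FPSCLOCK, FPS and BGCOLOR are assumed to be the game's usual setup values.
import pygame, sys

def checkForKeyPress():
    # Quit cleanly if the window was closed, otherwise report any KEYUP event.
    if pygame.event.get(pygame.QUIT):
        pygame.quit()
        sys.exit()
    keyUpEvents = pygame.event.get(pygame.KEYUP)
    if not keyUpEvents:
        return None
    return keyUpEvents[0].key

def showStartScreenSketch():
    while True:
        DISPLAYSURF.fill(BGCOLOR)   # stand-in for drawing the title animation
        drawPressKeyMsg()           # the helper defined above
        if checkForKeyPress():
            pygame.event.get()      # clear the event queue before returning
            return
        pygame.display.update()
        FPSCLOCK.tick(FPS)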
|
https://eng.libretexts.org/Bookshelves/Computer_Science/Book%3A_Making_Games_with_Python_and_Pygame_(Sweigart)/07%3A_Wormy/7.13%3A_Drawing_%E2%80%9CPress_a_key%E2%80%9D_Text_to_the_Screen
|
CC-MAIN-2021-25
|
en
|
refinedweb
|
Java while loop is another loop control statement that executes a set of statements based on a given condition. In this tutorial, we will discuss the java while loop in detail. Compared to the for loop, the while loop does not have a fixed number of iterations. Unlike the for loop, the scope of the variable used in a java while loop is not limited to the loop, since we declare the variable outside the loop.
Java while loop syntax
while (test_expression) {
    // code
    update_counter; // update the variable value used in the test_expression
}
test_expression – This is the condition or expression based on which the while loop executes. If the condition is true, it executes the code within the while loop. If it is false, it exits the while loop.
update_counter – This is to update the variable value that is used in the condition of the java while loop. If we do not specify this, it might result in an infinite loop.
How while loop works
The below flowchart shows you how the java while loop works.
- When the execution control points to the while statement, first it evaluates the condition or test expression. The condition can be any type of operator.
- If the condition returns a true value, it executes the code inside the while loop.
- It then updates the variable value, either incrementing or decrementing the variable. It is important to include this code inside the java while loop; otherwise, it might result in an infinite java while loop. We will discuss the infinite loop towards the end of the tutorial.
- Again control points to the while statement and repeats the above steps.
- When the condition returns a false value, it exits the java while loop and continues with the execution of statements outside the while loop
Simple java while loop example
Below is a simple code that demonstrates a java while loop.
public class simpleWhileLoopDemo {
    public static void main(String[] args) {
        int i = 1;
        while (i <= 5) {
            System.out.println("Value of i is: " + i);
            i++;
        }
    }
}
Value of i is: 1
Value of i is: 2
Value of i is: 3
Value of i is: 4
Value of i is: 5
We first declare an int variable i and initialize with value 1. In the while condition, we have the expression as i<=5, which means until i value is less than or equal to 5, it executes the loop.
Hence in the 1st iteration, when i=1, the condition is true and prints the statement inside java while loop. It then increments i value by 1 which means now i=2.
It then again checks if i<=5. Since it is true, it again executes the code inside the loop and increments the value.
It repeats the above steps until i=5. At this stage, after executing the code inside while loop, i value increments and i=6. Now the condition returns false and hence exits the java while loop.
While loop in Array
Similar to the for loop, we can also use a java while loop to fetch array elements. In the below example, we fetch the array elements and find the sum of all numbers using the while loop.
public class whileLoopArray {
    public static void main(String[] args) {
        int[] numbers = {20, 10, 40, 50, 30};
        int i = 0;
        int sum = 0;
        while (i < numbers.length) {
            sum = sum + numbers[i];
            i = i + 1;
        }
        System.out.println("Sum of array elements: " + sum);
        System.out.println("Length of array: " + i);
    }
}
Sum of array elements: 150
Length of array: 5
Explanation:
First, we initialize an array of integers named numbers and declare the java while loop counter variable i. Since it is an array, we need to traverse all the elements until the last element. For this, we use the array's length inside the java while loop condition. This means the while loop executes until the i value reaches the length of the array.
Iteration 1 when i=0: condition:true, sum=20, i=1
Iteration 2 when i=1: condition:true, sum=30, i=2
Iteration 3 when i=2: condition:true, sum =70, i=3
Iteration 4 when i=3: condition:true, sum=120, i=4
Iteration 5 when i=4: condition:true, sum=150, i=5
Iteration 6 when i=5: condition: false -> exits the while loop
Please refer to our Arrays in java tutorial to know more about Arrays.
Infinite while loop
As discussed at the start of the tutorial, when we do not update the counter variable properly or do not specify the condition correctly, it will result in an infinite while loop. Let's see this with an example below.
public class infiniteWhileLoop {
    public static void main(String[] args) {
        int i = 0;
        while (i >= 0) {
            System.out.println(i);
            i++;
        }
    }
}
Here, we have initialized the variable i with value 0. In the java while loop condition, we are checking if the i value is greater than or equal to 0. Since we are incrementing the i value inside the while loop, the condition i>=0 will always return a true value, and the loop will execute infinitely.
We can also have an infinite java while loop in another way as you can see in the below example. Here the value of the variable bFlag is always true since we are not updating the variable value.
public class infiniteWhileLoop {
    public static void main(String[] args) {
        Boolean bFlag = true;
        while (bFlag) {
            System.out.println("Infinite loop");
        }
    }
}
Hence an infinite java while loop occurs in the below 2 conditions. It is always important to remember these 2 points when using a while loop.
- when we do not update the variable value
- when we do not use the condition in while loop properly
Nested while loop
We can also have a nested while loop in java, similar to the for loop. When there are multiple while loops, we call it a nested while loop.
public class Nestedwhileloop {
    public static void main(String[] args) {
        int i = 1, j = 10;
        while (i <= 5) {
            System.out.println("i: " + i);
            i++;
            while (j >= 5) {
                System.out.println("j: " + j);
                j--;
            }
        }
    }
}
i: 1
j: 10
j: 9
j: 8
j: 7
j: 6
j: 5
i: 2
i: 3
i: 4
i: 5
In this example, we have 2 while loops. The outer while loop iterates until i<=5 and the inner while loop iterates until j>=5.
When i=1, the condition is true and prints i value and then increments i value by 1. Next, it executes the inner while loop with value j=10. Since the condition j>=5 is true, it prints the j value. Now, it continues the execution of the inner while loop completely until the condition j>=5 returns false. Once it is false, it continues with outer while loop execution until i<=5 returns false.
This is why in the output you can see that after printing i=1, it executes all j values starting with j=10 down to j=5 and then prints i values until i=5. When i=2, it does not execute the inner while loop since the condition is false.
Java while loop with multiple conditions
We can have multiple conditions with multiple variables inside the java while loop. In the below example, we have 2 variables a and i initialized with values 0. Here we are going to print the even numbers between 0 and 20. For this, inside the java while loop, we have the condition a<=10, which is just a counter variable and another condition ((i%2)==0) to check if it is an even number. Inside the java while loop, we increment the counter variable a by 1 and i value by 2.
public class Whileloopconditions {
    public static void main(String[] args) {
        int a = 0;
        int i = 0;
        System.out.println("Even numbers between 0 to 20:");
        while ((a <= 10) && ((i % 2) == 0)) {
            System.out.println(i);
            a++;
            i = i + 2;
        }
    }
}
Even numbers between 0 to 20:
0
2
4
6
8
10
12
14
16
18
20
|
https://www.tutorialcup.com/java/java-while-loop.htm
|
CC-MAIN-2021-25
|
en
|
refinedweb
|
Supplier QML Type
Holds data regarding the supplier of a place, a place's image, review, or editorial. More...
Properties
- icon : PlaceIcon
- name : string
- supplier : QPlaceSupplier
- supplierId : string
- url : url
Example
The following example shows how to create and display a supplier in QML:
import QtQuick 2.0
import QtPositioning 5.5
import QtLocation 5.6

Supplier {
    id: placeSupplier
    name: "Example"
    url: ""
}

Text {
    text: "This place was provided by " + placeSupplier.name + "\n" + placeSupplier.url
}
See also ImageModel, ReviewModel, and EditorialModel.
Property Documentation
This property holds the icon of the supplier.
This property holds the name of the supplier which can be displayed to the user.
The name can potentially be localized. The language is dependent on the entity that sets it, typically this is the Plugin. The Plugin::locales property defines what language is used.
For details on how to use this property to interface between C++ and QML see "Interfaces between C++ and QML Code".
This property holds the identifier of the supplier. The identifier is unique to the Plugin backend which provided the supplier and is generally not suitable for displaying to the user.
This property holds the URL of the supplier's website.
|
https://doc.qt.io/archives/qt-5.7/qml-qtlocation-supplier.html
|
CC-MAIN-2021-25
|
en
|
refinedweb
|
Visual Studio 15.7 Preview 3 has shipped initial support for some C# 7.3 features. Let's see what they are!
System.Enum, System.Delegate and unmanaged constraints
Now with generic functions you can add more control over the types you pass in. More specifically, you can specify that they must be enum types, delegate types, or "blittable" types. The last one is a bit involved, but it means a type that consists only of certain predefined primitive types (such as int or UIntPtr), or arrays of those types. "Blittable" means it has the ability to be sent as-is over the managed-unmanaged boundary to native code because it has no references to the managed heap. This means you have the ability to do something like this:
void Hash<T>(T value) where T : unmanaged
{
    fixed (T* p = &value)
    {
        // Do stuff...
    }
}
I'm particularly excited about this one because I've had to use a lot of workarounds to be able to make helper methods that work with "pointer types."
Ref local re-assignment
This is just a small enhancement to allow you to assign ref type variables / parameters to other variables the way you do normal ones. I think the following code is an example (off the top of my head)
void DoStuff(ref int parameter)
{
    // Now otherRef is also a reference, modifications will
    // propagate back
    var otherRef = ref parameter;

    // This is just its value, modifying it has no effect on
    // the original
    var otherVal = parameter;
}
Stackalloc initializers
This adds the ability to initialize a stack allocated array (did you even know this was a thing in C#? I did :D) as you would a heap allocated one:
Span<int> x = stackalloc[] { 1, 2, 3 };
Indexing movable fixed buffers
I can't really wrap my head around this one so see if you can understand it
Custom fixed statement
This is the first I've seen this one, and it is exciting for me! Basically, if you implement an implicit interface (one method), you can use your own types in a fixed statement for passing through P/Invoke. I'm not sure what the exact method is (DangerousGetPinnableReference() or GetPinnableReference()) since the proposal and the release notes disagree, but if this method returns a suitable type then you can eliminate some boilerplate.
Improved overload candidates
There are some new method resolution rules to optimize the way a call is resolved to the correct method. See the proposal for the full list.
Expression Variables in Initializers
The summary here is "Expression variables like out var and pattern variables are allowed in field initializers, constructor initializers, and LINQ queries." but I am not sure what that allows us to do...
Tuple comparison
Tuples can be compared with == and != now!
Attributes on backing fields
Have you ever wanted to put an attribute (e.g. NonSerializable) on the backing field of a property, and then realized that you then had to create a manual property and backing field just to do so?
[Serializable]
public class Foo
{
    [NonSerialized]
    private string MySecret_backingField;

    public string MySecret
    {
        get { return MySecret_backingField; }
        set { MySecret_backingField = value; }
    }
}
Not anymore!
[Serializable]
public class Foo
{
    [field: NonSerialized]
    public string MySecret { get; set; }
}
Discussion (6)
Expression Variables in Initializers
I think it will let us do something like this:
Oh, that could be!
Thanks Jim for the article.
C# 7.3 update gave me an impression that it's for providing optimizations.
About half of C# 7.1 and 7.2 was also optimization. I think they want to focus on making the language less verbose and able to do more in one line!
I am most excited about the Tuple Equality checks :thumbsup:
Saves many keystrokes
Good summary.
Ref local re-assignment should be updated:
var otherRef = ref parameter; => ref var otherRef = ref parameter;
|
https://practicaldev-herokuapp-com.global.ssl.fastly.net/borrrden/whats-new-in-c-73-26fk
|
CC-MAIN-2021-25
|
en
|
refinedweb
|
README
Build Accessible React Apps with Speed ⚡️
Chakra UI provides a set of accessible, reusable, and composable React components that make it super easy to create websites and apps.
Looking for the documentation? 📝
For older versions, head over here =>
Latest version (v1) =>
Features 🚀
- Accessible: Chakra UI components follow the WAI-ARIA guidelines specifications and have the right aria-* attributes.
- Dark Mode 😍: Most components in Chakra UI are dark mode compatible.
Support Chakra UI 💖
By donating $5 or more you can support the ongoing development of this project. We'll appreciate some support. Thank you to all our supporters! 🙏 [Contribute]
To use Chakra UI components, all you need to do is install the @chakra-ui/react package and its peer dependencies:
$ yarn add @chakra-ui/react framer-motion
# or
$ npm install @chakra-ui/react framer-motion
Usage
- Wrap your application with the ChakraProvider provided by @chakra-ui/react.
- Now you can start using components like so!:
import { Button } from "@chakra-ui/react"

function Example() {
  return <Button>I just consumed some ⚡️Chakra!</Button>
}
CodeSandbox Templates
- JavaScript Starter:
- TypeScript Starter:
create-react-app Templates
Check out the docs for information on how to use our official create-react-app templates.
Contributing
Feel like contributing? That's awesome! We have a contributing guide to help guide you.
Contributors ✨
Thanks goes to these wonderful people
This project follows the all-contributors specification. Contributions of any kind welcome!
Testing supported by
License
MIT © Segun Adebayo
|
https://www.skypack.dev/view/@chakra-ui/parser
|
CC-MAIN-2021-25
|
en
|
refinedweb
|