text
stringlengths
20
1.01M
url
stringlengths
14
1.25k
dump
stringlengths
9
15
lang
stringclasses
4 values
source
stringclasses
4 values
Using TxtFormatProvider TxtFormatProvider makes it easy to import and export TXT files. Note that TXT is a plain text format and holds only the contents of the worksheet without its formatting. Exporting a file to this format strips all styling and saves only cell's result value with their format applied using tab as a delimiter. Moreover, it exports only the contents of the active worksheet – no support for exporting multiple worksheets into a txt at once is available. Importing from TXT respectively creates a new workbook with a single worksheet named Sheet1. In order to import and export txt files, you need an instance of TxtFormatProvider, which is contained in the Telerik.Windows.Documents.Spreadsheet.FormatProviders.TextBased.Txt namespace. The TxtFormatProvider implements the interface IWorkbookFormatProvider that appears in the Telerik.Windows.Documents.Spreadsheet.FormatProviders namespace. Import Example 1 shows how to import a txt file using a FileStream. The sample instantiates a TxtFormatProvider and passes a FileStream to its Import() method: Example 1: Import TXT file Workbook workbook; IWorkbookFormatProvider formatProvider = new TxtFormatProvider(); using (Stream input = new FileStream(fileName, FileMode.Open)) { workbook = formatProvider.Import(input); } Export Example 2 demonstrates how to export an existing Workbook to a TXT file. The snippet creates a new workbook with a single worksheet. Further, it creates a TxtFormatProvider and invokes its Export() method: Example 2: Export TXT file Workbook workbook = new Workbook(); workbook.Worksheets.Add(); string fileName = "SampleFile.txt"; IWorkbookFormatProvider formatProvider = new TxtFormatProvider(); using (Stream output = new FileStream(fileName, FileMode.Create)) { formatProvider.Export(workbook, output); }
https://docs.telerik.com/devtools/document-processing/libraries/radspreadprocessing/formats-and-conversion/txt/txtformatprovider
CC-MAIN-2020-50
en
refinedweb
I’m trying to improve my cross-platform implementation of the weak event pattern (). Currently my impl require the listener register to pass an additional gcTarget parameter as below: public class WeakSimpleEvent { public void addWeak(object gcTarget, Action listener) { // GcBinder will ensure a strong ref to listener until the gcTarget is recycled. GcBinder.bind(listener, gcTarget); addImpl(listener); } But in the impl of .NET’s WeakEventManager, it’s easy to ignore such a gcTarget, because it can be obtained directly by listener.Target. And in Java, since each delegate is implemented as a new anoymous class, it will certainly contain a hidden ref of outer object if it’s non-static, which is named as “argN” by Elements compiler and “this$N” in official Java. So it’s still possible to make the improvement. But for iOS, I’ve found the delegate is implemented as a NSMallocBlock object which I still can’t find a way to get sth equivalent to gcTarget. After learning this artical (), I think in theory, the block’s invoke needs to call another object’s method, so the block structure must store a pointer to that object. I know this improvement depends on details of specific compiler, but the weak event pattern is really useful for UI dev. Maybe Elements can consider to provide a Target property for cross-platform Delegate implementation.
https://talk.remobjects.com/t/is-it-possible-to-get-the-target-of-a-delegate-on-ios/8843
CC-MAIN-2020-50
en
refinedweb
Experimental features that may or may not make it into the library. These features should not expected to be stable. #include <boost/hana/experimental/type_name.hpp> Returns a hana::string representing the name of the given type, at compile-time. This only works on Clang (and apparently MSVC, but Hana does not work there as of writing this). Original idea taken from..
https://www.boost.org/doc/libs/develop/libs/hana/doc/html/group__group-experimental.html
CC-MAIN-2020-50
en
refinedweb
SK Genius wrote:What was your program doing KIDYA wrote:Hello Experts!! Shaik Haneef wrote:I created Public Assembly by providing strong name, then what is that use still i need to give reference of that assembly. Shaik Haneef wrote:Then what is the use of Global assembly ? public class Child<br /> {<br /> public string FirstName, MiddleName, LastName;<br /> public string Sex;<br /> public DateTime DOB;<br /> public string EducationDetails;<br /> public string Health;<br /> } public byte[] Serialize(object oSerialize) { ms = new MemoryStream(); bf = new BinaryFormatter(); bf.Serialize(ms, oSerialize); return ms.ToArray(); } Member 6289976 wrote:how can access other controls such as button, textbox etc. in an other application from C#. General News Suggestion Question Bug Answer Joke Praise Rant Admin Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages.
https://www.codeproject.com/Forums/1649/Csharp?df=90&mpp=25&sort=Position&view=Normal&spc=Relaxed&prof=True&select=3077603&fr=117401
CC-MAIN-2020-50
en
refinedweb
This blog post will guide us through the steps required to configure the SAP S/4HANA Cloud SDK Continuous Delivery Toolkit to scale dynamically on a Kubernetes cluster. Note: This post is part of a series. For a complete overview please visit the SAP S/4HANA Cloud SDK Overview. The goal of this blog post When we decide to set up our first continuous delivery infrastructure, the first question that arises is, how much resources we need to reserve. However, answering this question is not easy. Having limited resources could throttle concurrent build-pipeline executions. On the other hand, if we plan to reserve enough resources to support concurrent build-pipeline execution, we may end up wasting the resources. There are good chances that, these resources stay idle for the most part of the day. Autoscaling of the infrastructure solves such a problem. This feature has been introduced in the latest release of the SAP S/4HANA Cloud SDK Pipeline. Instead of reserving the resources proactively, the pipeline creates the Jenkins agents dynamically on a Kubernetes cluster during the execution. Once the agent completes the dedicated task, it is deleted and the resources are freed. The infrastructure component failure such as a node crash is hard to predict and prevent. If we use the dockerized approach as introduced here, then we need to establish an additional mechanism to manage the failures. For example, we may need an infrastructure health monitoring tool such as Nagios to identify the failure and a fallback script to start services on a backup node. Thanks to Kubernetes, the monitoring and self-healing come out of the box. Kubernetes performs regular health checks of the infrastructure and the services. If any infrastructure component or the service fails the health check, the kubernetes will create a new component and decommission the old one. If the degraded component is a node then kubernetes will create a new node and also ensures that services that were running on the node are moved to a healthy node. If the Jenkins master pod fails, kubernetes will spin up a new pod and the state is restored by reusing the persistent volume. If a pod fails while executing the pipeline stages, it will be re-created by the Kubernetes without propagating the failure to the pipeline. In the following sections, we will see how to make use of the autoscaling feature. For a better understanding of this article, please go through the following tutorials first: Note: The Kubernetes support in SAP S/4HANA Cloud SDK is currently offered only as an experimental feature. Prerequisite The current version of the SAP S/4HANA Cloud SDK Pipeline supports autoscaling only if the Jenkins master is also set up on a Kubernetes cluster. To begin with, we need a Kubernetes cluster where we will set up the Jenkins using the Jenkins helm chart. Helm is a package management tool for kubernetes. The documentation to Install Jenkins using helm can be found here. To use the Jenkins image provided by the SAP S/4HANA Cloud SDK, we have to pass s4sdk/jenkins-master as a value for the Master.Image command line argument while deploying Jenkins to Kubernetes. The successfully completed deployment consists of a Jenkins pod with port 80 and 50000 exposed for HTTP and internal JNLP traffic respectively. The deployment also creates two services each to listen to incoming HTTP traffic on port 80 and the internal JNLP traffic on port 50000. Please note that in this example setup, the SSL/TLS termination happens at the load balancer, hence all the traffic between a load balancer and the Jenkins pod is unencrypted. Kubernetes Plugin Configuration The SAP S/4HANA Cloud SDK makes use of the Jenkins Kubernetes plugin. It comes pre-installed with the latest version of the s4sdk/jenkins-master docker image. If we use the helm to install Jenkins then the plugin is automatically configured by the helm chart and we can skip this section. However, if we use any other means to install the Jenkins then, we need to configure the plugin to establish the connectivity to the Kubernetes cluster. To do that navigate to Manage Jenkins menu on the left-hand side of our Jenkins welcome page. Further navigate to Configure System > Cloud section, configure the Kubernetes cluster details by entering a value to the Kubernetes URL field. Please use the Credentials that has an edit access to the namespace which is configured in the Kubernetes Namespace field. Click on Test Connection. The agents are created as a new pod, a logical node with the collection of containers that are required to execute the pipeline stage. The agent communicates with the master using the JNLP protocol. In this example setup, we are using port 50000 for internal JNLP traffic. Enter the value for the Jenkins tunnel field, which is service name followed by a port number (jenkins-agent:50000). Please note the tunnel value should not be prefixed with the protocol. Environment Variable The SAP S/4HANA Cloud SDK Continuous delivery Pipeline needs an environment variable set in the Jenkins to make use of the auto scaling feature. In order to set the environment variable, navigate to Manage Jenkins > Configure System> Global Properties. Add an environment variable ON_K8S and set the value to true. Pipeline configuration Now the Jenkins is ready to run our project pipeline as described here. However, every dynamic agent is created as a new pod with a JNLP agent container. By default, the jnlp-agent docker image is used. However, if our Jenkins has a TLS/SSL configuration which terminates at the pod level, then we need to provide a custom JNLP agent image with a valid certificate. Once we publish the custom JNLP image to a secured location, configure the same in our pipeline configuration file pipeline_config.yml. Note: The Jenkins master server contains a user jenkins with ID 1000. hence, the JNLP agent is expected to have a user with ID 1000 as well to avoid access issues to the files that are shared by both the master and the agent. #Project Setup general: jenkinsKubernetes: jnlpAgent: 'custom-jenkins-agent-k8s:latest' That’s all the configuration required to benefit from the autoscaling feature of SAP S/4HANA Cloud SDK Pipeline. Push these changes to the repository. Now, the Jenkins agents are dynamically created and resources are allocated on the fly from the Kubernetes cluster for each stage of the pipeline as shown below. Troubleshooting Check connectivity to the Kubernetes cluster We can test our configuration using the below example Jenkinsfile. Please replace the value for the image if we are using the custom JNLP image. def podName= 'testing' podTemplate(label: podName, containers: [ containerTemplate(name: 'jnlp', image: 's4sdk/jenkins-agent-k8s:latest'), containerTemplate(name: 'testcontainer', image: 'alpine:latest', ttyEnabled: true, command: 'cat') ]) { node(podName) { stage('Sanity check') { container('testcontainer') { echo "I am inside a container" } } } } Connection to Kubernetes cluster is not established Please make sure that the URL to the Kubernetes cluster is valid. We can get the URL by executing the kubectl cluster-info command. Make sure that the credential that we are using has the authorization to create and list pods in a namespace that we are using. The agent is created but suspended This issue can arise due to following two reasons The service is not created to listen to the JNLP traffic: Please make sure that there is a Kubernetes service which is listening to the JNLP traffic Misconfigured certificates in Jenkins master and agent: Please make sure that master and the agent uses the same certificate if they have SSL/TLS encryption enabled for communication. Cannot create file exception If the Jenkins master and the agent have different user IDs then there will be an issue while accessing files that are created by each other. We might notice errors as shown below. sh: 1: cannot create /home/jenkins/workspace/my-project/durable-5bb5cd84/jenkins-log.txt: Permission denied sh: 1: cannot create /home/jenkins/workspace/my-project/durable-5bb5cd84/jenkins-result.txt.tmp: Permission denied touch: cannot touch '/home/jenkins/workspace/my-project/durable-5bb5cd84/jenkins-log.txt': Permission denied Please make sure that the Jenkins master and agent have same user IDs. By default, both s4sdk/jenkins-master and s4sdk/jenkins-agent-k8s uses user ID 1000. Conclusion The SAP S/4HANA Cloud SDK Continuous Delivery Toolkit enables users to scale the infrastructure on demand. We will benefit from the autoscaling feature only if the infrastructure resources (Kubernetes cluster) are shared among multiple teams (projects) or if we have a Kubernetes cluster that supports autoscaling. However, there could be a noticeable penalty on the on the performance due to network overhead and pod spin up time if used on a slow infrastructure in comparison to the standalone setup. Questions and Feature Requests Feel free to reach out to us on Stackoverflow via our s4sdk tag. We actively monitor this tag in our core engineering teams. You can also leave us a comment or a question in SAP Community using the SAP S/4HANA Cloud SDK tag. If you would like to report a bug or have an idea for a feature request, you can create a corresponding issue in our GitHub repositories: S/4HANA Cloud SDK Build Pipeline
https://blogs.sap.com/2018/09/26/autoscaling-of-sap-s4hana-cloud-sdk-continuous-delivery-toolkit-on-kubernetes/
CC-MAIN-2018-43
en
refinedweb
[ ] Mark Struberg reassigned OWB-1216: ---------------------------------- Assignee: Mark Struberg > InjectionPoint.getType() returns wrong type for produced beans > -------------------------------------------------------------- > > Key: OWB-1216 > URL: > Project: OpenWebBeans > Issue Type: Bug > Reporter: John D. Ament > Assignee: Mark Struberg > Priority: Major > > Assuming I have a producer (same thing happens for a custom registered 3rd > party bean, this is just easier to demonstrate): > {code} > public class MyProducer { > @Produces > @SomeQualifier > public String doProducer(InjectionPoint ip) { > return ip.getType().toString(); > } > } > {code} > As well as the following injection point (with test): > {code} > @Inject > @SomeQualifier > private String myString; > > @Test > public void shouldBeStringType() { > assertThat(myString).isEqualTo(String.class.toString()); > } > {code} > The expectation is that the value of {{myString}} is {{java.lang.String}} but > actually the value is the producer {{MyProducer}}. We should be relying on > the injection point's value, not the producer class. It seems that it uses > the value of {{getBeanClass}} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
https://www.mail-archive.com/dev@openwebbeans.apache.org/msg09294.html
CC-MAIN-2018-43
en
refinedweb
This class is an InputStream that implements one half of a pipe and is useful for communication between threads. A PipedInputStream must be connected to a PipedOutputStream object, which may be specified when the PipedInputStream is created or with the connect( ) method. Data read from a PipedInputStream object is received from the PipedOutputStream to which it is connected. See InputStream for information on the low-level methods for reading data from a PipedInputStream. A FilterInputStream can provide a higher-level interface for reading data from a PipedInputStream. public class PipedInputStream extends InputStream { // Public Constructors public PipedInputStream( ); public PipedInputStream(PipedOutputStream src) throws IOException; // Protected Constants 1.1 protected static final int PIPE_SIZE; =1024 // Public Instance Methods public void connect(PipedOutputStream src) throws IOException; // Public Methods Overriding InputStream public int available( ) throws IOException; synchronized public void close( ) throws IOException; public int read( ) throws IOException; synchronized public int read(byte[ ] b, int off, int len) throws IOException; synchronized // Protected Instance Methods 1.1 protected void receive(int b) throws IOException; synchronized // Protected Instance Fields 1.1 protected byte[ ] buffer; 1.1 protected int in; 1.1 protected int out; } PipedOutputStream.{connect( ), PipedOutputStream( )}
http://books.gigatux.nl/mirror/javainanutshell/0596007736/ch09-77318.html
CC-MAIN-2018-43
en
refinedweb
Interface for compilation databases. More... #include "clang/Tooling/CompilationDatabase.h" Interface for compilation databases. A compilation database allows the user to retrieve compile command lines for the files in a project. Many implementations are enumerable, allowing all command lines to be retrieved. These can be used to run clang tools over a subset of the files in a project. Definition at line 81 of file CompilationDatabase.h. Tries to detect a compilation database location and load it. Looks for a compilation database in directory 'SourceDir' and all its parent paths by calling loadFromDirectory. Definition at line 122 of file CompilationDatabase.cpp. References findCompilationDatabaseFromDirectory(), clang::tooling::getAbsolutePath(), and llvm::str(). Referenced by clang::tooling::ArgumentsAdjustingCompilations::getAllCompileCommands(). Tries to detect a compilation database location and load it. Looks for a compilation database in all parent paths of file 'SourceFile' by calling loadFromDirectory. Definition at line 107 of file CompilationDatabase.cpp. References findCompilationDatabaseFromDirectory(), clang::tooling::getAbsolutePath(), and llvm::str(). Referenced by clang::tooling::ArgumentsAdjustingCompilations::getAllCompileCommands(). Returns all compile commands for all the files in the compilation database. FIXME: Add a layer in Tooling that provides an interface to run a tool over all files in a compilation database. Not all build systems have the ability to provide a feasible implementation for getAllCompileCommands. By default, this is implemented in terms of getAllFiles() and getCompileCommands(). Subclasses may override this for efficiency. Reimplemented in clang::tooling::ArgumentsAdjustingCompilations, and clang::tooling::JSONCompilationDatabase. Definition at line 135 of file CompilationDatabase.cpp. References clang::driver::Action::CompileJobClass, clang::DiagnosticsEngine::Error, getAllFiles(), getCompileCommands(), clang::driver::Action::getKind(), clang::DiagnosticConsumer::HandleDiagnostic(), clang::driver::Action::InputClass, clang::driver::Action::inputs(), and clang::tooling::CompilationDatabasePlugin::~CompilationDatabasePlugin(). Returns the list of all files available in the compilation database. By default, returns nothing. Implementations should override this if they can enumerate their source files. Reimplemented in clang::tooling::ArgumentsAdjustingCompilations, and clang::tooling::JSONCompilationDatabase. Definition at line 130 of file CompilationDatabase.h. Referenced by clang::tooling::AllTUsToolExecutor::execute(), and getAllCompileCommands(). Returns all compile commands in which the specified file was compiled. This includes compile commands that span multiple source files. For example, consider a project with the following compilations: $ clang++ -o test a.cc b.cc t.cc $ clang++ -o production a.cc b.cc -DPRODUCTION A compilation database representing the project would return both command lines for a.cc and b.cc and only the first command line for t.cc. Implemented in clang::tooling::FixedCompilationDatabase, clang::tooling::ArgumentsAdjustingCompilations, and clang::tooling::JSONCompilationDatabase. Referenced by getAllCompileCommands(), and clang::tooling::ClangTool::run(). Loads a compilation database from a build directory. Looks at the specified 'BuildDirectory' and creates a compilation database that allows to query compile commands for source files in the corresponding source tree. Returns NULL and sets ErrorMessage if we were not able to build up a compilation database for the build directory. FIXME: Currently only supports JSON compilation databases, which are named 'compile_commands.json' in the given directory. Extend this for other build types (like ninja build files). Definition at line 65 of file CompilationDatabase.cpp. Referenced by findCompilationDatabaseFromDirectory(), and clang::tooling::FixedCompilationDatabase::getCompileCommands().
http://clang.llvm.org/doxygen/classclang_1_1tooling_1_1CompilationDatabase.html
CC-MAIN-2018-43
en
refinedweb
Dave, I've implemented the inheritance as the tutorial say, it worked fine, but the factory method still need to get the initial artist class to know the type, then ask cayenne again to give the right class to instantiate. I think there may be another way to get the object without doing this double call to objectForPK. public static Artist getArtist(String id) { if (id == null || id.equals("")) return null; DataContext context = DataContext.getThreadDataContext(); Artist c = (Artist) DataObjectUtils.objectForPK(context, Artist.class, Integer.parseInt(id)); if (c.getTipoEnum() == EnumTipoArtist.EXPERT) return (Artist) DataObjectUtils.objectForPK(context, ExpertArtist.class, Integer.parseInt(id)); else return c; } Thanks Hans ----- "Dave Lamy" <davelamy@gmail.com> escribió: > Hey Hans-- > > While I'm certain that the inheritance structure isn't data based to > you, I > imagine that Cayenne is going to HAVE to have a data value to know > which > subclass to instantiate. It's effectively going to get back a row > from the > Artist table and be asked to transform that row into an Artist object. > How > can it determine which kind? Through some sort of data analysis. > Either a > particular attribute value (ARTIST_TYPE) or via a linked table > structure > (don't think Cayenne supports that yet?). That data attribute should > not be > of particular importance to your application, however. It's just an > ORM > crutch. > > Dave > > On Mon, Mar 2, 2009 at 8:41 AM, <hans@welinux.cl> wrote: > > > Michael, > > > > Thank you, i already saw it, but my intent was to make it entirely > outside > > cayenne mappings, the problem i need to solve is just behavioral, > not data > > based... by the way, it's not clear how to try the example using > the > > modeler: creating an empty class with no attributes or something. > > > > Hans > > > > ----- "Michael Gentry" <mgentry@masslight.net> escribió: > > > > > Is this what you are after? > > > > > > > > > > > > > > > On Sun, Mar 1, 2009 at 7:46 PM, <hans@welinux.cl> wrote: > > > > Hi, > > > > > > > > I'm trying to implement a factory method pattern based on a > cayenne > > > data object. > > > > > > > > Say for example i have a class (cayenne based) called Artist, i > want > > > to make a subclass of Artist, say ExpertArtist that implements > some > > > specific behavior. > > > > > > > > Actually i have a big static Factory class that give me the all > > > objects, i have a method like this: > > > > > > > > public static Artist getArtist(String id) { > > > > > > > > if (id == null || id.equals("")) > > > > return null; > > > > > > > > DataContext context = > > > DataContext.getThreadDataContext(); > > > > > > > > Artist object = (Artist) > > > DataObjectUtils.objectForPK( > > > > context, Artist.class, > > > Integer.parseInt(id)); > > > > return object; > > > > } > > > > > > > > Obviously i can declare ExpertArtist as an subclass of Artist. > > > > > > > > package xxx.xxx; > > > > > > > > public class EspertArtist extends Artist { > > > > public String getName() { > > > > return super.getName() + " i'am expert !!"; > > > > } > > > > } > > > > > > > > I've tried to instantiate an ExpertArtist, just modifying the > > > Factory method, with no results. I don't know how to bouild the > parent > > > class calling super or something... > > > > > > > > Obviously these are not the real classes, the actual classes > are > > > really big and this solution: just modifying the factory method is > the > > > best for me. > > > > > > > > > > > > Thanks > > > > Hans > > > > > > > > > > > > > > > > -- > > Hans Poo, WeLinux S.A. > > Oficina: 697.25.42, Celular: 09-319.93.05 > > Bombero Ossa # 1010, Santiago > > > > -- Hans Poo, WeLinux S.A. Oficina: 697.25.42, Celular: 09-319.93.05 Bombero Ossa # 1010, Santiago
http://mail-archives.apache.org/mod_mbox/cayenne-user/200903.mbox/%3C1314533533.4601236030009204.JavaMail.root@ronin%3E
CC-MAIN-2018-43
en
refinedweb
Prompt for and read a password #include <unistd.h> char *getpass( const char *prompt ); libc Use the -l c option to qcc to link against this library. This library is usually included automatically. The getpass() function can be used to get a password. It opens the current terminal, displays the given prompt, suppresses echoing, reads up to 32 characters into a static buffer, and restores echoing. This function adds a null character to the end of the string, but ignores additional characters and the newline character. A pointer to the static buffer. This function leaves its result in an internal static buffer and returns a pointer to it. Subsequent calls to getpass() modify the same buffer. The calling process should zero the password as soon as possible to avoid leaving the clear-text password visible in the process's address space.
http://www.qnx.com/developers/docs/6.6.0_anm11_wf10/com.qnx.doc.neutrino.lib_ref/topic/g/getpass.html
CC-MAIN-2018-43
en
refinedweb
Why do the test cases involve 0 and negative numbers when these values are not traditionally even considered in regards to whether a number is prime or not and when the question gives no indications of how these cases should be handled? Primes - Why do the test cases involve 0 and negative numbers Is_prime problem: Isn't 7 a prime number ?! It gives me this message "Your function fails on is_prime(-7). It returns True when it should return False." Help We can often times get away with a brute force method that assumes only valid inputs will be given, especially if we are the only user. The minute our code gets out into the ether, we can no longer be sure that the program will always be given valid inputs. It is up to the programmer to build in safeguards against invalid inputs. Now assuming the inputs will always be numbers, one safeguard would be to refuse any value less than 2. There are no primes smaller than 2. Another would be to truncate floats by converting any input that fails the first test to a natural number. All primes are counting numbers. That would be enough to guarantee than any number you input will have a result, or be rejected. I’m not even certain that the test cases include a float because it has never been necessary in order to pass. It bears noting, though. x < 2 ? return false x = int(x) Should any non-numeric inputs occur, both of the above lines will choke, but it will be on the first that an exception is raised. >>> 'a' < 2 Traceback (most recent call last): File "<pyshell#215>", line 1, in <module> 'a' < 2 TypeError: unorderable types: str() < int() >>> We can busy ourselves with creating all kinds of type detection, or we can turn to Python’s exception handling tools that are at our disposal. The following kicks butt and takes a number… def is_prime(x): try: if x < 2: return False return True except TypeError: return "Invalid input." For a test run, > is_prime('a') => 'Invalid input.' > is_prime(-1) => False > is_prime(2) => True Of course any number that fails the less than 2 test will be True at this point, but it bears noting, 2 returns True out of the gate. We have a short and effective way to deal with values that cannot be used in comparison with a number. This brings up another special type of input, which our exception handling does not need to confront, booleans. > is_prime(False) => False > is_prime(True) => False These two values cast to 0 and 1 respectively, but both pass the less than 2 test so return False. At this point we are free to cast x as an integer and proceed with the trial division as prescribed in the lesson. Of course we are well beyond the lesson parameters, but the rest falls back in line with the instructions. Bottom line is you will have a tougher function that gives predictible results and does not raise any exceptions on bad inputs. Don’t let this segue you from your studies, but keep it in your kit going forward. def is_prime(x): try: if x < 2: return False u = int(x ** 0.5) + 1 x = int(x) for n in range(2, u): if x % n: continue else: return False return True except TypeError: return "Invalid input."
https://discuss.codecademy.com/t/primes-why-do-the-test-cases-involve-0-and-negative-numbers/232933
CC-MAIN-2018-43
en
refinedweb
On 13/04/2018 10:41, p...@highoctane.be wrote: > A somewhat unique name makes it easier to google for it (like Roassal > Pharo, or DeepTraverser Pharo, or Zinc Pharo). > These will give us hits that are relevant. > > Try System Browser, Inspector... > > And Apple has worked with NSObject and stuff like that without too much > trouble (at a much larger scale for whatever XyzKit they released). > Ah, yes, they used the "Kit" suffix. Maybe can we have something like > that with Pharo. Like SystemBrowserKit ( nah, too Appleish. But I prefer these names: * Calypso System Browser * Calypso Debugger * Iceberg Source Control Management * Zinc HTTP Client * Zinc HTTP Server * Fuel Serialization * Glamorous Spotter [*] * etc. [*] I particularly dislike "Glamorous" adjective. In the case of wrapper for libraries I'm hesitant to decide whether to indicate pharo name in it or not. I mean stuff like a NaCl wrapper calling it "NaCl-Pharo" instead of calling "Salty". > Let's try SystemBrowserMeccano (longish), or SystemBrowserPack (too > bland), or SystemBrowserGear (why not), SystemBrowserRig (this one > sounds cool actually)). Fortunately in the past the lack of namespaces caused the use of prefixes instead of suffixes. With time prefixes become invisible. A suffix, instead, will get into all your names, bothering with other existing suffixes like `Component`, `Model`, `Collection`, and so on. -- Esteban A. Maringolo
https://www.mail-archive.com/pharo-users@lists.pharo.org/msg30896.html
CC-MAIN-2018-43
en
refinedweb
hi there I want to give a for loop in my programm that if any student fails in more than four subject he will not be able to go to the next level. if average is less than 40 then the student will be failed . i tried in many ways. but compiler is not taking at all. below i am just showing the logs for the loop but not the whole program. the program has got other parts. Basically i could not put the exact logic. #include <stdio.h> main() { int s,t; Printf ("This student is not allowed to go to the next level as he fails in more than two subject\n"); scanf("%d",&s); /*here what will be the address of s ?*/ for (t=1, t>=4, t++) }
https://www.daniweb.com/programming/software-development/threads/140459/looping-problem
CC-MAIN-2018-43
en
refinedweb
Stores a copy of the current floating-point environment #include <fenv.h> int fegetenv ( fenv_t *envp ); The fegetenv( ) function saves the current state of the floating-point environment in the object referenced by the pointer argument. The function returns 0 if successful; a nonzero return value indicates that an error occurred. The object type that represents the floating-point environment, fenv_t, is defined in fenv.h. It contains at least two kinds of information: floating-point status flags, which are set to indicate specific floating-point processing exceptions, and a floating-point control mode, which can be used to influence the behavior of floating-point arithmetic, such as the direction of rounding. The fegetenv( ) and fesetenv( ) functions can be used to provide continuity of the floating-point environment between different locations in a program: static fenv_t fpenv; // Global environment variables. static jmp_buf env; /* ... */ #pragma STDC FENV_ACCESS ON fegetenv(&fpenv); // Store a copy of the floating-point environment if ( setjmp(env) == 0 ) // setjmp( ) returns 0 when actually called { /* ... Proceed normally; floating-point environment unchanged ... */ } else // Nonzero return value means longjmp( ) occurred { fesetenv(&fpenv); // Restore floating-point environment to known state /* ... */ } fegetexceptflag( ), feholdexcept( ), fesetenv( ), feupdateenv( ), feclearexcept( ), feraisexcept( ), fetestexcept( )
http://books.gigatux.nl/mirror/cinanutshell/0596006977/cinanut-CHP-17-61.html
CC-MAIN-2018-43
en
refinedweb
A few months ago, a developer sent the following error stack trace to Square from their iOS app: This led to a patch of Valet, Square’s popular open source library for managing iOS and macOS keychains. Along the way, we also uncovered a subtle but interesting inconsistency in keychain behavior on iOS and macOS. This post will cover the following: - A detailed look at the error; - An overview of the iOS keychain and Valet; - Debugging and patching Valet; and - How we stumbled upon the differing keychain behavior in iOS and macOS, and the factors that contribute to it. Error Details NSInvalidArgumentException’, reason: ‘-[__NSCFData isEqualToString:]: unrecognized selector sent to instance 0x6080000b97b0 The error message above indicates that isEqualToString (an NSString method) is being called on an __NSCFData instance (a private subclass of NSData that does not implement isEqualToString). Looking further down the stack trace, the first call that is not part of Apple’s CoreFoundation framework is: -[VALValet migrateObjectsMatchingQuery:removeOnCompletion:] VALValet is a class in Valet, and the migrateObjectsMatchingQuery method calls isEqualToString once here: Immediately before the code calls isEqualToString and the crash occurs, the kSecAttrAccount value retrieved from the keychain entry is stored as an NSString: NSString *const key = keychainEntry[(__bridge id)kSecAttrAccount]; The exception, however, indicates that key is actually an NSData object. Why does Valet assume that the kSecAttrAccount value is a String? Before answering this, having answers to these prerequisite questions is helpful: - What is the iOS keychain? - What is Valet? - What does Valet’s migrateObjectsMatchingQuery method do? What is the iOS Keychain? If you’re already familiar with the iOS keychain, feel free to skip this section. One way of persisting data between app launches is via UserDefaults, which is a key-value store with a simple API: let myUsername = "foo" // Store value in UserDefaults UserDefaults.standard.set("bar", forKey: myKey) // Retrieve data UserDefaults.standard.string(forKey: myKey) The iOS keychain is another persistence option, allowing apps to store sensitive data, such as passwords, in an encrypted SQLite database file on the device. Keychain entries can be retrieved and stored through the use of predefined attribute fields (e.g., kSecAttrAccount, kSecClass, kSecAttrService). What’s the difference between the iOS keychain and UserDefaults? The iOS keychain is secure storage (and has a hard-to-use API), while UserDefaults is entirely insecure (but has an easy-to-use API). UserDefaults stores data in plain text in a .plist (property list) file within the application’s sandbox. Utilities like iBrowse and iExplorer can easily inspect files on a device, making UserDefaults only suited for storing non-sensitive data like user preferences and settings. The following illustrates the minimum amount of code necessary to create and fetch a username/password entry in the the iOS keychain: Not only is keychain usage verbose, but every function that interacts with the keychain (e.g., SecItem*) returns a bevy of error codes (see OSStatus in SecBase.h) that need to be checked and handled. What is Valet? Dealing with the keychain is complex, prone to error, and usually includes a large amount of boilerplate code. Unfortunately, it’s also unavoidable for any app that needs to store account information or sensitive data. To make keychain management easier, Valet was developed by Square as an open source library for both iOS and macOS that hides the complexities of the keychain under a simple key-value interface. Storing and retrieving a username/password in the keychain via Valet is similar to UserDefaults:) Now that we have more context, let’s return to our investigation of the error received by the developer. What does Valet’s migrateObjectsMatchingQuery Method do? As mentioned previously, the exception: NSInvalidArgumentException [__NSCFData isEqualToString:]: unrecognized selector occurs in VALValet’s migrateObjectsMatchingQuery method. What does this method do? Typical usage of the migrateObjectsMatchingQuery method looks like ) migrateObjectsMatchingQuery:removeOnCompletion: allows an application to shift management of existing keychain entries to Valet. If the application was utilizing the keychain to persist data prior to using Valet, this method restructures existing keychain entries to a format that allows these entries to be updated or fetched via Valet’s simpler API. let query: [String: Any] = [ kSecAttrService as String: Bundle.main.bundleIdentifier as Any, kSecClass as String: kSecClassGenericPassword as Any ] The query parameter is used to select the keychain entries to be migrated to Valet, and uses the standard keychain attribute field names as filter criteria. Migration allows apps to transition to Valet without forcing users to re-enter account passwords or other information stored in the keychain. Taking a Closer Look at Where the Error Occurs Revisiting the code block in migrateObjectsMatchingQuery where the exception is thrown: Prior to migration, this section of code is responsible for sanity checking the existing keychain entries that are candidates for migration. For example, to be eligible, existing entries must have kSecAttrAccount and kSecValueData values. A line-by-line explanation: for (NSDictionary *const keychainEntry in queryResultWithData) queryResultWithData is an Array containing data for the entries returned after searching the keychain using the query supplied to the migrateObjectsMatchingQuery method. NSString *const key = keychainEntry[(__bridge id)kSecAttrAccount]; Each entry’s kSecAttrAccount value is stored in the key variable. Note the assumption here that the value is a String. The compiler allows this due to the fact that NSDictionary values are id pointers, which allow implicit casts. if ([key isEqualToString:VALCanAccessKeychainCanaryKey]) { // We don't care about this key. Move along. continue; } Before each entry is sanity checked, key is compared against VALCanAccessKeychainCanaryKey and all checks are skipped if a match is found. VALCanAccessKeychainCanaryKey is a string constant stored in the kSecAttrAccount field of an entry by Valet when performing exploratory (canary) keychain entry inserts. These inserts verify that the proper keychain accessibility was specified when initializing a Valet instance. If the query results contain a canary entry, we want to skip any sanity checks for that entry. Canary entries are a special case that should not cause any migration sanity checks to fail. Everything works fine when key (the value in an entry’s kSecAttrAccount field) is an NSString. The exception, however, proves that it’s possible for key to be an NSData object. The problem is the assumption Valet makes that an entry’s kSecAttrAccount value will be an NSString object: NSString *const key = keychainEntry[(__bridge id)kSecAttrAccount]; Why does Valet assume that the kSecAttrAccount value is an NSString? Apple’s documentation for the keychain value associated with the SecAttrAccount attribute provides the following description: A key whose value is a string indicating the item’s [keychain entry’s] account name. From this, it seems reasonable to assume that kSecAttrAccount values will always be strings. Also, from this example of Valet usage:) Valet only accepts String arguments for keys (myUsername in the example above), which are then stored as kSecAttrAccount values in the keychain. Any entries created by Valet abide by Apple’s type restriction on kSecAttrAccount values. However, for existing keychain entries, Valet makes the flawed assumption that either: - All developers comply with these keychain type restrictions; or - The framework or operating system enforces type restrictions on keychain data. Reproducing the Error The exception received by the developer, however, indicates that it is possible to store an NSData object in a keychain entry’s kSecAttrAccount field, which contradicts Apple’s documentation. The following code snippet proves that this is indeed the case, and the same exception is thrown when attempting to migrate keychain entries. Fix: Adding a Type Check Since it can no longer be assumed that kSecAttrAccount is always a string, we need to add a type check prior to the string comparison to avoid sending the isEqualToString message to a non-NSString object. if ([key isKindOfClass:[NSString class]] && [key isEqualToString:VALCanAccessKeychainCanaryKey]) { // We don’t care about this key. Move along. continue; } Since Valet always uses NSString values in the kSecAttrAccount field for entries it creates, a Valet canary entry’s key will always be an NSString. Therefore, adding this type check is a valid requirement. An Unintentional Find: Inconsistent behavior in macOS and iOS Keychains Next, we added the test below to verify our change (Valet is written in Objective C): NSString *identifier = @"my_identifier"; NSData *dataBlob = [@"foo" dataUsingEncoding:NSUTF8StringEncoding]; NSDictionary *keychainData = @{ (__bridge id)kSecAttrService : identifier, (__bridge id)kSecClass : (__bridge id)kSecClassGenericPassword, (__bridge id)kSecAttrAccount : dataBlob, (__bridge id)kSecValueData : dataBlob }; The test deliberately stores dataBlob, which is an NSData object in the kSecAttrAccount field of keychainData, which is inserted in to the keychain. As expected, the fix prevents an exception from being thrown. Interestingly, the last assertion passes on an iOS target but fails in a macOS environment (Valet can manage iOS and macOS keychains): NSError *error = [self.valet migrateObjectsMatchingQuery:query removeOnCompletion:NO]; XCTAssertNil(error); // Passes for iOS, fails for macOS Running the test on a macOS target resulted in a VALMigrationErrorKeyInQueryResultInvalid error, indicating that the keychain entry’s kSecAttrAccount value was nil. Recall that any keys that are candidates for migration must have a value in the kSecAttrAccount field. Inserting an entry containing a kSecAttrAccount Data object on macOS results in the kSecAttrAccount value not being persisted if it is not the correct type. iOS, on the other hand, persists the kSecAttrAccount value regardless of whether it is the correct type or not. Inspecting the keychain entry after inserting keychainData in both operating systems confirms that the kSecAttrAccount data value is stored successfully on iOS but not on macOS: macOS: (lldb) po keychainEntry { /** MISSING ACCT VALUE !!! **/ cdat = "2017-08-12 11:37:39 +0000"; class = genp; // kSecClass labl = "Keychain_With_Account_Name_As_NSData"; mdat = "2017-08-12 11:37:39 +0000"; svce = "my_identifier"; // kSecAttrService "v_Data" = <666f6f>; // kSecValueData "v_PersistentRef" = <...>; } iOS:(lldb) po keychainEntry { /** PERSISTED kSecAttrAccount VALUE !!! **/ acct = <666f6f>; agrp = "<*>.com.squareup.Valet-iOS-Test-Host-App"; cdat = "2017-08-12 11:35:45 +0000"; mdat = "2017-08-12 11:35:45 +0000"; musr = <>; pdmn = ak; svce = "my_identifier"; // kSecAttrService sync = 0; tomb = 0; "v_Data" = <666f6f>; // kSecValueData "v_PersistentRef" = <67656e70 00000000 00000325>; } Why does iOS persist the kSecAttrAccount value while macOS does not? A developer at Apple posted this reply to a thread inquiring about this very issue: There are two implementations of the SecItem API: the iOS implementation, which is also used for iCloud Keychain on OS X the OS X implementation, which is a compatibility shim that bridges over to the traditional keychain Note Both are available in Darwin. Search the Security project for SecItemUpdate_ios and SecItemUpdate_osx to see how this works under the covers. The iOS implementation is based on SQLite. If you’re familiar with SQLite you’ll know that it is, at its core, untyped, and thus the iOS implementation has to do its own type conversion. This is why that implementation is somewhat forgiving on the types front. However… IMPORTANT I strongly recommend that you use the types specified in the header. Other types do work but that’s an accident of the implementation rather than a designed in feature. Moreover, as there are two implementations, it’s not always the case that these accidents line up. I realise that this rule is broken by various bits of Apple sample code. In short, these two factors contribute to this difference in behavior: - iOS and macOS do not use the same data store for their keychains. - The keychain implementation is different for both operating systems. iOS and macOS Use Different Keychain Data Stores SQLite, which is the data store backing the iOS keychain, uses type affinity rather than rigid typing, meaning a column’s datatype in SQLite is more a recommendation rather than a requirement. From the SQLite documentation: Any column can still store any type of data. It is just that some columns, given the choice, will prefer to use one storage class over another…A column with TEXT affinity stores all data using storage classes NULL, TEXT or BLOB. This explains the observed behavior in iOS, which allowed an NSData object to be stored in the kSecAttrAccount column, even though the recommended data type is a string. Differing Keychain Implementation on iOS and macOS iOS and macOS both use the SecItemUpdate function to update keychain entries. Its implementation can be found in SecItem.cpp, which is open sourced as part of Apple’s Security framework: It’s apparent from the can_target_(ios|osx) boolean values and SecItemUpdate_(ios|osx) functions that the codepaths are different for each operating system. For interested readers, SecItemUpdate_ios is aliased via a preprocessor macro to the SecItemUpdate function in SecItem.c. SecItemUpdate_osx is defined in SecItem.cpp. The entire Security framework can be downloaded here. Summary of Keychain Findings To summarize what we’ve learned about keychains as a result of this investigation: - Both iOS and macOS do not return an error if a Data object is stored in a keychain entry’s kSecAttrAccount field. - iOS persists the kSecAttrAccount value, regardless of type, while macOS does not. - It’s clear that macOS keychains are persisted in a type safe database, or type checks have been added to the macOS codepath that do not exist on iOS. Fixing the Test Returning back to the failing Valet test, in order to account for the differing keychain behavior across platforms, the following: XCTAssertNil(error); // Passes for iOS, fails for macOS needs to be replaced with: # if TARGET_OS_IPHONE XCTAssertNil(error); # elif TARGET_OS_MAC /** iOS allows kSecAttrAccount NSData entries, while OSX sets the value to nil for any non-string entry. */ XCTAssertEqual( error.code, VALMigrationErrorKeyInQueryResultInvalid ); # else [NSException raise:@"UnsupportedOperatingSystem" format:@"Only OSX and iOS are supported"]; # endif With this change the test now passes, and a link to the final Valet fix can be found here on Github. Communicating Findings to the Developer After completing our investigation, we relayed to the developer that the crash was due to an NSData object being stored against a keychain entry’s kSecAttrAccount attribute. They responded that their app had always stored kSecAttrAccount string values, but were using a third-party library that was also storing credentials containing kSecAttrAccount values in the keychain. It was highly probable that this library was inserting the offending keychain entry. The library, however, was critical to the app’s functionality and could not be removed. It’s common to use the app’s main bundle identifier as the kSecAttrService attribute value, which acts as a namespace within the keychain. The developer’s code and the library were most likely storing entries under the same namespace. The code used in the developer’s app to migrate their keychain entries to Val ) The query above returns all generic passwords stored under the app’s bundle identifier, and in this case, it included an entry stored by a third party library with a kSecAttrService Data object. Updating the developer’s app to a version of Valet containing the fix resolved the issue for them. I’d like to thank Eric Muller and Dan Federman, the creators of Valet, for helping to debug and fix this issue.
http://engineeringjobs4u.co.uk/uncovering-inconsistent-keychain-behavior-while-fixing-a-valet-ios-bug
CC-MAIN-2018-43
en
refinedweb
Allegload. AllegroGraph 6.3.0 AllegroGraph Server: General Changes Catalogs may no longer be named root or system. The names root and system are reserved as catalog names. root interferes with the default catalog (which is referred to as the root catalog) and the system catalog is used for auditing and other system data. If you have catalogs named root or system, contact allegrograph-support@franz.com if you have difficulty upgrading to 6.3.0. 'agtool archive' no longer accepts --catalog, catalog now part of repo name The agtool archive program (see Backup and Restore) now specifies catalogs and repositories together using the catalog:repository format. The --catalog option to agtool archive is no longer accepted. Rfe15210 - Log which user starts a session When a session is started, AllegroGraph now logs the name of the user that started the session. (Although not documented, the AllegroGraph log has always said when a session was started and stopped.) A sample log message: [2017-09-21T15:59:48.028 p21266 back] Session on port 43704 started by user "test". Rfe15194 - Support XQuery and XPath Math Functions AllegroGraph now supports the following XPATH math functions - acos - asin - atan - atan2 - cos - exp - exp10 - log - log10 - pi - pow - sin - sqrt - tan See for details. Rfe15132 - agtool archive list can operate on backup directories agtool archive list now can operate not only on single archive file but can be given backup directory as source. In that case it displays details about every repository inside backup. Additionally it can be given the optional DB-IN-ARCHIVE argument to display details about single repository in ARCHIVE even if other repositories are present. New option --summary allows agtool archive list to only list repositories inside ARCHIVE, without printing details about every one. agtool archive is described in the Backup and Restore document. Rfe12362 - CORS support CORS (Cross-Origin Resource Sharing), if enabled, allows scripts run on a web page from one server to make HTTP requests to the (different) server where AllegroGraph is running. CORS is not enabled by default. You enable it with various donfiguration directives described in the CORS directives section of the Server Configuration and Control document. Rfe12201 - Switching a repository between commit and no-commit modes This now requires superuser privileges. Bug24927 - Turtle serializer tried to escape capital A The turtle serializer could try to escape the letter capital A which could lead to invalid output. This has been corrected. Bug24910 - Unhelpful Turtle parser error message Improved the error message the Turtle parser produced when the object of a triple was missing. Bug24905 - Audit log does not display in AG 6.2.3 The Audit log display feature in AGWebView was inadvertently broken in release v6.2.3. This has been corrected. Bug24842 - Custom services should not use user namespaces Custom services were being executed in the context of the current user rather than in the context of the code that defined the service. This has been corrected. Bug23757 - If a script file generates errors (warnings) while loading, don't start the session. Previously, any errors or warnings that occurred when compiling a Lisp script were written to the AllegroGraph log file and the session would still start. Now, any problems are still written to the log file but the session will not start and the text of the problem(s) will be returned in the HTTP response. HTTP API Bug20845 - /version/info web service removed The /version/info web service caused the server to report detailed internal information about the server and the machine on which the server was running. It was decided that the information was not in fact useful and could expose security vulnerabilities. The URL now returns 404. SPARQL Defining your own magic properties AllegroGraph lets you define your own Magic Properties using the Lisp API. The Defining Magic Properties Tutorial describes how to do this and provides numerous simple examples. Rfe15182 - All of AllegroGraph's query warnings are now documented. See the Query Warnings section of the SPARQL Reference document. Rfe15152 - Propagate impossible query conditions outward in the plan When a query contains a triple pattern that cannot possibly match, it will mark the pattern as impossible. The impossible marking will propagate outward (e.g., from the pattern to the BGP to the enclosing JOIN and so on). This allows queries with typos or other errors to fail quickly. Rfe15135 - Warn if any SPARQL type errors occur during query execution SPARQL will now log a warning if any type errors are encountered during query execution. For example, it is an error to try and cast the string '1.23e4' to an integer because the lexical representation of an XSD integer consists of "a finite-length sequence of decimal digits (#x30-#x39) with an optional leading sign." (cf). Therefore, a query like: select * { bind(xsd:integer('1.23e4') as ?int) } will leave ?int unbound. Previously, this type error would have been surpressed. Now, AllegroGraph will issue a warn-sparql-type-errors warning. Rfe15112 - Improve cancelQueryOnWarnings option The cancelQueryOnWarnings query option now cancels queries immediately for most plan or query warnings. This option defaults to off but can be turned on by placing QueryOption cancelQueryOnWarnings=yes in the AllegroGraph configuration or by prefixing a SPARQL query with the PREFIX PREFIX franzOption_cancelQueryOnWarnings: <franz:yes> Rfe12249 - Improve magic predicates for Solr and MongoDB The magic predicates that interface AllegroGraph's SPARQL engine with Solr and MongoDB have been improved. The three magic predicates: - - - can now take an optional second parameter in their list of subject arguments. This second parameter will be bound to the matching objects of any triples found by the search in the case of solr:matchID and to the associated linking identifier in the cases of solr:match and mongo:find. For example, the following query will bind ?s to the subjects found and ?text to the object: prefix solr: <> select ?s ?text { (?s ?text) solr:matchId 'boatswain' . } Rfe11193 - Improvement performance of SPARQL expression compilation The SPARQL query expression compiler is now approximately 45 times faster than before. This means that queries that use many expressions (e.g., in FILTERs, BIND, ORDER BY, etc.) will perform more efficiently. Note that aggregation expressions still use the older and slower compiler. Rfe11678 - Full-scan warning should take dataset into account AllegroGraph signals a warning when a SPARQL query needs to perform a full scan and the store contains more than one million triples. AllegroGraph was incorrectly signaling this warning when the query was executed against a SPARQL dataset. This has been corrected. Bug24898 - Fix problem with magic SNA centrality predicates and neighbor caches Some queries that used AllegroGraph's SNA centrality magic predicates and neighbor caches could cause an error. This has been corrected. Bug24861 - Sparql parser confused by "?o<='foo'^^<...> " The SPARQL parser was incorrectly treating the inequality operator as the start of a URI. This has been corrected. Bug24681 - Parsing should fail if a blank node label is used in more than one BGP AllegroGraph did not signal a parsing error if a SPARQL query used the same blank node in more than one basic graph pattern (BGP). This has been corrected. AGWebView Rfe15198 - Index management page An index management page has been added to AGWebView. The new page can be accessed from the Repository Overview page by following a link labeled 'Manage triple indices'. The list of indices has been removed from the overview page and the 'purge deleted triples' link has been moved to the index management page. See The Manage triple indices page for more information. Rfe14563 - Repository reports A set of reports describing disk usage and other repository properties has been added to WebView. The reports can be accessed through links on the repository overview page. See the Reports section in the Repository Overview page description for more information. Rfe13419 - Add warmup triple-store to AGWebView Add new option Warmup store in Repository Control section in repository overview page. See the description of the Repository Overview Page in the WebView document. Rfe11915 - Creating freetext index prefills stop words with default ones. The AGWebView dialog for creating a freetext index now starts with a default selection of stop words (rather than starting with no stop words and the option to add a default selection). See the Free-Text Indices page in the WebView document for more information. Rfe11256: Import RDF statements directly. It is now possible to import RDF statements directly by typing or pasting them into a text box in AG WebView. See the Load and Delete Data section of the Repository Overview page description. Bug24824 - Spurious warning for ASK queries in AGWebView AllegroGraph WebView was showing a query warning for ASK queries when the "Limit to 1000 results" checkbox was checked. This has been corrected. Bug24480: "Ignore errors" option was ignored when loading triples from server file The "Ignore errors" error handling option was not being honored when loading a server-side file using AG WebView. This has been fixed. Bug23607 - Improve accuracy of query timing results in AGWebView Previously, the timing results in AGWebView represented the time it took to find the first result. Now, it represents the time it takes to find all results. The total time to display may still be longer as it will also include the time it takes the web browser to render the returned data. Changes to the Lisp API Remove :plain datatype mapping The :plain datatype mapping converted typed string literals into the equivalent plain literal. E.g., it would convert "bear"^^xsd:string into "bear". :plain mappings are no longer necessary because AllegroGraph handles typed strings more efficiently automatically. Mappings are specified with setf and datatype-mapping and predicate-mapping. Note that any existing datatype or predicate mappings of type :plain will be automatically removed from the store. Bug24857 - path-finding over HTTP fails when paths include literals If an SNA path included typed literals, then a remote-triple-store would convert them into the wrong UPI type leading to UPI not in string-table errors. This has been corrected. Prolog No significant changes. Documentation No significant changes. nD Geospatial No significant changes. AllegroGraph 6.2.3 AllegroGraph Server: General Changes Rfe15101: Restore tlmgr to tlmgr.restored The tlmgr file contains information about replication jobs. Since the existence of replication job information can result in transaction log files accumulating, the tlmgr file is no longer restored as is. Now it is renamed to tlmgr.restored during a restore. After a restore, replication jobs must be reestablished by the user. Rfe12408 - Add graph filtering support to agmaterialize agtool materialize can now limit materialization to a specified list of graphs via the --graph (short form -f -- note that -g specifies the default graph for inferred triples) command line argument. The inferred graph (specified with --inferred-graph or -g) will always be included in the list. See the Materializer document. Bug24729 - Spaces in filenames prevent server-side file import. Importing a server-side file with spaces in its path would fail with an error message about illegal characters in a URI. This issue affected both WebView and the HTTP API, but not agtool load and has been corrected. HTTP API Bug20772 - Loading JSON format may fail to commit all triples Using the 'commit=number' parameter when importing statements through the HTTP API could result in some statements not being committed. The issue only affected JSON imports and has been corrected. SPARQL Rfe15077 - Signal a query warning when CONSTRUCT tries to generate invalid triples AllegroGraph will now signal a query warning if any triples produced by the CONSTRUCT template are invalid. Formerly, these triples were silently discarded. Rfe15075 - Improve parsing efficiency of large USING [NAMED] datasets The SPARQL parser now parses UPDATE commands with very large dataset specifications using the USING and USING NAMED clauses more efficiently. Rfe15074 - SPARQL parser has non-linear behavior for large VALUES clauses Improved the SPARQL parser performance for queries with 1000s of elements inside of VALUES clauses. Rfe15061 - Improve efficiency of SPARQL queries with large datasets AllegroGraph now determines at query time whether it would be more efficient to issue getStatements calls for each graph in a dataset or to issue a single getStatements call and filter the results to include only graphs in the dataset. Depending on the triple-store and the dataset, this can be significantly more efficient. Rfe15006 - Improve GRAPH and PROPERTY PATH query planning and execution AllegroGraph now handles property path queries inside of GRAPH clauses more efficiently when the graph is a variable and the clause contains no property path queries that use any of the ZeroOrMorePath, OneOrMorePath, or ZeroOrOnePath operators. Rfe14450 - Improving cross product warning The warning issued when AllegroGraph detects a cross product in a SPARQL query is now more descriptive. Bug24779: Constructed triples should not allow blank nodes in the predicate postion AllegroGraph was incorrectly allowing CONSTRUCT to produce blank nodes in the predicate position of triples. This has been corrected. Bug24762 - SPARQL DELETE/INSERT/WHERE and Chunk processing could fail If the chunk size was too small, a SPARQL DELETE/INSERT/WHERE command could fail to correctly process triples. Bug24755 - Problems in intervalAfterDatetime and intervalBeforeDatetime The intervalAfterDatetime magic property could fail to find solutions when using a constant datetime rather than a variable. Additionally, both intervalAfterDatetime and intervalBeforeDatetime could fail to find solutions when the intervals were defined on explicit times rather than using named time points. Bug24709 - FILTER EXISTS and unbound variables on the RHS AllegroGraph used to return incorrect results for FILTER EXISTS expressions when the RHS contained any unbound variables. This has been corrected. Bug24704 - Improve strategy used to find unique values in a store AllegroGraph uses several techniques to determine all of the unique values stored for subject, predicate, object, or graph. Depending on the quads in the store and the shape of the query, one of these techniques can be much more efficient than the others. It was possible, however, for AllegroGraph to choose the wrong technique in some cases. This has been corrected. Bug24665 - SPARQL queries with :dateTime inequality filters could lose results Query results could be lost if a SPARQL query used a non-inclusive inequality filter on xsd:dateTimes or xsd:times. This has been corrected. Bug24455 - Incorrect results when join variables might be unbound AllegroGraph would sometimes return incorrect results from joins where a part of the join key was unbound. This has been corrected. Bug24215 - Unbound values in join keys When merging solutions during a join, AllegroGraph would sometimes not produce bindings for variables that are only bound on the RHS of the join and are a part of the join key. For instance when running this query: select * { { values(?x ?y) {(1 undef)} } { values(?x ?y) {(undef 2)} } } AllegroGraph would return (?x = 1, ?y = undef) instead of the expected (?x = 1, ?y = 2). This has been corrected. AGWebView No significant changes. Changes to the Lisp API Bug24796 - intern-resource error on future-part with unregistered namespace abbreviation A call to intern-resource on a future-part whose namespace abbreviation was not defined would signal a bus error. This will now signal a condition of type cannot-resolve-namespace-for-future-part-error. Prolog No significant changes. Documentation No significant changes. nD Geospatial No significant changes. AllegroGraph 6.2.2 AllegroGraph Server: General Changes Rfe14926: Reserve parentheses characters Parentheses are no longer allowed in triple store or catalog names. These characters are reserved for use in triple store specifications. Bug24700 - Unfriendly error message when session ports are unavailable Previously, if a request to create a session cannot be completed due to lack of an available session port, an unfriendly error message was reported. This has been fixed and now a clear error message is returned. Bug24687 - Deleting a tlog when free list is full throws error. Fixes a bug introduced in v6.1.4 where a tlog that was deleted due to the free list being full would be repeatedly deleted by AllegroGraph despite it no longer existing. This bug could prevent the freeing/deletion of old tlogs, causing them to accumulate. Log entries that indicate this bug, would be similar to the following: [2017-05-22T13:49:21.800 p17318 tlarchiver] Store "bug24687", free list is full, will just delete "tlog-2477c96f-0579-ba54-b60b-7083f2293fa8-27" ... later ... [2017-05-22T13:51:34.996 p17318 tlarchiver] Store "bug24687", free list is full, will just delete "tlog-2477c96f-0579-ba54-b60b-7083f2293fa8-27" [2017-05-22T13:51:34.999 p17318 E tlarchiver] Store "bug24687", Attempt to delete tlog-27 gave error There is no file named /dbs/bug24687/tlog-2477c96f-0579-ba54-b60b-7083f2293fa8-27: No such file or directory [errno=2].: No such file or directory [errno=2]. Bug24673 - FTI merger can create empty data file The FTI merger process sometimes fails and logs an error message similar to the following: Process 'repo FTI Merger' raised: opening #<mapped-file-simple-stream #P"/agdata/data/rootcatalog/repo/fti-2-chunk-114-data" mapped closed @ #x100054959b2> resulted in error (code 22): Invalid argument. this has been corrected. Bug24663 - TriG parser uses the wrong graph on the first blank node In TriG nested blank nodes can be specified in a graph other than the default graph. When the TriG parser encountered the first nested blank node inside a non-default graph the graph for the triples derived from the parsing of that nested blank node would incorrectly be set to the default graph. This has been fixed to use the correct graph. Bug24642: Incorrect text index query warning AllegroGraph v6.1.4 added a check and warning for text index queries which could not possibly match. Under certain circumstances this check would warn even though the query could match. This has been corrected. Bug24623 - SNA path finding in remote-triple-stores did not suppport blank nodes If a path contained blank nodes, then the SNA path finding functions would signal an error if used on a remote triple-store. This has been corrected. Bug24615 - Static filter function not re-esablished when attaching to a store Previously when opening an existing triple store, the static attribute filter was not reestablished. This has been fixed. Bug24588 - load-nqx does not properly track line numbers When an error occurs loading extended triples via load-nqx and agload the incorrect line number is reported. This has been corrected.24545 - Store specification containing a remote URL could fail A "missing value for argument db.agraph::user" error would occur when starting a session using a store specification containing a remote URL without authentication information. This has been fixed. Bug24342 - load-turtle loses default-attributes for RDF collections Previously the Turtle parser would fail to include attributes when parsing an RDF collection such as: prefix : <ex://foo#> :a :p (1 2 3) . This has been fixed. Bug21595 - agload bad error message when given a filespec that does not exist The agload program reported misleading error messages when attempting to load files that did not exist or contained wildcards. This has been improved by reporting an accurate and more useful error message. See Data Loading for more information on agload. HTTP API Rfe14967 - Ensure that the statements REST API uses namespace abbreviations The statements REST API was not using user defined namespaces to abbreviate RDF resources in formats that supported them like turtle or TriG. This has been corrected. Rfe14930 - Improve error message when a file cannot be accessed The agload process for loading a local file via the HTTP interface reported misleading and inaccurate error messages when the local file was inaccessible. This has been improved by returning an accurate and useful error message in the HTTP response. SPARQL Rfe14978 - Improve efficiency for SPARQL datasets with many graphs AllegroGraph now operates on SPARQL datasets with many graphs more efficiently. Rfe14975 - SPARQL dataset validator is inefficient for large dataset specifications The SPARQL engine validates any dataset specification before it begins execution. This change improves the performance of the validation which helps query speed especially when the dataset has more than a few thousand graphs. Rfe14974 - SPARQL parser is inefficient for large dataset specifications SPARQL queries with thousands of FROM and FROM NAMED clauses in their dataset description now parse much more efficiently. Rfe14931 - Improve how property path queries implement filtering SPARQL Property Path queries now handle FILTERs more efficiently. Rfe14899 - Warn if query LIMIT/OFFSET and externally imposed LIMIT/OFFSET conflict Pursuant with the changes made for Rfe14597 (described below), AllegroGraph will now warn if a query's internal and external LIMITs differ or if a both an internal and external OFFSET are supplied. Rfe14597 - Reduce confusion between internal and external query limit and offsets Previously, any externally imposed LIMIT or OFFSET would override the LIMIT or OFFSET in the query string. This was most evident in AGWebView because the web interface uses a dropdown selector to pick the LIMIT and this LIMIT overrode any specified in the query. Now, AllegroGraph will: - use the minimum of the internal and external LIMITs. - use the sum of the internal and external OFFSETS. Bug24684 - Property Paths and MINUS could interact badly It was possible for queries that used MINUS and Property Paths to generate incorrect answers. This has been corrected. Bug24610 - Trivial equivalence constraints cause an error. Queries containing trivial equivalence constraints, such as filter(?x = ?x), used to cause the following error: missing value for argument db.agraph.sbqe::var2 This has been corrected. Bug24609 - User attributes set via the SPARQL prefix could be ignored Depending on how a query was executed, any user attributes set via the userAttributes SPARQL query prefix option could be ignored. This has been corrected. Bug24608 - Spurious join results. When performing joins where the left hand side contained unbound values (resulting from the use of OPTIONAL or UNDEF keywords) AllegroGraph could sometimes return the same solution row multiple times. For instance, the following query: SELECT * { { values (?a ?x ?y) {(0 1 2)} } optional { values (?a ?y ?z) { (0 2 3) (0 undef 4) (undef 2 5) } } } Would return 4 results, including two copies of the (0 1 2 5) row. This has been corrected. Bug24599 - concatenated-cursors don't support extended triples When a SPARQL query which uses attributes-related magic properties is issued against a graph-filtered triple store, an error like the following can occur: Executing query failed: #<concatenated-cursor @ #x1000732dc52> does not support fat-triples This has been fixed.". AGWebView Rfe14968 - Remove the deprecated planner option from AGWebView Remove the long deprecated planner option from AGWebView's query page. This option has had no effect for some time. Rfe13239 - Provide "no limit" option for query results in AGWebView Subject to the resources of the web browser, AGWebView can now return an unlimited number of results from a query. By default, AGWebView will display 1000 results or use the limit specified by the query (if it is smaller than 1000). Because of this, the More Results button has also been removed. See the WebView document for information on AGWebView. Bug24616 - N-Triples parser errors mention NQX When trying to load a .nt or .nq file through AGWebView and a parsing error occurs the error message states that a "NQX" parsing error occurred when it should have stated it was an N-Triples (for .nt) or N-Quads (for .nq) parsing error. This has been fixed to report the correct format type. Changes to the Lisp API Bug24669 - serialize-nqx does not work on remote-triple-stores Serialization of attributed triples was not working correctly for remote-triple-stores. This has been corrected. Bug24658 - load-nqx fails on remote-triple-stores An error would occur if no default attributes were passed to load-nqx on a remote-triple-store. This has been corrected. In order to reduce future confusion, the :attributes argument to the various load functions in the Lisp API has been changed to :default-attributes. Note this is an incompatible change. Code using :attributes will signal an error. Bug24549: open-store-from-specification does not work Previously, attempting to open a triple store using a URL could fail like so: (open-triple-store "") Error: Invalid character in triple-store name "//localhost:10443/repositories/kennedy". The name may not contain these characters /, \, ~, :, Space, $, {, }, <, >, *, +, |, [, ]. This has been fixed. Prolog No significant changes. Documentation No significant changes. Python Client Bug24680 - Tonativestring is broken on Python 2 The Python client sometimes failed while processing values with non-ascii characters, showing the following error message: UnicodeEncodeError: 'ascii' codec can't encode characters in position ??: ordinal not in range(128) This has been corrected. Bug24443 - getStatements sometimes breaks for raw Python values The getStatements function can be called with a raw Python value, instead of an RDF term, as its third argument (object filter). For instance, the following call: conn.getStatements(None, somePredicate, True) would retrieve all triples with the predicate of somePredicate and the object equal to "true"^^xsd:boolean. This mechanism did not work correctly if the second argument (predicate filter) was None. In that case an AttributeError exception would be raised. This has been corrected. Java Client Rfe14990 - Provide methods to download query results The Java client now contains methods to download and save query results to a file. The new methods are: - AGRepositoryConnection.downloadStatements - AGRepositoryConnection.streamStatements - AGQuery.download - AGQuery.stream Both methods have multiple overloads, allowing for the output path and desired output format to be passed in a variety of ways. Rfe14970 - Speed up AllegroGraph java client connection pools. Depending on the connection pool configuration, many unnecessary requests could be made to the AllegroGraph server when borrowing/returning connections to/from a connection pool. These calls have been cleaned up, resulting in improved connection pool performance. Bug24648 - inferredGraph parameter missing from AGMaterializer When materializing triples through the Java API it is now possible to specify the graph into which the generated triples will be added. Bug24644 - Fix TriG import via the Java client. The AllegroGraph Java client would fail when attempting to import TriG data due to the content-type "application/x-trig" being unsupported in the server. AllegroGraph now supports TriG data imported with a content-type of either "application/trig" or "application/x-trig". nD Geospatial No significant changes. AllegroGraph 6.2.1 AllegroGraph Server: General Changes Rfe14881 - Cross attribute comparisons AllegroGraph 6.2.0 added a restriction against using an attribute filter operator with two different types. An attempt to perform such an operation would result in a "Comparisons between attributes of different types is not supported" error message. This restriction has been removed. Note, however, that ordered comparisons between different attribute types is still not allowed since there is no defined ordering between two ordered attributes. Triple attrinutes are discussed in Triple Attributes. Bug24567 - Initial superuser account file owned by root if configure-agraph run as root Previously, if AllegroGraph was installed, configured, and started as root, the directories and account file for the initial superuser remained owned by root. This resulted in the following error the first time the superuser made access to AllegroGraph: mkstemp failed: Permission denied [errno=13] This has been corrected. Bug24557 - Fix bug restoring repositories with a non-default stringTableSize. When restoring repositories with a non-default stringTableSize directive to v6.2.0, a bug in an upgrade step would cause the repo's string table to be set to the default size of 16M. If the stringTableSize was larger than this default, the string table would be truncated, causing data loss and possible corruption. This bug has been fixed.24342 - load-turtle loses default-attributes for RDF collections Previously the Turtle parser would fail to include attributes when parsing an RDF collection such as: prefix : <ex://foo#> :a :p (1 2 3) . This has been fixed. HTTP API No significant changes. SPARQL". Bug24529 - Some queries using VALUES produce wrong results. Some queries using the VALUES keyword and joins or unions produced incorrect results. More specifically some rows were missing from the result. Two examples of such queries are: select ?x { { values(?x) {(1)} } union { values(?x) {(2)} } } and select ?x ?y ?z { { values (?x ?y ?z) {(1 2 3)} } { values () {()} }} The first one would produce only a single result (2), while the second one would not produce any results. This has been corrected. Bug24528 - Wrong join results when using the CAAT strategy When using the chunk-at-a-time execution strategy some queries that performed left joins with the identity set would produce spurious results. One such query is: select ?a ?b ?c { { ?a ?b ?c } union { ?c ?b ?a } optional { bind(42 as ?x) } } This has been corrected. AGWebView No significant changes. Changes to the Lisp API Bug24545 - parse-remote-store-specification returns inconsistent results A "missing value for argument db.agraph::user" error would occur when starting a session using a store specification containing a remote URL without authentication information. This has been fixed. Prolog No significant changes. Documentation No significant changes. Python Client Rfe14695 - Replace pycurl with requests. The HTTP library used by the AllegroGraph Python client has been changed from pycurl to requests. This change means that the Python client no longer depends on any libraries that require native extensions. This results in a simpler installation process, as it is no longer necessary to have a working C development environment on the target machine. Note that pycurl will still be used if it is installed, since it might offer better performance in some scenarios. Bug24582 - AGRAPH_PROXY variable not used Proxy settings specified through the AGRAPH_PROXY environment variable used to be ignored. This has been corrected. Java Client No significant changes. nD Geospatial No significant changes. AllegroGraph 6.2.0 AllegroGraph Server: General Changes Rfe14846 - Revised triple attribute filter operators There are extensive changes to attribute filter operators, allowing better control when comparing user attributes with triple attributes. New operators include or, not, empty, overlap, attributes-overlap, subset, superset, equal, attribute-set\<=, attribute-set<, attribute-set=, attribute-set>, attribute-set>= The existing operator attribute>= has been renamed attribute-set>= (the old name is still accepted but users are advised to switch to the new name). See Triple Attributes for information on attribute filter operators. Rfe14806 - Audit attribute and static filter definitions If auditing is enabled an audit record is made for defining an attribute and for defining the static attribute filter. See the Auditing document for more information on auditing. The rdf:type of the new records are: <> <> <> Rfe14773 - Changes to agraph-control The agraph-control program that starts and stops AllegroGraph has a new command restart, which requests that the server shut itself down, if running, and then start back up again. The reload command, which reloaded the configuration file, is no longer supported. See the agraph-control section of the Server Configuration and Control document. Rfe14797 - Add permissions for user-attribute specification Added user permissions user-attributes-header (if true, user can send attributes via HTTP as the value of the x-user-attributes header), and user-attributes-prefix (if true, user can specify the user attributes via the SPARQL prefix franzOption_userAttributes. See the Managing Users section of the WebView document for information on user permissions. Rfe14733 - String comparison operator for attribute filter Added the attribute= operator which can be used to compare attributes to other attributes, attributes to strings, or strings to strings. See Triple Attributes for information on attribute filter operators. Rfe14671 - New configuration option StringTableCompression The StringTableCompression configuration option allows specifying the compression type (including no compression) for the string table. The default is lzo999 compression. See Server Configuration and Control for information on configuration options. Rfe14661 - agtool combines many command-line programs The *agtool program replaces the programs agraph-backup, agmaterialize, agquery, agexport, agload, agraph-recover, agraph-replicate, and convertdb. agtool takes as its first argument the task to perform (usually indicated by a shortened version of the old program name but sometimes quite different -- so agraph-backup becomes archive and convertdb becomes upgrade) and then the various argument and options for that task. For example, in order to load the Ntriples file mytriples.nt into the mystore database in the server listening on port 10035, in earlier releases you ran this command: % agload --port 10035 mytriples.nt mystore now you use agtool as follows: % agtool load --port 10035 mytriples.nt mystore (Other argument/option combinations will do the same as those shown.) For most tasks, the arguments and options are the same as with the earlier tools, with the following differrences: agraph-backup (archive), agmaterialize (materialize), The --ext-backup-command is no longer supported. Otherwise arguments and options are the same. agraph-recover (recover), convertdb (upgrade), these changes: -c accepted for --catalog, -p accepted for --port. agload (load), this change: -u not accepted as synonym for --base-uri. agquery (query) and agexport (export), this change: --username is now --user and -u is a synonym of --user. agraph-replicate (replicate), this change: -c accepted for --catalog, -u accepted for --user. See the agtool General Command Utility document for more information and links to all the documentation for the individual tasks. Rfe13254 - Tools for saving and loading metadata Metadata, that is attribute and static filter definitions, can be saved from a database and loaded into a database. The methods for doing so include using the new --save-metadata option to agtool export (see Data Export) and the new --metadata option to agtool load (see Data Loading), along with new Lisp functions and REST API, discussed under Lisp API and HTTP API below. Bug24425 - install-agraph no longer creates a log/ directory install-agraph would incorrectly create a log/ directory alongside the lib/ and bin/ directories it creates. It no longer does so, leaving the creation of the log directory to tools like configure-agraph, agraph-control, or the agraph server itself. HTTP API Rfe13254 - load/save attribute metadata GET and POST metadata commands allow obtaining metadata (attribute and static filter definitions) from a repo and loading them into another repo. See the Attributes section of the REST/HTTP interface document for details. SPARQL Rfe14812 - Support attributes in INSERT DATA and INSERT/WHERE The syntax of SPARQL INSERT DATA and DELETE/INSERT/WHERE commands has been extended to support the fine-grained specification of attributes on a per-triple basis. See Triple Attributes for more information on attributes. Rfe14726 - New prefixes for attributes The defaultAttributes SPARQL query option specifies the attributes to be associated with each triple added via the INSERT DATA, DELETE/INSERT/WHERE, or LOAD commands. The userAttributes query option specifies the user attributes to be used when processing the query. See Triple Attributes for more information on attributes. Rfe13305 - Old-style SPARQL GEO queries are no longer supported Because AllegroGraph's magic properties support a richer range of geospatial features than the old-style SPARQL GEO syntax, the old syntax was deprecated in 2014 and has now been removed from the code. Any queries that use the old-style syntax will need to be rewritten. Magic properties support both the deprecated 2D geospatial encodings and the newer nD ones. Bug24481 - Wrong results for VALUES with a single variable When processing a VALUES clause in a SPARQL query AllegroGraph would return incorrect results if there was exactly one variable bound by that clause and the list of values contained either: An UNDEF value: it would be silently ignored (but other values would still be used). Duplicates: each value will be matched exactly once, regardless of the number of times it appears inside VALUES. This has been corrected. Bug24470 - Spurious results for SELECT DISTINCT on a bound variable When executing a query which tries to find unique values of an already bound variable like in the example below: select distinct ?s WHERE { bind(ex:A as ?s) . ?s ?p ?o . } AllegroGraph would ignore the binding and return all results that match the pattern. This error could be observed only if the single-set execution strategy was used. This has been corrected. AGWebView Rfe14576 - Make deleting namespaces easier on AGWebView Query page You can now delete namespaces on the AGWebView Query page. Rfe14569 - New vmstat-like server performance charts added to WebView A new menu item in the WebView Utilities menu brings up a page of performance charts on the server machine. The several charts show approximately the same machine load data printed by the vmstat command, charted for the past five minutes. AGWebView is documented in the WebView document. The new Server Load page is documented here in that document. Changes to the Lisp API Rfe14797 - New user-attributes-prefix-permission-p keyword argument to run-sparql The function run-sparql now have a user-attributes-prefix-permission-p keyword argument. It must be given a true value in order to run a SPARQL query including "PREFIX franzOption_userAttributes". Rfe14754 - Allow #\: to be used as the catalog and repository delimiter In addition to using the slash character (#\/), the colon (#\:) can now be used to designate a triple-store in a different catalog. I.e., the following are equivalent: (open-triple-store "test" :catalog "production") (open-triple-store "production/test") (open-triple-store "production:test") Note that the format AllegroGraph uses to display triple-stores will now be catalog:name rather than catalog/name. Rfe14705 - The triple-store-user (db.agraph.user) package uses the sparql package The triple-store-user package, also named the db.agraph.user package, now uses the sparql package which means that symbols in the sparql package like run-sparql can be used more easily as they do not need to be package qualified. Rfe14703 - :return-fat-triples keyword argument is now :return-extended-triples The return-fat-triples keyword argument to get-triples, get-triples-list, and freetext-get-triples has been renamed return-extended-triples. The new functions extended-triple-triple and extended-triple-attributes return the triple and the attributes (respectively) of an extended triple. The new function copy-extended-triple makes a copy of an extended triple. Rfe13254 - New functions to load/save attribute metadata The functions get-metadata. set-metadata, load-metadata, and save-metadata allow retrieving metadata (attribute and static filter definitions) from a store and adding metadata to a store. The metadata can be transmitted as a string or written/read from a file. Prolog No significant changes. Documentation No significant changes. Python Client No significant changes. Java Client No significant changes. nD Geospatial No significant changes. AllegroGraph 6.1.6 AllegroGraph Server: General Changes Rfe14785 - Allow default session idle timeout to be set in agraph.cfg Two new configuration parameters have been defined for controlling the default and the maximum allowed session idle timeout. DefaultSessionTimeout specifies the default timeout and MaximumSessionTimeout specifies the maximum timeout. See the description of those directives in the list of top-level directives in Server Configuration and Control for details. Bug24519 - Path-finding broken on remote-reasoning-triple-store Path-finding operations (such as depth-first-search) were broken for remote-reasoning-triple-stores since v6.1.4. This has been fixed. Bug24389 - Bus error when adding triples with unspecified graph to security-filtered-triple-store Previously, adding a triple to a security-filtered-triple-store could result in a bus error. This has been fixed. Bug24221 - String table configuration mismatch during restore. An error would occur when trying to restore a database that was backed up in AllegroGraph prior to version 6 if the catalog it was restored to had a string table size different than the default 16M entries. The error was: > String table configuration mismatch: Configured number of slots X does not match recovered number of slots (16777216) This has been corrected. HTTP API No significant changes. SPARQL Bug24463 - Binding produced index out of range error during query Some queries using BIND used to fail with the above error message. This has been corrected. AGWebView No significant changes. Changes to the Lisp API No significant changes. Prolog No significant changes. Documentation Bug24427 - Javadocs missing from agraph server distribution Previously the Javadocs for the AllegroGraph Java client were missing from the installed documentation set. This has been fixed. Python Client Rfe14803 - Support for decimal literals Literal objects constructed from Python's Decimal values will now have the default datatype of xsd:decimal. Rfe14694 - Drop support for cjson The Python client no longer uses the cjson library. Python 2.6 users should install the simplejson library instead to avoid a significant degradation in performance. Other versions of Python are unaffected. Bug24428 - Creating literals from time objects with timezone RepositoryConnection.createLiteral used to raise an error when passed a datetime.time instance that contains time zone information. This has been corrected - such calls will now return literals of type xsd:time with time value converted to UTC. Bug24405 - Non-ASCII characters in addData() The Python client used to raise an exception (specifically a UnicodeDecodeError) if the data passed to addData was a Unicode string containing non-ASCII characters. This has been corrected. Bug24322 - Python clients reads imported file into memory When uploading data to the server the Python client used to read the whole file into memory before sending it. This could cause out of memory errors for larger files. To avoid this issue, the client now sends files in chunks. This method requires only a limited amount of memory. Java Client No significant changes. nD Geospatial No significant changes. user-visible changes Server Server Changes Bug19296 - AllegroGraph can stop responding because all threads seem busy Under rare circumstances it was possible for AllegroGraph to stop responding. Such a hang was accompanied by "All threads busy, pause" log messages. This has been corrected. AllegroGraph 5.1.1 Because there are only a few changes in release 5.1.1, only headings with actual changes are listed. Server Server13501 - Implement HTTPNoProxy configuration directive This change adds a new global configuration directive, HTTPNoProxy. HTTP requests to domains that match one of the suffixes specified with HTTPNoProxy are never proxied. HTTPNoProxy can be specified multiple times. HTTPNoProxy localhost HTTPNoProxy mydomain.com Requests to 'localhost' and '127.0.0.1' are never proxied. Top level directives are described in Server Configuration and Control.23077 - Rare thread safety issue in shared back-ends Previously, when multiple requests for the same repository were being executed concurrently on the same shared back-end a very rare thread safety issue could cause the backend to die. This bug has been fixed. Bug22944 - Freetext indexer could mishandle encoded literals AllegroGraph's built-in freetext indexer was trying to index encoded literals (numbers, dates, times and dateTimes) and would sometimes misinterpret them leading to extremely slow indexing. This has been corrected by preventing the indexing of these literals. This means that freetext indices will not include values for encoded literals. Bug20556: Double-fault of trie-full condition Previously, when adding a word with many hits to a text index a "trie-full" error could occur. This has been fixed. HTTP Client Bug23064 - Post requests to the statements REST API could fail It was possible for POST requests to the statements REST API to corrupt the stream of bytes being sent. This has been corrected. SPARQL Rfe13683 - Improve variable count tracking during query execution AllegroGraph keeps better track of the estimated cardinality of bindings for variables during query execution which improves the execution engine's overall efficiency.23125 - Improve query planning for some property path queries SPARQL Property Path queries using the zero-or-more operator (i.e., *) could sometimes choose a sub-optimal plan because AllegroGraph was misestimating the size of the triple-patterns involved. This has been corrected. Bug23095 - Correct internal error that occurred during some Property Path queries Some Property Path queries could fail to run to completion because of an internal bookkeeping error. This has been corrected. Bug23081 - Certain SPARQL queries could cause cursor data-structure leakage Certain SPARQL queries could fail to correctly clean up all of the cursors that they used which could lead to backends becoming unresponsive if they were under heavy load. This has been corrected.388: Chunkprocessingmemory defaults to a larger value than memoryLimit The chunkProcessingMemory size defaulted to a larger size than the memoryLimit which meant that queries could fail when using chunk processing. This has been corrected.. nD Geospatial Bug22786, Bug22850: improve nD string writers and parsers nD data encoding and decoding supports various human-readable formats (ISO6709 and ISO8601) when such a format can be applied unambiguously to a particular nD encoding. If a human-readable format applies, it will be preferred on decoding (output) and accepted on input (encoding). Previously some correct externalization formats were rejected on input or not used on output. nD now correctly supports encodings with multiple ordinates of the same specialized type, i.e. lat, lon, alt, and time. Ordinates must of course all have different names. Use of such encodings will necessarily suppress human-readable externalizations since they are ambiguous. Error detection during nD encoding/decoding is improved along with error message text. AllegroGraph 5.0.1 Server Server Rfe12470 - Apply multiple constraints to the same triple pattern Previously, AllegroGraph could only apply a single logical FILTER expression to any given triple pattern. This meant that queries like: select * { ?s ?p ?o . filter( ?p = ex:pred1 && ?o > 10 ) } would optimize the query plan with either the ?p constraint or the ?o constraint but not both. This has now been improved so that queries like the above will use both constraints simultaneously and thereby run much more efficiently. Bug22803 - Fix bug in authorizationBasic query option The authorizationBasic query option was not being parsed correctly which meant that queries that tried to use it would fail. This has been corrected. Server Server Changes. AllegroGraph 4.13.2 The only change was to replace the SSL library with one that avoided the heartbleed bug. AllegroGraph 4.13.1 Server Changes Bug22255 - Concurrent full index optimization requests Previously, if index optimization was requested with a full index optimization already in progress, the database instance would log an error and hang. This bug has been fixed. AllegroGraph 4.13 Server Changes Rfe12645 - Add a command line interface to AllegroGraph's materializer AllegroGraph now includes a command line tool to materialize the inferred triples in a triple-store. The tool is named agmaterialize. It can currently be run only on the same machine where the AllegroGraph server is running. The command line interface is described in the Materializer documentation. Rfe12642: Improve handling of datafile free space information Previously, datafile free space information was maintained in a fixed sized data structure which could run out of room after a certain level of datafile fragmentation was reached. This has been corrected. Rfe12590 - Shared backend scheduling improvements Previously, when all shared backends were busy, further requests were held until a backend became available. Such pending requests were scheduled for execution in basically random order with latency up to 1s. With this change, pending requests are scheduled rather than essentially random. The scheduling is basically FIFO (first in, first out, meaning the oldest request is processed first), but FIFO may be violated for efficiency reasons. (Strict FIFO behavior is undesirable, because it greatly reduces AllegroGraph's ability to schedule requests optimally under stress.) Also with this change, scheduling latency is now minuscule and shared backends are now allowed to execute slightly more queries in parallel if all backends are busy, which allows greater efficiency in most cases. Rfe9523 - Logging improvements This change adds a centralized logging facility to the AllegroGraph daemon. The new logger guarantees no interleaving of messages and is able to deal with full filesystem conditions gracefully. Lisp direct clients have now the option of sending log messages to an AllegroGraph server with the with-log-routed-to-server macro. See the documentation for more. Previously, log messages were sent to the AllegroGraph server belonging to the first direct connection opened in the client. Bug22220 - Race condition while determining last written transaction log id Previously, a rare error could be encountered when starting replication or while switching roles, indicating that replication failed due to a missing transaction log that had not yet been written to disk. This problem has been fixed. Now, only transaction logs on disk are used to start replication. Bug22184 - Instance shutdown and replication In rare circumstances, when the instance belonging to a replicated repository was shut down, writing the transaction log could fail. This bug has been fixed. Bug22156 - date-string-to-upi doesn't handle zero time zones consistently The date-string-to-upi functor was not treating a Zulu time zone specified as "Z" consistently with the rest of AllegroGraph's date and time parsers. This has been corrected. Bug22111 - Database names may not contain special characters Because it is impossible to reliably use databases (also called repositories) in session specifications if their names contained characters that are used in those specifications, the list of characters which must not be used in names has been expanded. The following characters are now disallowed in database (repository) names: \ / ~ : Space $ { } < > * + [ ] | Previously, only the following four characters were disallowed in database (repository) names: \ ~ / : Note that this change only effects new databases (repositories); existing databases (repositories) with any of the newly invalid characters in their names will continue to be accessible. It may, however, result in difficulties when attempting to use them in more complex session specifications. Bug22100 - More strict, faster parsing of some xsd datetimes The parsers for xsd:date, xsd:dateTime and xsd:time used to be lenient and accept things like: 20131118 # missing separator 2013-11-18T12:27 # missing seconds and many other convenient but non-standard formats. The new parser only accepts what the XML Schema specification mandates which also allows it to be faster. See XSD Date and Time datatypes for more information on these datatypes. Bug22086 - Materializer deletes triples with the same subject, predicate and object The initial version of AllegroGraph's materializer deleted all triples with the same subject, predicate and object. This limitation has now been lifted: the materializer now only deletes triples with matching subject, predicate, object and graph. Bug22032 - Control whether or not external references are followed during RDF/XML import Previously, AllegroGraph ignored external references in RDF/XML source files. The Lisp API now includes a new keyword argument to load-rdf/xml: - :resolve-external-references : if true, the external references will be followed. The default continues to be nil. agload also has a new command-line switch. Supplying either of -x or --external-references will cause agload to follow external references. See Data Loading. Bug21953 - Allow import of triples to reasoning-triple-store A bug was introduced in v4.12 that prevented import of triples via load-* routines (e.g. load-rdf/xml) on reasoning-triple-stores, when the source argument was passed a URI. This problem has been fixed. This change also provides a better error message when attempting to import statements to a federated-triple-store, which is not allowed. Bug21656 - Cleaner daemon shutdown When the AllegroGraph daemon was stopped, despite the claim of doing a 'clean shutdown' in agraph.log, the shutdown of the instances was not clean and the corresponding repositories would potentially require crash recovery the next time they were opened. Also, such a shutdown lost any index optimization work performed since the most recent checkpoint. With this change, instance processes perform a checkpoint if necessary which makes shutdown slower, but startup faster and it makes sure the index optimization results persist. Bug20773 - Support federations of RDFS++ stores Previously, queries against a federation of RDFS++ stores would cause an error during query planning. This has been corrected. HTTP Client Rfe12763 - Update HTTP API for the materializer The REST API for materializeEntailed now includes an inferredGraph argument that can be used to specify which graph newly entailed triples are added to (or deleted from). The materializer is described in Materializer. SPARQL Rfe12767 - Improved the efficiency of single group aggregate queries The code generated for single group aggregate queries is now slightly more efficient. Rfe12762 - Optimizing level 2 index optimization With this change, level 2 index optimization (the default) no longer tries to optimize access for triples added since optimization was started. This allows index optimization to finish even if triples are added continuously at a high rate. The new code is faster on average, has the same worst case disk usage (about two times the original size), but it has a higher average disk usage. Rfe12760 - Improve efficiency of some SPARQL Property Path queries AllegroGraph now builds some property path data structures as needed rather than computing all of them up front. Depending on the data, this can greatly enhance the efficiency of some queries. Rfe12757 - Improve efficiency of some property path queries In some circumstances, AllegroGraph could choose the less efficient of two strategies when evaluating a property path query. AllegroGraph now ensures that it makes the better choice. Rfe12717 - Handle RDFterm-equality of strings consistently Previously, AllegroGraph implemented sameTerm() using a strict interpretation of RDF literal equality when comparing plain literals and typed literal strings but implemented = using a more relaxed interpretation. These tests are now done consistently using the relaxed interpretation. In particular, 'hello' is RDFterm-equal to 'hello'^^xsd:string This is in accordance with both common usage and the RDF 1.1 specification. Rfe12699 - Improve efficiency of SPARQL equality tests on resources The efficiency of equivalency tests in SPARQL filter expressions has been improved. Rfe12681 - Optimize interprocess communication The performance of message passing between different AllegroGraph server processes has been slightly improved. Rfe12679 - Use query local namespace abbreviations in result serialization AllegroGraph now uses the namespaces defined by a SPARQL query's prefixes when serializing output in RDF/XML. Previously, temporary namespaces were used instead. Rfe12670 - Allow /SPARQL access to AllegroGraph as a SPARQL endpoint Previously, access to an AllegroGraph triple-store as a SPARQL endpoint was via sending a query to SERVER:PORT/respositories/NAME?query=... This REST API has been augmented so that sending a query to the triple-store's URL with a trailing /sparql also works. E.g., SERVER:PORT/respositories/NAME/sparql?query=... This is not required by the SPARQL standard but is in conformance with many other existing endpoints. Rfe12657 - More efficient handling for the VALUES clause AllegrGraph now rearranges queries so that VALUES clauses are evaluated first when possible. This improves query efficiency as VALUES clauses are generally small. Rfe12656 - Improve execution of some nested joins AllegroGraph now evaluates nested joins more efficiently. Rfe12498 - Improve full-scan warning heuristics AllegroGraph warns when a SPARQL query executes a full-scan. Heretofore, this meant that a query like select * { ?s ?p ?o } limit 100 could have produced a full-scan warning even though it is benign. AllegroGraph now saves its warnings for when they may actually indicate a problem with the query. Rfe11981 - New query option chunkProcessingMemory Previously when chunk-at-a-time processing was enabled (see query option chunkProcessingAllowed), the query option chunkProcessingSize controlled memory consumption by limiting the number of intermediate results stored in memory. The total memory consumption depended on the number of variables and the complexity of the query, so it was rather difficult to know how much a query will actually use. With this change, chunkProcessingSize is deprecated and chunkProcessingMemory can be specified instead. The unit of chunkProcessingMemory is a byte but the usual abbreviations are also accepted ("8G" (the default), "2M"). Rfe11515 - Fully support the SPARQL 1.1 HTTP Protocol AllegroGraph now fully supports the five different HTTP request types specified in the SPARQL 1.1 HTTP Protocol. These are: - Query - via GET with query parameters - via POST with urlencoded parameters - via POST with a plain query string (using the content-type of application/sparql-query). - Update - via POST with urlencoded parameters - via POST with a plain update request string (using the content-type of application/sparql-update). In addition, AllegroGraph continues to support SPARQL update using POST where the update command is encoded in the query portion of the HTTP request. Bug22350 - Distinct queries could produce duplicate results in rare cases Under rare circumstances it was possible for the query engine to produce duplicate results when using chunk-at-a-time (CaaT) processing. This has been corrected. Bug22214 - Encoded IDs failed to compare properly in SPARQL queries Inequality comparisons between encoded IDs were always returning true. This has been corrected. Bug22204 - decimal to decimal cast loses precision When in a SPARQL query an xsd:decimal was cast to xsd:decimal it lost precision as it was converted to a single float internally. This bug has been fixed. Bug22203 - Fix decimal parsing in SPARQL queries In SPARQL query strings, decimals in the (-1,0) interval were parsed with the wrong sign. For example, -0.9 was parsed as 0.9. This bug has been fixed. Bug22153 - Some wild card property path queries could generate incorrect paths In rare circumstances, the data structures used to cache property path information could be invalidated leading to too many path results. This has been corrected. Bug22147 - Add support for ObjectId's in the _id field. The SPARQL MongoDB magic predicate now works with documents containing ObjectId's in the _id field. Bug22142 - Expressions on aggregates fail for empty group If a query wrapped an aggregate with an expression like select (xsd:float(avg(?age)) as ?avgAge) ... and the result set contained an empty group (i.e., a group with no members), then AllegroGraph would signal error. This has been corrected. Bug22134 - SPARQL aggregates on expressions with errors could compute incorrect results Other than COUNT, a SPARQL aggregate on an expression should return unbound if that expression evaluates to a type error during computation. AllegroGraph was incorrectly returning the aggregate of any bindings produced after the last type error. This has been corrected. As an example, if AllegroGraph needed to compute the MIN( ?p * 2 ) and ?p was bound to 4, 7, 15, "hello", 6, and 1, then AllegroGraph would previously have returned 2 (the minimum of 12 and 2). Now, it will return no binding for that expressions. Bug22123: Empty construct template will cause an error If a SPARQL CONSTRUCT had an empty template like CONSTRUCT {} WHERE { ?s a ?o } then AllegroGraph would signal an error. This has been corrected. Bug22094 - Correct sparql< and sparql> for date and dateTime comparisons Because there is no defined operator mapping for inequality comparisons between xsd:date and xsd:dateTime, comparing them should throw a SPARQL type-error. AllegroGraph was incorrectly returning nil instead. This has been corrected. See XSD Date and Time datatypes for more information on these datatypes. Bug22074 - DELETE templates with unknown variables caused an error AllegroGraph would signal an error if a SPARQL DELETE/WHERE command's DELETE clause contained variables that were not bound by the WHERE clause. This has been corrected. An example of such a command would be: DELETE { graph ?unknownGraph { ?s ?p ?o } } WHERE { ?s ?p ?o } Bug21561 - Remote-triple-stores do not support the table results format Remote-triple-stores did not support returning tables from SPARQL queries. This has been corrected. AGWebView Bug22161 - Adding security filters in Firefox Previously, with Firefox, clicking the 'add' button in the security filters table of the user management page (Admin/users in the menu) of AGWebView did not submit the form. This bug has been fixed. Changes to the Lisp API Rfe12632: Improvements in Lisp API to AllegroGraph materializer Materialization previously occurred only for triples in the default-graph of the triple-store. It now occurs for all of the graphs in the store. Materialization on subsets of the graphs can be accomplished by creating a graph-filtered triple-store and calling materialize-entailed-triples on it. Materialization previously placed inferred triples into a special graph that could not be utilized by the rest of AllegroGraph (the special graph behaved similarly to but not exactly like the usual AllegroGraph default-graph). It now places inferred triples into a graph that can be specified in the call to materialize-triples using the materialized-graph argument. It defaults to. Two new arguments were added to delete-materialized-triples. These are: db- which controls the triple-store that is modified. materialized-graph- which controls the triples that are deleted. It defaults to. Materialization now logs its start and end to the AllegroGraph log file. The overall efficiently of the materializer has also been improved. Bug22132 - calling run-sparql with the :with-variable argument with a timezoned upi When the Lisp function run-sparql was used in conjunction with a :with-variable argument that bound a variable to an encoded xsd:dateTime or xsd:time, then in some cases comparisons involving that variable failed to match equivalent representations of the same xsd:dateTime or xsd:time with a different timezone. This bug has been fixed. Prolog Bug22171: Select0 should return UPIs when it can Previously, the Prolog select0 query macro could return future-parts rather than UPIs. For example, this query would return a future-part. (select0 (?a) (= ?a !rdf:type)) This could cause problems in the unlikely event that namespace abbreviations changed when running the same query multiple times. select0 now returns UPIs so that the above behavior is not possible. Note that if a future-part represents a triple that is not currently in the triple-store, then a query can return UPIs that cannot be printed (because the UPI will not be in the string dictionary). Bug22157 - Temporal functors could signal an error when used with all bound values Previously, AllegroGraph used real numbers to represent times, dates and dateTimes. Now, the internal representation includes time-zone information which means that comparing two values can no longer use simple inequalities. The Prolog temporal functors like point-before-datetime had not been updated to reflect these changes. This oversight has been corrected. This change means that the Prolog functors will use the same XSD logic used by the SPARQL date and time functions. Documentation No significant changes. Python Client No significant changes. Java Client No significant changes. AllegroGraph 4.12.2 Server Changes Rfe12734 - More aggressive level 2 index optimization In some cases, level 2 index optimization (the default) left a small part of the index unoptimized. With this change, the entire index is optimized in all cases. Rfe12726 - Index optimization increasing database size on disk Space in some garbage on-disk data structures is now reclaimed in a more timely manner which makes the increase in database size during index optimization less severe. Rfe12725 - Faster level 2 index optimization Level 2 index optimization (which is the default level) has been made faster. Rfe12636 - Make config-file argument to agraph-backup backup-settings optional. Previously, the agraph-backup backup-settings command required a config-file argument. This argument has been eliminated in favor of an optional --config option that can be used in place of the --port argument. If both --port and --config are supplied, then the port argument takes precedence. Also, agraph-backup could fail in the past if there were discrepencies between a --port argument specified on the command line, and a Port directive contained in a config file. Now, if the --port argument is specified, it will take precedence over any Port directives read from a config file. See Backup and Restore. Rfe12635 - Major revision to agraph-backup operating modes. The various backup and restore modes of agraph-backup have been overhauled. Previously, backup mode created an archive file containing only repository data. Now, it creates an archive directory that contains both repository data and settings. This archive can be restored via restore or restore-all. The --ext-backup-command option is supported. Previously, restore mode operated on an archive file. Now, it can also operate on an archive directory created by backup or backup-all. In either case, it will restore a single repository (data and settings) from the archive. The layout of archive directories created by backup-all are slightly changed, and support for the --ext-backup-command option is added. The restore-all mode will now fail if repositories being restored to already exist. Use the --supersede option to force an overwrite of any such repositories. Previously, the backup-settings mode created an archive containing both repository and system settings. It also supported the --ext-backup-command option. Now, it only archives system settings and the --ext-backup-command option is no longer supported. Adds a restore-settings option that restores system settings, overwriting any existing settings. It operates on any valid archive directry (though ones created by backup will likely not have any system settings data). Please refer to the updated Backup and Restore documentation for full details. Rfe12621 - xsd:time improvements Zoneless xsd:times are no longer converted to zero timezone when added, and timezone offsets roundtrip to and from the database. Also, with this change xsd:times and xsd:dateTimes are no longer comparable. This puts xsd:time on equal footing with xsd:dateTime. See the date and time documentation for more information. Rfe12580 - Don't create repo if client/server uids don't match Previously, if the uid of the direct Lisp client and the AllegroGraph server differed, then a create-triple-store call would fail with a `server-pid-and-client-pid-must-match-error` indicating that the creation failed. However, the repository would actually be created, possibly overwriting an existing repository. Now, the uid check is performed earlier in the create repository process, so that the repository is not created when the uids do not match. Rfe12456 - Time zone roundtrip for xsd:dateTime and xsd:time AllegroGraph previously represented xsd:times and xsd:dateTimes with Zulu (i.e. Greenwich Mean) time internally and converted data to that time zone as it was imported. On output, any xsd:time or xsd:dateTimes would be displayed in Zulu time and would lose the original time zone information. For example, "2014-02-23T08:18:59+05:00"^^xsd:dateTime would become "2014-02-23T03:18:59Z"^^xsd:dateTime AllegroGraph now keeps the original data's time zone information. Note that the additional computation involved with coercing time information during query means that queries involving xsd:dateTimes or xsd:times may run slightly (a few percentage points) more slowly. There is also a new query option, presentationTimeZone, that controls the display of time zones in the output of SPARQL queries. For example, adding PREFIX franzOption_presentationTimeZone: <franz:-04:00> will cause all xsd:times and xsd:dateTimes to be printed at the -04:00 time zone. See the documentation for more information. Since the database format has been changed, the usual upgrade process involving a backup/restore cycle (see Backup and Restore) is required. Please note the following: 1: There are no changes for xsd:dates. They still support what the XML Schema describes as 'recoverable timezone'. 2: "2014-02-23T08:18:59+08:00"^^xsd:dateTime is DISTINCT from "2014-02-23T00:18:59Z"^^xsd:dateTime. 3: Joins can be made over dates that represent the same point on the same timeline even if they have different timezones. Which (equivalent) representation is returned by the join is unspecified. See the date and time documentation for more information. Rfe11303 - Fix partial repo creation on agraph-backup restore operations When performing an agraph-backup restore operation, if you passed a directory name as the archive argument, the restore would fail, but not before a repository was created. Further, the repository would remain in 'restore' mode, so it could not be opened, and its existence would prevent a subsequent valid agraph-backup restore operation from succeeding. agraph-backup now first checks that the argument filename names an existing file which is not a directory before creating any repositories. Also, error messages when agraph-backup** is passed invalid archive file arguments have been improved. See Backup and Restore. Rfe11044 - Add -p short form of --port to agraph-backup. With agraph-backup, users may use either -p or --port to specify the port on which an AllegroGraph server is running. See Backup and Restore. Rfe10629 - Add --supersede argument to agraph-backup agraph-backup now accepts a --supersede option, which (if specfied) allows overwriting of existing archives (for the backup command) and directories (for the backup-all command), and databases (for the restore and restore-all commands). See Backup and Restore. Bug22099 - Ensure UUIDs are unique during agraph-backup restores. Previously, when restoring archive backups to an AllegroGraph server, it was trivially possible to create multiple repositories with the same uuid. Now, agraph-backup will error if a restore operation will cause a uuid conflict. See Backup and Restore. Bug22090 - Timezone minutes in xsd types Specifying timezone minutes is no longer optional in xsd:dateTimes and xsd:times, so the following examples are invalid: "1978-01-03T00:00:00+05"^^xsd:dateTime INVALID Correct form: "1978-01-03T00:00:00+05:00"^^xsd:dateTime "00:00:00+05"^^xsd:time INVALID Correct for: "00:00:00+05:00"^^xsd:time Previously, only xsd:dates checked for the presence of timezone minutes. See the documentation for more information. Bug22083 - xsd:date and xsd:dateTime years AllegroGraph now correctly parses both positive and negative dates with more than four digit years. It also signals an error if the date is too large to be encoded into a UPI. Previously, dates like these were silently truncated. See the date and time documentation for more information. Bug22073 - ORDER BY and non-encoded decimals When non-encoded decimals were being ordered, they were first converted to IEEE 54 double floats (which have a precision of 15-17 significant digits) which potentially caused them to be sorted incorrectly. This bug has been fixed. Bug22072 - Validate xsd:decimals AllegroGraph now signals an error if an xsd:decimal's exponent in scientific notation is outside of the range [-63, 63] because these values cannot be encoded into a UPI. Previously, xsd:decimals like these were silently truncated. Bug22068 - Reasoning and constant triple patterns With reasoning enabled, some queries involving triple patterns without variables failed with 'Bus error'. This has been corrected. Bug22066 - Printing of NaN, INF and -INF Serialization of xsd single and double float NaN, INF and -INF was broken (for instance, INF became "#.inEinity-double"). This bug has been fixed. Bug22057 and Bug22054- agraph-backup backup-settings works as documented. The agraph-backup backup-settings command was incorrectly backing up data as well as settings rather than just backing up settings. Now it just backs up settings. Also, the backup-settings command now can work by finding the settings location from the config file (so the server need not be running). See Backup and Restore where a note describes all the agraph-backup revisions. Bug22045 - 14 hour datetime comparison logic For two xsd:dateTimes, one with and one without a timezone, all comparisons must return false if they differ by less than 14 hours. Similar logic applies to xsd:dates and xsd:times. Previously, support for this 14 hour comparison logic was spotty. It worked for some combinations (such as comparisons in a FILTER with a literal) but didn't work for others. With this fix, comparisons for all three types follow the XML Schema specification. See the date and time documentation for more information. Bug22044 - Fix xsd:date xsd:dateTime comparisons AllegroGraph previously compared xsd:dates and xsd:dateTimes by converting the date to a dateTime at its first instance. Because dates are actually 24-hour intervals, this logic is suspect so AllegroGraph now signals a type error if this comparison is made (i.e., FILTERs will not longer match and BIND would leave its variable unbound). The previous behavior can still be had by using an explicit cast from date to dateTime. See the date and time documentation for more information. Bug22041 - Comparisons to decimal zero Due to a bug in the encoding, decimal 0 appeared to be greater than all decimals less than 1.0. The usual upgrade process involving a backup/restore cycle (see Backup and Restore) is required. All misencoded decimal 0s will be converted to the correct value. If any such corrections are made during the upgrade, then indices are resorted and background index optimization is begun. The index optimization may continue even after the upgrade process has completed. The resulting indices are then as optimized as if all the triples were just imported into the database. This means if the backup was of a fully optimized repository, then after the upgrade running level 2 (the default) index optimization is necessary to get back to that state. Bug22039 - agload character encoding fixes Previously, the --encoding argument to agload was ignored. This has been corrected. See Data Loading for information on agload. Bug22034 - part->concise on encoded dates and decimals For xsd:dates, xsd:dateTimes, xsd:times and xsd:decimals, the Lisp functions part->concise and part->terse returned the stringified internal representation (like "172839/500" for decimals, and "( (part->concise (value->upi "2013-01-03T10:34:00-05:00" :date-time)) => "2013-01-03T10:34:00-05:00" See XSD Date and Time datatypes. HTTP Client No significant changes. SPARQL Rfe12624 - Standardize some low-level SPARQL equality tests The efficiency and conformance of low-level tests used to compute RDFterm-equality and SPARQL ordering has been improved. One non-backwards compatible change is that AllegroGraph previously allowed ordered comparisons between plain literals that looked like numbers and typed numeric literals (e.g., "6.21" and "6.23"^^xsd:integer). This comparison now correctly throws a SPARQL type error. Bug22092 - Fix for SPARQL queries that use the STR() function with a constant If a SPARQL query contained a FILTER that used the STR() function applied to a constant, then the query could signal an error at planning time. For example, a FILTER like FILTER( STR( <http:://example.com> ) = ?var ) would not succeed. This has been corrected. Bug22078 - SPARQL insert did not always encode typed literals Triples added to a triple-store using SPARQL inserts that used computed values did not always have their typed literals encoded into UPIs. For example, an insert like: insert { <ex:a> <ex:b> ?c . } where { bind( '1'^^xsd:byte as ?c ) } would store the literal in string form rather than encoding it directly. This has been corrected. Bug22077 - Improve efficiency of certain SPARQL sub-queries Bound variable information could be lost while processing certain SPARQL sub-query forms which would then result in less efficient query execution. This has been corrected. AGWebView No significant changes. Changes to the Lisp API Bug21608 - Include ag-call in remote Lisp client. Previously ag-call was omitted from the remote Lisp client. This has been corrected. Prolog No significant changes. Documentation No significant changes. Python Client No significant changes. Java Client No significant changes. AllegroGraph 4.12 Server Changes Rfe12576 - Improve RDF/XML parser efficiency Improve the lower-level efficiency of AllegroGraph's RDF/XML parser so that it is between 5 and 10% faster. Rfe12522 - Global configuration directive for query options A new global configuration directive named QueryOption was added. It can be used multiple times. Each specification is equivalent to a query prefix option. The global configuration directive "QueryOption NAME=VALUE" is the same as the query prefix option "PREFIX franzOption_NAME: Rfe12429 - More flexible parsing for InstanceTimeout With this change, InstanceTimeout configuration parameter values in the configuration file can be specified as times rather than only as a number of seconds. For example, you can use 12s, 2m or 3h to specify 12, 120 or 10,800 seconds. The configuration file is described in Server Configuration. Rfe11010 - Require the i index for freetext queries AllegroGraph's native freetext index requires the presence of the triple-id (i) index in order to operate efficiently. A common problem was to try to use freetext queries when there was no i index. This change automatically adds an i index whenever a freetext index is created and also signals an error if a freetext query is attempted when there is no such index. Bug22024 - Serialization of decimals xsd:decimals with zeroes right before the decimal point were serialized with the decimal point misplaced. This bug has been fixed. The database contents were not affected. Bug21978 - serialize-* functions in Lisp client default to UTF-8. Use UTF-8 as the external-format when a filename is passed to one of the serialize-* functions in the Lisp client instead of the default external format, which can vary depending on the system's locale. Bug21976 - Complex GRAPH queries could produce incorrect graph bindings Depending on the dataset, queries with deeply nested structure that used EXISTS or NOT EXISTS FILTERS with embedded GRAPH clauses could generate incorrect bindings for the graph variable. This has been corrected. Bug21975 - Increase AllegroGraph Load speed Low-level efficiency enhancements improve AllegroGraph load speed by roughly 10%. Bug21966 - Possible "UPI not in string table" problem with remote-triple-stores It was a possible for a remote-triple-store to lose track of the strings cached on the client after a rollback operation. This could lead to a "UPI not in string table" error. This has been corrected. Bug21956 - Improve memory usage for zero-length property path queries A zero-length property path query connects all subjects and objects on the right and left hand sides of the path to themselves. If the store is large and the path produces many matches on one or both sides, AllegroGraph was using more memory than necessary. This has been greatly improved though some queries may experience a slight performance degradation. Bug21954 - Problem generating soundexes for phrases in Callimachus If a store was operating in Callimachus mode and a phrase was soundexed that had words in it with no soundex (e.g., a number) then AllegroGraph would generate garbage soundex data. This has been corrected Bug21949 - Improve bookkeeping of already processed variables during query execution AllegroGraph was not updating the list of bound variables when evaluating a BIND clause during query execution. This did not change the final results of a query but did mean that AllegroGraph would not always evaluate the query in the most efficient manner. This has been corrected. Bug21937 - Zoneless xsd:dateTime roundtrip Adding a triple with an xsd:dateTime without a timezone (e.g. "2013-07-30T23:59:59") was stored as if it had a timezone of zero. With this bug fix, zoneless dateTimes are stored and serialized without timezone. This is in agreement with the XML schema specification and similar to how xsd:dates behave. Bug21934 - String table mutex not reinitialized after recovery A mutex used in the string table data structure was not being reinitialized after database restart. Under certain circumstances this could result in a database hanging after restart. This problem has been corrected. Bug21920 - Soundex should ignore punctuation AllegroGraph's soundex function (which is used by Callimachus) was not ignoring punctuation properly. This has been corrected. Bug21915 - Make upi->value on xsd:dates return integers Due to an undocumented change in v4.11, upi->value on a upi encoded as :date returned a cons of the integer representing the instant and a flag indicating whether the timezone was present. With this change only the integer part is returned (like before v4.11). The flag is returned as the second value by upi->values. Bug21914 - xsd:dates on the first day of the month Adding a triple with an xsd:date with a positive timezone that falls on the first day of a month resulted in error "illegal date 0". This bug has been fixed. Bug21909 - agraph-backup could fail to generate unique UUID on restore When using the --newuuid option, it was possible for agraph-backup to create multiple triple-stores with the same UUID. This has been corrected. See Backup and Restore for information on agraph-backup. Bug21903 - Free text indexing with chunk-at-a-time processing Depending on the execution plan, a query that used both fti:match and chunk-at-a-time processing (see franzOption_chunkProcessingAllowed) could produce too few results. This bug has been fixed. Bug21899 - Shared memory server fails to clean up Under low memory conditions, the shared memory server can fail to allocate blocks of shared memory on behalf of a database instance. Under certain circumstances the shared memory server would fail to clean up from partial allocations, thereby exacerbating the low memory condition. This problem has been fixed. Bug21873 - Link to docs in rpm install When AllegroGraph was installed from an rpm package, the documentation links in AGWebView were broken. This bug has been fixed. Bug21872 - agload will write output to agload.log when in --debug mode Along with the other effects of using the --debug command line argument to agload, all output will be logged to agload.log. See Data Loading. Bug21869 - Fix problem with registering cartesian predicate mappings There was a problem which prevented the shortened form of cartesian predicate mappings from working. This has been corrected. Bug21853 - Connection Pool session port connection failure Under certain circumstances, the AllegroGraph Java client would throw an exception like the following: com.franz.agraph.http.exception.AGHttpException: Possible session port connection failure Setting the SessionPorts configuration parameter is discussed here in Server Installation and also in Server Configuration. But this exception could occur even with a proper SessionPorts setting. This problem has been resolved. Bug21846 - Long running federated queries could fail The query canceling interface introduced in AllegroGraph 4.11 could lead to query failures when using federated triple-stores. This has been corrected. Note that it is not currently possible to cancel queries running on a federated store. This will be improved upon in a future release of AllegroGraph. Bug21816 - Fix session multiplexing file descriptor leak When requests to sessions are multiplexed through the server's main port, the frontend didn't close the http socket used to communicate with the session process. This bug has been fixed. Note that session multiplexing can be enabled for the java client by setting the system property: com.franz.agraph.http.useMainPortForSessions=true Bug21784 - Safer auto-close for garbage db's in the Lisp direct client The db's are now marked for closure when it is noticed they are no longer used and closed at a later specific time. Bug18813 - Reasoning and subclasses of owl:TransitiveProperty With this fix the reasoner is able to infer that a property whose type is a subclass of owl:TransitiveProperty is a transitive property. That is, :p rdfs:subClassOf owl:TransitiveProperty . :q rdf:type :p . :a :q :b . :b :q :c . entails :a :q :c . HTTP Client No significant changes. SPARQL Rfe12537 - Improve zero length property path clause estimation This change improves the heuristics AllegroGraph uses to estimate the number of rows that a zero length property path query can generate which improves the way AllegroGraph processes a BGP. Rfe12520 - The SNA members and size magic properties work with RDF lists You can now use the and magic Properties to query RDF lists stored in your triple-store. For example, if points to the head of the RDF list "1", "2", "3", then this query would return the three elements: prefix sna: <> select ?item { <ex:list> <ex:pointsTo> ?list . ?item sna:members ?list . } Would return "1", "2" and "3". See SPARQL Magic Properties for SNA. Rfe12506 - Improve handling of some queries that use GRAPH clauses SPARQL is defined such that a query like GRAPH ?g { A } should behave as if A was evaluated with ?g bound to each graph in the dataset in turn and then the results of all of these evaluations were unioned together. While correct, this implementation can be very inefficient. AllegroGraph uses various heuristics to determine if a given GRAPH clause can be evaluated more directly and does so when it can. This change improves these heuristics so that more queries will be able to use the more efficient implementation. Rfe12481 - Improve SPARQL DELETE efficiency This change greatly increases the efficiency of the DELETE portion of a SPARQL DELETE... INSERT... WHERE... command. The actual increase is dependent on the data and the delete template but it can be more than 40 times faster. Rfe12442 - Efficiency improvements for SPARQL aggregation Optimize the code used for SPARQL aggregate queries in general and for count aggregates in particular. Rfe12400 - Minor property-path query optimization Improve the underlying efficiency of some property-path queries. Rfe12398 - Clarify semantics of the identity clause reorderer AllegroGraph has two clause reorderers which try to preserve the ordering of triple-patterns in a BGP (basic graph pattern). Previously, these were named: :identity - try to preserve ordering but still prefer to augment the current set of bindings rather than start a new set. :strict-identity - use the given ordering directly without changing it. To avoid future confusion, these options are being renamed such that the first method is called :weak-identity and the second is called :identity. The default query clause reorderer continues to be statistical so queries that do not specify a reorderer will behave exactly as they did before this change. Rfe12392 - Optimize certain complex join combinations The query optimizer now converts a query with an algebra like: (:join (:join A :bgp1) :bgp2) into the equivalent: (:join A :bgp1-and-bgp2) This BGP (basic graph pattern) merger improves AllegroGraph's ability to reorder queries for efficient execution. AllegroGraph also converts the expression: (:join (:join :bgp1 A) :bgp2) into (:join :bgp1-and-bgp2 A) unless the two BGPs do not overlap (in which case the transform would produce a cross-product). Rfe12385 - Improve BGP reordering in some cases In some cases, the SPARQL query planner now does a better job reordering the triple-patterns in a BGP (basic graph pattern). Rfe12380 - Improve efficiency of some DISTINCT or REDUCED queries AllegroGraph now collects fewer intermediate results for some DISTINCT or REDUCED queries. Rfe12377 - Add intervalContainsTime magic property Added an additional temporal magic property that can be used to determine if a time is contained within an interval. See SPARQL Magic Properties. Rfe12372 - Make BGP reordering more intuitive Previously, AllegroGraph would switch the execution of patterns in a BGP when the BGP contained only two patterns and these patterns had the same statistical clause estimates. E.g., in a query like: ?a ?b ?c . ?c ?d ?e . AllegroGraph would execute the second pattern first. AllegroGraph now executes the first pattern first in the case of ties which is more intuitive. Rfe12368 - Use constraints to limit bindings for GRAPH queries Improve AllegroGraph's use of FILTERs in limiting the graphs considered in GRAPH clauses with variable graphs. For example, GRAPH ?g { A } FILTER( ?g = <ex:test> ) will now be evaluated more efficiently since AllegroGraph will only consider the graph <ex:test> rather than every graph in the dataset. Rfe12352 - Provide a sort order for SNA path and group gensyms Social Network Analysis (SNA) Magic Properties can create node identifiers to keep track of paths and groups. These act like blank nodes and so had no sort order defined for them. This was inconvenient because it made it impossible to, for example, list all of the nodes of a path together. This change adds a sort order to these node identifiers so that they are easier to use in queries. Rfe12192 - Optimize some wild-card property-path queries AllegroGraph now computes property-path queries that use the * or + operators more efficiently. Rfe12032 - Improve property-path expansion AllegroGraph now expands more property-path queries into their equivalent BGPs (basic graph patterns). This can improve the performance of the query optimizer. For example, AllegroGraph previously expanded ?s :a/:b ?o . into ?s :a :b0 . :b0 :b ?o . but did not expand paths with non-simple operators like wild-cards or alternation. It now expands these as well so that a path like ?s :a/:b/(:c|:d)/:e/:f ?o becomes ?s :a _:b0 . _:b0 :b _:b1 . _:b1 (:c|:d) _:b2 . _:b2 :e _:b3 . _:b3 :f _?o . Rfe11948 - Add PREFIX option for enabling reasoning in a SPARQL query AllegroGraph's SPARQL 1.1 engine now supports the inferenceMode query option to control the kind of inference used by an individual query. The mode is selected using the query option PREFIX notation. For example, the following PREFIX would cause a query to be evaluated using AllegroGraph's RDFS++ reasoner: PREFIX franzOption_inferenceMode: <franz:rdfs++> The following modes can be supplied: rdfs++- use RDFS++ reasoning, false- use no inference, restriction- use RDFS++ and the restriction reasoner. Rfe11625 - Improve algebraic optimizations of joins and left-joins Improve the logic used in the algebraic transform of (:join (:left-join A B) C) into (:left-join (:join A B) C) Which is the textual equivalent of moving the optional clause down in a query like A optional { B } C so that it becomes A C optional { B } which can be evaluated more efficiently unless A and C share no bindings. Rfe11146 - Handle simple OPTIONAL clauses more efficiently AllegroGraph now handles OPTIONAL clauses that contain only a single triple-pattern more efficiently. This means that queries with clauses that look like ?s :predicateA ?o . OPTIONAL { ?s :predicateB ?b } OPTIONAL { ?s :predicateC ?c } will be processed more efficiently. Rfe11068 - Improve literal equality constraint handling AllegroGraph now handles filter expressions involving literal equality more efficiently. For example, in a query like select * { ?a :predicate ?b . filter( ?b = 'literal' ) } AllegroGraph previously would scan all triples with the given predicate and apply the filter afterwards. Now AllegroGraph determines the set of possible matches for 'literal' and queries for them directly. In addition, AllegroGraph now performs a similar operation if the filter is a literal equality on an expression using the SPARQL str function. For example: filter( str(?variable) = 'searching' ) As with regular literal equality constraints, such an expression is converted into the equivalent tests for specific values. Bug22030 - ORDER BY expressions with non-query variables could cause errors If a SPARQL query used an ORDER BY expression that contained variables that were not found anywhere in the query, then an error could result. An example of this would be a query like: select * { ?s a ?o . } order by desc( str( ?notFoundInQuery )) This issue has been corrected. Bug22021: Correct SPARQL TSV printing problem for encoded UPIs When encoded UPIs were serialized from a SPARQL query using the SPARQL TSV (tab sepated values) output format, they were printing without the enclosing quotes. This has been corrected. Bug22016 - Correct SPARQL 1.1 bug with IN FILTERs It was possible for the query engine to produce incorrect results when using FILTER with either the SPARQL IN operator or multiple equality filters on the same variable. This has been corrected. Bug22015 - Fix possible error when using the SPOGI cache and SPARQL 1.1 In rare circumstance it was possible for a SPARQL 1.1 query to signal an error when the SPOGI cache was enabled. This has been corrected. Bug21986 - Using distinct aggregation with GROUP_CONCAT and a separator could fail If a SPARQL query used the GROUP_CONCAT aggregator along with both the DISTINCT and SEPARATOR modifiers, then the solution bindings for that variable could be left empty. This has been corrected. Bug21980 - Too many results in DISTINCT and REDUCED queries Under extremely rare circumstances, queries involving DISTINCT or REDUCED could return too many results. This bug has been fixed. Bug21974 - Assertion failure involving 'n-valid-partials' When chunk-at-a-time processing was enabled (see query prefix franzOption_chunkProcessingAllowed, described here in the SPARQL Reference), the code keeping track of the number of intermediate results used to run into an assertion failure under very rare circumstances (when two consecutive intermediate results were the same). This bug has been fixed. Bug21962 - Improve estimates for certain query patterns AllegroGraph was mis-estimating the sizes of query patterns that resulted from the inlining of filters that used the IN expression. For example, ?s ?predicate ?object . FILTER( ?predicate IN ( ex:a, ex:b, ex:c ) This could result in sub-optimal BGP re-ordering. This has been improved. Bug21961 - Incorrect results with EXISTS and NOT EXISTS Some queries involving EXISTS or NOT EXISTS filter expressions returned too few results. This bug has been fixed. Bug21955 - Sorting on unbound variables could lead to a string dictionary error If a query used an ORDER BY expression involving a computation and some of the variables in the expression were unbound, then the expression could signal a UPI not in string dictionary error. This has been corrected. Bug21948 - Improve bookkeeping of bound variables during sub-query When executing a sub-query with a limit, AllegroGraph failed to update the list of variables known to be bound. This prevented some queries from executing as efficiently as possible. This has been corrected. Bug21947 - Improve efficiency of VALUES in some cases AllegroGraph now does a better job of using the information in a VALUES clause to restrict alternate solutions. This can greatly increase the speed of some queries. Bug21941 - Queries with magic properties could fail in a remote-triple-store Queries that used magic properties could cause an error when run from a remote-triple-store. This has been corrected. Bug21940 - Using magic properties with filters could lose solutions If the circumstances were just right, a query that used a magic property that produced multiple bindings and that used a filter on those bindings could fail to return all solutions. This has been corrected. Bug21933 - Externally introduced variables could fail in nested sub-queries If a query used externally introduced variables (e.g., via the Lisp APIS's with-variables argument or via the variable bindings in an HTTP REST request), and these variables were used in FILTER clauses in a nested sub-query, then the query could fail to find any matches. This has been corrected. Bug21921 - BGP triple filters with Chunk-at-a-Time processing In rare cases, a filtering triple-pattern could fail to remove some solutions when using Chunk-at-a-Time query processing. This has been fixed. Bug21902 - Fix incompatibility between magic predicates and the identity reorderer If the identity BGP clause reorderer was being used in a query with magic predicates, then the magic predicates were being sent to the end of the BGP. This has been corrected. Bug21901 - Minor improvements to some range query clause orderings Improve the statistical estimates used to reorder BGP patterns that use range queries. Bug21892 - Query engine selection failed to take OFFSET into account AllegroGraph's decision on whether to use the single-set or the chunk-at-a-time query execution engine relied only on the LIMIT of the query rather than the LIMIT and the OFFSET. This has been corrected. Bug21880 - Using with-variables in SPARQL describe caused an error Using SPARQL describe with an external variable binding would lead to an error at query execution time. For example, this query would fail: describe ?x (where ?x was bound externally). This has been corrected. Bug21865 - Comparisons of xsd:dateTimes with fractional seconds Comparisons (including ORDER BY) of xsd:dateTimes with fractional seconds were broken. This bug has been fixed. Bug21862 - Some FILTERs and BGP orderings could lead to empty bindings In rare cases, AllegroGraph could fail in its bookkeeping of which variables were bound while it executed a query. This could lead to the introduction of spurious rows with null bindings. This has been corrected. Bug21858 - Distinct, SERVICE and sub-query could lead to planning failure If a query used a sub-query in a SERVICE clause and asked for unique results, then AllegroGraph would signal an error during query planning. This has been corrected. Bug21852 - Improve bound variable tracking AllegroGraph should assume that any variable supplied via external settings (e.g., via the with-variables parameter to run-sparql) is bound. Failure to do so could cause some queries to fail. This has been corrected. Bug21828 - VALUES could fail to correctly handle OPTIONAL bindings Using the VALUES clause could incorrectly drop solutions whose rows contained unbound values. This has been corrected. Bug21815 - Fix property-path memory leak Some queries involving property-path expressions could run out of memory, because the intermediate result set was not freed as soon as it was consumed. This bug has been fixed. Bug21803 - Exclusive ranges of encoded non-integer types Previously, filter expressions on all non-integer types (floats, decimals, etc, excluding xsd:date and xsd:dateTime) involving exclusive ranges ('<' and '> ') behaved as if an inclusive range ('<=', '> =') had been specified. This bug has been fixed. Rfe12591: Improve efficiency of some Basic Graph Pattern evaluations AllegroGraph now makes better use of intermediate result sets when evaluating basic graph patterns (BGPs) in nested groups of joins and left-joins. AGWebView Bug21997 - Agwebview and sessions Due to a thread safety issue, AGWebView's 'Create Session' or any subsequent action in that session could produce garbage results ranging from 'Not found' to garbled output. This bug has been fixed. Changes to the Lisp API Rfe12562 - upi->value round-trip For dates and date-times a complete representation is a (<universal-time> . <timezone-information>) cons. While this would make a good candidate for the return value of upi->value, to maintain backward compatibility by default upi->value must return a single number representing the universal time. However, this means that during the upi->value/value->upi round-trip the timezone information is lost. This change adds a new keyword argument :complete? (defaults to nil). With :complete? t, upi->value returns the complete representation and the roundtrip works in the sense of: (upi= (value->upi (upi->value upi :complete? t) (upi-type-code upi)) upi) Currently, for UPIs other than dates and date-times, :complete? has no effect. Rfe12557 - run-sparql can now use a UPI or future-part for the default base Previously the Lisp function run-sparql only allowed a string to specify the default-base. It can now take a string or a UPI or future-part. Rfe12508 - normalizing with-variables should convert $var to ?var The with-variables parameter in the Lisp API to run-sparql allows for variable names to be identified as strings or symbols. The name can be a plain string like "person" or a string whose first character is #\? or #\$ like "?person" or "$person". All three of these representations refer to the same query variable: ?person. Prior to this change, AllegroGraph could misinterpret variable names that used $ as the first character. This has been corrected. Bug21953 - Support server-side loading via the Lisp client. When using the Lisp client and connected to a remote-triple-store, passing a URI or URI-like string to one of the bulk-load functions (e.g. load-ntriples) will cause the resource to be loaded directly by the server instead of being delivered via the client. Bug21891 - db.agraph.log:log-output captured in dumplisp If dumplisp (a function used to save a running image) was used after opening a triple store, the stream used for logging would be captured. Upon restore of the dumped image, subsequent logging attempts (performed by internal AllegroGraph code) could result in confusing errors or file corruption. This has been fixed. Bug21860 - Make Lisp clients find bundled liblzo2.so.2 Previously, Lisp clients resorted to using the liblzo library located in the acl directory ("sys:") or somewhere the OS dynamic library loader can find it. With this change the bundled liblzo2.so.2 is loaded. Bug21841 - db-get-triple leaks sockets The Lisp function db-get-triple didn't close the socket it used for communication with a remote triple store which resulted in leaking sockets in CLOSE_WAIT state. This bug has been fixed. Bug21848 - load-turtle could use the wrong triple-store The load-turtle command could fail to respect the value of the db parameter and instead load triples into the store currently bound to db. This has been corrected. Bug21810 - Polygon-vertexes did not work with future-parts If called with a future-part rather than a UPI, the polygon-vertexes function would return no results. This has been corrected. Bug21611 - Load patches in Lisp remote client With this fix the Lisp remote client loads patches and enables pretty printing of parts and triples just as the direct client does. Bug21070 - Fix corruption hazard in direct Lisp client. Prevent possible database corruption that could occur when interrupting a direct Lisp client app that is modifying the transaction log. Bug21045 - create-freetext-index function failed when called with UPIs The create-freetext-index function (in the Lisp API) would fail if called with UPIs (rather than, e.g., future-parts). This has been corrected. Prolog Rfe12222 - AllegroGraph Prolog unifies UPIs and future-parts When AllegroGraph is loaded, the semantics of Prolog unification is extended such that: Non-eq UPIs that are equivalent (byte-for-byte the same) will unify. Non-eq future-parts that resolve to the same UPI will unify. Similarly, a UPI and a future-part that resolves to that UPI will unify with each other. This enhancement does not change the capabilities of Prolog queries in AllegroGraph, but allows simpler, less-verbose coding that looks a little more declarative and less procedural. For example, the following (select (?manager ?employee) (member ?employee (!acme:joe !acme:jill !acme:jim !acme:jerry)) (q ?employee !acme:hasManager ?manager) (member ?manager (!acme:bill !acme:bob !acme:betty !acme:benjamin))) would not work as intended before this change because ?manager would be unified with a UPI, and a UPI would not unify the future-parts in the last clause. The last clause would have had to be coded as something like (member ?manager1 (!acme:bill !acme:bob !acme:betty !acme:benjamin)) (part= ?manager manager1) Using Prolog is AllegroGraph is documented here in the Lisp Reference. Documentation No significant changes. Python Client No significant changes. Java Client Bug21953. Fix generation of some POST method requests. AGHTTPClient would write query parameters to the body of POST requests when the content-type was not application/x-www-form-urlencoded. This can result in http servers not finding the query parameters and processing the request incorrectly. This bug corrects the problem by writing POST request query parameters to the URL, unless the request has no body and the content type is application/x-www-form/urlencoded. AGRepositoryConnection.load() (see Javadocs) now works for importing server-side statements. AllegroGraph 4.11 Server Changes Rfe12309 - Improve algebraic query manipulation AllegroGraph now notices patterns of the form (:join (:join A (:bgp B)) (:bgp C)) and transforms them into the equivalent (:join A (:bgp B C)) (I.e., it merges the two BGPs). This transform brings more triple-patterns together which lets AllegroGraph order the query operations better. Rfe12305 - Relative paths in the configuration file Relative file or directory paths in the configuration file were always interpreted relative to the directory containing the configuration file. In this rfe, a BaseDir configuration directive was added that allows another path to be specified for this purpose. See the documentation in Server Configuration for more. Rfe12280 - Simplify geospatial type-mapping API Geospatial subtype mappings can now be set up with the regular datatype and predicate mappings rather than needing to use the specialized functions. The old functions still work but are now unnecessary. They will be removed in some future release. Rfe12260 - Automatically map temporal predicate data to xsd:dateTimes Triple-stores created in this version of AllegroGraph will include the three AllegroGraph temporal predicates in the list of automatically mapped predicates. I.e., the objects in triples with these predicates: - <> - <> - <> will be converted into encoded xsd:dateTime literals. This encoding makes range queries against them faster and makes it easier to use AllegroGraph's temporal functionality. Rfe12259 - Optimize certain combinations of JOINs and FILTERs The query engine now recognizes certain patterns of JOINs and FILTERs and rewrites them to be more efficient. For example, a pattern like (:join A (:filter B f)) can often be better evaluated as (:filter (:join A B) f) Rfe12182 - More restrictive umask With this change, users who aren't the owner of the file, or a member of the group of the file (that is, "other" users in chmod manpage terms) have no read, write or execute permission on files created by AllegroGraph. On existing installations, it's recommended, but in no way enforced, to change permissions the same way with 'chmod -R o-rwx <directories>' where <directories> is the list of directories holding AllegroGraph data (LogDir, SettingsDirectory, catalog directories, etc). Rfe12175 - Audit log When the 'Auditing' global configuration directive is set to 'yes' (it defaults to 'no'), events that have implications on security or performance will be logged to the new system database in the new system catalog. Note that regardless of permissions only superuser can read or write the system catalog. See Auditing for more information. Rfe12151 - Remove support for loading N-Triples into a federated triple-store AllegroGraph 3 introduced some techniques that optimized loading of N-Triples into a federated-triple-store. These have been superseded in AllegroGraph 4 because agload is significantly faster and easy to use. Rfe12147 - Better manage memory especially for large solutions AllegroGraph now manages large query result sets (with or without DISTINCT or ORDER BY) more efficiently. Rfe12138 - Agload: reduced memory footprint and other enhancements Reduced memory consumption of agload processes. Particularly in the case where there are many input sources. Agload now analyzes input sources on the fly. This means you will not necessarily encounter errors about invalid input sources prior to modification of a triple-store. Use --dry-run command line argument if you wish to pre-analyze an agload job. Rfe12125 - Improve performance of unordered queries with small limits AllegroGraph now uses chunk-at-a-time query processing for queries with small LIMITs that do not use ORDER BY more frequently. In particular, the chunkProcessingAllowed query option can now take three values: - yes - use chunk-at-a-time mode whenever possible, - no - always use single-set mode, or - possibly - choose the execution mode that makes the most sense for the current query. The default value for chunkProcessingAllowed has been changed from no to possibly. Rfe12111 - Improve memory checking efficiency Improve the efficiency of AllegroGraph's memory tracking code slightly. Rfe12089 - Improve cursor resource management Improve AllegroGraph's low-level cursor memory management. Rfe12063 - Allow in-memory-triple-stores to use blank nodes The in-memory-triple-store implementation for the remote-lisp-client was missing several methods required for full blank node support. This has been corrected. Rfe11973 - Improve usage of UPI-map Improved the way AllegroGraph manages the use of secondary indices (UPI-maps). Rfe11955 - :commit argument added to delete-duplicate-triples The :commit argument was added to delete-duplicate-triples to allow for the deleting a very large number of duplicates. Rfe11886 - Use statistics more often when reordering query clauses The query engine now uses store statistics in more places when working to reorder the clauses in a query's basic graph patterns (BGPs). This should produce better plans more often with less need for customer query tweaking. Rfe11585 - Some queries unnecessarily accumulated partial results AllegroGraph was sometimes accumulating results unnecessarily. For example, the query select (count(*) as ?total) where {?s ?p ?o} was accumulating the actual results even though only the summary was necessary. Rfe11511 - Encode xsd:decimals xsd:decimals used to be stored as strings. With this change, they are automatically encoded directly in the UPI which potentially brings significant savings in storage space and computation in queries. The most significant 20 digits are preserved by the encoding. Note that this change only affects the encoding of new triples. To convert old triples and take advantage of the more efficient storage, the database contents must be reloaded from a serialized format. We recommend exporting in nquad format and then reimporting it with agload. Restoring from a backup is not sufficient. Rfe11197 - Serialize triple IDs as resources rather than literals Previously, a triple ID would have been serialized as a typed literal like "13"^^<> This format prevented the use of triple IDs in the subject of a triple so we have changed the serialization to a URI like <> Any triple IDs already in a triple-store will remain the same (though they will print differently). This change does mean, however, than any files that contain triple IDs in the old format will need to be converted in order to be re-loaded (note, however, that the IDs assigned to triples are not part of any serialization format which means that they cannot reliably round-trip in any case). Rfe10292 - Handle filename limit for unix domain sockets Linux has a limit of 107 characters for filenames of UNIX domain sockets. With this change, when bumping into that limit, a meaningful error message is printed and the workaround of setting the UnixDomainSocketDirectory or the SettingsDirectory configuration directive to a shorter path is suggested. Bug21793 - Fix race in inter process communication Under extremely rare circumstances, multiple threads could end up listening on the same communication channel which can lead to errors and deadlocks. This was duly detected by an assertion that appeared in the log as: the assertion (endp (intersection (db.agraph.spawn::receiver-for db.agraph.spawn::r) (db.agraph.spawn::receiver-for db.agraph.spawn::receiver))) failed. This bug has been fixed. Bug21782 - Potential double close of cursors An inadvertent double close of cursors was discovered that could potentially result in improper sharing of a data structure. This has been corrected. Bug21779 - Fix low-level cursor resourcing bug Under heavy load, the secondary index (UPI-map) creation function could leak a cursor leading to potential resource problems. This has been corrected. Bug21761 - Prolog q- functor could be treated as q If a reasoning triple-store can use secondary indices (UPI-maps) then it was possible for AllegroGraph to treat some q- functors as q functors in a given Prolog select query. This has been corrected. Bug21745 - Fix crash related to aggregate functions Under rare circumstances, queries involving aggregate functions may cause the backend to crash. This bug has been fixed. Bug21744 - Some Prolog select queries could return duplicate results If using secondary indices (i.e., UPI-maps), it was possible for Prolog select queries that returned raw UPIs to overwrite one solution with another. This problem could only occur with UPI only queries and never with queries that returned strings. The problem is now fixed. Bug21740 - Rare hangs under high load The message broker (called the "Hub" process) may get into a state where forwarding a message blocks, preventing all other messages from being handled. This bug has been fixed. Bug21735 - Corrupted distinct results Under rare circumstances, queries involving DISTINCT could produce results with the bindings assigned to the wrong variables. This bug has been fixed. Bug21702 - XSD casting in a BIND could convert to xsd:integer incorrectly Using an XSD cast in a BIND could loss the datatype and switch it to xsd:integer. E.g., BIND(xs:long(?test) AS ?testLong) would bind ?testLong to a value whose datatype was xsd:integer rather than xsd:long. This has been corrected. Bug21693 - Avoid forking too many blank node servers in agload. Improves agload performance under default conditions when loading ntriples or nquads files which are small and numerous. Bug21691 - Reasoning over subproperties of symmetric properties AllegroGraph's RDFS++ reasoner partially failed to infer triples from sub-properties of a symmetric property. This has been corrected. Bug21680 - Some BIND forms could be evaluated incorrectly. A BIND form at the start of a BGP could fail to get values from earlier in the query evaluation. For example, this form ?a a ?b . { BIND( ?b as ?otherB ?otherB a ?c . } could lose the binding for ?otherB. This has been corrected. Bug21670 - FILTER and EXISTS could interact incorrectly. Depending on the shape of the rest of the query, it was possible for a FILTER clause to be applied to the inside of an EXISTS or NOT EXISTS filter rather than being applied to the main body of the query. This has been corrected. Bug21663 - Redo of new index creation confuses instance process If a database with a newly-created index has been shut down uncleanly, restart of the database may crash. This has been corrected. Bug21651 - Prolog ?? and reasoning On a reasoning triple store, the prolog ?? construct in the predictate position used to fail with a type error. This bug has been fixed. Bug21642 - Agload buffer overflow Due to a buffer overflow, agload could corrupt memory and potentially crash. This has been fixed. Bug21640 - Race condition when reaping abandoned sessions Previously, if concurrent processes performed the operation to reap abandoned sessions, shared memory state could be corrupted, resulting in hangs or crashes. This has been fixed. (This bug was actually fixed in version 4.10.) Bug21625 - Large result sets could produce bogus triples with INSERT and CONSTRUCT In cases where there were more than 100,000 results, the machinery that prevents INSERT and CONSTRUCT queries from producing duplicate results could create invalid answers. This has been corrected. Bug21596 - Serialize-rdf/xml could fail on some predicates AllegroGraph's RDF/XML serializer would fail if a predicate ended with something other than a letter, a digit, an underscore or a dash. This conformed to an older version of the Extensible Markup Language but not to the most recent version. This has been corrected. Bug21586 - Possible divide by zero error in index optimizer Under some circumstances an index optimizer could attempt to perform a division by zero when logging statistics to agraph.log. This would result in a non-fatal error message being logged instead of the statistics. This has been corrected. Bug21580 - Improve external variable binding bookkeeping The values for externally supplied variables could be lost for some complex interactions of EXISTS (or NOT EXISTS) filters and UNIONS. This has been corrected. Bug21570 - Fix 'No session ports available' error message When trying to start a dedicated session and no more free ports were available, AllegroGraph failed with an internal error (calling @UUID on NIL) instead of producing an informative error message. This bug has been fixed. Also, the http error code was changed from 400 ("Bad request") to 503 ("Service unavailable") to indicate a temporary condition. Bug21553 - Fix MemoryCheckWhen without MemoryReleaseThreshold If the memoryReleaseThreshold configuration directive was not specified but memoryCheckWhen was, then query execution failed with type errors trying to compare memory footprint to NIL. With this fix, AllegroGraph refuses to start in these circumstances. HTTP Client Rfe12136 - Optimize front-end http server The timeout for reading headers was lowered to 5s which makes the front-end http server make better use of its available resources and more resilent when clients with http keep-alive don't close the connection in a timely manner. Also, a warning is logged if the number of idle http worker threads falls below 2, because it can easily cause a performance drop. SPARQL Rfe12296 - Support AllegroGraph direct reification in SPARQL AllegroGraph now supports direct reification in SPARQL queries using the <> magic property. This property allows you to bind the ID of a triple to a SPARQL variable which can then be used in reification. For example, the following query first adds two triples and then adds four additional triples that describe the first two: prefix : <eh://> prefix franz: <> insert data { :gary :likes :dogs . :gary :likes :cats . } ; insert { ?id :mentioned ?now . :gary :believes ?id . } where { ?id franz:tripleId (:gary :likes ?what) . bind( now() as ?now ) } See the documentation for additional details on direct reification and its advantages and limitations. Rfe12246 - Enhancements to SPARQL freetext queries The fti:match and fti:matchExpression predicates can now optionally retrieve the object of the matching triples and select the freetext index to use. (These enhancements are currently available only for AllegroGraph's native freetext indices. Support for Solr and MongoDB will be forthcoming). The new features rely on SPARQL magic properties to provide a syntactic shortcut for list patterns. As an example, this query retrieves the subject (into ?subject) and text (into ?text) for the freetext query that matches "nepal": select ?subject ?text { (?subject ?text) fti:match 'nepal' . } This query retrieves only subjects and uses only the freetext index named select ?item { ?item fti:match ('upgrade' 'comments') } More examples and information can be found in the documentation. Rfe12243 - Support AllegroGraph Social Networking Analysis in SPARQL AllegroGraph's SPARQL engine now supports a set of magic properties to enable Social Networking Analysis including centrality measures, path finding and cliques. See SPARQL Magic Properties for more details. Rfe12231 - Improve SPARQL geospatial integration AllegroGraph now supports several magic properties to make SPARQL / geospatial integration easier. For example, this query finds all of the points within a circle centered at 10, 2: select ?who ?where { (?who ?where) geo:inCircleXY ( ex:location 10 2) . } The following magic properties are provided: geo:inBoundingBoxXY geo:inBoundingBox geo:inCircle geo:inCircleKilometers geo:inCircleMiles (where geo is the usual Geospatial namespace abbreviation for <>). See SPARQL Magic Properties for additional details on using these properties. Rfe12171 - Improve variable GRAPH clause handling in some situations SPARQL specifies that a graph clause like GRAPH ?g { ... } must be implemented as if it iterated over each graph in the dataset and took the union of the resulting bindings. There are sometimes more efficient ways to get the same results and AllegroGraph uses these when it can. AllegroGraph was not, however, using information about any constraints on the variable ?g that were provided by other parts of the query. For example, a query like: ?s :p ?g . GRAPH ?g { ... } might produce a single binding for ?g but AllegroGraph was still iterating over all of the named graphs of the dataset. This has been optimized so that AllegroGraph will use the additional information when it can determine that the number of bindings for ?g are smaller than the number of graphs in the dataset. Rfe12057 - Manage disk and memory resources better for CONSTRUCT queries When there are many results, SPARQL CONSTRUCT queries buffer their results to disk to conserve memory. The cutoff value for when to buffer to disk was unnecessarily small which meant that CONSTRUCT went to disk more often then necessary. This has been corrected. Rfe12035 - Improve handling of some SPARQL queries with a GRAPH clause By definition, SPARQL must handle a GRAPH clause as if the query engine evaluated the clause once for each graph in the dataset. However, it is often possible to evaluate the clause more efficiently. This change improves the heuristics AllegroGraph uses to determine when the more efficient approach will achieve the correct results which means that some queries with a GRAPH clause will be executed more quickly. Rfe11319 - Support SPARQL federated queries (the SERVICE clause) AllegroGraph now supports SPARQL federated query using the SERVICE clause. The initial implementation is functional but not highly optimized. Rfe8063 - Support AllegroGraph temporal reasoning in SPARQL AllegroGraph's SPARQL engine now supports a set of magic properties to enable temporal reasoning. See SPARQL Magic Properties for more details. Bug21757 - SPARQL geospatial queries could fail Depending on the set up of a triple-store's geospatial subtype mappings, it was possible for a SPARQL query using geospatial extensions to fail to find results. This has been corrected. Bug21742 - SPARQL parser failed on quad patterns with collections The SPARQL parser could lose track of triple-patterns that arose from abbreviations using either nested blank nodes or collections. This has been corrected. Bug21725 - Correct potential error when SPARQL Update touches too many rows If a SPARQL DELETE ... INSERT ... WHERE request changes many rows, then AllegroGraph caches the results of the WHERE expression to disk to reduce memory pressure. In some cases, this could result in a error when the cached results were re-read. This only occurred for Update requests that touched more than 100,000 rows. This has been corrected. Bug21718 - Unused variable optimization could produce incorrect results AllegroGraph analyzes SPARQL queries that use DISTINCT in order to remove the collection of bindings for variables that are not needed to produce the final result. In some cases, it was possible for this analysis to cause the execution of a code path that added solutions incorrectly. This has been corrected. Bug21573 - SPARQL queries could intern strings unnecessarily SPARQL queries that used the FROM or FROM NAMED clause always tried to intern the graphs into the triple-store's string table. This was both unnecessary and could cause an error if the triple-store was read-only or a federation. This has been corrected. Bug21011, Bug21571 - handle nested SPARQL BGP syntactic sugar correctly Some SPARQL triple-patterns that involved nested brackets and object lists were being parsed incorrectly. This has been corrected. Some examples of queries that work correctly now but that failed without this change include: PREFIX : <ex:> SELECT * { :x :y [:z (:b [:d (:e :f),(:g :h)])] } PREFIX : <ex:> SELECT * { ?psi :a :b, [:c :d], [:e :f] . } and PREFIX : <ex:> SELECT * { ?psi :a :b, [:c [:d :e]] . } Improve Pattern estimation where there is no :p index Improved query pattern estimators for triple-stores with no predicate index (e.g., :posgi, :psogi, etc.). This will help reorder query clauses more intelligently. AGWebView Rfe12325 - Add audit log query interface to AGWebView There is now an audit menu item in the admin menu. This provides access to the audit log query page which lets you filter audit log events by user, class and date. Changes to the Lisp API Bug21708 - Fix some low-level Lisp API geospatial functions Some low-level Lisp API functions had not been enhanced to account for AllegroGraph's geospatial encoding. This has been corrected. Bug21689 - Lisp client printer variable leakage On a remote store, the run-sparql function failed with an 'Illegal sharp character' error if print-length was set to a small value. This bug and more generally the leakage of printer variables into run-sparql has been fixed. Documentation No significant changes. Python Client No significant changes. Java Client Rfe12118 - AGRepository with connection pooling With this change, an AGRepository can now be configured to use a connection pool via the setConnPool method, enabling Sesame apps to transparently benefit from connection pooling. See the javadoc for setConnPool for details. Rfe10513 - Configurable blankNodesPerRequest With this change, the Java client supports configuring the number of blank nodes automatically fetched per request to the server by the AGRepository's AGValueFactory. The default remains at 100, but it can be configured by setting the property: com.franz.agraph.repository.blankNodesPerRequest Applications can also configure it directly, see the javadoc for AGValueFactory#setBlankNodesPerRequest for details. Rfe9869 - Support SPARQL/Update from Jena With this change the Jena adapter supports an execUpdate method in AGQueryExecution for executing SPARQL Updates. Jena Tutorial example13 has been augmented to demonstrate its use. AllegroGraph 4.10 Server Server <<ttp://example/> would cause this error: inline data syntax SPARQL 1.1 recently switched from a syntax for inline data that used BINDINGS to one that uses VALUES. The standard also made it possible to use inline <X> TO GRAPH <X> Server indices Server franz Server indices=<N> option to truncate overly long messages. The default is order Server Changes Rfe11338 - CJK bigram tokenizer A new tokenizer :simple-cjk for Chinese/Japanese/Korean (CJK) text has been added.. Server Server Server); AllegroGraph 4.3.2 Server. AllegroGraph 4.3.1 Server). AllegroGraph 4.3 Server !<<description>. <> syntax. Now, POSTing to /session correctly opens a session on a remote triple store.. AllegroGraph 5.0 note: the module to require is :agraph5 and the file is agraph5.fasl starting in version 5.0.. AllegroGraph 4.2.1c Server. AllegroGraph 4.2.1a Server. AllegroGraph 4.2.1. Server RDF4J. Server <store>[rdfs++# <clearNamespaces/> tag to fail. This has been rectified.. AllegroGraph 4.1.1 Welcome to AllegroGraph 4.1.1. This release of AllegroGraph provides overall product stability and efficiency improvements, with special attention to improvements in AGWebView and documentation. Server. AllegroGraph 5.0 note: the file is agraph5.fasl starting in version 5.0. Indices Indices There is a new, high-level document describing AllegroGraph's system of triple indices. AllegroGraph 4.0.6a Welcome to AllegroGraph 4.0.6a. This release of AllegroGraph v4.0 provides overall product stability and efficiency improvements. Server. AllegroGraph 4.0.6 Welcome to AllegroGraph 4.0.6. This release of AllegroGraph v4.0 provides overall product stability and efficiency improvements. Server <>. This means that these UPIs will be be read and written in this format. For example, the triple <foo x="y"/> by creating references to empty string literals instead of blank nodes like the RDF/XML syntax specification mandates. Now, the RDF/XML parser correctly creates blank nodes.. AllegroGraph 5.0 note: the module to require is :agraph5 and the file is agraph5.fasl starting in version 5.0.. AllegroGraph 4.0.5d Welcome to AllegroGraph 4.0.5d. This release of AllegroGraph v4.0 provides overall product stability and efficiency improvements. Server Changes. AllegroGraph 4.0.5c Welcome to AllegroGraph 4.0.5c. This release of AllegroGraph v4.0 provides efficiency improvements in the area of reasoning. Server. AllegroGraph 4.0.5b Welcome to AllegroGraph 4.0.5b. This release of AllegroGraph v4.0 provides overall product stability and efficiency improvements. Server. AllegroGraph 4.0.5 Welcome to AllegroGraph 4.0.5. This release of AllegroGraph v4.0 provides overall product stability and efficiency improvements. Server, 5. AllegroGraph 4.0.4 Welcome to AllegroGraph 4.0.4. This release of AllegroGraph v4.0 provides overall product stability and efficiency improvements. Server !<foo> were not printing correctly (the angle brackets were left off). This patch corrects both of these problems.. AllegroGraph 4.0.3 Welcome to AllegroGraph 4.0.3. This release of AllegroGraph v4.0 provides overall product stability and efficiency improvements. Server indices After an index merge, a task sometimes runs to perform maintenance of the data structures used to track deleted triples. This operation requires a full scan of indices..html, has been updated to enlarge on the information about the new AllegroGraph Java Client. [5.0 note: the convert3to4.html document is no longer part of the documentation set.]. AllegroGraph 4.0.2 Welcome to AllegroGraph 4.0.2. This release of AllegroGraph v4.0 provides overall product stability and efficiency improvements. Server <setNamespace> and <removeNamespace> tags has been shown to exist (TopBraid), so they have been implemented. indices not updated aggressively Previously, if regular query operations were performed against a database that was continually growing, file handle exhaustion could occur due to incorrect reference counts on indices. indices. Server. Serverorder indices indices. Changes to earlier releases Significant changes in AllegroGraph 4.0 compared to earlier releases makes changes to earlier releases irrelevant to release 4.0 and later. Contact allegrograph-support@franz.com if you have questions about releases prior to 4.0.
https://franz.com/agraph/support/documentation/current/change-history.html
CC-MAIN-2018-43
en
refinedweb
#include <MeteorMgr.hpp> Inherits StelModule. Initialize the MeteorMgr object. Implements StelModule. Draw meteors. Reimplemented from StelModule. Update time-dependent parts of the module. This function adds new meteors to the list of currently visiable ones based on the current rate, and removes those which have run their course. Implements StelModule. Defines the order in which the various modules are drawn. Reimplemented from StelModule. Get the current zenith hourly rate. Set the zenith hourly rate. Set flag used to turn on and off meteor rendering. Get value of flag used to turn on and off meteor rendering. Set the maximum velocity in km/s.
http://www.stellarium.org/doc/0.10.4/classMeteorMgr.html
CC-MAIN-2016-36
en
refinedweb
xml\:lang=, xml:lang Namespaces are not supported for this format. If there is an indication prior to this sentence that the browser supports namespaces, test passes if the background is blue . Otherwise, test passes if the background is yellow . Assertion: [Exploratory test] An xml\:lang= selector with a value that matches an xml:lang attribute value will not produce styling for documents served as XML if the browser supports namespace declarations, but will otherwise. selectors-071 Result summary & related tests
http://www.w3.org/International/tests/html-css/generate?test=selectors-071
CC-MAIN-2016-36
en
refinedweb
Many of you who have upgraded to the new Beta version of sendmail have had or are still having problems getting mailman to run. The problem lies with smrsh itself. I had asked for help on the sendmail news group and had received an answer quite promptly on how to fix the problem and get Sendmail to play nice with Mailman. With the new smrsh, the directory to put links in to allow programs to run was moved. from default /etc/smrsh (this is what I had) to /usr/adm/sm.bin you can keep this and create new links or copy the compiled programs in to that directory, or you can change it. To change it you: cd /wherever/sendmail-8.10.0.Beta12/smrsh vi/emacs/other_editor smrsh.c locate this bit of code /* directory in which all commands must reside */ #ifndef CMDDIR # define CMDDIR "/usr/adm/sm.bin" #endif /* ! CMDDIR */ and change it to wherever you like recompile by doing ./Build and install by doing ./Build install See if mailman runs correctly. If not look in your maillog and see what the problem is. Most likely it will be sendmail complaining about the wrapper/script wanting the wrong gid. Feb 20 13:58:51 doink Mailman mail-wrapper: Failure to exec script. WANTED gid 12, GOT gid 2. (Reconfigure to take 2?) If that's the case you have to reinstall mailman. run ./configure --with-mail-gid=<what number it wants> run make install test and have fun Jamie webmaster at planetphat.com
https://mail.python.org/pipermail/mailman-users/2000-February/003751.html
CC-MAIN-2016-36
en
refinedweb
options 1.0 Container for flexible class, instance, and function call options A module that helps encapsulate option and configuration data using a multi-layer stacking (a.k.a. nested context) model. Classes are expected to define default option values. When instances are created, they can be instantiated with “override” values. For any option that the instances doesn’t override, the class default “shines through” and remains in effect. Similarly, individual method calls can set transient values that apply just for the duration of that call. If the call doesn’t set a value, the instance value applies. If the instance didn’t set a value, the class default applies. Python’s with statement can be used to tweak options for essentially arbitrary duration. This layered or stacked approach is particularly helpful for highly functional classes that aim for “reasonable” or “intelligent” defaults and behaviors, that allow users to override those defaults at any time, and that aim for a simple, unobtrusive API. It can also be used to provide flexible option handling for functions. This option-handling pattern is based on delegation rather than inheritance. It’s described in this StackOverflow.com discussion of “configuration sprawl”. Unfortunately, it’s a bit hard to demonstrate the virtues of this approach with simple code. Python already supports flexible function arguments, including variable number of arguments (*args) and optional keyword arguments (**kwargs). Combined with object inheritance, base Python features already cover a large number of use cases and requirements. But when you have a large number of configuration and instance variables, and when you might want to temporarily override either class or instance settings, things get dicey. This messy, complicated space is where options truly begins to shine. Usage from options import Options, attrs class Shape(object): options = Options( name = None, color = 'white', height = 10, width = 10, ) def __init__(self, **kwargs): self.options = Shape.options.push(kwargs) def draw(self, **kwargs): opts = self.options.push(kwargs) print attrs(opts) one = Shape(name='one') one.draw() one.draw(color='red') one.draw(color='green', width=22) yielding: color='white', width=10, name='one', height=10 color='red', width=10, name='one', height=10 color='green', width=22, name='one', height=10 So far we could do this with instance variables and standard arguments. It might look a bit like this: class ClassicShape(object): def __init__(self, name=None, color='white', height=10, width=10): self.name = name self.color = color self.height = height self.width = width but when we got to the draw method, things would be quite a bit messier.: def draw(self, **kwargs): name = kwargs.get('name', self.name) color = kwargs.get('color', self.color) height = kwargs.get('height', self.height) width = kwargs.get('width', self.width) print "color='{}', worse if we want to set default values for all shapes in the class. We have to rework every method that uses values, the __init__ method, et cetera. We’ve entered “just one more wafer-thin mint…” territory. But with options, it’s easy: Shape.options.set(color='blue') one.draw() one.draw(height=100) one.draw(height=44, color='yellow') yields: color='blue', width=10, name='one', height=10 color='blue', width=10, name='one', height=100 color='yellow', width=10, name='one', height=44 In one line, we reset the default for all Shape objects. options proves simpler. Supporting shining through if they’re not set at higher levels. This stacking or overlay model resembles how local and global variables are managed in many programming languages. Magic Parameters Python’s *args variable-number of arguments and **kwargs keyword arguments are sometimes called “magic” arguments. options takes this up a notch, allowing setters much like Python’s property function or @property decorator. This allows int (integer/numeric) values, your module auto-magically supports relative parameters for height and width.: they’re useful, they’re enormously useful and highly leveraged, leading to much simpler, much higher function APIs. We call them ‘magical’ functions/methods for which the set of known/possible parameters is limited–these work just fine with classic Python calling conventions. For those, options is overkill. /objects. This may seem a doubly rarefied case–and it is, relatively speaking. But it does happen, and when you need multi-level processing, it’s really, really super amazingly handy to have Subclass options may differ from superclass options. Usually they will share many options, but some may be added, and others removed. To modify the set of available options, the subclass defines its options with the add() method to the superclass ) Because some of the “additions” can be prohibitions (i.e. removing particular options from being set or used), this is “adding to” the superclass’s options in the sense of “adding a layer onto” rather than strict “adding options.” An alternative is to copy (or restate) the superclass’s options. That suits cases where the subclass is highly independent, and where changes to the. This makes the following equivalent: q1 = Quoter('[', ']') q2 = Quoter(prefix='[', suffix=']') An explicit addflat() method is provided not as much for Zen of Python reasons (“Explicit is better than implicit.”), but because flat arguments are commonly combined with abbreviation/shorthand conventions, which may require some logic to implement. For example, if only a prefix is given as a flat argument, you may want to use the same value to implicitly set the suffix. To this end, addflat returns the set of keys that it consumed: if args: used = self.options.addflat(args, ['prefix', 'suffix']) if 'suffix' not in used: self.options.suffix = self.options.prefix? A use case for this: “Abbreviation” options that combine multiple changes into one compact option. These would probably not have stored values themselves. It would require setting the “dependent” option values via side-effect rather than functional return values. - The author, Jonathan Eunice or @jeunice on Twitter welcomes your comments and suggestions. Recent Changes - Commenced automated multi-version testing with pytest and tox. Now successfully packaged for, and tested against, Python 2.6, 2.7, 3.2, and 3.3. - Options is now packaged for, and tested against, PyPy 1.9 (based on 2.7.2). The underlying stuf module and orderedstuf class is not certified for PyPy, and it exhibits a bug with file objects on PyPy. options works around this bug, and tests fine on PyPy. Still, buyer beware. - Versions subsequent to 0.200 require a late-model version of stuf to avoid a problem its earlier iterations had with file objects. Versions after 0.320 depend on stuf for chainstuf, rather than the otherstuf sidecar. - Now packaged as a package, not a set of modules. six module now required only for testing. - API for push() and addflat() cleaned up to explicitly delink those methods. Installation pip install options To easy_install under a specific Python version (3.3 in this example): python3.3 -m easy_install options (You may need to prefix these with “sudo ” to authorize installation.) -: options-1.0.xml
https://pypi.python.org/pypi/options/1.0
CC-MAIN-2016-36
en
refinedweb
China, Russia Try To Hack Australia's Upcoming Submarine Plans 83 An anonymous reader writes: Chinese and Russian spies have attempted to hack into the top secret details of Australia's future submarines (paywalled), with both Beijing and Moscow believed to have mounted repeated cyber attacks in recent months. One of the companies working on a bid for Australia's new submarine project said it records between 30 and 40 cyberattacks per night. Cool paywall, bro (Score:1, Informative) Nice ad for a subscription, and another link being a brief blurb. Journalism at its best. Re:Cool paywall, bro (Score:5, Insightful) Anything that references an article behind a paywall should automatically get rejected. Re:Cool paywall, bro (Score:5, Insightful) It's a Rupert paper. Any time I'm paywalled by News Corporation, he's doing *me* a favour by disallowing the reading of his trashy article. Re: (Score:2) Re: (Score:1) Don't fool me, bro. I read this before. You don't bother to check the date, don't you? The day Murdoch announced to buy National Geographic was 2015/09/10, the day Murdoch FIRED the magazine's staff was 2015/11/04. You either did not mention the difference of two stories, Murdoch bought the magazine and in other one, he laid off the staff. Damn you, before name calling other, check your stupid measure. Re: (Score:1) What's the big deal for China or Russia or anyone? The subs haven't been designed yet. No tenders let. No specifications finalised. More likely its the tenderers trying to get an inside edge on each other. They are from France, Germany and Japan. When in doubt, Follow The Money. Shocking! (Score:4, Insightful) Foreign intelligence agencies trying to learn the specifics of a new military system? I am shocked, shocked! [youtube.com] The only news here is that there are signs of it, and seemingly attributable ones as well. Re: (Score:3) The only news here is that there are signs of it That isn't news either. My home router gets more than 30-40 "cyber-attacks" per night. and seemingly attributable ones as well. The "attribution" is just speculation. They have no actual evidence. They are just softening up the public for a money-grab to conduct "cyber-warfare". Re: (Score:1) Let's just scrap the CIA. America just isn't interested in "trying to learn the specifics of a new military system", unlike the nosy Chinese/Russians. Re: (Score:1) my company website gets 10-100 "cyber-attacks" per hour, for last 3 years. More precisely, those are blind shots at admin login pages typical for popular CMSs. After filling .http_access with reject for some 80 blocks covering mostly Russia, Ukraine, Belarus, Kazakhstan and China at least I'm not sending any bytes back. Internet (Score:5, Insightful) Why do they have this kind of stuff where it can be reached from the internet? I don't see why that's necessary. If it's convenient for the designers then it's too damn convenient for your enemies. Re: (Score:2) Why do they have this kind of stuff where it can be reached from the internet? I don't see why that's necessary. If it's convenient for the designers then it's too damn convenient for your enemies. From the headline: Chinese and Russian spies have attempted to hack into the top secret details of Australia's future submarines (paywalled) Sounds to me like they didn't want to pay for them either... *grin*. That's before you eve Re: (Score:2, Informative). Actually what updates exactly do you need for a computer that's not on the network at large? Most security updates would be superfluous, and the vast majority of 'fix' updates fix stuff doesn't fix system or program breaking issues for most users, barring those introduced by another update. As for commercial software wanting to phone home, that's easily resolved by NOT choosing such software in the first place. It's perfectly practical to air-gap networks if you go in with the mind set of it from the get go. Re: (Score:2) Re: (Score:2) Then why don't they use Linux ? Linux works just fine. It doesn't need to talk to the outside world. All it needs for updates is mirror it can contact. The mirror could be internal, getting it's packages from the Internet, logged, etc. Re: (Score:1) Re: (Score:3) When you're talking billions of dollars I think it might be possible to bypass the internet. I know I have two computers that never see the internet. Updates are done on one manually and the other never. It's entirely possible still to pass data over a telephone network with a modem. I just think when extreme secrecy is required then extreme actions need to be taken. Re: (Score:1) "it is not practical to air-gap computer networks anymore" Anything dealing with classified military information certainly should be air-gapped. It's not hard to do and only creates some minor inconvenience when you need to share that data to a 3rd party. Hand delivered encrypted USB drives is one example. The inconvenience is nothing compared to having a enemy or competitor get their hands on the data. And an air-gapped network will also cut down on the number of hours the employees waste surfing the web. " Huh? (Score:3) DoD work is supposed to be air gaped when classified. Sure, there is a difference between military contractors and Government. Guess which ones give up information? Not the guys building the military gear, because they are held accountable for their actions. Re: (Score:2) Who says it can actually be reached from the internet? Re: (Score:2) Re: (Score:3) US contractors need links back to their multinationals and mil, global sourcing of US parts and US/UK trained experts. Australia could do all the work at a secure site, base, port but that is been blocked by the USA. The problem is the US would then not share its more secure export grade electronics. So Australia has to keep its networks wide open to keep US contractors happy and ensure jobs and profits are shared with the US military–industrial complex. Th Really? 30-40/night (Score:4, Insightful). Re: (Score:3). Or someone with a political ax to grind who's making it all up... Re: (Score:3) Re: (Score:2) What is a meaningful attack though? An attack that goes through is meaningful and if they let pass 40 attacks/night, they're doing a really bad job. Security is pretty much black and white, you either get compromised or you don't. Re: (Score:2) What is a meaningful attack though? That's what I wish they would define. An example of a "meaningful" attack might be a flurry of portscans from a single IP address hitting all of their known public IP addresses in sequence in a short timeframe (indicating they were the specific target of the scan). Otherwise they just sound like a software firewall trying to justify its own existence. Re: (Score:1) Nothing to see. (Score:2) Back in my sat technology days, that would be an average night. It went on like that for years. Its only news if they break in. What Intelligence Agencies Should Be Doing (Score:2, Insightful) I see absolutely nothing wrong with this. This is exactly what intelligence agencies should be doing - investigating rival countries' military capabilities and assessing threats to the nation. Meanwhile, what intelligence agencies most definitely shouldn't be doing is mass surveillance of their own people. Intelligence agencies don't exist to suppress descenting opinions. They don't exist to erode freedom. They don't exist to keep the populous inline. The reason they exist to assess external threats to t Re: (Score:2) Go live in China or Russia if you think it's so great there... Moron... Re: What Intelligence Agencies Should Be Doing (Score:1) He does. Re: (Score:3) How do you know when you have the right plans? (Score:3) When you steal plans for a multi-billion dollar project, how do you know when you've got the real plans, and when you've got decoy plans that were carefully developed to be plausible, yet incorrect? Re: (Score:2) The problem with any files now found is the US ability to redesign fake plans for any project and have other nations waste decades on junk plans. eg Operation Merlin... [wikipedia.org] Would any nation trust digital files found in Australia, unencrypted on an Re: (Score:2) Re: (Score:2) You give the government waaaay too much credit. Amazing news. (Score:2) only 30 to 40? (Score:3) Just try it (Score:1) The country was founded by convicts and houses the most poisonous animals on the planet. I bet they put some of that stuff into their submarine technology. I wonder, how they count -- and what... (Score:3) I wonder, what these numbers mean because I — without doing any classified research whatsoever — get log-entries like these every day: Do I get to count each entry as a separate attack? Or one "attack" per remote IP? Re: (Score:2) Jesus how many entries do you allow before you just ban the IP? I allow like 3 and then a 15min ban. Re: (Score:2) I also allow three (depending on the attempted login, actually — trying to get in as "root" will cause an immediate ban, because I would never attempt that myself), and then ban permanently (until the router is rebooted, rather). However, from the time the log-watching script decides to issue a ban and the time the ban is actually in place, there is a delay, because the router is slow and establishing a Everyone's got one (Score:3) Meh - everyone has a submarine these days. . . Even rebel separatist groups. Here in the Philippines the Moro Islamic Liberation Front (MILF) sadly have trouble with the Google ranking due to competition in the namespace for that acronym. However, that didn't stop plans for the purchase of a Swede-made MSM Type A midget submarine [manilatimes.net], which was to be used to disrupt the development of an oil and gas project in the now hotly disputed South-china Sea. The MILFs are one of several separatist groups in the Philippines, which come in Islamic and Communist, and just-plain-thug varieties. The formation of the of the MILF is actually, unsurprisingly, a tragic story. In the 60s with the incumbent government of the Philippines, proceed with plans to invade and reunite neighboring Sabah, which was granted under a lease, but somehow after World War 2 ended up as Malaysian territory. Troops from the western region of Mindanao were selected and trained to form an elite squadron. When the troops learned that their mission would involve lethal combat with their neighboring kin-folks they refused to participate, so they were massacred by the Philippines Armed Forces on March 18, 1968 [wikipedia.org]. This led to years of uprising and political unrest, and it was only recently that the Philippines Government formally acknowledged that the incident occurred. Reading about this and other affairs helped me to learn about governments, terrorism, political intrigue and rebel groups. We live in a violent world where democracy and other formal government processes seem to be a thin, fragile structure over game-of-thrones style chaos. Re: (Score:2) And during the years of Spanish influence nearly all of the tribes adopted adopted Catholicism, except for the Igorotte highlanders and the Aeta who held to their ancient animism, while the Moro proceeded with their adopted Islam. . . of courses over the years there have been Hindu influences - eg the Tagalog tribe had ties ties with the Kingdom of Medang in Java, and northern Philippines, with its proximity to China. . . . these are generalizations of course, individuals, obviously made many and varied per Actual text from paywalled article (Score:1) submarine Re: (Score:1) IANAL but there must be something immoral or illegal about posting content from behind a paywall. Damn those spies! (Score:2) I guess these are the same spies that are trying to hack into my website every night! I guess they're lucky they're only getting Chinese and Russian ones! Seriously though, three news articles are linked to in this story and zero of them have any more information that differentiates this even remotely from the standard brute force hacking attempts that I'm sure everyone that reads Slashdot puts up with on a daily basis on their various servers and systems. As far as I can tell for anyone in IT here in Austral Re: (Score:2) So many nations want the contracts, jobs, cash that *anyone* could be using random internet cover to find out more. Not so much mil secrets just the governments staffs thinking on keeping local jobs vs fully importing a turn key sub. A lot of cash of decades is in play. Just knowing what to present and when could be a winning contract. W A great trolling opportunity wasted.. (Score:1) 30 to 40 a night? (Score:2) I get 30 to 40 attempts an hour, and that's just the ssh attempts. Secret 'Submarine Plans' and the Internet .. (Score:2) Paywalled? WTF? (Score:2) > Chinese and Russian spies have attempted to hack into the top secret details of Australia's future submarines (paywalled)... If the secret details are paywalled why those stupid spies didn't simply pay to access them? Budget cuts? 30 and 40 cyberattacks per night? LOL (Score:2) :-) This amount of attacks simply means that they don't care. I am working on one website of known Japanese corporation and this is the log from my IDS - how many attacks were detected/prevented a day - on average between 100 - 200... and that is just one commercial company: .... 105x 2015-11-02 122x 2015-11-03 226x 2015-11-04 108x 2015-11-05 125x 2015-11-06
https://tech.slashdot.org/story/15/11/09/209227/china-russia-try-to-hack-australias-upcoming-submarine-plans
CC-MAIN-2016-36
en
refinedweb
8 August 2011 By clicking Submit, you accept the Adobe Terms of Use. This guide assumes you are familiar with the Flash Professional workspace and have a basic knowledge of working with FLA files and ActionScript.: To change the alpha (transparency) of a graphic: Note: If you use a TLF text field, you can skip Step 2. For classic text or any type of drawing object, however, you need to convert the asset to a movie clip or button symbol before applying the color effect.: shape_mc.alpha = .5; To change the color of a graphic: import flash.geom.ColorTransform; // Change the color of the shape var ct:ColorTransform = new ColorTransform(); ct.color = 0x0099FF; shape_mc.transform.colorTransform = ct;: This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License. Permissions beyond the scope of this license, pertaining to the examples of code included within this work are available at Adobe.
http://www.adobe.com/devnet/flash/learning_guide/graphic_effects/part02.html
CC-MAIN-2016-36
en
refinedweb
Download source - 31 KB The French version of this article is available at this link. A few years ago, I developed the storage part of a NoSQL DBMS technology. The server should store data in the form key / value on the disk, and be able to manage various aspects such as transactional system, replication, lock, repair corrupt segments, compaction and counters on the number of data in the database, wasted space ... To overcoming all these features, the data should be recorded with a header block, and all of these blocks should be placed in several indexes with multiple criteria. Initially, I used the standard tools provided by the NET Framework: SortedList, Dictionary, SortedDictionary, but soon, malfunctions were found during load testing. SortedList Dictionary SortedDictionary It was therefore necessary to create my own list to help make a server that can handle very large amounts of data, fast, with low memory usage, with the ability of indexing data sets by sorting several keys. Before starting to present my solution, I will explain the problem. For this, we observe the behavior of different algorithms offered by the. NET Framework while trying to inject their 10 million objects with random data. During these tests, I will collect the amount of memory used and the time spent for each tool tested. This is the object that will be used to fill our indexes: public class DataObject { private static readonly Random Random = new Random(); private readonly Guid _id = Guid.NewGuid(); private readonly DateTime _date = new DateTime(Random.Next(1980, 2020),Random.Next(1,12), Random.Next(1,27)); private readonly int _number = Random.Next(); public Guid ID { get { return _id; } } public DateTime Date { get { return _date; } } public int Number { get { return _number; } } } A factory will generate as much as desired instance in our test: public static class DataFactory { public static List<dataobject> GetDataObjects(int number) { var result = new List<dataobject>(); for(int i = 0; i < number; i++) result.Add(new DataObject()); return result; } } You will notice that the factory directly returns a list instead of an enumeration. This is for not affecting the time calculations with the generation time of each object. We will also establish a performance counter to collect data on the time and amount of memory used by the algorithm: public class PerfCounter : IDisposable { private readonly Stopwatch _stopwatch = new Stopwatch(); private readonly long _currentMemoryUsage = Process.GetCurrentProcess().WorkingSet64; private readonly string _filePath; public PerfCounter(string filePath) { if( string.IsNullOrEmpty(filePath) ) throw new FileNotFoundException(); string directoryPath = Path.GetDirectoryName(filePath); if( string.IsNullOrEmpty(directoryPath) ) throw new FileNotFoundException(); Directory.CreateDirectory(directoryPath); if (!File.Exists(filePath)) File.AppendAllText(filePath, "Memory\tTime\r\n"); _filePath = filePath; _stopwatch.Start(); } public void Dispose() { _stopwatch.Stop(); GC.Collect(); long memory = Process.GetCurrentProcess().WorkingSet64 - _currentMemoryUsage; File.AppendAllText(_filePath, _stopwatch.Elapsed.ToString() + "\t" + memory + "\r\n"); } } Now we have the needed tools to start our analysis, kick off with the first tool available. A SortedList is a collection of key / value pairs sorted by key which is accessible by key and by index. SortedList We will inject 1 million random objects using the following method: private static void TestSortedList() { var sortedList = new SortedList<Guid, DataObject>(); for (int i = 0; i < 10; i++) { List<DataObject> datas = DataFactory.GetDataObjects(100000); using (var counter = new PerfCounter("c:\\Indexation\\SortedList.txt")) { foreach (DataObject dataObject in datas) { sortedList.Add(dataObject.ID, dataObject); } } } } Here is the results of this little program: Total insertion time for 1 million items : 785 seconds Total memory to index 1 million items : 54 MB As you can see, the inclusion in a SortedList is extremely slow because it took us 13 minutes to insert one million objects. I call to mind that our goal is to manage tens of millions of objects, it is unthinkable to use this method. For a very simple reason: a list is an array stored in a contiguous elements. When you add an item to the end of the list, it just stacked like this: This provides no cost to the algorithm. This is also why with performance testing on nonrandom data, you can believe that the inclusion in a list is extremely fast because the system adds the data in the list. By cons, if you want to insert an element in the middle of the list (this is a sorted list), we have no choice to shift the entire block of higher elements to make room for new data, and then pick up this series at the end of the list. Obviously, this is very costly because more are the elements in the list, and more the operation of copy / paste will be long, especially when it will insert elements at the beginning of the list. Here, the equation for the trendline on the insertion time is Thus, importing 10 million random elements in a SortedList takes theoretically 36 hours! A dictionary is a collection of generic objects of type key / value that is based on the Hashtable algorithm. Hashtable As before, we will get the execution time and the amount of memory used by the algorithm for 10 million objects. Here is the result: Total insertion time for 10 million items: 2 seconds Total memory to index 10 million items: 638 MB The insertion time is much better, but the memory usage is too high. Imagine if we want to create five indexes with 20 bytes for each object, the weigh for all elements will be 200 MB while the algorithm would take 3 GB and 190 MB of RAM! When adding an item in a hashtable, a key is used to calculate a relatively unique number (called hashcode). I say "relatively" because there may be two different keys is the same hashcode, this is called a collision. hashtable hashcode To overcome this problem, a hashtable has internally a two-dimensional table (called buckets) that contains entries in the form of structures in several parameters. these are the entries that take up much space. hashtable Usually, an entry is composed of a hashcode, a key, a value and a pointer to the next element, because buckets are actually arrays of linked lists. The interest of the whole structure is to store data intelligently indexed by hashcode. Basically, the hashcode is subject to a modulo of the length of the bucket which gives the index where entry is stored. Once the index is known, a loop route all matching entries to compare all the keys and their hashcodes with the item to insert. If the same key is found, the system will return an error. If the same hash code is found while the key is different, a new entry is created in the linked list. If you look at the graph, you can observe that at 6 million objects, memory do an exceptionally raise of 100%. I guess this had to be a pre-allocation in the bucket after a large number of collisions. Thus, more are objects to index, higher the probability of collision is high, forcing the hashtable to allocate a lot of memory to maintain its level of performance. This technique is not recommended when you want to treat a very large number of objects! According to MSDN, the SortedDictionary class was designed to overcome the problem of insertion speed of a SortedList. Let's see what happens with our test system: SortedDictionary Total insertion time for 10 million items: 5 secondsTotal memory to index 10 million items: 612 MB Total insertion time for 10 million items: 5 secondsTotal memory to index 10 million items: 649 MB According to the results, you'll wonder why use a SortedDictionary rather than just Dictionary? since with equivalent memory consumption, the Dictionary is twice as fast! Dictionary Here we come to a second dimension of our problem, indexing at different depths. All our tests are based on a single key search, but there are cases where we want to sort objects by a combination of date, size and by id. If we wanted to do this with a simple Dictionary, we would have to implement it as follows: Dictionary<DateTime, Dictionary<int, Dictionary<Guid, DataObject>>> index = new Dictionary<DateTime, Dictionary<int, Dictionary<Guid, DataObject>>>(); As you can see, we imbricate three dictionaries. This not only adds complexity to the implementation level, but also the amount of memory used will be multiplied by 3, requires 2GB of RAM instead of 650MB of SortedDictionary. The SortedDictionary eliminates the problem with the mandatory implementation of a "comparer" that will allow it to do a binary search on a list of internal dictionary. On the same model as the previous tests, this is what gives the insertion of 10 million objects in a SortedSplitList: SortedSplitList Total insertion time for 10 million items: 5 secondsTotal memory to index 10 million items: 235 MB The algorithm takes as mush time as SortedDictionary for insertion, but it consume three times lower memory than Dictionary or SortedDictionary ! To develop this algorithm, I tried to improve the speed of insertion for a standard SortedList. As explained in the analysis of performance for the SortedList, the slowness just appears when inserting an element at the beginning of a list. Indeed, the system must first remove all upper elements to element to be inserted, and then re-stacking thereafter. For realizing the lost time to do this operation, just stack one hundred plates on top of each other, and then insert one into the eighteenth position. The method is therefore not to work on a single list that can potentially be huge, but to work on several small lists rows: Internal structure of the SortedSplitList Suppose we want to insert the element 27. As a first step, we do a binary search on the set of lists to find one that will contain the element; Search in the list to accommodate the new element Once the resulting list, simply make a second binary search inside this one, and insert like a classic element sorted list: Locating integration in the sub-list Insert the new item. With this simple method, the impact on the insertion time is greatly reduced while maintaining the advantage of optimal memory consumption! To use a SortedSplitList, you must first declare a "comparer" default object you want to index: comparer public class TestObject { public int Id { get; set; } public int Id2 { get; set; } public DateTime Date { get; set; } public string Data { get; set; } } public class CompareById : IComparer<TestObject> { public int Compare(TestObject x, TestObject y) { if (x.Id == y.Id) return 0; if (x.Id < y.Id) return -1; return 1; } } Then create a new instance of SortedSplitList like this: SortedSplitList<TestObject> sortedSplitList = new SortedSplitList<TestObject>(new CompareById()); Once done, you can add, retrieve or remove items from the list by using the Add(), Retrieve() or BinarySearch() and Remove() RemoveAll() or Clear(). Add() Retrieve() BinarySearch() Remove() RemoveAll() Regarding the Retrieve method, the first argument must be an object instance configured with the parameters on which you want to search. Retrieve For example, if I want to find a TestObject whose identifier is 78 for the value of the Data field, I have to do it the following way: TestObject var searchObject = new TestObject {Id = 78}; var foundObject = sortedSplitList.Retrieve(searchObject); Console.WriteLine(foundObject != null ? foundObject.Data : "Data not found."); The BinarySearch() method allows to retrieve the index of the item that is sought, giving the ability to browse the SortedSplitList according to your needs. To learn how to work with the object returned by the method BinarySearch (index), see on MSDN dealing with BinarySearch() method on generic list. BinarySearch Imagine our TestObject have a composite key of Date + Id. How to store and retrieve an element in a SortedSplitList? To store items, it's easy. Simply define a default comparer as follows: public int Compare(TestObject x, TestObject y) { int dateResult = x.Date.CompareTo(y.Date); if (dateResult == 0) { if (x.Id == y.Id) return 0; if (x.Id < y.Id) return -1; return 1; } return dateResult; } Thus, the Add() and Retrieve() methods allow you to work with your composite key elements. Now, what to do if we want to know all the elements of a given date? Seeing that allows SortedSplitList provides access to a enumerator, a beginner in programming would be tempted to use the following method: foreach (var testObject in sortedSplitListSortedById.Where (a => a.Date == DateTime.Parse("2003/01/01"))) Console.WriteLine(testObject.Data); This can work perfectly if your list contains few elements, but the performance of this line of code can be catastrophic if the list contains millions of objects. The appropriate method in this case is to make a binary search with a specialized search for date compare, then browse through the items on the list in reverse to find the first. Once this is found, simply retrace the list in the right direction by returning only the items that correspond to the search criteria. If all this sounds complicated to implement, the PartiallyEnumerate() method may be useful to carry out this task. Here's how to use it. PartiallyEnumerate() First of all, we define a special look for the search by date, and only by date: public class CompareByDate : IComparer<TestObject> { public int Compare(TestObject x, TestObject y) { return x.Date.CompareTo(y.Date); } } Then we can call the PartallyEnumerate() method, passing it an object of comparison with the date we want, and comparer above: PartallyEnumerate() foreach (var testObject in sortedSplitList.PartiallyEnumerate(new TestObject() { Date = DateTime.Parse("01/01/2003") }, new CompareByDate()) ) Console.WriteLine(testObject.Id); With this method, the foreach will display all the ids Elements dated 01/01/2003 without taking several minutes if our list contains millions of objects. foreach Although the indexing provided by the .NET framework are more than enough for most needs software tools, SortedSplitList can be very useful for treating a large volume of objects in your application, it combines the optimizing memory of a SortedList, the speed of inserting a SortedDictionary while giving the opportunity to work on multiple keys. So you can easily index your data based on multiple criteria without fear of too quickly appear an OutOfMemoryException. OutOfMemory.
http://www.codeproject.com/Articles/610399/SortedSplitList-An-Indexing-Algorithm-in-Csharp?fid=1843659&df=90&mpp=50&noise=5&prof=False&sort=Position&view=Normal&spc=Relaxed
CC-MAIN-2016-36
en
refinedweb
The Java Specialists' Newsletter Issue 1392007-02-10 Category: Tips and Tricks Java version: JDK 1.6 GitHub Subscribe Free RSS Feed Welcome to the 139th edition of The Java(tm) Specialists' Newsletter, sent to you from the beautiful island of Crete. In the last 8 years, I have flown over 100 times and have experienced more than 20 airports. Flying fills me with anxiety, not because of the actual flight, but because of the airports before and after. Airports are unpleasant, soulless places, where employees are paid to harass you (so it seems). The airport of Chania (CHQ) in Crete is a breath of fresh air. It is tiny, but has an enormous runway. I was told that this is the largest runway in Europe and GoogleEarth seems to agree. It is as long as Heathrow (imagine that - on an island!) but broader. It doubles as a military airport, so you are not allowed to take pictures. This also explains its size. I love the margin for error :-) The whole experience is completely pleasant. It being Crete, no one harasses you. You only have to walk short distances. The staff is cordial and polite. And so far, they have not bothered me due to overweight luggage. Definitely the best airport experience so far :-) The Sun Developer Day in Athens was incredible! There were 380 attendees (they expected 100). The main room was packed, with people standing in the aisles. They had two overflow rooms as well. Thanks to Aris Pantazopoulos and the whole Sun Microsystems team for putting on this cracker event! NEW: Please see our new "Extreme Java" course, combining concurrency, a little bit of performance and Java 8. Extreme Java - Concurrency & Performance for Java 8. JDK 1.1 introduced a clever mechanism in the way that we could add new JDBC drivers by simply loading the correct class. You could thus load the JDBC-ODBC bridge driver with the command Class.forName("sun.jdbc.odbc.JdbcOdbcDriver"). This would, in the static intialiser block, register the driver object with the DriverManager. Class.forName("sun.jdbc.odbc.JdbcOdbcDriver") In Java 6 Mustang, this was changed to a more general concept with the java.util.ServiceLoader. We use the META-INF/services/ directory to store all the implementations of services for our system. META-INF/services/ Let's take for instance the JDBC-ODBC bridge driver. Since Mustang, you do not need to use the Class.forName() class loading anymore. The JRE/lib/resources.jar contains a META-INF/services/java.sql.Driver file, with one line: sun.jdbc.odbc.JdbcOdbcDriver. The ServiceLoader is then queried for an instance of a java.sql.Driver. It will go through all the jar files in its classpath and find all possible database drivers. Class.forName() JRE/lib/resources.jar META-INF/services/java.sql.Driver sun.jdbc.odbc.JdbcOdbcDriver java.sql.Driver Let's say we want to use the Derby database. All we need to do is include the derby.jar file in our classpath, and the driver will automatically be available to us. There is no more need to use Class.forName(). Code that still uses the old mechanism will be compatible with Mustang. This mechanism can be used for any service, not just for the JDBC driver. For example, if you wanted to call JRuby scripts from Java, you would include the jruby-engine.jar file, which contains a META-INF/services/javax.script.ScriptEngineFactory file. This would provide the glue to load the correct scripting language into the VM and to then provide it to the user. jruby-engine.jar META-INF/services/javax.script.ScriptEngineFactory So now let us say that we would like to add our own services. How could we implement this? First off, we define an interface for the service we want to offer, for example: package com.cretesoft.services; import java.util.List; public interface MusicService { List<String> getTitleList(); void play(String title); } We then define an implementation of this service, in this case I just call it MusicServiceImpl: package com.cretesoft.music; import com.cretesoft.services.MusicService; import java.util.Arrays; import java.util.List; public class MusicServiceImpl implements MusicService { private final String[] titles = { "Don't Worry Be Happy - Bobby Mcferrin", "I've just seen Jesus - Larnelle Harris", "When Praise Demands a Sacrifice - Larnelle Harris", "Sultans of Swing - Dire Straits" }; public List<String> getTitleList() { return Arrays.asList(titles); } public void play(String title) { System.out.println("Playing: " + title); } } Now comes the slightly fiddly bit: We need a META-INF/services/com.cretesoft.services.MusicService file containing the text: META-INF/services/com.cretesoft.services.MusicService com.cretesoft.music.MusicServiceImpl The META-INF/services directory needs to be included in a jar file. I also created a MANIFEST.MF file in the META-INF directory: META-INF/services Manifest-Version: 1.0 Main-Class: MusicServiceTest The test code can now use the standard ServiceLoader to discover any services that implement our MusicService interface. import com.cretesoft.services.MusicService; import java.util.List; import java.util.ServiceLoader; public class MusicServiceTest { public static void main(String[] args) { ServiceLoader<MusicService> musicServices = ServiceLoader.load(MusicService.class); for (MusicService musicService : musicServices) { System.out.println(musicService.getClass()); List<String> titles = musicService.getTitleList(); for (String title : titles) { musicService.play(title); } } } } To make it doubly easy for you to run this example, here is an Ant build script: <?xml version="1.0"?> <project name="music" default="compile"> <target name="init"> <tstamp/> <mkdir dir="build"/> </target> <target name="compile" depends="init"> <javac srcdir="src" source="1.6" target="1.6" destdir="build"/> <copy todir="build/META-INF"> <fileset dir="src/META-INF"/> </copy> <jar jarfile="music.jar" basedir="build" filesetmanifest="merge"/> </target> <target name="clean"> <delete dir="build"/> <delete file="music.jar"/> </target> </project> If you managed to run Ant successfully, it should be possible to simply call java -jar music.jar which should then produce this output: java -jar music.jar class com.cretesoft.music.MusicServiceImpl Playing: Don't Worry Be Happy - Bobby Mcferrin Playing: I've just seen Jesus - Larnelle Harris Playing: When Praise Demands a Sacrifice - Larnelle Harris Playing: Sultans of Swing - Dire Straits We could now add new types of Music Services at will, all based on the original MusicService interface. As I learned in Athens on Wednesday, you can now use wildcards for jar files in classpaths. This means that the order of the services is non-deterministic and can depend on the operating system. Since you cannot rely on the order in which services are offered to you, a secondary mechanism is necessary to decide which service to use. For example, to get a scripting engine, you can specify either the scripting language used, the file extension or the MIME type. Similar approaches should be used for your services. That's all for this week. I need to get mountain biking with my son Maxi now ... :) Kind regards from the Island of Crete Heinz Tips and Tricks Articles Related Java Course
http://www.javaspecialists.co.za/archive/newsletter.do?issue=139
CC-MAIN-2016-36
en
refinedweb
SMILA/Development Guidelines/Tuscany Integration This page lists the current state of the Tuscany integration in SMILA and SMILA related issues in Tuscany. Contents Tuscany OSGi bundles Tuscany is making good process in creating separate bundles. 3rd party jars are also available as separate bundles now. There are still some classloading issues regarding Dynamic-Imports, Meta-INF/services and OSGi runtime extensions. Here is an overview (either in text or visualized format) of the Tuscany bundle dependencies. For a minimal integration I did a step-by-step analysis of the bundles needed to create a SCADomain and a Contribution that uses implementationtype.osgi and binding.sca within an Equinox OSGi runtime. Below you will find lists of required bundles for certain functionality. These lists will be updated as needed. Basic set of bundles required Tuscany jars - org.apache.tuscany.sca.api_1.4.0.SNAPSHOT.jar - org.apache.tuscany.sca.assembly.xml_1.4.0.SNAPSHOT.jar - org.apache.tuscany.sca.assembly_1.4.0.SNAPSHOT.jar - org.apache.tuscany.sca.contribution.impl_1.4.0.SNAPSHOT.jar - org.apache.tuscany.sca.contribution.java_1.4.0.SNAPSHOT.jar - org.apache.tuscany.sca.contribution.namespace_1.4.0.SNAPSHOT.jar - org.apache.tuscany.sca.contribution.xml_1.4.0.SNAPSHOT.jar - org.apache.tuscany.sca.contribution_1.4.0.SNAPSHOT.jar - org.apache.tuscany.sca.core.spi_1.4.0.SNAPSHOT.jar - org.apache.tuscany.sca.core_1.4.0.SNAPSHOT.jar - org.apache.tuscany.sca.definitions_1.4.0.SNAPSHOT.jar - org.apache.tuscany.sca.extensibility.osgi_1.4.0.SNAPSHOT.jar - org.apache.tuscany.sca.extensibility_1.4.0.SNAPSHOT.jar - org.apache.tuscany.sca.host.embedded_1.4.0.SNAPSHOT.jar - org.apache.tuscany.sca.implementation.node_1.4.0.SNAPSHOT.jar - org.apache.tuscany.sca.interface.java_1.4.0.SNAPSHOT.jar - org.apache.tuscany.sca.interface_1.4.0.SNAPSHOT.jar - org.apache.tuscany.sca.monitor_1.4.0.SNAPSHOT.jar - org.apache.tuscany.sca.node.api_1.4.0.SNAPSHOT.jar - org.apache.tuscany.sca.node.impl_1.4.0.SNAPSHOT.jar - org.apache.tuscany.sca.osgi.runtime_1.4.0.SNAPSHOT.jar - org.apache.tuscany.sca.policy_1.4.0.SNAPSHOT.jar - tuscany-extensibility-equinox-1.4-SNAPSHOT.jar (this is currently not included in the osgi build and has to be build manually) required 3rd party jars - org.apache.tuscany.sca.3rdparty.net.sf.cglib_2.0.0.1_3.jar - org.apache.tuscany.sca.3rdparty.org.apache.geronimo.specs.geronimo-commonj_1.1_spec_1.0.0.jar - org.apache.tuscany.sca.3rdparty.org.codehaus.woodstox.wstx-asl_3.2.1.jar - org.apache.tuscany.sca.3rdparty.org.apache.ws.commons.schema.XmlSchema_1.3.2.jar - org.apache.tuscany.sca.3rdparty.wsdl4j_1.6.2.jar - org.apache.tuscany.sca.3rdparty.javax.jws.jsr181-api_1.0.0.MR1.jar - org.apache.tuscany.sca.3rdparty.org.objectweb.asm.all_3.1.0.jar - org.apache.tuscany.sca.3rdparty.javax.xml.ws.jaxws-api_2.1.0.jar (Attention: org.apache.tomcat_6.0.16 exports this package, but only with 2 classes !!!) required 3rd party jars already included in SMILA - org.apache.tuscany.sca.3rdparty.javax.xml.stream.stax-api_1.0.2.jar -> javax.xml.stream_1.0 - org.apache.tuscany.sca.3rdparty.javax.xml.bind.jaxb-api_2.1.0.jar -> javax.xml.bind_1.0 - org.apache.tuscany.sca.3rdparty.javax.activation_1.1.0.jar -> javax.activation_1.1.0 binding.rmi These bundles are needed to use binding.rmi: - org.apache.tuscany.sca.binding.rmi_1.4.0.SNAPSHOT.jar - org.apache.tuscany.sca.host.rmi_1.4.0.SNAPSHOT.jar - org.apache.tuscany.sca.extension.helper_1.4.0.SNAPSHOT.jar binding.ws These bundles are needed to use binding.ws: t.b.d Adjustments to Tuscany The following adjustments where made to Tuscany bundles: - adapted manifest in org.apache.tuscany.sca.3rdparty.org.codehaus.woodstox.wstx-asl_3.2.1.jar: add Eclipse-RegisterBuddy: javax.xml.stream to allow classloader to find STAX implementation - added Ivans Fix to org.apache.tuscany.sca.databinding.jaxb.JAXBDataBinding (TUSCANY-2346), this is not included in current Tuscany code - I provided a contribution for TUSCANY-2281 that solves the problem Adjustments to SMILA The following adjustments where made to SMILA classes/configurations: - enabled @AllowsPassByReference on org.eclipse.eilf.connectivity.framework.crawler.filesystem.FileSystemCrawler - use classes="..." in <t:implementation.osgi> of .composite file to allow parsing of Annotations - as for Crawlers annotations are included in the implementation class and in the abstract base class AbstractCrawler it is important to add both classes to this list - Annotations are not processed on implementation.osgi, because the method processAnnotations(boolean doWait) of class OSGiImplementationProvider is never called. For testing purpose I just called it in the start() method of the same class. With this fix the annotation are processed successfully. - adopted method getCrawler(final String crawlerId) of class CrawlerControllerImpl to make use of the fix provided for TUSCANY-2281 Tuscany open issues This is a list of JIIRA issues in Tuscany that are required by SMILA and should be adressed: - TUSCANY-2270 - Conversations do not to work with binding.rmi - TUSCANY-2281 - How to create ServiceReferences for references using multiplicity="1..n" - TUSCANY-2343 - OSGi bundle design leads to class loading issues Unassigned Georg Schmidt - TUSCANY-2346 - weaks in databinding-jaxb plug-in - TUSCANY-2605 - Annotations are not processed for implementation.osgi SMILA open issues - (solved) because of TUSCANY-2281 it is not possible to use more than one CrawlerComponent (e.g. Filesystem and Web). It is however possible to crawl multiple datasources on the same CrawlerComponent in parallel - as long as SMILA is run inside of eclipse IDE everything works fine. I build an application and tried to run SMILA outside of eclipse IDE. I did not manage to get it to run and start a SCADomain. - In general Component references are initialized on the first method call on a Component. Usually this is done on the SCA service reference. Our JMX management wrappers do not use a SCA service reference but a reference to the underlying DeclarativeService (see in org.eclipse.eilf.management.crawlercontroller.Activator). So SCAServices need to be created "somewhere" so that the references are initialized. Otherwise the underlying DeclarativeService has no references set. This can be achieved by using the Annotation @EagerInit on class CrawlerControllerImpl. This forces reference initialization at initialization time (and not at the first method call). This works with binding.sca. I don't know if this setup also works with binding.rmi. I guess it will fail, as no local DeclarativeService is available. I think we will always have to look up SCAService instead of DeclarativeService references. I will test this when I have finished the list of required bundles for binding.rmi.
http://wiki.eclipse.org/SMILA/Development_Guidelines/Tuscany_Integration
CC-MAIN-2016-36
en
refinedweb
Dear all, once"? { "keys": "super+f8"], "command": "test" }, { "keys": "super+shift+c"], "command": "test2" } ] Thank you, r. I'm not sure what you're asking... You can type sublime.log_commands(True) in the console to see what command a keybinding executes. what i am asking in detailed in the first post thank you for your input, but isn't log_commands logging all of these to the console? i need to know programmatically the key bindings associated to a command, so if a developer modifies them, my plugin can still succesfully give the right tips on which keys to press on specific situations. hopefully this makes more sense ^^_ I figure it should be possible: def_keys = sublime.load_settings('Default (Windows).sublime-keymap') I'm not sure how to then parse this object to find "command" : "your_command", but it's just JSON. I don't know if there is a more direct root I'm going to try this and will post here code, if successful. So.. unfortunately I cannot find a way to make this work, since key bindings are defined in an array, not in a dict (which I suspect is what the sublime.Settings.get(Name) uses to retrieve the values. Any other ideas? Hmm, still not really understading what you want to do. But to get the key code(s) corresponding to a given command you could try something like all_keys = sublime.load_settings('Default (Windows).sublime-keymap') command = 'some_command' command_keys = [key for key in all_keys if key['command'] == command] But maybe you want something completely different? [quote="svenax"]Hmm, still not really understading what you want to do. But to get the key code(s) corresponding to a given command you could try something like But maybe you want something completely different?[/quote] No that's exactly what I'd like Unfortunately: TypeError: 'Settings' object is not iterable Ah, sorry, I didn't check that. Just assumed the snippet above worked. OK, that makes things a bit more complicated. A simple approach like this mostly works: import sublime import json file = '/User/Default (Windows).sublime-keymap' command = 'some_command' json_data = json.load(open(sublime.packages_path() + file)) command_map = [key for key in json_data if key['command'] == command] However, the Python json parser does not allow for comments, which are used in most default files. To handle that, you need a custom decoder object. I use this code: ... You're right, it's not fool proof. This is probably a better solution (didn't try it): thank you! will use this as a starting point and will post back if i find better solutions!
https://forum.sublimetext.com/t/retrieve-key-bindings/5359/9
CC-MAIN-2016-36
en
refinedweb
I tried to convert this pseudocode function IntNoise(32-bit integer: x) x = (x<<13) ^ x; return ( 1.0 - ( (x * (x * x * 15731 + 789221) + 1376312589) & 7fffffff) / 1073741824.0); end IntNoise function from this website to python. But it seems to rely on wraparound within the 32 bit int. Can you duplicate this behavior in python? Alan
https://mail.python.org/pipermail/tutor/2005-March/036502.html
CC-MAIN-2016-36
en
refinedweb
# -*- mode: org -*- #+TITLE: =README= for =swak4Foam= - Version for OpenFOAM 2.x #+OPTIONS: H:4 * - 2.0, 2.1 or 2.2 of OpenFOAM (a version that works with 1.6, 1.6-ext and 1.7 is available separately) - The =finiteArea=-stuff will probably work with version 2.0-ext (once that is available) -* ** and later be merged to the =default=-branch. Packaging for OpenFOAM 2.x should be done in the branch =debianPackaging_2.x= These sources are based on =basicSource= and can be used *without* a modification of the solver (they are only available in the 2.x version): -) *** *** =swakLagrangianParser= Parser for calculating expressions on clouds of lagrangian particles *** ) **** lagrangianCloudAdaptors-directory Because of the templating all plugin-functions have to be reinstaniated for new particle classes. The libraries in this directory - reimplement the functions from =swakLagrangianCloudSourcesFunctionPlugin= - the =CloudProxy= for the =cloud= parser for special particle classes. These are - coalCloudAdaptor :: the library =libswakCoalCloudAdaptor= that handels the =CoalParcel=-class These libraries have to be included in the =libs=-entry to be able to handle these libraries ** *** =calcNonUniformOffsetsForMapped= Calculates the =offsets=-entry in the =polyMesh/boundary=-file according to the specification in a dictionary. Only needed if you have mapped patches and the regular uniform offset is not enough for your purposes *** =fieldReport= Utility that quickly does some quantitative analysis (minimum, maximum, average etc ) on a field on the disc (internal field but also patches, sets, zones, ...) *** =funkyPythonPostproc= This utility loads specified fields into memory, executes a list of user-specified function objects whose data is then passed to a python script which does the user-specified analysis. *** =funkySetLagrangianField= Utility to calculate fields for a lagrangian cloud (or setting it up from scratch) **, =pythonIntegration= with calculations using =numpy= **** - Mesh preparation :: Execute the script =prepare.sh= in that directory (requires PyFoam: if not installed change in the =boundary=-file the type of the =defaultFaces= to =wall=) **** delayed-t-junction - Solver :: pimpleFoam - Demonstrates :: Delayed variables to simulate an inflow that depends on the value of the outflow **** *** FunkyDoCalc Example dictionaries for =funkyDoCalc= ***=. Demonstrates "live" comparing to another case using /foreign meshes/ **** **** *** FromPresentations Cases that were shown in some presentations **** OSCFD_cleaningTank3D - Solver :: interFoam - Case preparation :: run the =prepareCase.sh=-script - Description :: The case described on the slides of the talk about =swak4Foam= at the OSCFD-conference 2012 in London - Demonstrates :: Boundary conditions, function objects, global variables and delayed variables **** OSCFD_cleaningTank2D A 2D-variant of the above case **** OFW8_sandPitOfCarcoon - Solver :: twoPhaseEulerFoam - Case preparation :: run the =prepareCase.sh=-script - Description :: Simulate a sand-monster from the StarWars-movie "Return of the Jedi" - Demonstrates :: Use of =funkySetFields=, =groovyBC= and functionObjects for lagrangian particles **** OFW8_landspeedersInCanyon - solver :: simpleFoam - Case preparation :: run the =prepareCase.sh=-script - Description :: Simulates two landSpeeders (as seen in the StarWars-movie "A New Hope") - Demonstrates :: Advanced searchableSurfaces (for =snappyHexMesh=), functionObject for passive scalar, functionObject to calculate distributions ***= **** *** =CodeStream= Demonstrates working together with the =coded=-stuff in OpenFOAM 2 *** Stuff that has to do with lagrangian particles **** functionObjects **** parser Testing the =cloud=-parser for lagrangiant particles ***** parcelInBoxWithExpressions - Solver :: reactingParcelFoam - Demonstrates :: Adding evaluations on the cloud to a regular case ***** simplifiedSiwek Variation of the tutorial case - Solver :: coalChemistryFoam - Demonstrates :: creating new clouds with =funkySetLagrangianField= and evaluations on clouds during the simulation - Mesh preparation :: run the =prepareCase.sh= script to set up the mesh and the fields *** *** BugCases These are cases provided by users to demonstrate bugs. Not maintained nor documented and may be removed at any time ** ** releaseTesting Scripts and configuration to test for a release in a virtual machine using =vagrant=. Also to be used for packaging *. **. Only if you got through Mercurial it can be ensured that your contribution is recognized (if you want to stay anonymous send. *** Maintaining feature and hotfix-branches The repository comes with a =.hgflow=-file that is set for the =hgflow=-extension found at [[]] (there are multiple branches of this extension. This *seems* to be the most up-to date and still under active development) The two main lines (=1.x= and =2.x=) have slightly different names for the =master= and the =dvelopment=-branch. But apart from that in the future this repository will try to stick to the model described in [[]] * ** No tab-completion for regular Python-shell and old IPython-versions The tab-completion does not work except for up-to-date versions of IPython. This seems to be a problem with the =readline=-library inside an embedded Python. Low priority * History ** 2010-09-13 - version number : 0.1 First Release ** 2010-12-18 - version number : 0.1.1 - version number : 0.1 - version number : 0.1.3 - version number : 0.1.4 *** Port to OpenFOAM 2.0 This is the first release that officially supports OpenFOAM 2.0 Also it is the first release that incorporates the =simpleFunctionObjects=-library *** - version number : 0.1.5 *** **** Using OF 2.0 codeStreams Adds a functionObject =swakCoded= that extends the =coded=-functionObject to read and write global - version number : 0.1.6 ***) **** Boundary condition =groovcBCDirection= Based on the =directionMixed= boundary condition this allows to set a boundary condition as a Dirichlet-condition only in certain directions while in the other directions it is a gradient-condition **** Boundary condition =groovyBCJump= Boundary condition that imposes a jump in the value on a cyclic boundary condition pair (based on =jumpCyclic=). Only works for scalar values **** *** Discontinued features **** =groovyFlowRateInletVelocity= This boundary condition will be removed in future releases because the base class now supports the more general =DataEntry=-class for which a =swak=-subclass exists ** 2012-04-13 - version number : 0.2.0 Friday the 13th ***= - internalFaFields :: =oldTime= and =ddt= - faPatch :: only **** **** =interFoam=-based example solvers do not compile on 2.1 As reported in [[]] due to a change the way the PISO-loop is treated the =interFoamWithSources= and =interFoamWithFixed= don't compile with 2.1 anymore. To avoid =#ifdef= in the solver sources there is now a separate set of sources (labeled =pre2.1=) for older versions. The regular sources work with 2.1 (and hopefully the following) **** =-allowFunctionObjects=-option not working for =replayTransientBC= Function-objects only work with the =while(runTime.loop())=-construct in 2.1. The utility now uses this. ****. ****) **** =mappedFvPatch= not treated like regular patches The field-driver created patch fields there as =calcuated= when =zeroGradient= would have been more appropriate **** (this only seem to be an issue with 1.7.x. It all works fine on 2.1.x) **** **** Compilation script checks =SWAK4FOAM_SRC= The environment variable =SWAK4FOAM_SRC= is needed for the =swakCoded=-functionObject. The =Allwmake=-script now checks whether this variable exists and prints a message with the correct value if it doesn't. It also checks whether the value is correct and warns if it isn't **** **** 2.x-branch did not compile on 2.0 Branch only compiled on 2.1, but not on 2.0 due to changes in the OpenFOAM-API Fix provided by Bruno Santos **** <<<<<<< variant A **** Recent versions of 2.1.x break compilation of =CommonValueExpressionDriver.C= The definition of the operator =lessOp= clashed with another definition. Renamed. Fix provided by Bruno Santos **** Examples **** Cases from the /OSCFD12/ Conference in London On the slides the case files were promised *** convenience *** Supports OpenFOAM 2.2 This is the first version to compile with OpenFOAM 2.2 Due to changes in OpenFOAM it requires several =#ifdef= (something that is usually avoided in the OpenFOAM-world) and other prepocessor definitions) *** Incompatibilities to previous versions **** =simpleFunctionObjects= and =simpleLagrangianFunctionObjects= no longer independent from rest Due to incompatibilities between OpenFOAM 2.2 and previous versions there are compatibility headers included from the rest of swak4Foam. Theoretically both libraries can be easily made independent again. *** Bug fixes **** Compiles on =1.6-ext= again The last release (0.2.2) did not compile on =1.6-ext=. This is fixed **** Missing field files for the OSCFD2012-cases Due to a stupid =.hgignore= the =0.orig=-directories were missing. Nobody complained though **** Did not compile on =2.0.x= This has been **** Adaption of cases to 2.2 This may break them for previous versions of OpenFOAM *** **** =swakDataEntry= improved Two enhancements - the name of the independent variable no can be specified. This variable holds the value that is passed to the data entry as a uniform value - data entry can now be integrated. This allows using it for instance for the injection rate in lagrangian models ** Next release - version number : 0.3.0 The reason for the jump in the minor revision number is that with the introduction of the parser for lagrangian particles (=cloud=) the last white spot as far as major data structures in OpenFOAM is "explored" *** Incompatibilities to previous versions **** Support of /old/ =1.6-ext= lost Due to certain differences this version only compiles with the =nextRelease=-branch of 1.6-ext (from the =git=).Usually the failing parts can be fixed b commenting out the appropriate =#define= in =Libraries/swak4FoamParsers/include/swak.H=. *** Infrastructure **** Make error messages in =Allwmake= more verbose The error messages where already quite verbose. But people didn't understand them and asked the same questions over and over ... **** =simpleFunctionObjects= no longer considered an independent project As there are going to be more cross-dependencies the =simpleFunctionObjects= now have to be part of swak. Changes to the compile-scripts reflect this. **** =Allwmake= makes sure that swak is compiled for the same installation The script writes the version it is used with to disk and at later compiles that this is the same (this makes sure that not a wrong version is used inadvertently to compile) **** Additional macros for Debugging output There are two macros defined that inside an object write the name of the class and the address of the object (if debug is enabled). This makes the output easier distinguishable from the output from other classes/objects. The two macros are: - Dbug :: Writes the debug information *only* on the master processor - Pbug :: Writes the output on all processors and for parallel runs prefixes the processor number Both macros are to be used like regular streams and don't have to be enclosed in =if(debug){}= (this is part of the macro) *** Documentation Important enhancements of the documentation in the =Documentations=-folder **** Documentation of =accumulations= The possible values of the common =accumulations=-lists are documented **** General documentation of the Python-embedding The general options and the behavior of the Python-embedding are described *** Incompatibilities to previous versions **** =outputControlMode= =timestep= renamed to =timeStep= Because of report to be consistent with the nomenclature in the 'regular' function-objects. *** Bug fixes **** Missing =timeSet= in function-objects This method has been added in 2.2.x and breaks the compilation of several function-objects Fix developed by Bruno Santos **** =sourceImplicit= unstable For some reason using =SuSp= gave unstable results for the PDE-functionObjects. Changed to =Sp= **** Fixed bug were only one =swakCoded= worked at a time Bug originally reported in. Reason was that the variable describing the type was not correctly set. **** Incorrectly read entries in =swakCoded= The entries =codeEnd= and =codeExecute= were not correctly read but instead the entry =codeRead= was read. Fixed **** No logical variables found by most parsers Reported as Only the field parser correctly found logical variables (although they were stored). Added a method to pick up the variables (the reason why this part was commented out is that there is no such thing as a =volBoolField= and that is what the regular mechanism expected) **** =sampledSurface= not correctly updated at time of write Reported as In that case the driver was written before the surface was updated thus generating fields of size $0$. Now =update= is called at various places (to make sure it is called in any instance) **** =sumMag=-accumulation now working This accumulation was available but not implemented. Now implemented. For non-scalar types it is calculated separately for each component **** Calculation of weight fields failed if size on one processor was $0$ This was due to a logical error that was propagated through mindless copy/paste (only the Field-driver got it right). Fixed **** =groovyTotalPressure= does not read =value= Because it is not initialized from the superclass when the dictionary constructor is used. Fixed **** For multiple times the option =addDummyPhi= makes =funkySetFields= crash Because the pointer is already set. Fixed **** =aliases= not constructed from dictionary If the dictionary was read after the construction the aliases are not read. Fixed by moving this reading to the tables reading which is used in every time a dictionary is involved **** Gravity not correctly passed in =evolveXXCloud= Passed a value where a reference would have been needed. Fixed **** =writeOften= writes all the time Reason for this was a change of the interface of =outputTime= not being propagated to this function-object. Fixed **** Python-integration does not return single scalars as uniform The Python-integration returned single scalars (and vectors) not as a single value but as a field of length $1$. This caused warnings that messed up the output. Fixed *** New features **** Function object that executes if the OpenFOAM-version is right The functionObject =executeIfOpenFOAMVersionBiggerEqual= only executes if the OpenFOAM-version is bigger or equal to a specified version. The arguments =majorVersion= and =minorVersion= are required. If =patchVersion= is not specified it will match any version. A =git= version (=.x=) will match any patch-version ****. For OpenFOAM after 2.2 these are replaced by one that recalculates the energy or enthalpy **** Function object that calculates the average of one variable as a function of another The function object =swakExpressionAverageDistribution= calculates the average of one function (the =expression=) as a distribution over another (the =abscissa=). An example would be the average pressure in x-direction with =abscissa= being =pos().x=, =expression= being just =p= and the =weight= being =vol()=. The weight has to be a scalar. All other expressions can be any data-type **** New utility =fieldReport= This utility prints some quantitative statistics about a specified field. Optionally these statistics can also be printed for patches, sets and zones. The data can be written to a CSV-file. Also the distributions of the field can be written. **** New utility =funkyPythonPostproc= This utility needs a dictionary and a specification of times. For each time it - loads a list of fields specified in the dictionary - executes a list of function objects specified in the dictionary - executes python-scripts The idea is that the function objects pass data to the python-scripts via global variables and the python-scripts do whatever they like **** New utility =funkySetLagrangianParticle= This utility allows setting new fields of a lagrangian cloud. Like =funkySetFields= it has two modes: - Setting one field over the command line. This is triggered by the =-field=-option - Using a dictionary to specify what is being set. In this mode more than one field and more than one cloud can be set. In this mode there is also the possibility to specify a *new* cloud. This mode expects variables in global namespaces. On of these variables is the position of the particles, for the other variables fields of the same name will be created. All the variables have to be of the same size. Data where corresponding particle position is outside the mesh is discarded *** Enhancements **** Additional parser for lagrangian particles There is now an additional parser to calculate expressions on particles. This parser is organized in a separate library =libswakLagrangianParser= that has to be loaded to use this parser. Depending on whether the cloud in question is already in memory or on disk the data is handled differently: - if the data is on disc then the basic properties of the cloud (particle positions etc) are used to initialize the class. Then all files in the folder of the cloud are read and made available as field names that are the same as the file-names. Characters in the file names that are not compatible with the parser are replaced and created as aliases. In this mode =swak= has no idea about the data except the name and the type - if the cloud is in memory then a corresponding proxy object is sought. This proxy object knows which data the cloud has, what the type is and a short description. It makes the data available as fields. =swak= has by default proxy objects for most particle classes that come with =OpenFOAM=. For unsupported classes and adaptor library has to be written. When a parser for a cloud is used for the first time a table of all the available fields is printed to the screen with type and description (if available) **** **** Conditional functionObjects now have optional =else= It is now possible to add a sub-dictionary =else= that is used to initialize a =functionObjectProxy= that is executed if the condition is *not* fulfilled. The sub-dictionary inherits all settings that are not set from the parent-dictionary **** =swakCoded= now allows addition of data to functionObject The entry =codeData= is now read and inserted into the functionObject **** Parsers in =swakFiniteArea= no also have complete tensor-operations The two parsers in that library now also support the complete set of tensor operations (like =eigenValues= etc) **** =swakExpressionDistribution= now allows non-scalar weights For expressions whose results is not a scalar now the weight function can either be a scalar or of the same type as the expression (so every component can have a separate weight) **** More options for =accumulations= A number of possible accumulations have been added. Most of these are based on distributions. If the =weighted= variant is chosen then the meaning is the more physical one (for =weighted= the 'natural' weight of the quantity is used. For instance for cells the cell volume. Otherwise the weight $1$ is used). Some of these accumulations need a single floating point number as a parameter. This is simply added to the name. The added accumulations are: - weightedSum :: sum of the quantity times the weight. There is an alias =integrate= for this - median :: The value for which 50% of the distribution are smaller than this. More robust alternative to =average= - quantile :: =quantile0.25= for instance is the value for which 25% of the distribution are smaller than it - range :: The difference of the quantile of $\frac{1+f}{2}$ and $\frac{1-f}{2}$. For instance =range0.9= gives the range in which 90% of the values are (from the quantile 5% to 95%) - smaller :: The fraction of the distribution that is smaller than a given value - bigger :: The inverse of =smaller= - size :: The size of the underlying entity (usually number of cells, faces, points). For types with more than one components all the components have the same value - weightSum :: Sum of the weights of the underlying entity. Usually the volume oder the area of it. **** Python code files are now searched more flexible If a file specified with an option like =startFile= in a Python-functionObject (or similar) is not found in the current directory the path of the dictionary it is specified in is prepended and the file is searched there **** Python integration now uses =IPython= if possible The interactive shell of the python integration now uses =IPython= if it is installed. This improves tab-completion etc **** Preload libraries in the Python integration As problematic libraries could hand during importing these libraries can be imported in a safe way using the optional =importLibs=-entry which is a dictionary. The keys are the names under which the imports will appear in the Python-namespace. The value is optional and the name of the actual library **** Added standard function =weight()= All parsers now implement a function =weight()= that returns the "natural" weight that is used for instance in the weighted average (for internal fields that would be for instance the cell volume) **** =funkyDoCalc= now writes files Two kinds of files are optionally written - CSV-files with the values of an evaluation (each evaluation gets its own file) - Distributions of the evaluations (in a separate directory for each time) All the files go into a directory whose name is derived from the name of the evaluation. The outputs can be switched on with the options =writeDistributions= and =writeCsv=. Either - from the command line: this switches it on for *all* evaluations - one "per dictionary"-basis for each evaluation separately **** PDE-functionObjects now relax their equations The PDE-functionObjects now honor the =relaxationFactors. If =steady= is =false= then relaxation has to be switched on using =relaxUnsteady=. For the last corrector iteration the equation is not relaxed unless the parameter =relaxLastIteration= is set. **** Full set of =laplacian=-operations in =internalField=-parser The support of =laplacian= operations (especially with a coefficient that is different from a scalar) was incomplete. Now all possible coefficient types are supported. Also in the =fvcSchemes=-plugin functions the set of =laplacian=-operators was completed **** Function object =swakExpression= now has optional =mask= If the logical expression =mask= is set then only the results from =expression= for which =mask= is =true= are used for accumulations *** Examples **** Moved the OSCFD-examples to a different directory Started one new directory for all cases from presentations **** Added examples from the swak-training at the 8th Workshop Two new examples - sandPitsOfCarcoon :: Reenacting a scene from "Return of the Jedi" with =twoPhaseEulerFoam= - landspeedersInCanyon :: Simulating two landspeeders from "A new hope" with =simpleFoam=
https://sourceforge.net/p/openfoam-extend/swak4Foam/ci/feature/particleParser_2.2/%7E/tree/
CC-MAIN-2016-36
en
refinedweb
Hi Guys, I have the following problem. In my Form1 (IsMdiContainer = true) I have added a MenuStrip with the following structure: System System->Log In System->Register System->Log Out(Visible = false) Forms(Enabled=False) Forms->Table1 Forms->Table2 Statistics Statistics->Blablabl Statistics->... From my Log In button I am displaying a form that logs me in the system: Code: #include "LogInForm.h" ... ... private: System::Void logInToolStripMenuItem_Click(System::Object^ sender, System::EventArgs^ e) { LogInForm^form=gcnew LogInForm(); form->MdiParent = this; form->Show(); } In this form when correctly logged in I want to make the System->Log In and System->Register menu items visible = false, the Statistics and Forms menu items enabled. Here is the code of the login button on successful logging: Code: this->MdiParent->MainMenuStrip->Items["formsToolStripMenuItem"]->Enabled = true; this->MdiParent->MainMenuStrip->Items["statisticsToolStripMenuItem"]->Enabled = true; this->Close(); Forms and Statistics menu items are correctly being enabled. However with this I am not able to make the System->Log In menu item visible = false; Code: this->MdiParent->MainMenuStrip->Items["logInToolStripMenuItem"]->Visible = true; Can you please help me out. It is quite urgent. Additional information you might need - I am using MS Visual Studio 2005 and 2008 is not an option here. Thanks in advance! Best, Wisher #include "LogInForm.h" ... ... private: System::Void logInToolStripMenuItem_Click(System::Object^ sender, System::EventArgs^ e) { LogInForm^form=gcnew LogInForm(); form->MdiParent = this; form->Show(); } this->MdiParent->MainMenuStrip->Items["formsToolStripMenuItem"]->Enabled = true; this->MdiParent->MainMenuStrip->Items["statisticsToolStripMenuItem"]->Enabled = true; this->Close(); this->MdiParent->MainMenuStrip->Items["logInToolStripMenuItem"]->Visible = true; Accessing form controls this way is not recommended - it's breaking encapsulation. Large UIs can get really complicated if you have child forms accessing their parents directly - and it becomes impossible to know which window is causing certain behaviour. Maintenance hence becomes a nightmare. All the software companies I've ever worked for would have you shot on sight if you did this as part of their development team because of its impact of how maintainable the application is. Presumably your child form is being created by your MDI parent. If not, it should be. Implement an event on your child form such as "LoggedIn" and hook into it when your MDI parent is created. Fire the event when your child form is logged in and then the MDI parent can handle the event and set the relevant control states. If you don't know how to write your own delegates & events I suggest you look here. Or just google for C++/CLI delegates events. Darwen. Last edited by darwen; January 26th, 2009 at 05:13 AM. - PInvoker - the .NET PInvoke Interface Exporter for C++ Dlls. Darwen, Thanks for your criticism, but I want to know to how to do this exactly the way I have described. It's a project for the university, it is not for my employer and since it is a very simple application I have no problem with "Maintenance hence becomes a nightmare". Thanks, Wisher (1) I find bad habits are very hard to break. If you develop good habits in the first place then I tend to find life a lot easier. You have to learn good practice sooner or later so why not sooner ? (2) Coding 'just to get it done' tends to cause you more headaches than doing things the tried and tested way. (3) It's useful for you to learn events & delegates regardless as they're one of the most important and useful features of .NET. (4) Doing it the way I suggest fixes your problem. It's the way that I would do it. And maybe you're having a problem because you're not using the correct method in the first place ? Darwen. Okay, this one is dropped. Thanks darwen for the constructive criticism! I will learn more about delegates and events but had only 1 day to solve the problem -> found a workaround. It's now time I visited the c# section as the next exam is coming this Saturday. Thank you guys, you are great! Best, Wisher View Tag Cloud Forum Rules
http://forums.codeguru.com/showthread.php?469507-Access-a-MainMenuStrip-subitem-from-a-child-form&mode=hybrid
CC-MAIN-2016-36
en
refinedweb
Hi I have written a class library(dll), which is used in a site that I am creating. The same dll will be used in many of our sites and is universal for all the sites. So, I was thinking of centralizing my dll. I think putting the DLL in the GAC would be a good solution, and all the other sites will use this dll. But how do I do this? I tried doing the snk stuff, but I was not able to manage it. So, how do I change my dll code, so that it can be added into the GAC. Also, I would like to know how to access the same dll from GAC then? Will it work the same way, as when using the physical dll earlier? I mean, will all the namespaces be available like the same as earlier? Apart from GAC, if there is anyother solution please let me know. Thanks CodeNameVirus Forum Rules
http://www.antionline.com/showthread.php?277707-Creating-DLL-for-adding-in-GAC&p=943762&mode=linear
CC-MAIN-2016-36
en
refinedweb
Visualizing Context Switches that Cross Cores with Visual Studio 2010 Context switches that cross from one logical core to another can reduce the performance of your application. Visual Studio 2010 Premium or Ultimate versions allow you to visualize this problem on Windows Vista, Windows 7, Windows Server 2008 or Windows Server 2008 R2. When a context switch crosses from one logical core to another, a thread that was being executed on a physical core might shift to a completely different physical core. Cross-core context switches have an impact on overall throughput in multi-threaded applications. However, a single-threaded application running on a multicore CPU also suffers from cross-core context switches. A simple C# example will allow you to understand and visualize cross-core context switches thanks to Visual Studio 2010 new concurrency profiling features. The following C# code shows a simple and single-threaded Windows console application that consumes CPU cycles by appending millions of strings to a StringBuilder instance. using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace ContextSwitch { class Program { private const int MAX_NUMBER = 7000000; private static void Main(string[] args) { var sb = new StringBuilder(MAX_NUMBER * 7); for (int i = 0; i < MAX_NUMBER; i++) { // Append 7 strings sb.Append("START"); sb.Append(i.ToString()); sb.Append((i / 2).ToString()); sb.Append((i / 3).ToString()); sb.Append((i / 4).ToString()); sb.Append((i / 5).ToString()); sb.Append("STOP"); } // Just write the first letter of the resulting string Console.WriteLine(sb.ToString()[0]); } } } If you use Visual Studio 2010 Premium or Ultimate concurrency profiling features to visualize the behavior of this single-threaded application on a multicore CPU, you will be able to analyze context switches in the Cores view. I use a single-threaded application as an example because it makes it easy to understand what happened under the hoods for just the main thread. The following snapshot shows the Cores view with the results of profiling this application in a computer with a quad-core CPU with Hyper-Threading. The CPU has eight logical cores, also known as hardware threads. The profiler displays a graph with visual timelines that show how each thread was mapped to the available logical processor cores. This managed application doesn't control have code that controls thread affinity. Remember that thread affinity suggests the operating system scheduler to assign a thread to a specific logical core. However, it isn't convenient to use thread affinity with managed code. This graph with visual timelines shows how the managed thread scheduler and the operating system distribute the diverse threads during their lifetime. When threads move from one core to the other, a cross-core context switch occurs, and the legends for the graph provide summary information about the total number of these cross-core context switches. The Cores view shows how the different threads created by the application run on the eight logical cores. In this case, a single thread suffers from many cross-core context switches. The green bars that represent the main thread appear on many different logical cores. While the application was being profiled, the main thread had 417 cross-core context switches. Obviously, this number has an impact on overall performance. You can use the zoom slider to visualize the cross-core context switches with more detail at specific times. The following snapshot shows 19 cross-core context switches in less than 400 milliseconds. The next snapshot shows another 3 cross-core context that make the main thread move from logical core 0 to logical core 2, from logical core 2 to logical core 4, and then from logical core 4 to logical core 6. Most modern CPU micro-architectures include at least one of the cache memories shared between all the physical cores. One of the reasons for the existence of this last level shared cache memory is because CPU manufacturers want to reduce the impact of cross-core context switches in the cache. A last level shared cache usually improves overall throughput. When you write code that creates tasks with C# or Visual Basic and .NET Framework 4, the underlying threads suffer cross-core context switches. There is a lot of work being done to reduce undesired cross-core context switches in most modern run-times and operating systems.
http://www.drdobbs.com/parallel/visualizing-context-switches-that-cross/229300340
CC-MAIN-2016-36
en
refinedweb
This document is also available in these non-normative formats: PDF version. Copyright document has been reorganized to expose the abstract syntax, which is intended to support features that are shared with many concrete production rule languages. Each abstract construct is associated with normative semantics and a normative XML concrete syntax. Also, the definition of the operational semantics of rules and rulesets has been completed by adding a specification of a conflict resolution strategy. This strategy is proposed to be the default for rulesets interchanged using RIF-PRD. The WG is particularly interested in feedback regarding the operational semantics of rules and rulesets. many production rule languages. Production rules are rule statements defined in terms of both individual facts or objects, and groups of facts or classes of objects. They have an if part, or condition, and a then part, or action. The condition is like the condition part of logic rules (as covered by the basic logic dialect of the W3C rule interchange format, RIF-BLD). The then part contains actions, which is different to the conclusion part of logic rules which contains only a logical statement. Actions can add, delete, or modify facts in. ☐ As a production rule interchange format, RIF-PRD specifies an abstract syntax that shares most features with many concrete production rule languages, and it associates each abstract construct with normative semantics and a normative XML concrete syntax. Production rules are statements of programming logic that specify the execution of one or more actions in the case that their conditions are satisfied. Production rules therefore have an operational semantic (formalizing state changes, e.g., on the basis of a state transition system formalism). The OMG Production Rule Representation specification [PRR] summarizes it as follows: In the section Operational semantics of rules and rule sets, the semantics for rules and rule sets is specified, accordingly, as a labeled terminal transition system (PLO04), where state stransitions result from executing the action part of instantiated rules. When several rules are deemed able to be executed during the rule execution process, a conflict resolution strategy is used to determine the order of rules to execute Sub-section Instance Selection specifies how an intended conflict resolution strategy can be attached to a rule set interchanged with RIF-PRD, and defines a default conflict resolution strategy. However, as a RIF dialect, RIF-PRD has also been designed to allow interoperability between rule languages over the World Wide Web. In RIF, this is achieved by sharing the same syntax for constructs that have the same semantics across multiple dialects. As a consequence, RIF-PRD shares most of the syntax for rule conditions with RIF-BLD [RIF-BLD], and the semantics associated to the syntactic constructs used for representing the condition part of rules in RIF-PRD is specified, in Section Semantics of condition formulas, in terms of a model theory, as it is in the specification of RIF-BLD as well. In addition to exploiting similarities between the two dialects, it allows them to share the same RIF definitions for data types and built-ins [RIF-DTB]. In the section Operational semantics of actions, the semantics associated with the constructs used to represent the action part of rules in RIF-PRD is specified in terms of a transition relation between successive states of the data source, as defined by the condition formulas that they entail, thus making the link between the model-theoretic semantics of conditions and the operational semantics of rules and rulesets. The abstract syntax is specified in mathematical english, and the abstract syntactic constructs defined in the sections Abstract Syntax of Conditions, Abstract Syntax of Actions and Abstract Syntax of Rules and Rulesets, are mapped one to one onto the concrete XML syntax in Section XML syntax. A lightweight notation is also defined along with the abstract syntax, to allow for a human-friendlier specification of the semantics. A more complete presentation syntax is specified using an EBNF in Section Presentation Syntax. However, only the XML syntax and the associated semantics are normative. A normative XML schema will also be provided in future versions of the document. Example 1.2. In RIF-PRD presentation syntax, the first rule in example 1.1. might be represented as follows: Prefix(ex1) (* ex1:rule_1 *) Forall ?customer ?purchasesYTD ( If And( ?customer#ex1:Customer ?customer[ex1:purchasesYTD->?purchasesYTD] External(pred:numeric-greater-than(?purchasesYTD 5000))) Then ex1:Gold(?customer) ) The condition languages of RIF-PRD and RIF-BLD have much in common, including much of their semantics. Although their abstract syntax and rule semantics are different, due to the operational nature of the actions in production rules, there is a subset for which they are equivalent: essentially, rules with no negation and no uninterpreted functions in the condition, and with only assertions in the action part. For that subset, the XML syntax is the same, so many XML documents are valid in both dialects and have the same meaning. The correspondence between RIF-PRD and RIF-BLD is detailed in Appendix Compatibility with RIF-BLD. This document is mostly intended for the designers and developers of RIF-PRD implementations, that is, applications that serialize production rules as RIF-PRD XML (producer applications) and/or that deserialize RIF-PRD XML documents into production rules (consumer applications). This section specifies the language of the rule conditions that can be serialized using RIF-PRD, by specifying: Note to the reader: this section depends on Section Constants, Symbol Spaces, and Datatypes of RIF data types and builtins [RIF-DTB]. For a production rule language to be able to interchange rules using RIF-PRD, its alphabet for expressing the condition parts of a rule must, at the abstract syntax level, consist of: For the sake of readibility and simplicity, this specification introduces a notation for these constructs. That notation is not intended to be a concrete syntax, so it leaves out many details: the only concrete syntax for RIF-PRD is the XML syntax. Notice that the production rule systems for which RIF-PRD aims to provide a common XML serialization use only externally specified functions, e.g. builtins. RIF-BLD specifies, in addition, a construct to denote uninterpreted function symbols, which RIF-PRD does not require: this is one of two differences between the alphabets used in the condition languages of RIF-PRD and RIF-BLD. The second point of difference is that RIF-PRD does support a form of negation. RIF-BLD does not support negation because logic rule languages use many different kinds of negations, none of them prevalent enough to justify inclusion in the basic logic dialect of RIF (see also the RIF framework for logic dialects). The most basic construct that can be serialized using RIF-PRD is the term. RIF-PRD provides for the representation and interchange of several kinds of terms: constants, variables, positional terms and terms with named arguments Definition (Term). The atomic truth-valued constructs that can be serialized using RIF-PRD are called atomic formulas. Definition (Atomic formula). An atomic formula can have several different forms and is defined as follows: Note that not only predicates, but also frame atomic formulas can be externally defined. Therefore, external information sources can be modeled in an object-oriented way via frames. Editor's Note: Objects are commonly used in PR systems. In this draft, we reuse frame, membership, and subclass formulas (from RIF-BLD) to model objects. We are aware of current limits, such as difficulty expressing datatype and cardinality constraints. Future drafts will address that problem. We are interested in feedback on the merits and limitations of this approach. Observe that the argument names of frames, p1, ..., pn, are terms and so, as a special case, can be variables. In contrast, atoms with named arguments can use only the symbols from ArgNames to represent their argument names, which can neither be constants from Const nor variables from Var. Note that atomic formulas are sometimes also called terms, e.g. in the realm of logic languages: the specification of RIF-BLD, in particular, follows that usage. The abstract syntactic elements that are called terms in this specification, are called basic terms in the specification of RIF-BLD. Composite truth-valued constructs that can be serialized using RIF-PRD are called formulas. Note that terms (constants, variables and functions) are not formulas. More general formulas are constructed out of. The function Var(e) that maps a term, atomic formula or formula e to the set of its free variables is defined as follows: Definition (Ground formula). A condition formula φ is a ground formula if and only if Varφ = {} and φ does not contain any existential subformula. ☐ In other words, a ground formula does not contain any variable term. The specification of RIF-PRD does not assign a standard meaning to all the formulas that can be serialized using its concrete XML syntax: formulas that can be meaningfully serialized are called well-formed. Not all formulas are well-formed with respect to RIF-PRD: it is required that no constant appear in more than one context. What this means precisely is explained below. The set of all constant symbols, Const, is partitioned into several subsets as follows: Each predicate and function symbol that take at least one argument has precisely one arity. For positional predicate and function symbols, an arity is a non-negative integer that tells how many arguments the symbol can take. For symbols that take named arguments, an arity is a set {s1 ... sk} of argument names (si ∈ ArgNames) that are allowed for that symbol. Nullary symbols (which take zero arguments) are said to have the arity 0. An important point is that neither the above partitioning of constant symbols nor the arity are specified explicitly. Instead, the arity of a symbol and its type is determined by the context in which the symbol is used. intended semantics of the condition formulas in a RIF-PRD document. For compatibility with other RIF specifications (in particular, RIF data types and builtins), and to make the interoperability with RIF logic dialects (in particular RIF Core [RIF-Core] and RIF-BLD), the intended semantics for RIF-PRD condition formulas is specified in terms of a model theory. Definition (Semantic structure). A semantic structure, I, is a tuple of the form <TV, DTS, D, Dind, Dfunc, IC, IV, IF, Iframe, INF, Isub, Iisa, I=, Iexternal, Itruth>. Here D is a non-empty set of elements called the Herbrand domain of 'I, i of RIF data types and builtins [RIF-DTB] for the semantics of datatypes). The set of all ground (positional|named|frame|external) formulas which can be formed by using the function symbols with the ground terms in the Herbrand domain is the Herbrand base, HB. A semantic structure I is a Herbrand interpretation, IH, if the corresponding subset of HB is the set of all ground formulas which are true with respect to I. ☐. This mapping interprets positional terms and gives meaning to positional predicate function. This mapping interprets function symbols with named arguments and gives meaning to named argument functions. This is analogous to the interpretation of positional terms with two differences: This mapping interprets frame terms and gives meaning to frame functions. An argument, d ∈ Dind, to Iframe represents an object and the finite bag {<a1,v1>, ..., <ak,vk>} represents a bag of attribute-value pairs for d.. TThis: For convenience, we also define the following mapping I from terms to D: The effect of datatypes. The set DTS must include the datatypes described in Section Primitive Datatypes of RIF data types and builtins RIF data types and builtins . This section defines how a semantic structure, I, determines the truth value TValI(φ) of a condition formula, φ. In PRD a semantic structure is represented as a Herbrand interpretation. now define what it means for a set of ground formulas to satisfy a condition formula. The satisfaction of condition formulas by a set of ground formulas provides formal underpinning to the operational semantics of rulesets interchanged using RIF-PRD. Definition (State). A state S is a Herbrand Interpretation IH. ☐ Definition (Condition Satisfaction). A condition formula, φ is satisfied under variable assignment σ in a state S, written as S |= φ[σ], iff TValS(φ[σ]) = t. ☐ At the syntactic level, the interpretation of the variables by a valuation function IV is realized by a substitution. The matching substitution of constants to variables, as defined below, provides the formal link between the model-theoretic semantics of condition formulas and the operational semantics of rule sets in RIF-PRD.=0..n where Dom(σ) = {xi}i=0..n and σ(xi) = ti, i = 0..,n. ☐ Definition (Ground Substitution). A ground substitution is a substitution σ that assigns only ground terms to the variables in Dom(σ): ∀ x ∈ Dom(σ), Var(σ(x)) = ∅ ☐ Notice that since RIF-PRD covers only externally defined interpreted functions, a ground term can always be replaced by a constant. In the remainder of this document, it will always be assumed that a ground substitution assigns only constants to the variables in its domain. Definition (Matching Substitution). Let ψ be a condition formula, and φ be a set of ground formulas that satisfies ψ. We say that ψ matches φ with substitution σ : Var -> Terms if and only if there is a syntactic interpretation I such that for all ?xi in Var(σ), I(?xi) = I(σ(?xi)). ☐ This section specifies the action part of the rules that can be serialized using RIF-PRD (the conclusion of a production rule is often called the action part, or, simply, the action; the then part, with reference to the if-then form of a rule statement; or the right-hand side, or RHS. In the latter case, the condition is usually called the left-hand side of the rule, or LHS). Specifically, this section specifies: In production rule systems, the action part of the rules is used, in particular, to add, delete or modify facts in the data source with respect to which the condition of rules are evaluated and the rules instantiated. As a rule interchange format, RIF-PRD does not make any assumption regarding the nature of the data source represents the facts that the actions are intended to affect. In the same way, the semantics of the actions is specified in terms of how the effects of their execution are intended to affect the evaluation of rule condition. Editor's Note: This version of the draft specifies only a very limited set of basic atomic actions. Future draft will extend that set, in particular to support actions whose effect is not, or not only, to modify the fact base. For a production rule language to be able to interchange rules using RIF-PRD, its alphabet for expressing the action part of a rule must, at the abstract syntax level, consist of syntactic constructs to denote: Editor's Note: These actions may seem foreign to a reader who is familiar with typical production rule language actions, such as assert an object, modify/update a field of an object, and retract/remove an object. As noted in Section Atomic formulas, in this draft, objects are modeled using frame, membership, and subclass formulas that are re-used from RIF-BLD and RIF-Core. Therefore, the object-oriented actions are defined to act upon frame, membership, and subclass relations. Mappings from a typical Java-like object model to RIF-PRD maps "instanceof" to membership, "extends" and "implements" to subclass, and object properties/fields to frame slots. To assert an object requires asserting both its class membership and its frame slots. To modify a slot, e.g. change color from red to blue, requires retracting the old frame slot and asserting the new frame slot. Indeed, frame slots are multi-valued and, therefore, merely asserting a frame slot does not overwrite a prior value, it adds to the set of values. Frame slots do not inherently constrain either the datatype or the cardinality of their values. Future drafts will address the issue of object representation in RIF-PRD. We are open to suggestions. Atomic action constructs take constructs from the RIF-PRD condition language as their arguments. Definition (Atomic action). An atomic action can have several different forms and is defined as follows: Editor's Note: Whether and under what restrictions, if any, membership and subclass atomic formulas are allowable targets for atomic assert actions is still under discussion in the working group. We welcome feedback on that issue. Definition (Ground atomic action). An atomic action with target t is a ground atomic action if and only if Var(t) = ∅. ☐ The action block is the top level construct to represent the conclusions of the production rules that can be serialized using RIF-PRD. An action block contains a non-empty sequence of atomic actions. It may also include action variable declarations. The action variable declaration construct is used to declare variables that are local to the action block, called action variables, and to assign them a value within the action block. Editor's Note: This version of RIF-PRD supports only a limited mechanism to initialize local action variables. Action variables may be bound to newly created frame objects or to slot values of frames. Future versions may support different or more elaborate mechanisms. atomic actions, then Do((v1 p1), ..., (vn pn) a1 ... al) denotes an action block. ☐ The specification fo RIF-PRD does not assign a standard meaning to all the action blocks that can be standardized using its concrete XML syntax. Action blocks that can be meaningfully serialized are called well-formed. The notion of well-formedness, already defined for condition formulas, is extended to atomic actions, action variable declarations and action blocks. The main restrictions are that one and only one action variable bindings can assign a value to each action variable binding, and that the assertion of a membership atomic formula is meaningful only if for a new frame object. Definition (Well-formed atomic action). An atomic action atomic actions and frame object declarations as follows: Definition (Well-formed action block). An action block is well-formed if and only if all of the following is true: Definition (RIF-PRD action language). The RIF-PRD action language consists of the set of all the well-formed action blocks. ☐ This section specifies the intended semantics of the atomic actions in a RIF-PRD document. The effect intended of the ground atomic actions in the RIF-PRD action language is to modify the state of the fact base, in such a way that it changes the set of conditions that are satisfied before and after each atomic action is performed. As a consequence, the intended. Since the satisfaction of condition formulas is defined with respect to the Herbrand interpretation of ground formulas (Section Satisfaction of a condition), we will assume in the following that the states of the fact base are represented by such sets, for the purpose of specifying the intended operational semantics of atomic actions, or rules and of rule sets serialized using RIF-PRD. Definition (RIF-PRD transition relation). The intended semantics of RIF-PRD atomic actions is completely: Rule 1 says that all the condition formulas that were satisfied before an assertion will be satisfied after, and that, in addition, the condition formulas that are satisfied by the asserted ground formula will be satisfied after the assertion. Rule 2 says that all the condition formulas that were satisfied before a retraction will be satisfied after, except if they are satisfied only by the retracted fact. Rule 3 says that all the condition formulas that were satisfied before the removal of a frame object will be satisfied after, except if they are satisfied only by one of the frame or membership formulas about the removed object or a conjunction of such formulas. This section specifies the rules and rulesets that can be serialized using RIF-PRD, by specifying: For a production rule language to be able to interchange rules using RIF-PRD, in addition to the RIF-PRD condition and action languages, its alphabet must, at the abstract syntax level, contain syntactic constructs: Definition (Rule). A rule can be either: RIF-BLD compatibility. A rule If condition, Then action can be equivalently written action :- condition, that is, using RIF-BLD notation. Indeed, the normative XML syntax is the same for a conditional assertion in RIF-BLD and for a conditional action in RIF-PRD. The use of RIF-BLD notation is especially useful if the condition formula, condition, contains no negation, and if the action block, action, contains only Assert atomic actions. The use of the same notation emphasizes that such a rule has the same semantics in RIF-PRD and RIF-BLD. Editor's Note: At this stage, the above assertion regarding the equivalence of a specific fragment of RIF-PRD and RIF-BLD is mostly the statement of an objective of the working group. That issue will be addressed more completely in a future version of this document. To emphasize that equivalence even further, an action block can be written as simply And(φ1 ... φn), if it contains only atomic assert actions: Do(Assert(φ1) ... Assert(φn)), n ≥ 1. If the action block consists of a single atomic assert action: Do(Assert(φ)), then it can be written as simply φ. Notice that the notation for a rule with bound variables uses the keyword Forall for the same reasons, that is, to emphasize the overlap with RIF-BLD. Indeed, Forall does not indicate the universal quantification of the declared variables, in RIF-PRD, but merely that the execution of the rule must be considered for all their bindings as constrained by the binding patterns. However, when no negation is used in the conditions and only assertions in the actions, the XML serialization of a RIF-PRD rule with bound variables is exactly the same as the XML serialization of a RIF-BLD universally quantified rule, and their semantics coincide. As was already mentioned in Section, need be interchanged along with the rules : in RIF-PRD, the group is the construct that to the interchanged rules. Definition (Group). If strategy is an IRI that denotes a conflict resolution strategy, if priority is an integer, and if each rgj, 0 ≤ j ≤ n, is either a rule or a group, then any of the following is a group: The function Var(f), that has been defined for condition formulas and extended to actions, is further extended to rules, as follows: Definition (Well-formed rule). A rule, r, is a well-formed rule if and only if it contains no free variable, that is, Var(r) = ∅, and either: Definition (Well-formed group). A well-formed group is either a group that contains only well-formed rules and well-formed groups, or a group that contains no rule or group (an empty group). ☐ The set of the well-formed groups contains all the production rulesets that can be meaningfully interchanged using RIF-PRD. As already mentioned in Section Overview, the description of a production rule system as a transition system can be used to specify the intended semantics that is associated with production rules and rulesets 3.1. Judicael, a chicken and potato farmer, uses a rule based system to decide on the daily grain allowance for each of her chicken. Currently, Judicael's rule base contains one single rule, the chicken and mashed potatoes rule: (* ex:ChickenAndMashedPotatoes *) Forall ?chicken ?potato ?weight (And(?chicken#ex:Chicken (Exists ?age And(?chicken[ex:age->?age] External(pred:numeric-greater-than(?age, 8))))) And(?potato#ex:Potato ex:owns(?chicken ?potato) (Exists ?weight And(?potato[ex:weight->?weight] External(pred:numeric-greater-than(?weight External(func:numeric-divide(?age 2))))))) If And(External(pred:string-not-equals(External(ex:today()), "Tuesday")) Not(External(ex:foxAlarm()))) Then Do( (?allowance ?chicken[ex:allowance->?allowance]) Execute(ex:mash(?potato)) Retract(?potato) Retract(ex:owns(?chicken ?potato)) Retract(?chicken[ex:allowance->?allowance]) Assert(?chicken[ex:allowance->External(func:numeric-multiply(?allowance 1.1))])) Judicael has four chickens, Jim, Jack, Joe and Julia, that own three potatoes (BigPotato, SmallPotato, UglyPotato) among them: That is the initial set of facts w0. When the rule is applied to w0: Suppose that Judicael's implementation of today() returns Monday and that the foxAlarm() is false when the Chicken and mashed potatoes Chicken and mashed potatoes rule in applied to w1, the first pattern still selects {Jim/?chicken, Jack/?chicken, Julia/?chicken} as possible values for variable ?chicken, but the second pattern does not select any possible substitution for the couple (?chicken, ?potato) anymore: the rule cannot be satisfied, and the system, having detected a final state, stops. The result of the execution of the system is w1. ☐ More precisely, a production rule system is defined as a labeled terminal transition system (e.g. PLO04), for the purpose of specifying the intended semantics of a RIF-PRD rule or group of rules. Definition (labeled terminal transition system): A labeled terminal transition system is a structure {C, L, →, T}, where For many purposes, a representation of the states of the fact base is an appropriate representation of the states the rule instances that where selected and triggered the transition in those states. To avoid the confusion between the states of the fact base and the states of the transition system, the latter will be called production rule system states. Definition (Production rule system state). A production rule system state (or, simply, a system state), s, is characterized by In the following, we will write previous(s) = NIL to denote that a system state s is the initial state. Var(r) ⊆ Dom(σ), where Var(r) denotes the set of the rule variables in r, the result, ri = σ(r), of the substitution of the constant σ(?x) for each variable ?x ∈ Var Var(r). In the definition of a production rule system state, a rule instance, ri, is said to match a state of a fact base, w, if its defining substitution, substitution(ri), matches the RIF-PRD condition formula that represents the condition of the instantiated rule, rule(ri), to the ground formula that represents the state of facts w. Let W denote the set of all the possible states of a fact base. Definition (Matching rule instance). Given a rule, ri, and a state of the fact base, w ∈ W, ri is said to match w if and only if one of the following is true: Definition (Conflict set). Given a rule set, RS ⊆ R, and a system, s, the set, conflictSet(RS, s) of all the different instances of the rules in RS that match the state of the fact base, facts(s) ∈ W is called the conflict set determined by RS in s. ☐ In each non-final state, s, of a production rule system, a subset, picked(s), of the rule instances in the conflict set is selected and ordered; their action parts are instantiated, and they are executed. This is sometimes called: firing the selected instances. Definition (Action instance). Given a system state, s; given a rule instance, ri, of a rule in a rule set, RS; and given the action block in the action part of the rule rule(ri): Do((v1 p1)...(vn pn) a1...am), n ≥ 0, m ≥ 1, where the (v1 p1), 0 ≤ i ≤ n, represent the action variable declarations and the aj, 1 ≤ j ≤ m, represent the sequence of atomic actions in the action block; if ri is a matching instance in the conflict set determined by RS in system state s: ri ∈ conflictSet(RS, state to another only if the state of facts in the latter system state can be reached from the state of facts in the former by performing a sequence of ground atomic actions supported by RIF-PRD, according to the semantics of the atomic actions. The second condition states that the allowed paths out of any given system state are determined only by how rules instances are picked from the conflict set for execution by the conflict resolution strategy.Given a ruleset RS ⊆ R, the associated conflict resolution strategy LS, and an initial state of the fact base, w ∈ W, the input function to a RIF-PRD production rule system is defined as: Therefore, the exact behavior of a RIF-PRD production rule system), for it accounts for a common conflict resolution strategy used in most forward-chaining production rule systems. Future versions of the RIF-PRD specification may specify normatively the intended conflict resolution strategies to be attached to additional keywords. In addition, RIF-PRD documents may include non-standard keywords: it is the responsability of the producers and consumers of such document to agree on the intended conflict resolution strategies that are denoted by such non-standard keywords. Conflict resolution strategy: rif:forwardChaining Most existing production rule systems implement conflict resolution algorithms that are a combination of the following elements (under these or other, idiosyncratic names; and possibly combined with additional, idiosyncratic rules):, the list denoted by picked(s) contains a single rule instance, for any given system state, s. an rule instance, ri, and a system state, s, let lastPicked(ri, s) denote the number of system states before s, since ri has been last fired. lastPicked(ri, s) is specified recursively as follows: Finally, explicitely associated, or zero. Given a conflict set,, that is, it removes the refracted instances from the current conflict set; the priority rule removes the instances such that there is at least one instance with a higher priority; the recency rule removes the instances such that there is at least one instance that is more recent; and the tie-break rule keeps one rule from the set, arbitrarily. To select the singleton rule instance, picked(s), to be fired in a system state, s, given a rule set, RS, the conflict resolution strategy denoted by the keyword rif:forwardChaining consists in the following sequence of steps: a common concrete XML syntax to serialize any production rule set written in a language that share the abstract syntax speicifed in section 4.1, provided that its intended semantics agrees with the semantics that is described in section 4.2. In the following, after the notational conventions are introduced, we specify the RIF-PRD XML constructs that carry a normative semantics with respect to the intended interpretation of the interchanged rules. They are specified with respect to the abstract syntax, and their specification is structured according to the specification of the abstract syntax in sections 2.1, 3.1 and 4.1. The root element of any RIF XML document, Document and other XML constructs that do not carry a normative semantics with respect to the intended interpretation of the interchanged rules are specified in the last sub-section. specifies the XML constructs that are used in RIF-PRD to serialize condition formulas. The TERM class of constructs is used to serialize terms, be they simple terms, that is, constants and variables; or positional terms or terms with named arguments, both being, per the definition of a well-formed formula, representations of externally defined functions. As an abstract class, TERM is not associated with specific XML markup in RIF-PRD instance documents. [ Const | Var | External ] In RIF, the Const element is used to serialize> \{\{EdNote|text.\}\} Example 2.1. In each of the examples below, a constant is first described, followed by its serialization in RIF-PRD XML syntax. a. A constant with builtin type xsd:integer and value 123: <Const type="xsd:integer">123</Const> b. A constant which symbol today is defined in Joe the Hen Public element is used to serialize a variable. The content of the Var element is the variable's name, serialized as an Unicode character string. <Var> any Unicode string </Var> Example 2.2. The example below shows the XML serialization of a reference to a variable named: ?chicken. <Var> chicken <Var> As a TERM, the External element is used to serialize a positional term or a term with named arguments. In RIF-PRD, a positional or a named-argument term represents always a call to. Example 2.3. a. The first example below shows one way to serialize, serialization of a call to the application-specific nullary function today(), which symbol is defined in the example's namespace: <External> <content> <Expr> <op> <Const type="rif:iri"> </Const> </op> </Expr> </content> </External> The ATOMIC class is used to serialize atomic formulas: positional and named-arguments or an atomic formula with named arguments. serialization of the positional atom owns(?c ?p), where the predicate symbol owns is defined in the example namespace. <Atom> <op> <Const type="rif:iri"> </Const> </op> <args rif: <Var> c </Var> <Var> p </Var> </args> </Atom> In RIF, the Equal element is used to serialize equality atomic formulas. The Equal element must contain one left sub-element and one right sub-element. The content of the left and right elements must be a construct from the TERM abstract class. The order of the sub-elements is not significant. <Equal> <left> TERM </left> <right> TERM </right> </Equal> In RIF, the Member element is used to serialize membership atomic formulas. The Member element contains two unordered sub-elements: <Member> <instance> TERM </instance> <class> TERM </class> </Member> Example 2.5. The example below shows the RIF XML serialization serialize subclass atomic formulas. The Subclass element contains two unordered sub-elements: <Subclass> <sub> TERM </sub> <super> TERM </super> </Subclass> In RIF, the Frame element is used to serialize frame atomic formulas. Accordingly, a Frame element must contain: <Frame> <object> TERM </object> <slot rif: TERM TERM </slot>* </Frame> Example 2.6. The example below shows the RIF XML syntax that serializes://example.com/2008/joe#Chicken/age </Const> <Var> a </Var> </slot> </Frame> Editor's Note: The example uses an XPath style for the key. How externally specified data models and their elements should be referenced is still under discussion (see ISSUE-37). In RIF-PRD, the External element is also used to serialize an externally defined atomic formula. When it is a ATOMIC (as opposed to a TERM; that is, in particular, when it appears in a place where an ATOMIC is expected, and not a TERM), the External element contains one content element that contains one Atom element. The Atom element serializes the externally defined atom properly said: <External> <content> Atom </content> </External> The op Const in the Atom element must be a symbol of type rif:iri that must uniquely identify the externally defined predicate to be applied to the args TERMs. It can be one of the builtin predicates specified for RIF dialects, as listed in section List of RIF Builtin Predicates and Functions of the RIF data types and builtins document, or it can be application specific. In the latter case, it is up to the producers and consumers of RIF-PRD rulesets that reference non-builtin predicates to agree on their semantics. Example 2.7. The example below shows the RIF XML serialization of an externally defined atomic formula that tests whether the value denoted by the variable named > The FORMULA class is used to serialize condition formulas, that is, atomic formulas, conjunctions, disjunctions, negations and existentials. As an abstract class, FORMULA is not associated with specific XML markup in RIF-PRD instance documents. [ ATOMIC | And | Or | NmNot | Exists ] An atomic formula is serialized using a single ATOMIC statement. See specification of ATOMIC, above. A conjunction is serialized using the And element. The And element contains zero or more formula sub-elements, each containing an element of the FORMULA group. <And> <formula> FORMULA </formula>* </And> A disjunction is serialized using the Or element. The Or element contains zero or more formula sub-elements, each containing an element of the FORMULA group. <Or> <formula> FORMULA </formula>* </Or> A negation is serialized using the NmNot element. The NnNot element contains exactly one formula sub-element. The formula element contains an element of the FORMULA group, that serializes the negated statement. <NmNot> <formula> FORMULA </formula> </NmNot> Editor's Note: The name of that construct may change, including the tag of the XML element. An existentially quantified formula is serialized using the Exists element. The Exists element contains: <Exists> <declare> Var </declare>+ <formula> FORMULA </formula> </Exists> Example 2.8. The example below shows the RIF XML serialization of a boolean expression that tests whether the chicken denoted by variable ?c is older than 8 months, by testing the existence of a value, denoted by variable ?a, that is both the age of ?c, as serialized as a Frame element, as in example 2.6, and greater than 8, as serialized the action part of a rule supported by RIF-PRD. The ATOMIC_ACTION class of elements is used to serialize the atomic actions: assert and retract. [ Assert | Retract ] An atomic assertion action is serialized using the Assert element. An atom (positional or with named arguments), a frame, a membership atomic formula and a subclass atomic formula can be asserted. The Assert element has one target sub-element that contains an Atom, a Frame, a Member or a Subclass element that represents the facts to be added on performing the action. <Assert> <target> [ Atom | Frame | Member | Subclass ] </target> </Assert> The Retract construct is used to serialize retract atomic actions, that result in removing a fact from the fact base. Only atoms (positional or with named arguments), frames and frame objects can be retracted. The Retract element has one target sub-element that contains an Atom, a Frame, or a TERM construct that represents the facts or the object to be removed on performing the action. <Retract> <target> [ Atom | Frame | TERM ] </target> </Retract> Example 2> The INITIALIZATION class of elements is used to serialized the constructs that specify the initial value assigned an action variable, in an action variable declaration: it can be a new frame identifier or the slot value of a frame. As an abstract class, INITIALIZATION is not associated with specific XML markup in RIF-PRD instance documents. [ New | Frame ] The New element is used to serialize the construct used to create a new frame identifer. The New element has an instance sub-element that contains a Var, which serializes the action variable intended to be assigned the new frame identifier. <New> <instance> Var </instance>? </New> The Frame element is used, with restrictions, to the serialize the construct used to assign an action variable the slot value of a frame. In that position, a Frame must contain, in addition to its object sub-element, one and only one slot sub-element, that contains a sub-element of the TERM class, serializing the slot name, and a Var sub-element, that serializes the action variable intended to be assigned the slot value. <Frame> <object> TERM </object> <slot rif: TERM Var </slot> </Frame> The ACTION_BLOCK class of constructs is used to represent the conclusion, or action part, of a production rule serialized using RIF-PRD. If action variables are declared in the action part of a rule, or if some atomic actions are not assertions, the conclusion must be serialized as a full action block, using the Do element. However, simple action blocks that contain only one or more assert actions can be serialized like the conclusions of logic rules using RIF-Core or RIF-BLD, that is, as a single asserted ATOMIC or as a conjunction of the asserted ATOMICs. As an abstract class, ACTION_BLOCK is not associated with specific XML markup in RIF-PRD instance documents. [ Do | And | ATOMIC ] An action block is serialized using the Do element. A Do element contains: <Do> <actionVar rif: Var INITIALIZATION </actionVar>* <actions rif: ATOMIC_ACTION+ </actions> </Do> Example 2.9. The example below shows the RIF XML representation of an action block that asserts a new 100 decigram potato. <Do> <actionVar> <Var>p</Var> <New> <instance><Var>p</Var></instance> </New> </actionVar> <actions rif: <Assert> <target> <Member> <instance><Var>p</Var></instance> <class> <Const type="rif:iri"></Const> </class> </Member> </target> </Assert> <Assert> <target> <Frame> <object><Var>p</Var></object> <slot rif: <Const type="rif:iri"></Const> <Const type="xsd:decimal">100</Const> </slot> </Frame> </target> </Assert> </actions> </Do> An action block that contains only assert atomic actions can be serialized using the And element, for compatibility with RIF-Core and RIF-BLD. However, the atomic formulas allowed as conjuncts are restricted to atoms (positional or with named arguments), frames, and membership or subclass atomic formulas. In that position, an And element must contain at least one sub-element. <And> <formula> [ Atom | Frame | Member | Subclass ] </formula>+ </And> For compatibility with RIF-Core and RIF-BLD, an action block that contains only a single assert atomic action can be serialzed as the ATOMIC that serializes the target of the assert action. However, the only atomic formulas allowed in tat position are the ones that are allowed as targets to an atomic assert action: atoms (positional or with named arguments), frames, and membership or subclass atomic formulas. [ Atom | Frame | Member | Subclass ] optional> Editor's Note: Nested Foralls make explicit the scope of the declared variables, and, thus, impose an order on the evaluation of the pattern and if FORMULAs Document is the root element in a RIF-PRD instance document. The Document contains zero or one payload sub-element, that must contain a Group element. <Document> <payload> Group </payload>? </Document> Metadata can be associated with any concrete class element in RIF-PRD: those are the elements with a CamelCase tagname starting with an upper-case character: CLASSELT = [ TERM | ATOMIC | FORMULA | ACTION | RULE | Group | Document ] An identifier can be associated to any instance element element can contain an And element with zero or more formula sub-elements, each containing one Frame element. . Editor's Note: An uptodate version of the RIF-PRD presentation synatx will be included in a future version of this document. TBD Editor's Note: RIF-PRD and RIF-BLD [RIF-BLD]] share essentially the same presentation syntax and XML syntax. Future versions of this, or another, RIF document will include a complete, construct by construct, comparison table of RIF-PRD and RIF-BLD presentation and XML syntax
http://www.w3.org/TR/2008/WD-rif-prd-20081218/
CC-MAIN-2016-36
en
refinedweb
Java Assignment Problem Im having an error on my java assignment i have most of the program right but I cant figure out how to fix the errors i have. any help would be appreciated. The errors im getting are in the Dialog File. errors: error: variable sum might not have been initialized return sum; ^ error: unreachable statement System.exit(0); // Terminate ^\ error: missing return statement } ^ 3 errors 1st file. I believe this one is correct and i dont have any errors in it. Fibonacci.java ------------------------- This is the second file. This is the one im getting errors for. FibonacciJDialog.java There are a couple of problems for the second one: 1) Check the return type of your main method against a book. 2) Main methods don't typically return a value 3) When the main method ends, the program ends. So you don't need a System.exit call there anyway. [OCA 8 book] [OCP 8 book] [Blog] [JavaRanch FAQ] [How To Ask Questions The Smart Way] [Book Promos] Other Certs: SCEA Part 1, Part 2 & 3, Core Spring 3, TOGAF part 1 and part 2 Step 1: You want to call the 'Fib' method on the Fibonacci object. How would you do that? (Note, you already know how to call methods on objects, because you're already doing that in your code). Step 2: Assign the result of the method call to the variable 'sum'. You also already know how to do that (that's also already in your code!). taylor Lynch wrote:I have no way to compile this right now. But would this be what your talking about? No, this is not valid Java. Remove the "System.out.println()" that's around the call to box.Fib, and remove the "int". Also, you want line 17, which shows the result, under the call to box.Fib, and you'll want to show the value of 'sum' there. "The variable message" means nothing to us. Please cut-and-paste the full error message you get, as well as your current code. Without both of those all anyone here can do is guess what you have, and guess at a solution. There are only two hard things in computer science: cache invalidation, naming things, and off-by-one errors Update: I was able to get your code to run. It does exactly what you told it do to, assuming this is still your current code: What do you think happens inside your Fib() method?> There are only two hard things in computer science: cache invalidation, naming things, and off-by-one errors sorry - i was updating my reply when you posted this. See my previous post.sorry - i was updating my reply when you posted this. See my previous post.taylor Lynch wrote:When i run it like this without the return sum; statement, it compiles just fine. But when i run the program it stops right after you input a number for the variable n. There are only two hard things in computer science: cache invalidation, naming things, and off-by-one errors I am assuming you pare passing in a value - something like 7. So, this says "while index is less than n...do nothing". You have set index to 1, and n is what you pass in - so it is something like 7. so..."while 1 is less than 7, do nothing". your code is stuck in an infinite loop. You need to write the code so that something HAPPENS on each iteration, and you eventually break out. There are only two hard things in computer science: cache invalidation, naming things, and off-by-one errors taylor Lynch wrote:I just dont understand what it means for the code. Also i cant change anything from the Fibonacci.java file it was supplied by the Professor and cant be changed. I don't know how to help you then. IF - and I want to emphasize the IF - you can't change the Fibonacci class, and what I posted is indeed the real and actual code, then you are screwed. the method Fib(int n) defined in that class is an infinite loop. If you call it (and you do on line 17 of your FibonacciJDialog class), your code goes into an infinite loop. It will sit there, doing basically nothing until you kill the JVM. So either a) You misunderstood your professor b) This is NOT the real code c) You need a miracle of some kind. Those are the only options I see. There are only two hard things in computer science: cache invalidation, naming things, and off-by-one errors The Fibonacci sequence consists of the numbers 1,1,2,3,5,8,13 ...... in which each number (except the first 2) is the sum of the two preceding numbers. Write a class which that you will call Fibonacci class with a method fib(N) that prints the corresponding Fibonacci value. Remember to use a long as a value for the returned fibonacci result because the number could be very large. Design and write a Dialog box for Input/Ouput that allows the users to input a number like 5 and the program will produce an output of 8, which is the fibonacci of 5 because the fib(5)=8. Remember to call the program: FibonacciJDialog.java. (Limit the number N to the value 40 maximum). Write a class which that you will call Fibonacci class with a method fib(N) That means you need to write the class. It does NOT mean that you can't edit or change it. There are only two hard things in computer science: cache invalidation, naming things, and off-by-one errors . So...I really think you need to read this page. You seem to be writing and changing your code here and there, without understanding what you need to do. I would recommend you do do this...Forget about the dialog box, forget about your FibonacciJDialog class. let's focus on your Fibonacci class. I would put a main() method in it, so you can concentrate on nothing but it. I would suggest you throw away the entire Fibonacci code, and start over. Before you write a SINGLE line of code, you need to spend some time THINKING about what your code should do. If I asked YOU to give me the value of the 6th fibonacci number, how would YOU compute it? Explain it in ENGLISH, not in java. Explain it in clear steps that a 10yr old could follow. Then, when you feel confident you know how to do it... Write a Fibonacci class with a main method that prints "I'm in main". Once that works (and I literally mean compile and TEST it to be sure it wors), write a "void Fib()" method that does nothing but prints "I'm in Fib". Change your main to call it. Compile and test to make sure that works. Then change your Fib() method to take an int parameter, and print out what that parameter was...something like "Fib was passed the value 7". Once that works, change the method to return an int (for now, it may only return what it was passed in). Print out the value it returns in your main, compile and test. Keep going like this. You should only ever add 2-3 lines of code AT MOST before you re-compile and test. There are only two hard things in computer science: cache invalidation, naming things, and off-by-one errors I have tried it, and you don’t get overflow for fib(40). I obviously had forgotten what happened previously. Fred is right. That code cannot be what you were provided with. You have misunderstood something and changed something before you showed us it. taylor Lynch wrote:The Fibonacci code that i've been using is copied and pasted directly from the site nothing has been changed except the things i changed in my previous post. This was your previous post: taylor Lynch wrote:I realize taht now. I changed the code in my previous post but as i said im still having a problem with the output being wrong. Your post prior to that was a quote of your assignement. The post prior to THAT says Ok i changed the index to 40 and it ran the program. So at this point... I have no idea WHAT your code looks like. I have no idea WHAT the issue you are having is. Your job, when you post here, is to make it EASY for me to help you. Post your current code. Post the EXACT and COMPLETE error message (if any). Post what output you GET. Post what output you EXPECT. Follow the advice you have been given (or at least, ACKNOWLEDGE you've seen it and why you don't think it will solve your problem). I've given you the best advice I can (in my post at 14:20:25). If you aren't going to follow that advice, there's not much more I can do. There are only two hard things in computer science: cache invalidation, naming things, and off-by-one errors Heres the new code there are no errors when it compiles the problem lies in when I run the program it doesn't provide the correct answer. This tells me that there is something wrong in the coding that needs to be changed. Try the following code !!! class Fibonacci { void fib(int n) { int in1=0;int in2=1; System.out.println("the fibonacci is"); System.out.println(in1); System.out.println(in2); int sum=0;//initial value int index=1; while (index<n) { sum=in1+in2; System.out.println(sum); in1=in2; in2=sum; index++; } } } public class Fib { public static void main(String ar[]) { Fibonacci fb=new Fibonacci(); fb.fib(10); } }
https://coderanch.com/t/599625/java/java/Java-Assignment
CC-MAIN-2016-36
en
refinedweb
1 #include <QtTest> 2 #include <QtCore> 3 4 class testDate: public QObject 5 { 6 Q_OBJECT 7 private slots: 8 void testValidity(); 9 void testMonth(); 10 }; 11 12 void testDate::testValidity() 13 { 14 // 11 March 1967 15 QDate date( 1967, 3, 11 ); 16 QVERIFY( date.isValid() ); 17 } 18 19 void testDate::testMonth() 20 { 21 // 11 March 1967 22 QDate date; 23 date.setYMD( 1967, 3, 11 ); 24 QCOMPARE( date.month(), 3 ); 25 QCOMPARE( QDate::longMonthName(date.month()), 26 QString("March") ); 27 } 28 29 30 QTEST_MAIN(testDate) 31 #include "tutorial1.moc" Save as tutorial1.cpp Stepping through the code, the first line imports the header files for the QtTest namespace. The second line imports the headers for the QtCore namespace (not strictly necessary, since QtTest also imports it, but it is robust and safe). Lines 4 to 10 give us the test class, testData. Note that testDate inherits from QObject and has the Q_OBJECT macro - QtTestLib requires specific Qt functionality that is present in QObject. Lines 12 to 17 19 to 27. In a later tutorial we will see how to work around problems that this behaviour can cause. Line. In the previous example, we looked at how we can test a date class. If we decided that we really needed to test a lot more dates, then we'd be cutting and pasting a lot of code. If we subsequently changed the name of a function, then it has to be changed in a lot of places. As an alternative to introducing these types of maintenance problems into our tests, QtTestLib offers support for data driven testing. The easiest way to understand data driven testing is by an example, as shown below: Example 9. QDate test code, data driven version 1 #include <QtTest> 2 #include <QtCore> 3 4 5 class testDate: public QObject 6 { 7 Q_OBJECT 8 private slots: 9 void testValidity(); 10 void testMonth_data(); 11 void testMonth(); 12 }; 13 14 void testDate::testValidity() 15 { 16 // 12 March 1967 17 QDate date( 1967, 3, 12 ); 18 QVERIFY( date.isValid() ); 19 } 20 21 void testDate::testMonth_data() 22 { 23 QTest::addColumn<int>("year"); // the year we are testing 24 QTest::addColumn<int>("month"); // the month we are testing 25 QTest::addColumn<int>("day"); // the day we are testing 26 QTest::addColumn<QString>("monthName"); // the name of the month 27 28 QTest::newRow("1967/3/11") << 1967 << 3 << 11 << QString("March"); 29 QTest::newRow("1966/1/10") << 1966 << 1 << 10 << QString("January"); 30 QTest::newRow("1999/9/19") << 1999 << 9 << 19 << QString("September"); 31 // more rows of dates can go in here... 32 } 33 34 void testDate::testMonth() 35 { 36 QFETCH(int, year); 37 QFETCH(int, month); 38 QFETCH(int, day); 39 QFETCH(QString, monthName); 40 41 QDate date; 42 date.setYMD( year, month, day); 43 QCOMPARE( date.month(), month ); 44 QCOMPARE( QDate::longMonthName(date.month()), monthName ); 45 } 46 47 48 QTEST_MAIN(testDate) 49 #include "tutorial2.moc" As you can see, we've introduced a new method - testMonth_data, and moved the specific test date out of testMonth. We've had to add some more code (which will be explained soon), but the result is a separation of the data we are testing, and the code we are using to test it. The names of the functions are important - you must use the _data suffix for the data setup routine, and the first part of the data setup routine must match the name of the driver routine. It is useful to visualise the data as being a table, where the columns are the various data values required for a single run through the driver, and the rows are different runs. In our example, there are four columns (three integers, one for each part of the date; and one QString ), added in lines 19 through 22. The addColumn template obviously requires the type of variable to be added, and also requires a variable name argument. We then add as many rows as required using the newRow function, as shown in lines 23 through 26. The string argument to newRow is a label, which is handy for determining what is going on with failing tests, but doesn't have any effect on the test itself. To use the data, we simply use QFETCH to obtain the appropriate data from each row. The arguments to QFETCH are the type of the variable to fetch, and the name of the column (which is also the local name of the variable it gets fetched into). You can then use this data in a QCOMPARE or QVERIFY check. The code is run for each row, which you can see below: ********* As an alternative to using QFETCH and QCOMPARE, you may be able to use the QTEST macro instead. QTEST takes two arguments, and if one is a string, it looks up that string as an argument in the current row. You can see how this can be used below, which is equivalent to the testMonth() code in the previous example. Example 11. QDate test code, data driven version using QTEST 1 void testDate::testMonth() 2 { 3 QFETCH(int, year); 4 QFETCH(int, month); 5 QFETCH(int, day); 6 7 QDate date; 8 date.setYMD( year, month, day); 9 QCOMPARE( date.month(), month ); 10 QTEST( QDate::longMonthName(date.month()), "monthName" ); 11 } ********* In the previous two tutorials, we've tested a date management class. This is an pretty typical use of unit testing. However Qt and KDE applications will make use graphical classes that take user input (typically from a keyboard and mouse). QtTestLib offers support for testing these classes, which we'll see in this tutorial. 1 #include <QtTest> 2 #include <QtCore> 3 #include <QtGui> 4 Q_DECLARE_METATYPE(QDate) 5 6 class testDateEdit: public QObject 7 { 8 Q_OBJECT 9 private slots: 10 void testChanges(); 11 void testValidator_data(); 12 void testValidator(); 13 }; 14 15 void testDateEdit::testChanges() 16 { 17 // 11 March 1967 18 QDate date( 1967, 3, 11 ); 19 QDateEdit dateEdit( date ); 20 21 // up-arrow should increase day by one 22 QTest::keyClick( &dateEdit, Qt::Key_Up ); 23 QCOMPARE( dateEdit.date(), date.addDays(1) ); 24 25 // we click twice on the "reduce" arrow at the bottom RH corner 26 // first we need the widget size to know where to click 27 QSize editWidgetSize = dateEdit.size(); 28 QPoint clickPoint(editWidgetSize.rwidth()-2, editWidgetSize.rheight()-2); 29 // issue two clicks 30 QTest::mouseClick( &dateEdit, Qt::LeftButton, 0, clickPoint); 31 QTest::mouseClick( &dateEdit, Qt::LeftButton, 0, clickPoint); 32 // and we should have decreased day by two (one less than original) 33 QCOMPARE( dateEdit.date(), date.addDays(-1) ); 34 35 QTest::keyClicks( &dateEdit, "25122005" ); 36 QCOMPARE( dateEdit.date(), QDate( 2005, 12, 25 ) ); 37 38 QTest::keyClick( &dateEdit, Qt::Key_Tab, Qt::ShiftModifier ); 39 QTest::keyClicks( &dateEdit, "08" ); 40 QCOMPARE( dateEdit.date(), QDate( 2005, 8, 25 ) ); 41 } 42 43 void testDateEdit::testValidator_data() 44 { 45 qRegisterMetaType<QDate>("QDate"); 46 47 QTest::addColumn<QDate>( "initialDate" ); 48 QTest::addColumn<QString>( "keyclicks" ); 49 QTest::addColumn<QDate>( "finalDate" ); 50 51 QTest::newRow( "1968/4/12" ) << QDate( 1967, 3, 11 ) 52 << QString( "12041968" ) 53 << QDate( 1968, 4, 12 ); 54 55 QTest::newRow( "1967/3/14" ) << QDate( 1967, 3, 11 ) 56 << QString( "140abcdef[" ) 57 << QDate( 1967, 3, 14 ); 58 // more rows can go in here 59 } 60 61 void testDateEdit::testValidator() 62 { 63 QFETCH( QDate, initialDate ); 64 QFETCH( QString, keyclicks ); 65 QFETCH( QDate, finalDate ); 66 67 QDateEdit dateEdit( initialDate ); 68 // this next line is just to start editing 69 QTest::keyClick( &dateEdit, Qt::Key_Enter ); 70 QTest::keyClicks( &dateEdit, keyclicks ); 71 QCOMPARE( dateEdit.date(), finalDate ); 72 } 73 74 QTEST_MAIN(testDateEdit) 75 19 and 20 show how we can test what happens when we press the up-arrow key. The QTest::keyClick function takes a pointer to a widget, and a symbolic key name (a char or a Qt::Key). At line 20, we check that the effect of that event was to increment the date by a day. The QTest:keyClick function also takes an optional keyboard modifier (such as Qt::ShiftModifier for the shift key) and an optional delay value (in milliseconds). As an alternative to using QTest::keyClick, you can use QTest::keyPress and QTest::keyRelease to construct more complex keyboard sequences. Lines 23 to 29 show a similar test to the previous one, but in this case we are simulating a mouse click. We need to click in the lower right hand part of the widget (to hit the decrement arrow - see Figure 1), and that requires knowing how large the widget is. So lines 23 and 24 calculate the correct point based off the size of the widget. Line 26 (and the identical line 27) simulates clicking with the left-hand mouse button at the calculated point. The arguments to Qt::mouseClick are: In addition to QTest::mouseClick, there is also QTest::mousePress, QTest::mouseRelease, QTest::mouseDClick (providing double-click) and QTest::mouseMove. The first three are used in the same way as QTest::mouseClick. The last takes a point to move the mouse to. You can use these functions in combination to simulate dragging with the mouse. Lines 30 and 31 show another approach to keyboard entry, using the QTest::keyClicks. Where QTest::keyClick sends a single key press, QTest::keyClicks takes a QString (or something equivalent, in line 30 a character array) that represents a sequence of key clicks to send. The other arguments are the same. Lines 32 to 34 show how you may need to use a combination of functions. After we've entered a new date in line 30, the cursor is at the end of the widget. At line 32, we use a Shift-Tab combination to move the cursor back to the month value. Then at line 33 we enter a new month value. Of course we could have used individual calls to QTest::keyClick, however that wouldn't have been as clear, and would also have required more code. Lines 50 to 60 show a data-driven test - in this case we are checking that the validator on QDateEdit is performing as expected. This is a case where data-driven testing can really help to ensure that things are working the way they should. At lines 52 to 54, we fetch in an initial value, a series of key-clicks, and an expected result. These are the columns that are set up in lines 39 to 41. However note that we are now pulling in a QDate, where in previous examples we used three integers and then build the QDate from those. However QDate isn't a registered type for QMetaType, and so we need to register it before we can use it in our data-driven testing. This is done using the Q_DECLARE_METATYPE macro in line 4 and the qRegisterMetaType function in line 38. Lines 42 to 47 add in a couple of sample rows. Lines 42 to 44 represent a case where the input is valid, and lines 45 to 47 are a case where the input is only partly valid (the day part). A real test will obviously contain far more combinations than this. Those test rows are actually tested in lines 55 to 59. We construct the QDateEdit widget in line 55, using the initial value. We then send an Enter key click in line 57, which is required to get the widget into edit mode. At line 58 we simulate the data entry, and at line 59 we check whether the results are what was expected.. The key class for building a list of test events is imaginatively known as QTestEventList. It is a QList of QTestEvents. The normal approach is to create the list, and then use various member functions to add key and mouse events. The normal functions that you'll need are addKeyClick and addMouseClick, which are very similar to the QTest::keyClick and QTest::mouseClick functions we used earlier in this tutorial. For finer grained operations, you can also use addKeyPress, addKeyRelease, addKeyEvent, addMousePress, addMouseRelease, addMouseDClick and addMouseMove to build up more complex event lists. You can also use addDelay to add a specified delay between events. When the list has been built up, you just call simulate on each widget. You can see how this works in the example below, which is the QDateEdit example (from above) converted to use QTestEventList. Example 14. QDateEdit test code, using QTestEventList 1 2 #include <QtTest> 3 #include <QtCore> 4 #include <QtGui> 5 Q_DECLARE_METATYPE(QDate) 6 7 class testDateEdit: public QObject 8 { 9 Q_OBJECT 10 private slots: 11 void testChanges(); 12 void testValidator_data(); 13 void testValidator(); 14 }; 15 16 void testDateEdit::testChanges() 17 { 18 // 11 March 1967 19 QDate date( 1967, 3, 11 ); 20 QDateEdit dateEdit( date ); 21 22 // up-arrow should increase day by one 23 QTest::keyClick( &dateEdit, Qt::Key_Up ); 24 QCOMPARE( dateEdit.date(), date.addDays(1) ); 25 26 // we click twice on the "reduce" arrow at the bottom RH corner 27 // first we need the widget size to know where to click 28 QSize editWidgetSize = dateEdit.size(); 29 QPoint clickPoint(editWidgetSize.rwidth()-2, editWidgetSize.rheight()-2); 30 // build a list that contains two clicks 31 QTestEventList list1; 32 list1.addMouseClick( Qt::LeftButton, 0, clickPoint); 33 list1.addMouseClick( Qt::LeftButton, 0, clickPoint); 34 // call that list on the widget 35 list1.simulate( &dateEdit ); 36 // and we should have decreased day by two (one less than original) 37 QCOMPARE( dateEdit.date(), date.addDays(-1) ); 38 39 QTest::keyClicks( &dateEdit, "25122005" ); 40 QCOMPARE( dateEdit.date(), QDate( 2005, 12, 25 ) ); 41 42 QTestEventList list2; 43 list2.addKeyClick( Qt::Key_Tab, Qt::ShiftModifier ); 44 list2.addKeyClicks( "08" ); 45 list2.simulate( &dateEdit ); 46 QCOMPARE( dateEdit.date(), QDate( 2005, 8, 25 ) ); 47 } 48 49 void testDateEdit::testValidator_data() 50 { 51 qRegisterMetaType<QDate>("QDate"); 52 53 QTest::addColumn<QDate>( "initialDate" ); 54 QTest::addColumn<QTestEventList>( "events" ); 55 QTest::addColumn<QDate>( "finalDate" ); 56 57 QTestEventList eventsList1; 58 // this next line is just to start editing 59 eventsList1.addKeyClick( Qt::Key_Enter ); 60 eventsList1.addKeyClicks( "12041968" ); 61 62 QTest::newRow( "1968/4/12" ) << QDate( 1967, 3, 11 ) 63 << eventsList1 64 << QDate( 1968, 4, 12 ); 65 66 QTestEventList eventsList2; 67 eventsList2.addKeyClick( Qt::Key_Enter ); 68 eventsList2.addKeyClicks( "140abcdef[" ); 69 70 QTest::newRow( "1967/3/14" ) << QDate( 1967, 3, 11 ) 71 << eventsList2 72 << QDate( 1967, 3, 14 ); 73 // more rows can go in here 74 } 75 76 void testDateEdit::testValidator() 77 { 78 QFETCH( QDate, initialDate ); 79 QFETCH( QTestEventList, events ); 80 QFETCH( QDate, finalDate ); 81 82 QDateEdit dateEdit( initialDate ); 83 84 events.simulate( &dateEdit); 85 86 QCOMPARE( dateEdit.date(), finalDate ); 87 } 88 89 QTEST_MAIN(testDateEdit) 90 #include "tutorial3a.moc" This example is pretty much the same as the previous version, up to line 25. In line 26, we create a QTestEventList. We add events to the list in lines 27 and 28 - note that we don't specify the widget we are calling them on at this stage. In line 30, we simulate each event on the widget. If we had multiple widgets, we could call simulate using the same set of events. Lines 31 to 34 are as per the previous example. We create another list in lines 35 to 37, although this time we are using addKeyClick and addKeyClicks instead of adding mouse events. Note that an event list can contain combinations of mouse and keyboard events - it just didn't make sense in this test to have such a combination. We use the second list at line 38, and check the results in line 39. You can also build lists of events in data driven testing as well, as shown in lines 41 to 70. The key difference is that instead of fetching a QString in each row, we are fetching a QTestEventList. This requires that we add a column of QTestEventList, rather than QString (see line 45). At lines 47 to 50, we create a list of events. At line 52 we add those events to the applicable row. We create a second list at lines 54 to 56, and add that second list to the applicable row in line 58.. Tests are skipped using the QSKIP macro. QSKIP takes two arguments - a label string that should be used to describe why the test is being skipped, and a enumerated constant that controls how much of the test is skipped. If you pass SkipSingle, and the test is data driven, then only the current row is skipped. If you pass SkipAll and the test is data driven, then all following rows are skipped. If the test is not data driven, then it doesn't matter which one is used. You can see how QSKIP works in the example below: Example 15. Unit test showing skipped tests 1 void testDate::testSkip_data() 2 { 3 QTest::addColumn<int>("val1"); 4 QTest::addColumn<int>("val2"); 5 6 QTest::newRow("1") << 1 << 1; 7 QTest::newRow("2") << 1 << 2; 8 QTest::newRow("3") << 3 << 3; 9 QTest::newRow("5") << 5 << 5; 10 QTest::newRow("4") << 4 << 5; 11 } 12 13 void testDate::testSkip() 14 { 15 QFETCH(int, val1); 16 QFETCH(int, val2); 17 18 if ( val2 == 2 ) 19 QSKIP("Two isn't good, not doing it", SkipSingle); 20 if ( val1 == 5 ) 21 QSKIP("Five! I've had enough, bailing here", SkipAll); 22 QCOMPARE( val1, val2 ); 23 } ********* from the verbose output, you can see that the test was run on the first and third rows. The second row wasn't run because of the QSKIP call with a SkipSingle argument. Similarly, the fourth and fifth rows weren't run because the fourth row triggered a QSKIP call with a SkipAll argument. Also note that the test didn't fail, even though there were two calls to QSKIP. Conceptually, a skipped test is a test that didn't make sense to run for test validity reasons, rather than a test that is valid but will fail because of bugs or lack of features in the code being tested. If you have valid tests, but the code that you are testing doesn't pass them, then ideally you fix the code you are testing. However sometimes that isn't possible in the time that you have available, or because of a need to avoid binary incompatible changes. In this case, it is undesirable to delete or modify the unit tests - it is better to flag the test as "expected to fail", using the QEXPECT_FAIL macro. An example of this is shown below: Example 17. Unit test showing expected failures 1 void testDate::testExpectedFail() 2 { 3 QEXPECT_FAIL("", "1 is not 2, even for very large 1", Continue); 4 QCOMPARE( 1, 2 ); 5 QCOMPARE( 2, 2 ); 6 7 QEXPECT_FAIL("", "1 is not 2, even for very small 2", Abort); 8 QCOMPARE( 1, 2 ); 9 // The next line will not be run, because we Abort on previous failure 10 QCOMPARE( 3, 3 ); 11 } ********* As you can see from the verbose output, we expect a failure from each time we do a QCOMPARE( 1, 2);. In the first call to QEXPECT_FAIL, we use the Continue argument, so the rest of the tests will still be run. However in the second call to QEXPECT_FAILwe use the Abort and the test bails at this point. Generally it is better to use Continue unless you have a lot of closely related tests that would each need a QEXPECT_FAIL entry. 1 void testDate::testUnexpectedPass() 2 { 3 QEXPECT_FAIL("", "1 is not 2, even for very large 1", Continue); 4 QCOMPARE( 1, 1 ); 5 QCOMPARE( 2, 2 ); 6 7 QEXPECT_FAIL("", "1 is not 2, even for very small 2", Abort); 8 QCOMPARE( 1, 1 ); 9 QCOMPARE( 3, 3 ); 10 } ********* The effect of unexpected passes on the running of the test is controlled by the second argument to QEXPECT_FAIL. If the argument is Continue and the test unexpectedly passes, then the rest of the test function will be run. If the argument is Abort, then the test will stop. If you are testing border cases, you will likely run across the case where some kind of message will be produced using the qDebug or qWarning functions. Where a test produces a debug or warning message, that message will be logged in the test output (although it will still be considered a pass unless some other check fails), as shown in the example below: Example 21. Unit test producing warning and debug messages 1 void testDate::testQdebug() 2 { 3 qWarning("warning"); 4 qDebug("debug"); 5 qCritical("critical"); 6 }. By contrast, the warning message in testDate::testValiditi still causes a warning to be logged, because the ignoreMessage call does not match the text in the warning message. In addition, because a we expected a particular warning message and it wasn't received, the testDate::testValiditi test function fails. 1 #include <QtTest> 2 #include <QtCore> 3 #include <QtGui> 4 5 class testLabel: public QObject 6 { 7 Q_OBJECT 8 private slots: 9 void testChanges(); 10 }; 11 12 void testLabel::testChanges() 13 { 14 QLabel label; 15 16 // setNum() is a QLabel slot, but we can just call it like any 17 // other method. 18 label.setNum( 3 ); 19 QCOMPARE( label.text(), QString("3") ); 20 21 // clear() is also a slot. 22 label.clear(); 23 QVERIFY( label.text().isEmpty() ); 24 } 25 26 QTEST_MAIN(testLabel) 27 #include "tutorial5.moc" Testing of signals is a little more difficult than testing of slots, however Qt offers a very useful class called QSignalSpy that helps a lot. QSignalSpy is a class provided with Qt that allows you to record the signals that have been emitted from a particular QObject subclass object. You can then check that the right number of signals have been emitted, and that the right kind of signals were emitted. You can find more information on the QSignalSpy class in your Qt documentation. An example of how you can use QSignalSpy to test a class that has signals is shown below. Example 26. QCheckBox test code, showing testing of signals 1 #include <QtTest> 2 #include <QtCore> 3 #include <QtGui> 4 5 class testCheckBox: public QObject 6 { 7 Q_OBJECT 8 private slots: 9 void testSignals(); 10 }; 11 12 void testCheckBox::testSignals() 13 { 14 // You don't need to use an object created with "new" for 15 // QSignalSpy, I just needed it in this case to test the emission 16 // of a destroyed() signal. 17 QCheckBox *xbox = new QCheckBox; 18 19 // We are going to have two signal monitoring classes in use for 20 // this test. 21 // The first monitors the stateChanged() signal. 22 // Also note that QSignalSpy takes a pointer to the object. 23 QSignalSpy stateSpy( xbox, SIGNAL( stateChanged(int) ) ); 24 25 // Not strictly necessary, but I like to check that I have set up 26 // my QSignalSpy correctly. 27 QVERIFY( stateSpy.isValid() ); 28 29 // Now we check to make sure we don't have any signals already 30 QCOMPARE( stateSpy.count(), 0 ); 31 32 // Here is a second monitoring class - this one for the 33 // destroyed() signal. 34 QSignalSpy destroyedSpy( xbox, SIGNAL( destroyed() ) ); 35 QVERIFY( destroyedSpy.isValid() ); 36 37 // A sanity check to verify the initial state 38 // This also shows that you can mix normal method checks with 39 // signal checks. 40 QCOMPARE( xbox->checkState(), Qt::Unchecked ); 41 42 // Shouldn't already have any signals 43 QCOMPARE( destroyedSpy.count(), 0 ); 44 45 // If we change the state, we should get a signal. 46 xbox->setCheckState( Qt::Checked ); 47 QCOMPARE( stateSpy.count(), 1 ); 48 49 xbox->setCheckState( Qt::Unchecked ); 50 QCOMPARE( stateSpy.count(), 2 ); 51 52 xbox->setCheckState( Qt::PartiallyChecked ); 53 QCOMPARE( stateSpy.count(), 3 ); 54 55 // If we destroy the object, the signal should be emitted. 56 delete xbox; 57 58 // So the count of objects should increase. 59 QCOMPARE( destroyedSpy.count(), 1 ); 60 61 // We can also review the signals that we collected 62 // QSignalSpy is really a QList of QLists, so we take the first 63 // list, which corresponds to the arguments for the first signal 64 // we caught. 65 QList<QVariant> firstSignalArgs = stateSpy.takeFirst(); 66 // stateChanged() only has one argument - an enumerated type (int) 67 // So we take that argument from the list, and turn it into an integer. 68 int firstSignalState = firstSignalArgs.at(0).toInt(); 69 // We can then check we got the right kind of signal. 70 QCOMPARE( firstSignalState, static_cast<int>(Qt::Checked) ); 71 72 // check the next signal - note that takeFirst() removes from the list 73 QList<QVariant> nextSignalArgs = stateSpy.takeFirst(); 74 // this shows another way of fudging the argument types 75 Qt::CheckState nextSignalState = (Qt::CheckState)nextSignalArgs.at(0).toInt(); 76 QCOMPARE( nextSignalState, Qt::Unchecked ); 77 78 // and again for the third signal 79 nextSignalArgs = stateSpy.takeFirst(); 80 nextSignalState = (Qt::CheckState)nextSignalArgs.at(0).toInt(); 81 QCOMPARE( nextSignalState, Qt::PartiallyChecked ); 82 } 83 84 QTEST_MAIN(testCheckBox) 85 #include "tutorial5a.moc" The first 11 lines are essentially unchanged from previous examples that we've seen. Line 15 creates the object that will be tested - as noted in the comments in lines 12-14, the only reason that I'm creating it with new is because I need to delete it in line 45 to cause the destroyed() signal to be emitted. Line 20 sets up the first of our two QSignalSpy instances. The one in line 20 monitors the stateChanged(int) signal, and the one in line 29 monitors the destroyed() signal. If you get the name or signature of the signal wrong (for example, if you use stateChanged() instead of stateChanged(int)), then this will not be caught at compile time, but will result in a runtime failure. You can test if things were set up correctly using the isValid(), as shown in lines 24 and 30. As shown in line 34, there is no reason why you cannot test normal methods, signals and slots in the same test. Line 38 changes the state of the object under test, which is supposed to result in a stateChanged(int) signal being emitted. Line 39 checks that the number of signals increases from zero to one. Lines 40 and 41 repeat the process, and again in lines 42 and 43. Line 45 deletes the object under test, and line 47 tests that the destroyed() signal has been emitted. For signals that have arguments (such as our stateChanged(int) signal), you may also wish to check that the arguments were correct. You can do this by looking at the list of signal arguments. Exactly how you do this is fairly flexible, however for simple tests like the one in the example, you can manually work through the list using takeFirst() and check that each argument is correct. This is shown in line 52, 55 and 57 for the first signal. The same approach is shown in lines 59, 61 and 62 for the second signal, and the in lines 64 to 66 for the third signal. For a more complex set of tests, you may wish to apply some data driven techniques.. In some configurations, there may be a build system option to turn on (or off) the compilation of tests. At this stage, you have to enable the BUILD_TESTING option in KDE4 modules, however this may go away in the near future, as later version of CMake can build the test applications on demand.. When choosing a display name, you should adopt a similar convention to the existing tests (e.g. in kdelibs/kdecore, all the display names start with kdecore-, which makes it easy to find the failing test if you run all the tests in kdelibs). If there are no existing tests, using a submodule prefix is probably a good idea.S} ) You are meant to replace "kwhatevertest" with the name of your test application. The target_link_libraries() line will need to contain whatever libraries are needed for the feature you are testing, so if it is a GUI feature, you'll likely need to use "${KDE4_KDEUI_LIBS}. To run all tests, you can just "make test". This will work through each of the tests that have been added (at any lower level) using kde4_add_unit_test, provided that you have include(KDE4Defaults) in your CMakeLists.txt. This is equivalent to running the "ctest" executable with no arguments. If you want finer grained control over which tests are run or the output format, you can use additional arguments. These are explained in the ctest man page ("man ctest" on a *nix system, or run "ctest --help-full").
https://techbase.kde.org/index.php?title=Development/Tutorials/Unittests&diff=72836&oldid=72071
CC-MAIN-2016-36
en
refinedweb
urn:lsid:ibm.com:blogs:entries-203cd156-7537-4b84-9c9d-77278729c8c1 A view from the clouds: Cloud computing for developers - Tags - apache_wink A view from the clouds: Cloud computing for developers 0 30 1 2013-12-06T05:09:12-05:00 IBM Connections - Blogs urn:lsid:ibm.com:blogs:entry-1a101046-9691-4eb6-882a-826db0dbedec Proxy techniques for the WebSphere CloudBurst REST API DKA 0600026VJ2 active false Comment Entries application/atom+xml;type=entry Likes true 2010-04-06T10:29:32-04:00 2010-04-06T10:33:28-04:00 <table cellpadding="15" cellspacing="15"> <tbody> <tr> <td> If you plan to write browser-based client code (JavaScript, Dojo, jQuery, etc.) that leverages the <a href="" target="_blank">WebSphere CloudBurst REST API</a>, you will have to put some type of proxy mechanism in place. One approach is to simply standup a regular web proxy, put the correct routing directives in place, and then send your client-side requests to that proxy. This is easy enough to do, and the web proxy will take care of all the routing. However, I strongly encourage you to think about putting an application-based proxy in place to handle client-side requests to the WebSphere CloudBurst REST layer. </td> </tr> <tr> <td> The reason I suggest the application proxy approach is twofold. First, it affords you the ability of having custom interactions with the REST API. For instance, you may insert logic into the server-side proxy code that returns only a subset of the JSON data contained in the response from the appliance. Alternatively, in an effort to reduce the chattiness on your client-side, you may join JSON data from multiple different REST requests to the appliance to fulfill a single client request. You may even decide to represent the data in an all together different format than JSON. All of these options and many more are available to you if you implement an application-based proxy to the REST API. </td> </tr> <tr> <td> The second reason I suggest the application approach is that it is easier, and seemingly safer, to not deal with user passwords on the client-side. If you setup your application proxy, you can configure it to retrieve the appropriate password from a secure location (like an encoded file) based on information passed along in the request. This means the password information is only present in the request (in encoded form of course) from the application proxy to the WebSphere CloudBurst Appliance. </td> </tr> <tr> <td> The good news about the application-based proxy approach is that it is simple to put in place. I composed one using the open source <a href="" target="_blank">Apache Wink</a> project. The Apache Wink project is an open source implementation of the <a href="" target="_blank">JAX-RS specification</a> (and then some), and it enables you to develop POJOs that are in turn exposed in a RESTful manner. In my case, I had a single resource POJO: </td> </tr> <tr> <td> <pre><code> package com.ibm.wca.resources; import java.io.InputStream; import javax.ws.rs.GET; import javax.ws.rs.Path; import javax.ws.rs.QueryParam; import javax.ws.rs.core.Context; import javax.ws.rs.core.Response; import javax.ws.rs.core.UriInfo; @Path(value="/resources") public class WCAResource extends WCAResourceShim { @Context private UriInfo uriInfo; @GET @Path(value="/{var:.*}") public Response getResources( @QueryParam(value="wcaHost") String wcaHost, @QueryParam(value="wcaUser") String wcaUser) { return getResource(InputStream.class, wcaHost, wcaUser, uriInfo.getPath()); } } </code> </pre> </td> </tr> <tr> <td> The Apache Wink runtime routes any HTTP GET request whose path is like <b>/resources/*</b> to the <b>getResources</b> method in the WCAResource class. This method passes along information taken from the query string (the host name of the target WebSphere CloudBurst Appliance and the requesting WebSphere CloudBurst username), as well as the HTTP path information and sends it on to the getResource method declared as follows: </td> </tr> <tr> <td> <pre><code> public Response getResource(Class entityClass, String wcaHost, String wcaUser, String resourcePath) { Response rsp = null; try { Object entity = null; String uri = "https://" + wcaHost + "/" + resourcePath; Resource resource = WCARestUtils.getResource(wcaHost, wcaUser, uri); entity = resource.get(entityClass); rsp = Response.ok(entity).build(); } catch(Exception e) { rsp = Response.status(500).build(); } return rsp; } </code> </pre> </td> </tr> <tr> <td> The <b>getResource</b> method above uses the WebSphere CloudBurst host name and the request path to construct the URL for the corresponding WebSphere CloudBurst REST API call. Next, it constructs an Apache Wink <b>Resource</b> object and sends the REST request along to the WebSphere CloudBurst Appliance. How do we authenticate this request? We use the WebSphere CloudBurst username (sent as a query string parameter) to retrieve the appropriate encoded password information. Once we have that, we construct the necessary header for basic authorization over SSL. </td> </tr> <tr> <td> The application-based proxy shown here is simply a pass-through. It does not manipulate the data returned from the WebSphere CloudBurst REST API, nor does it map a single client-side call to multiple REST requests. However, it would be simple enough to extend it to do any of those things. If you have any questions about the code here, please let me know. I'd be happy to share more of the code, or talk about how and where to extend it. </td> </tr> <tr> <td> <i>-- Dustin Amrhein</i> </td> </tr> </tbody> </table> If you plan to write browser-based client code (JavaScript, Dojo, jQuery, etc.) that leverages the WebSphere CloudBurst REST API , you will have to put some type of proxy mechanism in place. One approach is to simply standup a regular web proxy, put the... 1 0 4298 urn:lsid:ibm.com:blogs:entries-203cd156-7537-4b84-9c9d-77278729c8c1 A view from the clouds: Cloud computing for developers 2013-12-06T05:09:12-05:00
https://www.ibm.com/developerworks/community/blogs/cloudview/feed/entries/atom?tags=apache_wink&lang=en
CC-MAIN-2016-36
en
refinedweb
A colleague of mine recently asked me about Perl's future. Specifically, he wondered if we have any tricks up our sleeves to compete against today's two most popular platforms: .NET and Java. Without a second's hesitation, I repeated the same answer I've used for years when people ask me if Perl has a future: Perl certainly is alive and well. The Perl 6 development team is working very hard to define the next version of the Perl language. Another team of developers is working hard on Parrot, the next- generation runtime engine for Perl 6. Parrot is being designed to support dynamic languages like Perl 6, but also Python, Ruby and others. Perl 6 will also support a transparent migration of existing Perl 5 code. Then I cheerfully continued with this addendum: Fotango is sponsoring one of their developers, Arthur Bergman, to work on Ponie, the reimplementation of Perl 5.10 on top of Parrot. That is often a sufficient answer to the question, "Does Perl have a future?" However, my colleague already knew about Perl 6 and Parrot. Perl 6 was announced with a great deal of fanfare about three and a half years ago. The Parrot project, announced as an April Fool's joke in 2001, is now over two years old as a real open source project. While Parrot has made some amazing progress, it is not yet ready for production usage, and will not be for some time to come. The big near-term goal for Parrot is to execute Python bytecode faster than the standard CPython implementation, and to do so by the Open Source Convention in July 2004. There's a fair amount of work to do between now and then, and even more work necessary to take Parrot from that milestone to something you can use as a replacement for something like, say, Perl. So, aside from the grand plans of Perl 6 and Parrot, does Perl really have a future? The State of Perl Development Perl 6 and Parrot do not represent our future, but rather our long-term insurance policy. When Perl 6 was announced, the Perl 5 implementation was already about seven years old. Core developers were leaving perl5-porters and not being replaced. (We didn't know it at the time, but this turned out to be a temporary lull. Thankfully.) The source code is quite complex, and very daunting to new developers. It was and remains unclear whether Perl can sustain itself as an open source project for another ten or twenty years if virtually no one can hack on the core interpreter. In 2000, Larry Wall saw Perl 6 as a means to keep Perl relevant, and to keep the ideas flowing within the Perl world. The fear at the time was quite palpable: if enough alpha hackers develop in Java or Python and not Perl, the skills we have spent years acquiring and honing will soon become useless and literally worthless. Furthermore, backwards compatibility with thirteen years (now sixteen years) of working Perl code was starting to limit the ease with which Perl can adapt to new demands. Taken to a logical extreme, all of these factors could work against Perl, rendering it yesterday's language, incapable of effectively solving tomorrow's problems. The plan for Perl 6 was to provide not only a new implementation of the language, but also a new language design that could be extended by mere mortals. This could increase the number of people who would be both capable and interested in maintaining and extending Perl, both as a language and as a compiler/interpreter. A fresh start would help Perl developers take Perl into bold new directions that were simply not practical with the then-current Perl 5 implementation. Today, over three years later, the Perl development community is quite active writing innovative software that solves the problems real people and businesses face today. However, the innovation and inspiration is not entirely where we thought it would be. Instead of seeing the new language and implementation driving a new wave of creativity, we are seeing innovation in the libraries and modules available on CPAN -- code you can use right now with Perl 5, a language we all know and love. In a very real sense, the Perl 6 project has already achieved its true goals: to keep Perl relevant and interesting, and to keep the creativity flowing within the Perl community. What does this mean for Perl's future? First of all, Perl 5 development continues alongside Perl 6 and Parrot. There are currently five active development branches for Perl 5. The main branch, Perl 5.8.x, is alive and well. Jarkko Hietaniemi released Perl 5.8.1 earlier this year as a maintenance upgrade to Perl 5.8.0, and turned over the patch pumpkin to Nick Clark, who is presently working on building Perl 5.8.3. In October, Hugo van der Sanden released the initial snapshot of Perl 5.9.0, the development branch that will lead to Perl 5.10. And this summer, Fotango announced that Arthur Bergman is working on Ponie, a port of Perl 5.10 to run on top of Parrot, instead of the current Perl 5 engine. Perl 5.12 may be the first production release to run on top of the new implementation. For developers who are using older versions of Perl for compatibility reasons, Rafael Garcia-Suarez is working on Perl 5.6.2, an update to Perl 5.6.1 that adds support for recent operating-system and compiler releases. Leon Brocard is working on making the same kinds of updates for Perl 5.005_04. Where is Perl going? Perl is moving forward, and in a number of parallel directions. For workaday developers, three releases of Perl will help you get your job done: 5.8.x, 5.6.x and, when absolutely necessary, 5.005_0x. For the perl5-porters who develop Perl itself, fixes are being accepted in 5.8.x and 5.9.x. For bleeding-edge developers, there's plenty of work to do on with Parrot. For the truly bleeding edge, Larry and his lieutenants are hashing out the finer points of the design of the Perl 6 language. That describes where development of Perl as a language and as a platform is going. But the truly interesting things about Perl aren't language issues, but how Perl is used. The State of Perl Usage. Looking at the "freshness" of CPAN doesn't tell the whole story about Perl. It merely indicates that Perl developers are actively releasing code on CPAN. Many of these uploads are new and interesting modules, or updates that add new features or fix bugs in modules that we use every day. Some modules are quite stable and very useful, even though they have not been updated in years. But many modules are old, outdated, joke modules, or abandoned. A pessimist looks at CPAN and sees abandoned distributions, buggy software, joke modules and packages in the early stage of development (certainly not ready for "prime time" use). An optimist looks at CPAN and sees some amazingly useful modules (DBI, LWP, Apache::*, and so on), and ignores the less useful modules lurking in the far corners of CPAN. Which view is correct? Looking over the module list, only a very small number of modules are jokes registered in the Acme namespace: about 85 of over 5800 distributions, or less than 2% of the modules on CPAN. Of course, there are joke modules that are not in the Acme namespace, like Lingua::Perligata::Romana and Lingua::Atinlay::Igpay. Yet the number of jokes released as CPAN modules remains quite small when compared to CPAN as a whole. But how much of CPAN is actually useful? It depends on what kind of problems you're solving. Let's assume that only the code released within the last three years, or roughly 82% of CPAN, is worth investigating. Let's further assume that everything in the Acme namespace can be safely ignored, and that the total number of joke modules is no more than twice the number of Acme modules. Ignoring a further 3-4% of CPAN leaves us with about 78%, or over 4,000 distributions, to examine.. And isn't that the real definition of production quality, anyway? The Other State of Perl Usage As Larry mentioned in his second keynote address to the Perl Conference in 1998 (), the Perl community is like an onion. The important part isn't the small core, but rather the larger outer layers where most of the mass and all of the growth are found. Therefore, the true state of Perl isn't about interpreter development or CPAN growth, but in how we all use Perl every day. Why do we use Perl every day? Because Perl scales to solve both small and large problems. Unlike languages like C, C++, and Java, Perl allows us to write small, trivial programs quickly and easily, without sacrificing the ability to build large applications and systems. The skills and tools we use on large projects are also available when we write small programs. Programming in the Small Here's a common example. Suppose I want to look at the O'Reilly Perl resource page and find all links off of that page. My program starts out by loading two modules, LWP::Simple to fetch the page, and HTML::LinkExtor to extract all of the links: #!/usr/bin/perl -w use strict; use LWP::Simple; use HTML::LinkExtor; my $ext = new HTML::LinkExtor; $ext->parse(get("")); my @links = $ext->links(); At this point, I have the beginnings of a web spider or possibly a screen scraper. With a few regular expressions and a couple of list operations like grep, map, or foreach, I can whittle this list of links down to a list of links to Safari, the O'Reilly's book catalog, or new articles on Perl.com. A couple of lines more, and I could store these links in a database (using DBI, DB_File, GDBM, or some other persistent store). I've written (and thrown away) many programs like this over the years. They are consistently easy to write, and typically less than one page of code. That says a lot about the capabilities Perl and CPAN provide. It also says a lot about how much a single programmer can accomplish in a few minutes with a small amount of effort. Yet the most important lesson is this: Perl allows us to use the same tools we use to write applications and large systems to write small scripts and little hacks. Not only are we able to solve mundane problems quickly and easily, but we can use one set of tools and one set of skills to solve a wide range of problems. Furthermore, because we use the same tools, our quick hacks can work alongside larger systems. Programming in the Large Of course, it's one thing to assert that Perl programs can scale up beyond the quick hack. It's another thing to actually build large systems with Perl. The Perl Success Stories Archive (perl.oreilly.com/news/success_stories.html) details many such efforts, including many large systems, high-volume systems, and critical applications. Then there are the high-profile systems that get a lot of attention at Perl conferences and on various Perl-related mailing lists. For example, Amazon.com, the Internet's largest retailer, uses HTML::Mason for portions of their web site. Another fifty-odd Mason sites are profiled () at, including Salon.com, AvantGo, and DynDNS. Morgan Stanley is another big user of Perl. As far back as 2001, W. Phillip Moore talked about where Perl and Linux fit into the technology infrastructure at Morgan Stanley. More recently, Merijn Broeren detailed (conferences.oreillynet.com/cs/os2003/view/e_sess/4293) how Morgan Stanley relies on Perl to keep 9,000 of its computers up and running non-stop, and how Perl is used for a wide variety of applications used worldwide. ValueClick, a provider of high-performance Internet advertising, pushes Perl in a different direction. Each day, ValueClick serves up over 100 million targeted banner ads on publisher web sites. The process of choosing which ad to send where is very precise, and handled by some sophisticated Perl code. Analyzing how effective these ads are requires munging through huge amounts of logging data. Unsurprisingly, ValueClick uses Perl here, too. Ticketmaster sells tickets to sporting and entertainment events in at least twenty countries around the world. In a year, Ticketmaster sells over 80 million tickets worldwide. Recently, Ticketmaster sold one million tickets in a single day, and about half of those tickets were sold over the Web. And the Ticketmaster web site is almost entirely written in Perl. These are only some of the companies that use Perl for large, important products. Ask around and you'll hear many, many more stories like these. Over the years, I've worked with more than a few companies who created some web-based product or service that was built entirely with Perl. Some of these products were responsible for bringing in tens of millions of dollars in annual revenue. Clearly, Perl is for more than just simple hacks. The New State of Perl Usage Many companies use Perl to build proprietary products and Internet-based services they can sell to their customers. Still more companies use Perl to keep internal systems running, and save money through automating mundane processes. A new way people are using Perl today is the open source business. Companies like Best Practical and Kineticode are building products with Perl, and earning money from training, support contracts, and custom development. Their products are open source, freely available, and easy to extend. Yet there is enough demand for add-on services that these companies are profitable and sustain development of these open source products. Best Practical Solutions () develops Request Tracker, more commonly known as RT (). RT is an issue-tracking system that allows teams to coordinate their activities to manage user requests, fix bugs, and track actions taken on each task. As an open source project, RT has been under development since 1996, and has thousands of corporate users, including those listed on the testimonials page (). Today, RT powers bug tracking for Perl development (rt.perl.org/perlbug), and for CPAN module development (rt.cpan.org). Many organizations rely on the information they keep in RT, sometimes upwards of 1000 issues per day, or 300,000 issues that must be tracked and resolved each year. Kineticode ()is another successful open source business built around a Perl product, the Bricolage content management system (). Bricolage is used by some rather large web sites, including ETOnline () and the World Health Organization (). Recently, the Howard Dean campaign () adopted Bricolage as its content management system to handle the site's frequent updates in the presence of millions of pageviews per day, with peak demand more than ten times that rate. A somewhat related business is SixApart (), makers of the ever-popular MovableType (). SixApart offers MovableType with a free license for personal and non-commercial use, but charges a licensing fee for corporate and commercial use. Make no mistake, MovableType is proprietary software, even though it is implemented in Perl. Nevertheless, SixApart has managed to build a profitable business around their Perl-based product. Surely these are the early days for businesses selling or supporting software written in Perl. These three companies are not the only ones forging this path, although they are certainly three of the most visible. Conclusion I started looking into the state of Perl today when my colleague asked me if Perl has a future. He challenged me to look past my knee-jerk answers, "Of course Perl has a future!" and "Perl's future is in Perl 6 and Parrot!" I'm glad he did. There's a lot of activity in the Perl world today, and much of it quite easily overlooked. Core development is moving along at a respectable pace; CPAN activity is quite healthy; and Perl remains a capable environment for solving problems, whether they need a quick hack, a large system, or a Perl-based product. Even if we don't see Perl 6 in 2004, there's a lot of work to be done in Perl 5, and a lot of work Perl 5 is still quite capable of doing. Then there's the original question that started this investigation rolling: "Can Perl compete with Java and .NET?" Clearly, when it comes to solving problems, Perl is at least as capable a tool as Java and .NET today. When it comes to evangelizing one platform to the exclusion of all others, then perhaps Perl can't compete with .NET or Java. Then again, when did evangelism ever solve a problem that involved sitting down and writing code? Of course, if Java or .NET is more your speed, by all means use those environments. Perl's success is not predicated on some other language's failure. Perl's success hinges upon helping you get your job done.
http://www.perl.com/pub/2004/01/09/survey.html
crawl-003
en
refinedweb
L_GrayScaleToDuotone #include "l_bitmap.h" L_LTIMGCLR_API L_INT L_GrayScaleToDuotone(pBitmap, pNewColor, crColor, uFlags) Converts the grayscale bitmap into a colored one by mixing or replacing the original values of the pixels with new colors. Returns This function does not support signed data images. It returns the error code ERROR_SIGNED_DATA_NOT_SUPPORTED if a signed data image is passed to this function. This function was designed for use with grayscale bitmaps. If the bitmap being used is not grayscale, this function only affects those pixels or areas of the bitmap where Red=Green=Blue. This function transforms the 8-bit grayscale bitmaps into colored 8-bit bitmaps (Palette) meanwhile the 12-bit and 16-bit grayscale bitmaps are transformed into a 48-bit colored bitmap. Monotone conversion is possible by setting uFlags to DT_REPLACE, which clears the palette. This function gives you the option of having the toolkit generate the array of colors to use or creating the array of colors to use yourself. To have the toolkit generate the array of colors: Pass the color to use for generating the array of gradient colors in the crColor parameter. To use a user-defined array of colors: This example loads a bitmap and converts it to a Duotone. L_INT GrayScaleToDuotoneExample(L_VOID) { L_INT nRet; BITMAPHANDLE LeadBitmap; /* Bitmap handle to hold the loaded image. */ COLORREF crColor; /* New Color */ /* Load the bitmap, keeping the bits per pixel of the file */ nRet = L_LoadBitmap (TEXT("C:\\Program Files\\LEAD Technologies\\LEADTOOLS 15\\Images\\IMAGE1.CMP"), &LeadBitmap, sizeof(BITMAPHANDLE), 0, ORDER_BGR, NULL, NULL); if(nRet !=SUCCESS) return nRet; /* Change the bitmap to grayscale bitmap */ nRet = L_GrayScaleBitmap (&LeadBitmap, 8); if(nRet !=SUCCESS) return nRet; /* The new color is Red */ crColor = RGB(255, 0, 0); /* Apply duotone conversion*/ nRet = L_GrayScaleToDuotone(&LeadBitmap, NULL, crColor, DT_REPLACE);; }
http://www.leadtools.com/help/leadtools/v15/main/api/dllref/l_grayscaletoduotone.htm
crawl-003
en
refinedweb
clang API Documentation Data for. More... #include <Index.h> List of all members. Data for. Definition at line 4383 of file Index.h. Definition at line 4384 of file Index.h. Non-zero if the AST file is a module otherwise it's a PCH. Definition at line 4393 of file Index.h. Location where the file is imported. It is useful mostly for modules. Definition at line 4389 of file Index.h. Generated on Thu May 24 2012 02:53:49 by Doxygen 1.7.5.1. See the Main Clang Web Page for more information.
http://clang.llvm.org/doxygen/structCXIdxImportedASTFileInfo.html
crawl-003
en
refinedweb
Optimistic. What is optimistic locking? Before defining optimistic locking, I'll first describe it's opposite, pessimistic locking. Suppose you need to update a record in your database, but you cannot do so atomically -- you'll have to read the record, then save it in a separate step. How do you ensure that, in a concurrent environment, another thread doesn't sneak in and modify the row between the read and update steps? The answer depends on the database you are using, but in Postgres you would issue a SELECT ... FOR UPDATE query. The FOR UPDATE locks the row for the duration of your transaction, effectively preventing any other thread from making changes while you hold the lock. SQLite does not support FOR UPDATE because writes lock the entire database, making row-level (or even table-level) locks impossible. Instead, you would begin a transaction in IMMEDIATE or EXCLUSIVE mode before performing your read. This would ensure that no other thread could write to the database during your transaction. This type of locking is problematic for, what I hope are, obvious reasons. It limits concurrency, and for SQLite the situation is much worse because no write whatsoever can occur while you hold the lock. This is why optimistic locking can be such a useful tool. Optimistic locking Unlike pessimistic locking, optimistic locking does not acquire any special locks when the row is being read or updated. Instead, optimistic locking takes advantage of the database's ability to perform atomic operations. An atomic operation is one that happens all at once, so there is no possibility of a conflict if multiple threads are hammering away at the database. One simple way to implement optimistic locking is to add a version field to your table. When a new row is inserted, it starts out at version 1. Subsequent updates will atomically increment the version, and by comparing the version we read with the version currently stored in the database, we can determine whether or not the row has been modified by another thread. Implementation Here's the code for the example implementation included in the documentation: from peewee import * class ConflictDetectedException(Exception): pass class BaseVersionedModel(Model): version = IntegerField(default=1, index=True) def save_optimistic(self): if not self.id: # This is a new record, so the default logic is to perform an # INSERT. Ideally your model would also have a unique # constraint that made it impossible for two INSERTs to happen # at the same time. return self.save() # Update any data that has changed and bump the version counter. field_data = dict(self._data) current_version = field_data.pop('version', 1) field_data = self._prune_fields(field_data, self.dirty_fields) if not field_data: raise ValueError('No changes have been made.') ModelClass = type(self) field_data['version'] = ModelClass.version + 1 # Atomic increment. query = ModelClass.update(**field_data).where( (ModelClass.version == current_version) & (ModelClass.id == self.id)) if query.execute() == 0: # No rows were updated, indicating another process has saved # a new version. How you handle this situation is up to you, # but for simplicity I'm just raising an exception. raise ConflictDetectedException() else: # Increment local version to match what is now in the db. self.version += 1 return True Here’s a contrived example to illustrate how this code works. Let’s assume we have the following model definition. Note that there’s a unique constraint on the username – this is important as it provides a way to prevent double-inserts, which the BaseVersionedModel cannot handle (since inserted rows have no version to compare against). class User(BaseVersionedModel): username = CharField(unique=True) favorite_animal = CharField() We'll load these up in the interactive shell and do some update operations to show the code in action. Example usage To begin, we'll create a new User instance and save it. After the save, you can look and see that the version is 1. >>> u = User(username='charlie', favorite_animal='cat') >>> u.save_optimistic() True >>> u.version 1 If we immediately try and call save_optimistic() again, we'll receive an error indicating that no changes were made. This logic is completely optional, I thought I'd include it just to forestall any questions about how to implement it: >>> u.save_optimistic() Traceback (most recent call last): File "<stdin>", line 1, in <module> File "x.py", line 18, in save_optimistic raise ValueError('No changes have been made.') ValueError: No changes have been made. Now if we make a change to the user's favorite animal and save, we'll see that it works and the version is now increased to 2: >>> u.>> u.save_optimistic() True >>> u.version 2 To simulate a second thread coming in and saving a change, we'll just fetch a separate instance of the model, make a change, and save, bumping the version to 3 in the process: # Simulate a separate thread coming in and updating the model. >>> u2 = User.get(User.username == 'charlie') >>> u2.>> u2.save_optimistic() True >>> u2.version 3 Now if we go back to the original instance and try to save a change, we'll get a ConflictDetectedException because the version we are saving (2) does not match up with the latest version in the database (3): # Now, attempt to change and re-save the original instance: >>> u.>> u.save_optimistic() Traceback (most recent call last): File "<stdin>", line 1, in <module> File "x.py", line 30, in save_optimistic raise ConflictDetectedException() ConflictDetectedException: current version is out of sync And that's all there is to it! Thanks for reading Thanks for reading this post, I hope that this technique and the example code will be helpful for you! Don't hesitate to add a comment if anything is unclear or if you have any questions. Commenting has been closed, but please feel free to contact me
http://charlesleifer.com/blog/optimistic-locking-in-peewee-orm/
CC-MAIN-2020-50
en
refinedweb
The PGX API is designed to be asynchronous. This means that all of its core methods ending with Async do not block the caller thread until the request is completed. Instead, a PgxFuture object is instantly returned. In PGX, three actions can be performed on the returned PgxFuture: The easiest and probably most obvious way to get the result is to call get() on the PgxFuture, which blocks the caller thread until the result is available: PgxFuture<PgxSession> sessionPromise = instance.createSessionAsync("my-session"); try { // block caller thread PgxSession session = sessionPromise.get(); // do something with session ... } catch (InterruptedException e) { // caller thread was interrupted while waiting for result } catch (ExecutionException e) { // an exception was thrown during asynchronous computation Throwable cause = e.getCause(); // the actual exception is nested } Note: PGX provides blocking convenience methods for every Async method, which call the get() for you. Typically, those methods have the same name as the asynchronous method they wrap, but without the Async suffix. For example, above code snippet is equal to try { // block caller thread PgxSession session = instance.createSession("my-session"); // do something with session ... } catch (InterruptedException e) { // caller thread was interrupted while waiting for result } catch (ExecutionException e) { // an exception was thrown during asynchronous computation Throwable cause = e.getCause(); // the actual exception is nested } PGX ships a version of Java 8's CompletableFuture named PgxFuture, a monadic enhancement of the Future interface. The CompletableFuture allows chaining of asynchronous computations without polling or the need of deeply nested callbacks (also known as callback hell). All PgxFuture instances returned by PGX APIs are instances of CompletableFuture and can be chained without the need of Java 8: import jsr166e.CompletableFuture; import jsr166e.CompletableFuture.Action; import jsr166e.CompletableFuture.Fun; ... final GraphConfig graphConfig = ... instance.createSessionAsync("my-session") .thenCompose(new Fun<PgxSession, CompletableFuture<PgxGraph>>() { @Override public CompletableFuture<PgxGraph> apply(PgxSession session) { return session.readGraphWithPropertiesAsync(graphConfig); } }).thenAccept(new Action<PgxGraph>() { @Override public void accept(PgxGraph graph) { // do something with loaded graph } }); In above example, we first make an asynchronous request createSessionAsync(), asking PGX to create a session. On the returned PgxFuture object, we call thenCompose() to describe what should happen next. The function we pass into thenCompose() gets called once the asynchronous createSessionAsync() completes with the result of the PgxFuture object as argument: the newly created PgxSession object. Inside the function, we make another asynchronous readGraphWithPropertiesAsync() request and return the PgxFuture object. The outer PgxFuture object returned by thenCompose() gets completed when the readGraphWithPropertiesAsync() request completes, so we can chain another function to it. This time, we use thenAccept() instead of thenCompose(). The difference is that the function you pass to thenAccept() does not return anything. Hence, the future return by thenAccept() is of type PgxFuture<Void>. Here is a full reference to what you can do with the CompletableFuture class. For most use cases, blocking the caller thread is fine. However, blocking can quickly lead to poor performance or deadlocks once things get more complex. As a rule, use blocking to quickly analyze selected graphs in a sequential manner, for example, in shell scripts or during interactive analysis using PGX Shell. Use chaining for applications built on top of PGX. To request cancellation of a pending request, simply invoke the cancel method of the returned PgxFuture instance. For example: PgxFuture<Object> promise = ... // do something else promise.cancel(); // will cancel computation Any subsequent promise.get() invocation will result in a CancellationException being thrown. Note: Because of Java's cooperative threading model, it might take some time before PGX actually stops the computation. In addition to the aforementioned actions that can be performed on a PgxFuture object, PGX offers an API PgxSession#runConcurrently to submit a list of suppliers of asynchronous APIs to run concurrently in the PGX server. Here is an example: import oracle.pgx.api.*; Supplier<PgxFuture<?>> asyncRequest1 = () -> session.readGraphWithPropertiesAsync(...); Supplier<PgxFuture<?>> asyncRequest2 = () -> session.getAvailableSnapshotsAsync(...); List<Supplier<PgxFuture<?>>> supplierList = Arrays.asList(asyncRequest1, asyncRequest2); //executing the async requests with the enabled optimization feature List<?> results = session.runConcurrently(supplierList); //the supplied requests are mapped to their results and orderly collected PgxGraph graph = (PgxGraph) results.get(0); Deque<GraphMetaData> metaData = (Deque<GraphMetaData>) results.get(1);
https://docs.oracle.com/cd/E56133_01/latest/prog-guides/block-chain-cancel.html
CC-MAIN-2020-50
en
refinedweb
Tutorial: Use Azure Storage for build artifacts This article illustrates how to use Blob storage as a repository of build artifacts created by a Jenkins continuous integration (CI) solution, or as a source of downloadable files to be used in a build process. One of the scenarios where you would find this solution'll be using the Azure Storage Plugin for Jenkins CI made available by Microsoft. Jenkins overview Jenkins enables continuous integration of a software project by allowing developers to integrate their code changes and have builds produced automatically. This pattern reduces friction and increases team productivity. Builds are versioned and build artifacts can be uploaded to distinct repositories. This article shows how to use Azure blob storage as the repository for the build artifacts. You'll also learn how to download dependencies from Azure blob storage. More information about Jenkins can be found at Meet Jenkins. Benefits of using the Blob service Benefits of using the Blob service to host your agile development build artifacts include: - High availability of your build artifacts or downloadable dependencies. - Performance when your Jenkins CI solution uploads your build artifacts. - Performance when your customers and partners download your build artifacts. - Control over user access policies, with choices such as anonymous access, expiration-based shared access signature access, and private access. Prerequisites - Azure subscription: If you don't have an Azure subscription, create a free Azure account before you begin. - Jenkins server: If you don't have a Jenkins server installed, create a Jenkins server on Azure. - Azure storage account: If you don't already have a storage account, create a Storage Account. Configure your environment If you currently don't have a Jenkins CI solution, you can run a Jenkins CI solution using the following technique: On a Java-enabled machine, download jenkins.war from. At a command prompt that is opened to the folder that contains jenkins.war, run: java -jar jenkins.war Browse to open the Jenkins dashboard. You use the Jenkins dashboard to install and configure the Azure Storage plugin. How to use the Blob service with Jenkins CI. How to install the Azure Storage plugin - Within the Jenkins dashboard, select Manage Jenkins. - In the Manage Jenkins page, select Manage Plugins. - Select the Available tab. - In the Artifact Uploaders section, check Microsoft Azure Storage plugin. - Select either Install without restart or Download now and install after restart. - Restart Jenkins. How to configure the Azure Storage plugin to use your storage account - Within the Jenkins dashboard, select Manage Jenkins. - In the Manage Jenkins page, select Configure System. - In the Microsoft Azure Storage Account Configuration section: - Enter your storage account name, which you can obtain from the Azure portal. - Enter your storage account key, also obtainable from the Azure portal. - Use the default value for Blob Service Endpoint URL if you are using the global Azure cloud. If you are using a different Azure cloud, use the endpoint as specified in the Azure portal for your storage account. - Select Validate storage credentials to validate your storage account. - [Optional] If you have additional storage accounts that you want made available to your Jenkins CI, select Add more Storage Accounts. - Select Save to save your settings. How to create a post-build action that uploads your build artifacts to your storage account For instructional purposes, you first need to create a job that will create several files, and then add in the post-build action to upload the files to your storage account. Within the Jenkins dashboard, select New Item. Name the job MyJob, select Build a free-style software project, and then select OK. In the Build section of the job configuration, select Add build step and select Execute Windows batch command. In Command, use the following commands: md text cd text echo Hello Azure Storage from Jenkins > hello.txt date /t > date.txt time /t >> date.txt In the Post-build Actions section of the job configuration, select Add post-build action and select Upload artifacts to Azure Blob storage. For Storage account name, select the storage account to use. For Container name, specify the container name. (The container will be created if it doesn. Select that link to learn the environment variable names and descriptions. Environment variables that contain special characters, such as the BUILD_URL environment variable, are not allowed as a container name or common virtual path. Select Make new container public by default for this example. (If you want to use a private container, you'll need to create a shared access signature to allow access, which is beyond the scope of this article. You can learn more about shared access signatures at Using Shared Access Signatures (SAS).) [Optional] Select Clean container before uploading if you want the container to be cleared of contents before build artifacts are uploaded (leave it unchecked if you don't want to clean the contents of the container). For List of Artifacts to upload, enter text/*.txt. For Common virtual path for uploaded artifacts, for purposes of this tutorial, enter ${BUILD\_ID}/${BUILD\_NUMBER}. Select Save to save your settings. In the Jenkins dashboard, select. - Select Storage. - Select the storage account name that you used for Jenkins. - Select Containers. - Select the container named myjob, which is the lowercase version of the job name that you assigned when you created the Jenkins job. Container names and blob names are lowercase (and case-sensitive) in Azure storage. Within the list of blobs for the container named myjob, you should see hello.txt and date.txt. Copy the URL for either of these items and open it in your browser. You see the text file that was uploaded as a build artifact. Only one post-build action that uploads artifacts to Azure blob storage can be created per job. value for the List of Artifacts to upload option: value for the List of Artifacts to upload option: build/\*.jar::binaries;build/\*.txt::notices. How to create a build step that downloads from Azure blob storage The following steps illustrate to configure a build step to download items from Azure blob storage, which is useful if you want to include items in your build. An example of using this pattern is JARs that you might want to persist in Azure blob storage. - In the Build section of the job configuration, select Add build step and select Download from Azure Blob storage. - For Storage account name, select the storage account to use. - For Container name, specify the name of the container that has the blobs you want to download. You can use environment variables. - For Blob name, specify the blob name. You can use environment variables. Also, you can use an asterisk, as a wildcard after you specify the initial letter(s) of the blob name. For example, project\* would specify all blobs whose names start with project. - [Optional] For Download path, specify the path on the Jenkins machine where you want to download files from Azure blob storage. Environment variables can also be used. (If you don't provide a value for Download path, the files from Azure blob storage will be downloaded to the job's workspace.). Components used by the Blob service This section provides an overview of the Blob service components. Storage Account: All access to Azure Storage is done through a storage account. A storage account is the highest level of the namespace for accessing blobs. An account can contain an unlimited number of containers, as long as their total size is under 100 TB., Append Blobs, and Page Blobs. URL format: Blobs are addressable using the following URL format: Notes: The format above applies to the global Azure cloud. If you are using a different Azure cloud, use the endpoint within the Azure portal to determine your URL endpoint. The placeholder storageaccountrepresents the name of your storage account. The placeholder container_namerepresents the name of your container. The placeholder blob_namerepresents the name of your blob. Within the container name, you can have multiple paths. These paths are separated by a forward slash. The example container name used for this tutorial is MyJob. The common virtual path is defined as ${BUILD_ID}/${BUILD_NUMBER}. This value results in the blob having a URL of the following form: Troubleshooting the Jenkins plugin If you encounter any bugs with the Jenkins plugins, file an issue in the Jenkins JIRA for the specific component.
https://docs.microsoft.com/sv-se/azure/developer/jenkins/azure-storage-blobs-as-build-artifact-repository
CC-MAIN-2020-50
en
refinedweb
The default behaviour of creating spawned objects from prefabs on the client can be customized by using spawn handler functions. This way you have full control of how you spawn the object as well as how you un-spawn it. You can register functions to spawn and un-spawn client objects with ClientScene.RegisterSpawnHandler. The server creates objects directly and then spawn them on the clients though this functionality. This function takes the asset ID of the object and two function delegates, one to handle creating objects on the client and one to handler destroying objects on the client. The asset ID can be a dynamic one or just the asset ID found on the prefab object you want to spawn (if you have one). The spawn / un-spawner need to have this object signature, this is defined in the high level API. // Handles requests to spawn objects on the client public delegate GameObject SpawnDelegate(Vector3 position, NetworkHash128 assetId); // Handles requests to unspawn objects on the client public delegate void UnSpawnDelegate(GameObject spawned); The asset ID passed to the spawn function can be found on NetworkIdentity.assetId for prefabs, where it is populated automatically. The registration for a dynamic asset ID is handled liked his: // generate a new unique assetId NetworkHash128 creatureAssetId = NetworkHash128.Parse("e2656f"); // register handlers for the new assetId ClientScene.RegisterSpawnHandler(creatureAssetId, SpawnCreature, UnSpawnCreature); // get assetId on an existing prefab NetworkHash128 bulletAssetId = bulletPrefab.GetComponent<NetworkIdentity>().assetId; // register handlers for an existing prefab you'd like to custom spawn ClientScene.RegisterSpawnHandler(bulletAssetId, SpawnBullet, UnSpawnBullet); // spawn a bullet - SpawnBullet will be called on client. NetworkServer.Spawn(gameObject, creatureAssetId); The spawn functions themselves are implemented with the delegate signature, here is the bullet spawner but the SpawnCreature would look the same but have different spawn logic public GameObject SpawnBullet(Vector3 position, NetworkHash128 assetId) { return (GameObject)Instantiate(m_BulletPrefab, position, Quaternion.identity); } public void UnSpawnBullet(GameObject spawned) { Destroy(spawned); } When using custom spawn functions, it is sometimes useful to be able to unspawn objects without destroying them. This can be done by calling NetworkServer.UnSpawn. This causes a message to be sent to clients to un-spawn the object, so that the custom un-spawn function will be called on the clients. The object is not destroyed when this function is called. Note that on the host, object are not spawned for the local client as they already exist on the server. So no spawn handler functions will be called. Here is an example of how you might set up a very simple object pooling system with custom spawn handlers. Spawning and unspawning then just puts objects in or out of the pool. using UnityEngine; using UnityEngine.Networking; using System.Collections; public class SpawnManager : MonoBehaviour { public int m_ObjectPoolSize = 5; public GameObject m_Prefab; public GameObject[] m_Pool; public NetworkHash128 assetId { get; set; } public delegate GameObject SpawnDelegate(Vector3 position, NetworkHash128 assetId); public delegate void UnSpawnDelegate(GameObject spawned); void Start() { assetId = m_Prefab.GetComponent<NetworkIdentity> ().assetId; m_Pool = new GameObject[m_ObjectPoolSize]; for (int i = 0; i < m_ObjectPoolSize; ++i) { m_Pool[i] = (GameObject)Instantiate(m_Prefab, Vector3.zero, Quaternion.identity); m_Pool[i].name = "PoolObject" + i; m_Pool[i].SetActive(false); } ClientScene.RegisterSpawnHandler(assetId, SpawnObject, UnSpawnObject); } public GameObject GetFromPool(Vector3 position) { foreach (var obj in m_Pool) { if (!obj.activeInHierarchy) { Debug.Log("Activating object " + obj.name + " at " + position); obj.transform.position = position; obj.SetActive (true); return obj; } } Debug.LogError ("Could not grab object from pool, nothing available"); return null; } public GameObject SpawnObject(Vector3 position, NetworkHash128 assetId) { return GetFromPool(position); } public void UnSpawnObject(GameObject spawned) { Debug.Log ("Re-pooling object " + spawned.name); spawned.SetActive (false); } } To use this manager do the following SpawnManager spawnManager; void Start() { spawnManager = GameObject.Find("SpawnManager").GetComponent<SpawnManager> (); } void Update() { if (!isLocalPlayer) return; var x = Input.GetAxis("Horizontal")*0.1f; var z = Input.GetAxis("Vertical")*0.1f; transform.Translate(x, 0, z); if (Input.GetKeyDown(KeyCode.Space)) { // Command function is called on the client, but invoked on the server CmdFire(); } } [Command] void CmdFire() { // Set up bullet on server var bullet = spawnManager.GetFromPool(transform.position + transform.forward); bullet.GetComponent<Rigidbody>().velocity = transform.forward*4; // spawn bullet on client, custom spawn handler will be called NetworkServer.Spawn(bullet, spawnManager.assetId); // when the bullet is destroyed on the server it is automatically destroyed on clients StartCoroutine (Destroy (bullet, 2.0f)); } public IEnumerator Destroy(GameObject go, float timer) { yield return new WaitForSeconds (timer); spawnManager.UnSpawnObject(go); NetworkServer.UnSpawn(go); } The automatic destruction shows how the objects are returned to the pool and re-used when you fire again.
https://docs.unity3d.com/ru/2017.2/Manual/UNetCustomSpawning.html
CC-MAIN-2020-50
en
refinedweb
Created on 2014-05-29 21:03 by Miquel.Garcia, last changed 2014-05-29 21:18 by Miquel.Garcia. This issue is now closed. In browsing the documentation for datetime.datetime: It states that the datetime.datetime.microseconds returns the number of microseconds but in trying (Python 2.7.1 r271:86832): import datetime print datetime.datetime.now().microseconds The interpreter returns AttributeError: 'datetime.datetime' object has no attribute 'microseconds' The correct way to access the number of microseconds is by: datetime.datetime.now().microsecond # Note the final 's' which is not consistent with the documentation. Many thanks Miquel Could you provide an actual quote where it refers to datetime.datetime.microseconds? Are you not by any chance confusing it with datetime.timedelta.microseconds? My mistake. Yes you are right, I was confused with the timedelta class. Sorry for the confusion. Many thanks! Miquel
https://bugs.python.org/issue21609
CC-MAIN-2020-50
en
refinedweb
Have problem getting the correct date format while print - kian hong Tan last edited by Hello, this is the code I use to print the log. def log(self, txt, dt=None): ''' Logging function fot this strategy''' dt = dt or self.datas[0].datetime.date(0) print('%s, %s' % (dt.isoformat(), txt)) The output: Starting Portfolio Value: 100000.00 2020-05-04, Close, 8871.96 2020-05-05, Close, 8867.83 2020-05-05, Close, 8901.65 2020-05-05, Close, 8858.35 2020-05-05, Close, 8879.58 However, I wanted the date to be shown as 2020-05-04 16:30:00 (%Y-%m-%d %H:%M:%S), like how I specified for Data Feed, not just showing date only. Also, how can I change the date to UTC+8? Many Thanks. @kian-hong-Tan said in Have problem getting the correct date format while print: dt = dt or self.datas[0].datetime.date(0) When setting dates in backtrader, datetime is used. The first part: self.datas[0].datetime gets datetime object. The ending will give the format of date, time, or datetime. The endings are: datetime(0) # format is %Y-%m-%d %H:%M:%S date(0) # format is %Y-%m-%d time(0) # format is %H:%M:%S As a standard procedure, at the beginning of my strategy class next() method, as well as in analyzers and indicators where needed, I put the following code: # Current bar datetime, date, and time. dt = self.data.datetime.datetime() date = self.data.datetime.date() time = self.data.datetime.time() This allows me to grab time and date information in an easy, concise, and consistent way during programming. - kian hong Tan last edited by
https://community.backtrader.com/topic/2856/have-problem-getting-the-correct-date-format-while-print
CC-MAIN-2020-50
en
refinedweb
Data, called XForms. XForms is an HTML form replacement that provides much more functionality and flexibility. Although no browsers currently support the XForms standard, it offers tremendous promise, and developers should become familiar with its features. What's wrong with an HTML form? Many aspects of HTML development have been improved with technologies like Cascading Style Sheets (CSS), Dynamic HTML (DHTML), and even editing tools like Macromedia DreamWeaver, but the design and structure of the HTML form has not changed. Currently, HTML forms create numerous obstacles for the Web development community. First, the feature set of an HTML form is limited. Only basic data gathering elements (textbox, checkbox, etc.) are provided, and one action is associated with the form's submission. Scripting languages like JavaScript and VBScript can be used to extend the functionally, and technologies such as ASP.NET and JavaServer Pages (JSP) make it possible to bind data to these elements. But the approaches used are far from industry standard. Another HTML form drawback is the lack of accessibility features. Web site accessibility is a major initiative that must be addressed to ensure that all users can take advantage of a site's features. In addition, the Internet is fast moving from our desktops and laptops to mobile devices like cell phones and personal digital assistants (PDAs). HTML forms were designed to work well on the computer, so they do not always work well with the plethora of mobile devices available in today's market. Finally, XML is evolving into the de facto standard for data transmission. HTML forms provide no XML support. XForms provides an answer XForms addresses every concern or problem with ease. Given its strong ties to the XML community, XForms offers tight XML integration. Common form features like data validation and calculations are provided, and XForms are device-independent, so they work just as well on your phone as on your laptop. Accessibility features have been a part of the XForms standard from the start as well. Given its XML roots, XForms has been designed to be tightly integrated with XHTML (the next iteration of HTML). One of the key aspects of XForms is its goal to separate the form from the presentation. That is, the data and logic of an XForm is handled with XML and data is transferred using XML. But the actual presentation of XForm data is handled separately from the data. This allows the data to be designed without requiring any knowledge of how it will be used or presented. This separation of presentation and data makes it possible to support an unlimited number of devices. When XML is used, other XML technologies like Wireless Markup Language (WML) or VoiceXML may be used to correctly present the data to the requesting client. XForms controls The most important aspects of the XForms standard are the controls. These controls are user interface components that facilitate data entry and display. They're analogous to the standard HTML form elements (like input and select elements). However, the XForms counterparts do provide additional functionality. Here is an annotated list of XForms controls: - <input>—Data entry - <textarea>—Data entry - <secret>—Entry of sensitive data - <output>—Inline display of data - <range>—Slide control selection of data values - <upload>—Upload files or data - <trigger>—Activating form events - <submit>—Submitting form data - <select>—Selecting multiple options Data from the associated XML document is called instance data. Instance data can be assigned to individual or multiple controls, but it may not be used at all. The data is stored in memory, so it is still available. Storing the data in memory negates the necessity of hidden form fields. I find the trigger element to be most intriguing because it provides a type of event-driven approach to forms development. Also, it may negate the need for extensive client-side scripts. The elimination of much of the client-side scripting is a goal of XForms. Let's look at a simple example to illustrate some of what we've been discussing. First, here's a basic HTML form, which provides two data input fields and a submit button: <html> <head><title>Test</title></head> <body> <form action="process.jsp" method="post"> <p> First Name: <input type="text" id="firstname" name="firstname"> <br> Last Name: <input type="text" id="lastname" name="lastname"> <br> <input type="submit" value="submit"> </p> </form> </body> </html> The same form designed with XForms looks like this: <html xmlns= xmlns:xform=> <head> <xform:xform <xform:submitInfo </xform:xform> </head> <body> <xform:input <xform:caption>First Name:</xform:caption> </xform:input> <xform:input <xform:caption>Last Name:</xform:caption> </xform:input> <xform:submit <xform:caption>Submit</xform:caption> </xform:submit> </body> </html> This form uses the XHTML standard, so the correct namespaces are referenced in the first line. In addition, the XForms used in the XHTML are defined in the head section of the page. A name is assigned to the form, along with the necessary action to take when the associated form is submitted. This allows multiple forms to be set up, and their associated data can be intertwined. The instance data, data model, and data bindings are defined in the head section as well. Once this type of data is defined, it can be utilised in the rest of the form. Only the beginning The XForms specification is far from concise, and this article has provided only a brief introduction to this emerging standard. In addition to the form controls, XForms provides data types, data attributes, data structures, functions, and much more. Because it's an evolving standard, features may be removed or added as it matures. A need-to-know basis As I noted earlier, no major browser supports the XForms standard. A variety of tools are available to work with it, but deploying an XForms-based solution at this time is not realistic. Still, it is imperative to stay current with the evolution of XML and related standards. Developers must be aware of what is on the horizon. 1 Mark Birbeck - 05/07/04 Nice introduction, but would have benefited from pointing the reader to somewhere that they can actually try some XForms out! Contrary to the author's comment that no browsers currently support XForms, the reader can try the following 100%-conformant implementations: X-Smiles: a Java-based browser from the University of Helsinki. formsPlayer: a plug-in for IE 6. Novell XForms Technology Preview: also a Java-based browser. Regards, Mark (I'm the CEO of the company that produces formsPlayer ... just so you know where I'm coming from ;)) » Report offensive content 2 Happyman - 30/05/07 let's role » Report offensive content
http://www.builderau.com.au/program/web/soa/Prepare-for-the-transition-from-HTML-forms-to-XForms/0,339024632,320276682,00.htm
crawl-002
en
refinedweb
Re: How can I cause the datetime to be the name of the output file..... - From: Michael Mair <Michael.Mair@xxxxxxxxxxxxxxx> - Date: Sun, 05 Feb 2006 10:02:03 +0100 Keith Thompson wrote: "Merlin" <mrb7494@xxxxxxxxx> writes: I realize that I'm new to the group, but I'm hoping that someone might be able to help me out. What I'm doing is using the GNU 7zip command line utility to make a backup on my desktop. Essentially, I'm running an executable that will create a folder, and drop a zipped up copy of my data to that folder. There doesn't seem to be a way to get 7zip to automatically create a file where the filename.zip would be a date/time stamp (something like 020206143260.zip. What I'd like to be able to do is allow 7.zip to create the file with some arbitrary name, and then use the rename() function and the time.h library to somehow grab the current date/time from the computer and rename the file with the filename being the date/time stamp. Would time.h be the right library to use? If so, does anyone have any suggestions as to how a solution might be implemented? Any thoughts would be greatly appreciated. #include <stdlib.h> #include <stdio.h> #include <unistd.h> main() { chdir("C:\\Documents and Settings\\All Users\\Desktop\\"); mkdir("Backup"); chdir("C:\\Program Files\\Backup Utility\\"); system("7za.exe a -tzip \"C:\\Documents and Settings\\All Users\\Desktop\\Backup\\Backup.zip\" \"C:\\Program Files\\Data\\*.*\""); chdir("C:\\"); } A lot of this (<unistd.h>, 7zip, anything having to do with directories) isn't covered by standard C, and so is off-topic, but the core of your question is topical. Use "int main(void)" rather than "main()". Check for, and handle, errors on your function calls. Add a "return 0;" at the end of the program. You might also do a "return EXIT_FAILURE;" or "exit(EXIT_FAILURE);" if you detect an error (this requires <stdlib.h>). Yes, <time.h> is the header (not library) that you want to use to deal with timestamps. Call time() to get a raw timestamp, type time_t, then use either gmtime() or localtime() to convert the time_t to a " struct tm", then use strftime() to generate a string in whatever format you like. If you can specify the file name when you invoke the command itself, you can build a command string using sprintf(); making sure the string into which you write the formatted is long enough is left as an exercise. If your standard library supports it, I suggest the use of snprintf() instead of sprintf() to take care of unforeseen situations. <OT> If you care about being able to understand the file names after you've generated them, you might use a format other than 020206143260.zip, perhaps something like 2006-02-02-143260.zip; using MMMM-DD-YY also ITYM: YYYY-MM-DD Sort of a rotational error... has the advantage that it sorts nicely in a directory listing. </OT> Cheers Michael -- . - Follow-Ups: - Re: How can I cause the datetime to be the name of the output file..... - From: Keith Thompson - References: - How can I cause the datetime to be the name of the output file..... - From: Merlin - Re: How can I cause the datetime to be the name of the output file..... - From: Keith Thompson - Prev by Date: Re: How can I cause the datetime to be the name of the output file..... - Next by Date: Re: how to encrypt a C data and write a bin file and read a bin at run time and decrypt C data - Previous by thread: Re: How can I cause the datetime to be the name of the output file..... - Next by thread: Re: How can I cause the datetime to be the name of the output file..... - Index(es):
http://coding.derkeiler.com/Archive/C_CPP/comp.lang.c/2006-02/msg00520.html
crawl-002
en
refinedweb
Re: Is C99 the final C? (some suggestions) From: Sidney Cadot (sidney_at_jigsaw.nl) Date: 12/02/03 - ] Date: Tue, 02 Dec 2003 22:58:07 +0100 Paul Hsieh wrote: > Sidney Cadot <sidney@jigsaw.nl> wrote: >>I think C99 has come a long way to fix the most obvious problems in >>C89 (and its predecessors). > It has? I can't think of a single feature in C99 that would come as > anything relevant in any code I have ever written or will ever write > in the C with the exception of "restrict" and //-style comments. For programming style, I think loop-scoped variable declarations ae a big win. Then there is variable sized array, and complex numbers... I'd really use all this (and more) quite extensively in day-to-day work. >>[...] I for one would be happy if more compilers would >>fully start to support C99, It will be a good day when I can actually >>start to use many of the new features without having to worry about >>portability too much, as is the current situation. > I don't think that day will ever come. In its totallity C99 is almost > completely worthless in real world environments. Vendors will be > smart to pick up restrict and few of the goodies in C99 and just stop > there. Want to take a bet...? >>* support for a "packed" attribute to structs, guaranteeing that no >> padding occurs. > > Indeed, this is something I use on the x86 all the time. The problem > is that on platforms like UltraSparc or Alpha, this will either > inevitably lead to BUS errors, or extremely slow performing code. Preventing the former is the compiler's job; as for the latter, the alternative is to do struct unpacking/unpacking by hand. Did that, and didn't like it for one bit. And of course it's slow, but I need the semantics. > If instead, the preprocessor were a lot more functional, then you > could simply extract packed offsets from a list of declarations and > literally plug them in as offsets into a char[] and do the slow memcpy > operations yourself. This would violate the division between preprocessor and compiler too much (the preprocessor would have to understand quite a lot of C semantics). >>* upgraded status of enum types (they are currently quite >> interchangeable with ints); deprecation of implicit casts from >> int to enum (perhaps supported by a mandatory compiler warning). > > > I agree. Enums, as far as I can tell, are almost useless from a > compiler assisted code integrity point of view because of the > automatic coercion between ints and enums. Its almost not worth the > bothering to ever using an enum for any reason because of it. Yes. >>* a clear statement concerning the minimal level of active function >> calls invocations that an implementation needs to support. >> Currently, recursive programs will stackfault at a certain point, >> and this situation is not handled satisfactorily in the standard >> (it is not adressed at all, that is), as far as I can tell. > That doesn't seem possible. The amount of "stack" that an > implementation might use for a given function is clearly not easy to > define. Better to just leave this loose. It's not easy to define, that's for sure. But to call into recollection a post from six weeks ago: #include <stdio.h> /* returns n! modulo 2^(number of bits in an unsigned long) */ unsigned long f(unsigned long n) { return (n==0) ? 1 : f(n-1)*n; } int main(void) { unsigned long z; for(z=1;z!=0;z*=2) { printf("%lu %lu\n", z, f(z)); fflush(stdout); } return 0; } ...This is legal C (as per the Standard), but it overflows the stack on any implementation (which is usually a sumptom of UB). Why is there no statement in the standard that even so much as hints at this? >>* a library function that allows the retrieval of the size of a memory >> block previously allocated using "malloc"/"calloc"/"realloc" and >> friends. > > There's a lot more that you can do as well. Such as a tryexpand() > function which works like realloc except that it performs no action > except returning with some sort of error status if the block cannot be > resized without moving its base pointer. Further, one would like to > be able to manage *multiple* heaps, and have a freeall() function -- > it would make the problem of memory leaks much more manageable for > many applications. It would almost make some cases enormously faster. But this is perhaps territory that the Standard should steer clear of, more like something a well-written and dedicated third-party library could provide. >>* a #define'd constant in stdio.h that gives the maximal number of >> characters that a "%p" format specifier can emit. Likewise, for >> other format specifiers such as "%d" and the like. >> >>* a printf format specifier for printing numbers in base-2. > Ah -- the kludge request. I'd rather see this as filling in a gaping hole. > Rather than adding format specifiers one at > a time, why not instead add in a way of being able to plug in > programmer-defined format specifiers? Because that's difficult to get right (unlike a proposed binary output form). > I think people in general would > like to use printf for printing out more than just the base types in a > collection of just a few formats defined at the whims of some 70s UNIX > hackers. Why not be able to print out your data structures, or > relevant parts of them as you see fit? The %x format specifier mechanism is perhaps not a good way to do this, if only because it would only allow something like 15 extra output formats. >>* I think I would like to see a real string-type as a first-class >> citizen in C, implemented as a native type. But this would open >> up too big a can of worms, I am afraid, and a good case can be >> made that this violates the principles of C too much (being a >> low-level language and all). > > The problem is that real string handling requires memory handling. > The other primitive types in C are flat structures that are fixed > width. You either need something like C++'s constructor/destructor > semantics or automatic garbage collection otherwise you're going to > have some trouble with memory leaking. A very simple reference-counting implementation would suffice. But yes, it would not rhyme well with the rest of C. > With the restrictions of the C language, I think you are going to find > it hard to have even a language implemented primitive that takes you > anywhere beyond what I've done with the better string library, for > example (). But even with bstrlib, you need to > explicitely call bdestroy to clean up your bstrings. > > I'd be all for adding bstrlib to the C standard, but I'm not sure its > necessary. Its totally portable and freely downloadable, without much > prospect for compiler implementors to improve upon it with any native > implementations, so it might just not matter. >>* Normative statements on the upper-bound worst-case asymptotic >> behavior of things like qsort() and bsearch() would be nice. > > Yeah, it would be nice to catch up to where the C++ people have gone > some years ago. I don't think it is a silly idea to have some consideration for worst-case performance in the standard, especially for algorithmic functions (of which qsort and bsearch are the most prominent examples). >> O(n*log(n)) for number-of-comparisons would be fine for qsort, >> although I believe that would actually preclude a qsort() >> implementation by means of the quicksort algorithm :-) > Anything that precludes the implementation of an actual quicksort > algorithm is a good thing. Saying Quicksort is O(n*log(n)) most of > the time is like saying Michael Jackson does not molest most of the > children in the US. >>* a "reverse comma" type expression, for example denoted by >> a reverse apastrophe, where the leftmost value is the value >> of the entire expression, but the right-hand side is also >> guaranteed to be executed. > > This seems too esoteric. Why is it any more esoteric than having a comma operator? >>* triple-&& and triple-|| operators: &&& and ||| with semantics >> like the 'and' and 'or' operators in python: >> >> a &&& b ---> if (a) then b else a >> a ||| b ---> if (a) then a else b >> >> (I think this is brilliant, and actually useful sometimes). > > Hmmm ... why not instead have ordinary operator overloading? I'll provide three reasons. 1) because it is something completely different 2) because it is quite unrelated (I don't get the 'instead') 3) because operator overloading is mostly a bad idea, IMHO > While > this is sometimes a useful shorthand, I am sure that different > applications have different list cutesy compactions that would be > worth while instead of the one above. ... I'd like to see them. &&& is a bit silly (it's fully equivalent to "a ? b : 0") but ||| (or ?: in gcc) is actually quite useful. >>* a way to "bitwise invert" a variable without actually >> assigning, complementing "&=", "|=", and friends. > > Is a ~= a really that much of a burden to type? It's more a strain on the brain to me, why there are coupled assignment/operators for neigh all binary operators, but not for this unary one. >>* 'min' and 'max' operators (following gcc: ?< and ?>) > > As I mentioned above, you might as well have operator overloading instead. Now I would ask you: which existing operator would you like to overload for, say, integers, to mean "min" and "max" ? >>* a div and matching mod operator that round to -infinity, >> to complement the current less useful semantics of rounding >> towards zero. > Well ... but this is the very least of the kinds of arithmetic operator > extensions that one would want. A widening multiply operation is > almost *imperative*. It always floors me that other languages are not > picking this up. Nearly every modern microprocessor in existence has > a widening multiply operation -- because the CPU manufacturer *KNOW* > its necessary. And yet its not accessible from any language. ...It already is available in C, given a good-enough compiler. Look at the code gcc spits out when you do: unsigned long a = rand(); unsigned long b = rand(); unsigned long long c = (unsigned long long)a * b; > Probably because most languages have been written on top of C or C++. > And what about a simple carry capturing addition? Many languages exists where this is possible, they are called "assembly". There is no way that you could come up with a well-defined semantics for this. Did you know that a PowerPC processor doesn't have a shift-right where you can capture the carry bit in one instruction? Silly but no less true. >>Personally, I don't think it would be a good idea to have templates >>in C, not even simple ones. This is bound to have quite complicated >>semantics that I would not like to internalize. > Right -- this would just be making C into C++. Why not instead > dramatically improve the functionality of the preprocessor so that the > macro-like cobblings we put together in place of templates are > actually good for something? I've posted elsewhere about this, so I > won't go into details. This would intertwine the preprocessor and the compiler; the preprocessor would have to understand a great deal more about C semantics than in currently does (almost nothing). Best regards, Sidney - ]
http://coding.derkeiler.com/Archive/C_CPP/comp.lang.c/2003-12/0459.html
crawl-002
en
refinedweb
An Oracle JDeveloper How To DocumentNovember 2002 This document was written for Oracle9i JDeveloper versions 9.0.2 and 9.0.3. This is one of many BC4J-related HowTo articles that you can find at /products/jdev/howtos/index.html J2EE developers talk a lot about Value Objects. Value Objects, also known as Data Transfer Objects, are a J2EE Design Pattern for grouping a set of related attributes together to shuttle data between your "model" layer and "view" layer in a Model-View-Controller application architecture. While sometimes used alone, typically developers work with collections or lists of Value Objects, for example so that a JSP page in the View layer can present a list of employee information retrieved by the Model layer. When developers code value objects by hand, they typically model them as a simple JavaBean like the following EmpValueObject class: EmpValueObject package test.common;import java.io.Serializable;// Typical Value Object Implementation as a JavaBeanpublic class EmpValueObject implements Serializable { String _name; Float _salary; Integer _id; public String getName() { return _name; } public Float getSalary() { return _salary; } public Integer getId() { return _id; } public void setName(String value) { _name = value; } public void setSalary(Float value){ _salary = value; } public void setId(Integer value) { _id = value; } public EmpValueObject(){}} After creating the value objects, in most J2EE projects using the MVC architecture, a developer will create a service interface. The service interface isolates the View layer from the persistent business objects in the Model, and provides a central point of access to the various collections of value objects needed by the View layer. For a simple application working with employees, this service interface might look like this: package test.common;// Service interface to expose collections of value objects to the View layerpublic interface EmpModel { // Developers frequently use Vector, List, or Collection to return a set of Value Objects java.util.Vector vectorOfEmpsInDept(int deptno); java.util.Collection collectionOfEmpsInDept(int deptno); java.util.List listOfEmpsInDept(int deptno);} Then some class provides an implementation of this EmpModel interface for the View layer tier to use. With this service interface in place, the usage pattern in the View layer code usually goes like this: EmpModel Collection List Vector Iterator The routine coding pattern for this might look like the following code snippet: //-------------------------------------------------------------// Routine code in the "View" layer to access a collection/list// of business information from the "Model" layer//-------------------------------------------------------------EmpModel empModel = EmpModelFactory.createEmpModel(); :List list = empModel.listOfEmpsInDept(10);Iterator it = list.iterator();while (it.hasNext()) { // Get the next instance of our value object from the list EmpValueObject empValObj = (EmpValueObject)it.next(); // Use getter methods on the value object to access the data System.out.println(empValObj.getName()+","+empValObj.getId()+","+empValObj.getSalary()); } The implementation of the method listOfEmpsInDept(int deptno) usually contains code that interacts with an underlying Data Access Object (another J2EE Design Pattern) that encapsulates the JDBC code to actually retrieve the employee data from the database. listOfEmpsInDept(int deptno) So if we take a quick inventory of the objects that the typical J2EE developer adopting a Model-View-Controller architecture must create to put some business information on a web page, we have the following: EmpModelFactory Name Id Salary Business Components for Java provides a ready-made implementation of all of the above ingredients to make quick work of the task without having to hand code any of these participant classes yourself. The next section explains how. Here are the step by step instructions on how to build this typical Model layer using BC4J. Once the model layer is built, we'll see in the following section how to use this model layer from your MVC view layer client. Provide a name for the package that your business components will live in, let's say test, and set the connection name you want to use, then click (Finish). test The connection name should map to the SCOTT schema, containing the EMP table. We'll use a view object component to handle the value objects list of employee information we need. Select the test package in the System Navigator and choose New View Object... from its right-mouse menu. Call the new view object EmpsInDepartment, and click (Next>) until you get to "Step 5 of 6: Query" page. EmpsInDepartment Type in the following query in the "Query Statement" box: empno as "Id", ename as "Name", sal as "Salary" from empwhere deptno = ? Select the EmpsInDepartment view object in the System Navigator and choose Edit EditEmpsInDepartment... from the right-mouse menu. Click on the "Attribute Settings" tab. This shows you the attributes that will be included in the value object's in the list produced by executing this view object's query. Select the Id attribute in the poplist. Set its datatype to Integer. Select the Salary attribute in the poplist, and set its datatype to Float. Integer Float Of course, the default datatype for numbers ( oracle.jbo.domain.Number) will work just fine as well, but here we illustrate that we have fine control over the datatype of each attribute the value object. oracle.jbo.domain.Number We can also control which accessor method (or other custom methods) will appear on the value object the client will see. Click on the "Client Row Methods" tab and click the (>>) button to shuttle all of the accessor methods into the selected list. Then click (Finish). We'll use an application module component to implement the service interface for our value object collections needed by the view layer. Select the test package in the System Navigator and choose New Application Module... from its right-mouse menu. New Application Module... With these steps, we have created: Handles JDBC data access, and manages our lists of value objects based on employee information (Patterns: Value List Handler, Page-by-Page Iterator, Fast-Lane Reader, Data Access Object) EmpsInDepartmentRow Holds the Name, Id, and Salary attribute of an employee, and provides accessor methods. (Pattern: Value Object) Exposes collections of value objects to the "View" layer. (Pattern: Service Interface for View Layer) Later we'll see that BC4J provides a factory to easily get an instance of our EmpModel and work with it. That factory is generic to all application module classes, so it doesn't need to be specifically generated for the EmpModel. First let's write some custom code on our EmpsInDepartment view object to encapsulate the setting of the bind variable. Expand the icon in the system navigator for the EmpsInDepartment and double-click on the EmpsInDepartmentImpl.java class to see it in the code editor. Even though this is generated code, the view object "Impl" class is designed to have your custom code added right to it. Your custom code does not get "blown away" when you revisit the re-entrant view object wizard later to change declarative settings of the component. EmpsInDepartmentImpl.java We add the following method to the EmpsInDepartmentImpl.java file: // Encapsulate the settings of this View Object's Where Clausepublic void setDepartmentNumber(int deptno) { setWhereClauseParam(0,new Integer(deptno));} Then, we can expose the view object method to clients by visiting the View Object editor, clicking on the "Client Methods" tab, shuttling this new setDepartmentNumber method into the selected list, and pressing (Finish). setDepartmentNumber Next let's write some custom code in our EmpModel application module. Expand the icon in the system navigator for the EmpModel and double-click on the EmpModelImpl.java class to see it in the code editor. EmpModelImpl.java First we add a private method to return a RowSet of employees based on a deptno department number passed in. We'll reuse this method in several of the example methods we're about to write: RowSet deptno /** * Return the RowSet of Emps for department id passed in */private RowSet rowsetForEmpsInDepartment(int deptno) { EmpsInDepartment eid = getEmpsInDepartment(); // Use custom method on the VO to encapsulate WHERE Clause details eid.setDepartmentNumber(10); eid.executeQuery(); // ViewObject interface extends RowSet so we can just return this VO as RS return eid;} As we hinted at above, BC4J provides an EmpsInDepartmentRow value object for our use, along with a remoteable collection implementation called RowSet. However, if the developer is used to using the standard J2SE Vector, Collection, or List interfaces, we'll illustrate how to use those as well. The BC4J-supplied RowSet implementation works seamlessly with the BC4J-generated value objects, so we can write a simple method like the following to return the RowSet of EmpsInDepartmentRow value objects to the view layer: /** * Return a RowSet of EmpsInDepartmentRow instances */public RowSet rowsetOfEmpsInDept(int deptno) { return rowsetForEmpsInDepartment(deptno);} When using Vector, Collection, or List, you need to use a hand-written JavaBean as your value object like the EmpValueObject we showed at the beginning of this article. The code to return a Vector, List, and Collection of EmpValueObject instances looks like this: /** * Return a Vector of EmpValueObject instances */public Vector vectorOfEmpsInDept(int deptno) { Vector v = new Vector(); RowSet rs = rowsetForEmpsInDepartment(deptno); while (rs.hasNext()) { // Instantiate the hand-written value object and add to vector v.add(empValObjectForEmpRow((EmpsInDepartmentRow)rs.next())); } return v;}/** * Return a list of EmpValueObject instances */public List listOfEmpsInDept(int deptno) { ArrayList al = new ArrayList(); RowSet rs = rowsetForEmpsInDepartment(deptno); while (rs.hasNext()) { // Instantiate the hand-written value object and add to list al.add(empValObjectForEmpRow((EmpsInDepartmentRow)rs.next())); } return al;}/** * Return a collection of EmpValueObject instances */public Collection collectionOfEmpsInDept(int deptno) { // The java.util.List interface extends java.util.Collection interface return listOfEmpsInDept(deptno);} So, it takes a little more work and an extra hand-written value object to do what BC4J provides for free, but it's worth implementing all the permutations anyway for practice. Notice that in the Vector, List, and Collection cases, we're using a helper method named empValObjectForEmpRow which does the job of copying the data from the EmpsInDepartmentRow value object that BC4J provides into the hand-written value object that can be serialized as part of empValObjectForEmpRow /** * Create an EmpValueObject instance from an EmpsInDepartmentRow */private EmpValueObject empValObjectForEmpRow(EmpsInDepartmentRow empRow) { EmpValueObject empValObj = new EmpValueObject(); empValObj.setName(empRow.getName()); empValObj.setSalary(empRow.getSalary()); empValObj.setId(empRow.getId()); return empValObj;} Now that we've written our four public methods on our EmpModel implementation class, we can expose these methods to clients by visiting the Application Module editor for EmpModel and clicking on the Client Methods tab. By holding down the [Ctrl] key while clicking, we can multi-select the four method names in the list on the left: rowsetOfEmpsInDept vectorOfEmpsInDept listOfEmpsInDept collectionOfEmpsInDept and then click on the (>) button to shuttle these methods into the Selected list. When we click (Finish), the BC4J design time facilities will automatically create a new EmpModel interface for us, containing these four methods. So, we now have three interfaces in our project: Exposing our custom setDepartmentNumber method to clients... Exposing our typesafe getter/setter methods for our employee data values to clients... Exposing the four custom methods in our service interface for our view layer: rowsetOfEmpsInDept, vectorOfEmpsInDept, listOfEmpsInDept, collectionOfEmpsInDept... While we created our BC4J components in the test package, BC4J automatically creates these interfaces in the test.common package for us, emphasizing the fact that they are common interfaces that can be used on both the client tier and the business tier. test.common Another naming pattern that BC4J uses to help us remember which tier our classes belong in is the Impl suffix on classes in our project like: Impl EmpsInDepartmentRowImpl.java These classes should never appear in client-side code. As we'll see in the next section, client code should always refer to methods on the BC4J framework interfaces, or on the custom extensions to those interfaces that the BC4J framework creates and manages as we expose custom methods on View Objects, Value Objects (also known as View Object Rows), and Application Modules. With our simple business tier functionality built, and our custom methods exposed, we can concentrate on writing the client code that makes use of it. While our example will be a simple console program that outputs data to System.out, the techniques illustrated by this example are valid for any Java client code. System.out First we need to use a factory to obtain an instance of our EmpModel service interface to work with. BC4J provides several ways to acquire application modules, but the easiest is to use the oracle.jbo.client.Configuration class. oracle.jbo.client.Configuration Configurations are named lists of runtime parameters that affect how the client connects to the business tier. By default our EmpModel application module will get a configuration created named EmpModelLocal, with parameters setup for the client to connect to the business tier as a set of local java classes. If we later choose to deploy our application module as an EJB Session Bean, BC4J with again by default create a configuration named EmpModel9iAS with parameters set appropriately for a remote EJB-based connection from client to business tier. Using the BC4J framework's Configuration class, and an appropriate configuration name, you can easily get an instance of an application module like this: EmpModelLocal EmpModel9iAS import test.common.EmpModel; :String _am = "test.EmpModel"; // Full name of application moduleString _cf = "EmpModelLocal"; // Name of configurationEmpModel empModel = (EmpModel)Configuration.createRootApplicationModule(_am,_cf); you can use this instance of EmpModel as you need to, and then when you are finished using it, you use a companion method on the Configuration object to release your application module instance: Configuration Configuration.releaseRootApplicationModule(empModel, true /* Remove AM instance? */); If you pass true as the second argument to releaseRootApplicationModule, then the application module instance you were working with will be destroyed. If instead you pass false, the application module instance will be kept in a pool for subsequent reuse in the same process. true releaseRootApplicationModule The following program illustrates using the technique above to exercise all of the different custom methods we exposed above to retrieve and iterate over: java.util.List java.util.Vector java.util.Collection oracle.jbo.RowSet All four of the above value object collections are retrieved by invoking a custom method on the EmpModel service interface. Examples 5, 6, and 7 in the TestClient program then go on to illustrate some of the BC4J value object collection functionality that is possible directly using the base BC4J client interface oracle.jbo.ApplicationModule, from which our EmpModel extends. TestClient oracle.jbo.ApplicationModule Specifically, these last three examples illustrate retrieving and iterating: EmpsInDepartmentRows findViewObject() oracle.jbo.Row Here's the sample code: import java.util.Collection;import java.util.Iterator;import java.util.List;import java.util.Vector;import oracle.jbo.ApplicationModule;import oracle.jbo.Row;import oracle.jbo.RowSet;import oracle.jbo.RowSetIterator;import oracle.jbo.client.Configuration;import oracle.jbo.domain.Number;import test.common.EmpModel;import test.common.EmpValueObject;import test.common.EmpsInDepartment;import test.common.EmpsInDepartmentRow;public class TestClient { public static void main(String[] args){ String _am = "test.EmpModel"; // Full name of application module// The only thing that needs to change to run this same code// in a three-tier deployment is the name of the configuration // you want to use.//// String _cf = "EmpModel9iAS"; // Name of remote configuration String _cf = "EmpModelLocal"; // Name of configuration // Cast ApplicationModule to custom EmpModel interface to call custom methods EmpModel empModel = (EmpModel)Configuration.createRootApplicationModule(_am,_cf); p("1: Retrieve/iterate List of User-Created Value Objects from AM Custom Method"); List list = empModel.listOfEmpsInDept(10); Iterator it = list("2: Retrieve/iterate Vector of User-Created Value Objects from AM Custom Method"); Vector v = empModel.vectorOfEmpsInDept(10); it = v.iterator(); while (it.hasNext()) { EmpValueObject empValObj = (EmpValueObject)it.next(); p(empValObj.getName()+","+ empValObj.getId()+","+ empValObj.getSalary()); } p("3: Retrieve/iterate Collection of User-Created Value Objects from AM Custom Method"); Collection coll = empModel.collectionOfEmpsInDept(10); it = coll("4: Retrieve/iterate RowSet of Typesafe EmpsInDepartmentRows from AM Custom Method"); RowSet rs = empModel.rowsetOfEmpsInDept(10); // Not strictly needed since RowSet interface inherits from RowSetIterator // and for convenience BC4J aggregates a default iterator "built-in" to the // RowSet interface implementation object, but here we can create a secondary // iterator to illustrate the parallel programming pattern with Vector, List, // and Collection. Iterators can be named. Passing null says you don't care // about the name RowSetIterator rsit = rs.createRowSetIterator(null); while (rsit.hasNext()) { // Rather than needing a user-created value object, we use the // typesafe EmpsInDepartmentRow interface provided by BC4J // Same typesafe benefits, same network traffic savings, no hand-coded // value object to write for each query we use. BC4J does it for us. EmpsInDepartmentRow empValObj = (EmpsInDepartmentRow)rsit.next(); p(empValObj.getName()+","+empValObj.getId()+","+empValObj.getSalary()); } p("5: Retrieve/iterate RowSet of Typesafe EmpsInDepartmentRows using findViewObject"); // Cast to our custom interface EmpsInDepartment to call our custom VO methods EmpsInDepartment eid = (EmpsInDepartment)empModel.findViewObject("EmpsInDepartment"); // Use custom method on VO custom interface to encapsulate where clause details eid.setDepartmentNumber(10); eid.executeQuery(); // EmpsInDepartment interface extends ViewObject interface which extends // RowSet interface which extends RowSetIterator. So, here we show that we // can use the iterator behavior that's built-in to RowSet for convenience // instead of doing further casting while (eid.hasNext()) { EmpsInDepartmentRow empValObj = (EmpsInDepartmentRow)rs.next(); p(empValObj.getName()+","+empValObj.getId()+","+empValObj.getSalary()); } p("6: Retrieve/iterate RowSet of Generic Rows from AM Method"); rs = empModel.rowsetOfEmpsInDept(10); while (rs.hasNext()) { // Rather than using a typesafe Row subinterface like EmpsInDepartmentRow // above, here we just use the base Row interface and call getAttribute() // instead of typesafe getter methods Row empRow = rs.next(); p(empRow.getAttribute("Name")+","+empRow.getAttribute("Id")+","+ empRow.getAttribute("Salary")); } p("7: Retrieve/iterate RowSet of Generic Rows using findViewObject"); // Since findViewObject() is on the ApplicationModule interface, we // illustrate here that we can invoke it without using our // typesafe TestModule interface. Also, recall that ViewObject extends // RowSet so if it's more clear for our code, we can directly use // the RowSet interface here to refer to the ViewObject's default rowset // that it provides (through aggregation). ApplicationModule testAppModule = empModel; rs = testAppModule.findViewObject("EmpsInDepartment"); // Rather than calling the encapsulated setDepartmentMethod on our // custom EmpsInDepartment interface as we did above, here we show // the client directly setting the where clause parameters. // NOTE: Params are zero-based, so 0 is the first param in the SQL. rs.setWhereClauseParam(0,new Integer(10)); rs.executeQuery(); while (rs.hasNext()) { Row empRow = rs.next(); p(empRow.getAttribute("Name")+","+empRow.getAttribute("Id")+","+ empRow.getAttribute("Salary")); } // Release the AppModule (optionally back to the pool if we pass 'false' Configuration.releaseRootApplicationModule(empModel,true); } private static void p(String m){System.out.println(m);}} As the example illustrates, working with a BC4J RowSet uses the same familiar coding patterns as working with a java.util.Collection, however a BC4J RowSet interface is a collection with a few more tricks up its sleeve. In fact, in addition to making it possible to use automatically-generated value objects, a further benefit of using the BC4J RowSet is that the value objects in a RowSet can be optionally fully updateable. This is in sharp contrast to a java.util.Collection of hand-created value objects, which are typically strictly read-only in nature due to the complexity of keeping any updates made in sync with your business logic tier. BC4J offers significant help here by automating this synchronization, just by using a RowSet instead. Our EmpsInDepartment view object created above was a simple, read-only collection of database query results. However, if you were working with a RowSet produced by a BC4J view object that you related to one or more entity objects at design time, then that RowSet automatically becomes a collection of fully-updateable value objects. This means that rather than having to manage the updates to your business objects yourself, the BC4J framework allows your client to simply call appropriate setter methods to modify the attributes of a value object row in any way necessary, and then commit the changes. All subsequent business logic enforcement and business object persistence is handled automatically. In this brief article, we've seen how easy it is to use the BC4J framework to create a model layer for a Model-View-Controller application architecture, and how easy it is to access the various collections of value objects of business information through a service interface from our view layer. We've seen the best practice techniques allowing custom methods on our application module to be exposed to clients in a way that allows our deployment architecture to change at any time, without causing ripple effects to our application code. This means your BC4J business tier can be deployed as simple Java classes in your Servlet/Web tier, or as an EJB Session Facade in your EJB tier with the flick of a switch and no client code changes. And finally, we've seen that while possible to use BC4J in combination with "classic" collection classes/interfaces like Vector, List, and Collection from the java.util package together with hand-created value objects, the BC4J framework offers more functional, more targeted value object collection functionality that follows the same familiar programming paradigms. The examples in this article have illustrated how BC4J's ViewObject, RowSet, and Row interfaces can be used either in a generic way or a typesafe way with automatically-generated custom interfaces, providing a "collections of value objects" implementation that can optionally be fully-updateable with no extra programming work by the developer. java.util ViewObject Row
http://www.oracle.com/technology/products/jdev/howtos/bc4j/bc4j-collections.html
crawl-002
en
refinedweb
Updated: December 6, 2007 Applies To: Windows Server 2008 A backward compatible task is a Task Scheduler 1.0 task that is used in the Windows XP, Windows Server 2003, and Windows 2000 operating systems. A Task Scheduler 1.0 task can be registered (scheduled) to execute any of the following application or file types: Win32 applications, Win16 applications, OS/2 applications, MS-DOS applications, batch files (*.bat), command files (*.cmd), or any properly registered file type. You can register a Task Scheduler 1.0 task in one of three ways: Use the error code provided in the failure event to identify possible reasons the task failed to register. The error code descriptions are listed below. Use Task Scheduler to define and register a task compatible with earlier operating systems. To register the Task Scheduler 1.0 task: 1. Click the Start button and type Task Scheduler in the Start Search box. 2. Select the Task Scheduler program to start Task Scheduler. 3. Click Create Task. 4. On the General tab, set the Configurefor. A task's trigger is not found. Try to edit the task's triggers. One or more of the properties required to run this task have not been set. There is no running instance of the task. The Task Scheduler service is not installed on this computer. The task object could not be opened. The object is either an invalid task object or is not a task object. No account information could be found in the Task Scheduler security database for the task indicated. Set the account information for the task. Unable to establish existence of the account specified. Set the account information for the task. Corruption was detected in the Task Scheduler security database; the database has been reset. Task Scheduler security services are available only on Windows NT. The task object version is either unsupported or invalid. The task has been configured with an unsupported combination of account settings and run time options. The Task Scheduler Service is not running. Start the Task Scheduler service. The task XML contains an unexpected node. The task XML contains an element or attribute from an unexpected namespace. The task XML contains a value which is incorrectly formatted or out of range. The task XML is missing a required element or attribute. The task XML is malformed. The task is registered, but not all specified triggers will start the task. The task is registered, but may fail to start. Batch logon privilege needs to be enabled for the task principal. The task XML contains too many nodes of the same type. The task cannot be started after the trigger end boundary. An instance of this task is already running. The task will not run because the user is not logged on. The task image is corrupt or has been tampered with. The Task Scheduler service is not available. The Task Scheduler service is too busy to handle your request. Please try again later. The Task Scheduler service attempted to run the task, but the task did not run due to one of the constraints in the task definition. The Task Scheduler service has asked the task to run. The task is disabled. The task has properties that are not compatible with earlier versions of Windows. The task settings do not allow the task to start on demand. Attempt to reregister a Task Scheduler 1.0 (prior to Windows Vista) compatible task and verify that the registration succeeds. You can register a Task Scheduler 1.0 task in three ways: After you register the task, open Task Scheduler, select the task in the task folder hierarchy, and click the History tab for the task to verify that it contains events indicating the task was registered successfully. Backward Compatible Task Registration Management Infrastructure
http://technet.microsoft.com/en-us/library/cc775040(WS.10).aspx
crawl-002
en
refinedweb
> cdx.zip > IOCTL.HPP /*C4*/ //**************************************************************** // Author: Jethro Wright, III TS : 1/18/1994 9:36 // Date: 01/01/1994 // // ioctl.hpp : definitions/declarations for an // ioctl-oriented command interface for audio // cd access via mscdex // // History: // 01/01/1994 jw3 also sprach zarathustra.... //****************************************************************/ #if ! defined( __IOCTL_HPP__ ) #define __IOCTL_HPP__ #include "types.h" #include "device.h" // the principle error codes we're concerned w/ after each // ioctl operation. the entire status word from the // last ioctl operation is returned from each ioctl cmd.... #define IOS_ERROR 0x8000 // == some sort of failure #define IOS_BUSY 0x200 // == play audio is engaged // device status info bits from the GetDeviceStatus() fn.... #define STATUS_DOOR_OPEN 0x01 #define STATUS_DOOR_UNLOCKED 0x02 #define STATUS_COOKED_RAW_READS 0x04 #define STATUS_READ_WRITE 0x08 #define STATUS_DATA_AUDIO_CAPBL 0x10 #define STATUS_RESERVED 0x20 #define STATUS_PREFETCH 0x40 #define STATUS_AUDIO_MANIP 0x80 #define STATUS_DUAL_ADDRESSG 0x100 #define STATUS_NO_DISK 0x800 // // to use the IOCTL class, one simply creates an instance of the // class, specifying the mscdex drive to be accessed during // subsequent calls. since most pc systems are likely to have only // a sgl cdrom drive, the defalts are setup to go for the 1st drive // in the target system. the IOCTL instance shud be retained // for the entire lifetime of the object that uses it (or at least // until it no longer needs to issue cdrom cmds. // // maybe someone else will adapt this to make it more generic, but // so far the only other possible application for an ioctl-based // interface // class IOCTL { // // the public mbr fns of this class correspond to all of conventional // (read, anticipated) cmds one would want to perform on an audio cd // via mscdex. the c code which inspired this system, while // adequately coded, was less than organized in its approach to // the subject. this was probably true bec the mscdex spec doesn't // lend itself to understanding the means by which one gets audio out // of a cdrom. hence the design of this system, which reflects the // notion of encapsulating functionality where required. if add'l // capabilities are needed, it shud be fairly obvious how one would // supplement the classes in this kit. as stated above and in other // places in this kit, the goal of this proj was to // // anyway, when one of the these fn is invoked, it immed dispatches // a service request (a cmd) to the device driver and retns the // status word from the request header, for interrogation by the // caller.... // public: IOCTL( int, int, VFP, VFP ) ; ~IOCTL() ; WORD Play( DWORD, DWORD, BYTE ) ; WORD Stop( Boolean ) ; WORD LockDoor( Boolean ) ; WORD EjectDisk( void ) ; WORD CloseTray( void ) ; WORD ResetDisk( void ) ; WORD GetUPC( struct UPCCommand far * ) ; WORD GetTrackInfo( struct TrackInfoCmd far * ) ; WORD GetQInfo( struct QChannelInfoCmd far * ) ; WORD GetDiskInfo( struct DiskInfoCmd far * ) ; WORD GetDeviceStatus( DWORD far * ) ; WORD GetPlayStatus( struct PlayStatusCmd far * ) ; private: IOCTLSvcReq theRequest ; // these are the unit and sub-unit nbrs passed via the cstor. // normally, we req svc for the sub unit of the desired device, but // since it's only an extra couple of byts, let's hold onto the // unit nbr as well, in case we need it in the future.... int unitNbr, subUnitNbr ; VFP devStrategy ; // == addr of the device strategy fn // for this drive VFP devInterrupt ; // == addr of its device interrupt fn void DoCmd( void ) ; // dispatch the cmd to the device // driver inline void Setup( WORD sCmd, VFP sData, int sCnt ) { theRequest.rqh.command = sCmd ; theRequest.rqh.unit = subUnitNbr ; theRequest.rqh.len = sizeof( IOCTLSvcReq ) ; theRequest.xferBufr = sData ; theRequest.byteCnt = sCnt ; theRequest.media = theRequest.sector = 0 ; theRequest.volumeId = 0 ; } ; }; #endif // end: if ! defined( __IOCTL_HPP__ )
http://read.pudn.com/downloads/sourcecode/scsi/1449/IOCTL.HPP__.htm
crawl-002
en
refinedweb
See ACI. Access Control Instruction. An instruction that grants or denies permissions to entries in the directory. See ACL. Access Control List. The mechanism for controlling access to your directory. In the context of access control, specify the level of access granted or denied. Access rights are related to the type of operation that can be performed on the directory. The following rights can be granted or denied: read, write, add, delete, search, compare, selfwrite, proxy and all. Disables a user account, group of accounts, or an entire domain so that all authentication attempts are automatically rejected. A size limit which is globally applied to every index key managed by the server. When the size of an individual ID list reaches this limit, the server replaces that ID list with an All IDs token. A mechanism which causes the server to assume that all directory entries match the index key. In effect, the All IDs token causes the server to behave as if no index were available for the search request. When granted, allows anyone to access directory information without providing credentials, and regardless of the conditions of the bind. Allows for efficient approximate or "sounds-like" searches. Holds descriptive information about an entry. Attributes have a label and a value. Each attribute also follows a standard syntax for the type of information that can be stored as the attribute value. A list of required and optional attributes for a given entry type or object class. In pass-through authentication (PTA), the authenticating Directory Server is the Directory Server that contains the authentication credentials of the requesting client. The PTA-enabled host sends PTA requests it receives from clients to the host. . Digital file that is not transferable, cannot be forged, and is issued by a third party. Authentication certificates are sent from server to client or client to server in order to verify and authenticate the other party. Base distinguished name. A search operation is performed on the base DN, the DN of the entry and all entries below it in the directory tree. See base DN. Distinguished name used to authenticate to Directory Server when performing an operation. See bind DN. In the context of access control, the bind rule specifies the credentials and conditions that a particular user or client must satisfy in order to get access to directory information. An entry that represents the top of a subtree in the directory. Software, such as Mozilla Firefox, used to request and view World Wide Web material stored as HTML files. The browser uses the HTTP protocol to communicate with the host server. Also virtual view index. Speeds up the display of entries in the Directory Server Console. Browsing indexes can be created on any branchpoint in the directory tree to improve display performance. See Certificate Authority.. A collection of data that associates the public keys of a network user with their DN in the directory. The certificate is stored in the directory as user object attributes. Company or organization that sells and issues authentication certificates. You may purchase an authentication certificate from a Certification Auth. A method for relaying requests to another server. Results for the request are collected, compiled, and then returned to the client. A changelog is a record that describes the modifications that have occurred on a replica. The supplier server then replays these modifications on the replicas stored on consumer servers or on other masters, in the case of multi-master replication. Distinguishes alphabetic characters from numeric or other characters and the mapping of upper-case to lower-case letters. Encrypted information that cannot be read by anyone without the proper key to decrypt the information. See consumer-initiated replication. Specifies the information needed to create an instance of a particular object and determines how the object works in relation to other objects in the directory. See CoS. A classic CoS identifies the template entry by both its DN and the value of one of the target entry's attributes. See LDAP client. An internal table used by a locale in the context of the internationalization plug-in that the operating system uses to relate keyboard keys to character font screen displays. Provides language and cultural-specific information about how the characters of a given language are to be sorted. This information might include the sequence of letters in the alphabet or how to compare letters with accents to letters without accents. Server containing replicated directory trees or subtrees from a supplier server. Replication configuration where consumer servers pull directory data from supplier servers. In the context of replication, a server that holds a replica that is copied from a different server is called a consumer for that replica. A method for sharing attributes between entries in a way that is invisible to applications. Identifies the type of CoS you are using. It is stored as an LDAP subentry below the branch it affects. Contains a list of the shared attribute values. Also template entry. A background process on a UNIX machine that is responsible for a particular system task. Daemon processes do not need human intervention to continue functioning. Directory Access Protocol. The ISO X.500 standard protocol that provides client access to the directory. The server that is the master source of a particular piece of data. An implementation of chaining. The database link behaves like a database but has no persistent storage. Instead, it points to data stored remotely. One of a set of default indexes created per database instance. Default indexes can be modified, although care should be taken before removing them, as certain plug-ins may depend on them. See CoS definition entry. See DAP. The logical representation of the information stored in the directory. It mirrors the tree model used by most filesystems, with the tree's root point appearing at the top of the hierarchy. Also known as DIT. The privileged database administrator, comparable to the root user in UNIX. Access control does not apply to the Directory Manager. Also DSGW. A collection of CGI forms that allows a browser to perform LDAP client functions, such as querying and accessing a Directory Server, from a web browser. A database application designed to manage descriptive, attribute-based information about people and resources within an organization. String representation of an entry's name and location in an LDAP directory. See directory tree. See distinguished name. See Directory Manager.. A DNS alias is a hostname that the DNS server knows points to a different host-specifically a DNS CNAME record. Machines always have one real name, but they can have one or more aliases. For example, an alias such as www. yourdomain.domain might point to a real machine called realthing. yourdomain.domain where the server currently exists. See Directory Server Gateway. A group of lines in the LDIF file that contains information about an object. Method of distributing directory entries across more than one server in order to scale to support large numbers of entries. Each index that the directory uses is composed of a table of index keys and matching entry ID lists. The entry ID list is used by the directory to build a list of candidate entries that may match the client application's search request. Allows you to search efficiently for entries containing a specific attribute value. The section of a filename after the period or dot (.) that typically defines the type of file (for example, .GIF and .HTML). In the filename). A constraint applied to a directory query that restricts the information returned. Allows you to assign entries to the role depending upon the attribute contained by each entry. You do this by specifying an LDAP filter. Entries that match the filter are said to possess the role. See Directory Server Gateway. When granted, indicates that all authenticated users can access directory information. Generic Security Services. The generic access protocol that is the native way for UNIX-based systems to access and authenticate Kerberos services; also supports session encryption. A name for a machine in the form machine.domain.dom, which is translated into an IP address. For example, is the machine www in the subdomain example and com domain.. Hypertext Transfer Protocol. The method for exchanging information between HTTP servers and clients. An abbreviation for the HTTP daemon or service, a program that serves information using the HTTP protocol. The daemon or service is often called an httpd. The next generation of Hypertext Transfer Protocol. A secure version of HTTP, implemented using the Secure Sockets Layer, SSL. In the context of replication, a server that holds a replica that is copied from a different server, and, in turn, replicates it to a third server. See also cascading replication. Each index that the directory uses is composed of a table of index keys and matching entry ID lists. An indirect CoS identifies the template entry using the value of one of the target entry's attributes. Speeds up searches for information in international directories. Also Internet Protocol address. A set of numbers, separated by dots, that specifies the actual location of a machine on the Internet (for example, 198.93.93.10). International Standards Organization. Lightweight Directory Access Protocol. Directory service protocol designed to run over TCP/IP and across multiple platforms. Version 3 of the LDAP protocol, upon which Directory Server bases its schema format. Software used to request and view LDAP entries from an LDAP Directory Server. See also browser. See LDAP Data Interchange Format. Provides the means of locating Directory Servers using DNS and then completing the query via LDAP. A sample LDAP URL is ldap://ldap.example.com. A high-performance, disk-based database consisting of a set of large files that contain all of the data assigned to it. The primary data store in Directory Server. LDAP Data Interchange Format. Format used to represent Directory Server entries in text form. An entry under which there are no other entries. A leaf entry cannot be a branch point in a directory tree. See LDAP.. A standard value which the SNMP agent can access and send to the NMS. Each managed object is identified with an official name and a numeric identifier expressed in dot-notation. Allows creation of an explicit enumerated list of members. See MIB. A data structure that associates the names of suffixes (subtrees) with databases. See SNMP master agent. The server that contains the master copy of the directory trees or subtrees that are replicated to replicas. The master server is read-write. Provides guidelines for how the server compares strings during a search operation. In an international search, the matching rule tells the server what collation order and operator to use. A message digest algorithm by RSA Data Security, Inc., which can be used to produce a short digest of data that is unique with high probability and is mathematically extremely hard to produce; a piece of data that will produce the same message digest. A message digest produced by the MD5 algorithm.. Management Information Base namespace. The means for directory data to be named and referenced. Also called the directory tree. Specifies the monetary symbol used by specific region, whether the symbol goes before or after its value, and how monetary units are represented.. The server containing the database link that communicates with the remote server. The problem of managing multiple instances of the same information in different directories, resulting in increased hardware and personnel costs. Multiple entries with the same distinguished name. Allows the creation of roles that contain other roles. Network Management Station component that graphically displays information about SNMP managed devices (which device is up or down, which and how many error messages were received, etc.). See NMS. Network Information Service. A system of programs and data files that UNIX machines use to collect, collate, and share specific information about machines, users, filesystems, and network parameters throughout a network of computers. Also Network Management Station. Powerful workstation with one or more network management applications installed. Red Hat's LDAP Directory Server daemon or service that is responsible for all actions of the Directory Server. See also slapd. Defines an entry type in the directory by defining which attributes are contained in the entry. Also OID. A string, usually of decimal numbers, that uniquely identifies a schema element, such as an object class or an attribute, in an object-oriented system. Object identifiers are assigned by ANSI, IETF or similar organizations. See object identifier. Contains information used internally by the directory to keep track of modifications and subtree properties. Operational attributes are not returned in response to a search unless explicitly requested. When granted, indicates that users have access to entries below their own in the directory tree if the bind DN is the parent of the targeted entry. See PTA. In pass-through authentication, the PTA directory server will pass through bind requests to the authenticating directory server from all clients whose DN is contained in this subtree. A file on UNIX machines that stores UNIX user login names, passwords, and user ID numbers. It is also known as /etc/passwd because of where it is kept. A set of rules that governs how passwords are used in a given directory. In the context of access control, permission states whether access to the directory information is granted or denied and the level of access that is granted or denied. See access rights. Also Protocol Data Unit. Encoded messages which form the basis of data exchanges between SNMP devices. A pointer CoS identifies the template entry using the template DN only. Allows searches for entries that contain a specific indexed attribute. A set of rules that describes how devices on a network exchange information. See PDU. A special form of authentication where the user requesting access to the directory does not bind with its own DN but with a proxy DN. Used with proxied authorization. The proxy DN is the DN of an entry that has access permissions to the target on which the client-application is attempting to perform an operation. Also Pass-through authentication. Mechanism by which one Directory Server consults another to check bind credentials. In pass-through authentication ( PTA), the PTA Directory Server is the server that sends (passes through) bind requests it receives to the authenticating directory server. In pass-through authentication, the URL that defines the authenticating directory server, pass-through subtree(s), and optional parameters. Random access memory. The physical semiconductor-based memory in a computer. Information stored in RAM is lost when the computer is shut down. A file on UNIX machines that describes programs that are run when the machine starts. It is also called /etc/rc.local because of its location. Also Relative Distinguished Name. The name of the actual entry itself, before the entry's ancestors have been appended to the string to form the full distinguished name. Mechanism that ensures that relationships between related entries are maintained within the directory. . A database that participates in replication. A replica that refers all update operations to read-write replicas. A server can hold any number of read-only replicas. A replica that contains a master copy of directory information and can be updated. A server can hold any number of read-write replicas. See RDN. Act of copying directory trees or subtrees from supplier servers to consumer servers.. Request for Comments. Procedures or standards documents submitted to the Internet community. People can send comments on the technologies before they become accepted standards. An entry grouping mechanism. Each role has members, which are the entries that possess the role. Attributes that appear on an entry because it possesses a particular role within an associated CoS template. The most privileged user available on UNIX machines. The root user has complete access privileges to all files on the machine. The parent of one or more sub suffixes. A directory tree can contain more than one root suffix. Also Simple Authentication and Security Layer. An authentication framework for clients as they attempt to bind to a directory.. See SSL. When granted, indicates that users have access to their own entries if the bind DN matches the targeted entry. Java-based application that allows you to perform administrative management of your Directory Server from a GUI. The server daemon is a process that, once running, listens for and accepts requests from clients. A directory on the server machine dedicated to holding the server program and configuration, maintenance, and information files. Interface that allows you select and configure servers using a browser. A background process on a Windows machine that is responsible for a particular system task. Service processes do not need human intervention to continue functioning. Server Instance Entry. The ID assigned to an instance of Directory Server during installation. See SASL. See SNMP. The most basic replication scenario in which two servers each hold a copy of the same read-write replicas to consumer servers. In a single-master replication scenario, the supplier server maintains a changelog. See supplier-initiated replication. LDAP Directory Server daemon or service that is responsible for most functions of a directory except replication. See also ns-slapd. Also Simple Network Management Protocol. Used to monitor and manage application processes running on the servers by exchanging data about network activity. Software that exchanges information between the various subagents and the NMS. Software that gathers information about the managed device and passes the information to the master agent. Also subagent. Also Secure Sockets Layer. A software library establishing a secure connection between two parties (client and server) used to implement HTTPS, the secure version of HTTP. index maintained by default. A branch underneath a root suffix. See SNMP subagent. Allows for efficient searching against substrings within entries. Substring indexes are limited to a minimum of two characters for each entry. The name of the entry at the top of the directory tree, below which data is stored. Multiple suffixes are possible within the same directory. Each database only has one suffix. The most privileged user available on UNIX machines. The superuser has complete access privileges to all files on the machine. Also called root. Server containing the master copy of directory trees or subtrees that are replicated to consumer servers. In the context of replication, a server that holds a replica that is copied to a different server is called a supplier for that replica. Replication configuration where supplier servers replicate directory data to consumer servers. Encryption that uses the same key for both encrypting and decrypting. DES is an example of a symmetric encryption algorithm. Cannot be deleted or modified as it is essential to Directory Server operations. In the context of access control, the target identifies the directory information to which a particular ACI applies. The entries within the scope of a CoS. Transmission Control Protocol/Internet Protocol. The main network protocol for the Internet and for enterprise (company) networks. See CoS template entry. Indicates the customary formatting for times and dates in a specific region. Also Transport Layer Security. The new standard for secure socket layers; a public key based protocol. The way a directory tree is divided among physical servers and how these servers link with one another. See TLS.. Also browsing index. Speeds up the display of entries in the Directory Server Console. Virtual list view indexes can be created on any branchpoint in the directory tree to improve display performance.
http://www.redhat.com/docs/manuals/dir-server/install/7.1/glossary.html
crawl-002
en
refinedweb
Opened 9 years ago Closed 9 years ago Last modified 9 years ago #6699 closed (invalid) contrib.auth decorators aren't using proper decorator syntax Description The related documentation () is totally incorrect. Neither of the 2.3 or 2.4 decoration methods stated will work: my_view = login_required(redirect_field_name='redirect_to')(my_view) @login_required(redirect_field_name='redirect_to') since the definition is: def login_required(function=None, redirect_field_name=REDIRECT_FIELD_NAME): Other decorator methods in this are also wrong. Change History (3) comment:1 Changed 9 years ago by comment:2 Changed 9 years ago by This is mistaken. Notice that login_required behaves differently based on whether function is passed in or not: one case returns a decorator (function=None), the other returns the decorated function. comment:3 Changed 9 years ago by Duh... I typed this up while answering an IRC problem without obviously looking too closely at the code. Sorry for the noise, Malcolm. #6700, has a possible solution.
https://code.djangoproject.com/ticket/6699
CC-MAIN-2017-04
en
refinedweb
I have a byte say 1 byte of size of 8.. How do i get the bit each values? for example, I want 16th bit value, 17th bit, 18th bit so on.. byte[] _byte = new byte[8]; If you want the Xth bit in your Byte Array (I think that is what your asking at least), you need to index the correct Byte from the array and then extract the bit public static Boolean GetBitX(byte[] bytes, int x) { var index = x/8; var bit = x-index*8; return (bytes[index] & (1<<bit)) != 0; }
https://codedump.io/share/EP9guop29Ces/1/how-to-get-bit-values-from-byte
CC-MAIN-2017-04
en
refinedweb
I'd like to create a Python decorator that can be used either with parameters: @redirect_output("somewhere.log") def foo(): .... @redirect_output def foo(): .... I know this question is old, but some of the comments are new, and while all of the viable solutions are essentially the same, most of them aren't very clean or easy to read. Like thobe's answer says, the only way to handle both cases is to check for both scenarios. The easiest way is simply to check to see if there is a single argument and it is callabe (NOTE: extra checks will be necessary if your decorator only takes 1 argument and it happens to be a callable object): def decorator(*args, **kwargs): if len(args) == 1 and len(kwargs) == 0 and callable(args[0]): # called as @decorator else: # called as @decorator(*args, **kwargs) In the first case, you do what any normal decorator does, return a modified or wrapped version of the passed in function. In the second case, you return a 'new' decorator that somehow uses the information passed in with *args, **kwargs. This is fine and all, but having to write it out for every decorator you make can be pretty annoying and not as clean. Instead, it would be nice to be able to automagically modify our decorators without having to re-write them... but that's what decorators are for! Using the following decorator decorator, we can deocrate our decorators so that they can be used with or without arguments: def doublewrap(f): ''' a decorator decorator, allowing the decorator to be used as: @decorator(with, arguments, and=kwargs) or @decorator ''' @wraps(f) def new_dec(*args, **kwargs): if len(args) == 1 and len(kwargs) == 0 and callable(args[0]): # actual decorated function return f(args[0]) else: # decorator arguments return lambda realf: f(realf, *args, **kwargs) return new_dec Now, we can decorate our decorators with @doublewrap, and they will work with and without arguments, with one caveat: I noted above but should repeat here, the check in this decorator makes an assumption about the arguments that a decorator can receive (namely that it can't receive a single, callable argument). Since we are making it applicable to any generator now, it needs to be kept in mind, or modified if it will be contradicted. The following demonstrates its use: def test_doublewrap(): from util import doublewrap from functools import wraps @doublewrap def mult(f, factor=2): '''multiply a function's return value''' @wraps(f) def wrap(*args, **kwargs): return factor*f(*args,**kwargs) return wrap # try normal @mult def f(x, y): return x + y # try args @mult(3) def f2(x, y): return x*y # try kwargs @mult(factor=5) def f3(x, y): return x - y assert f(2,3) == 10 assert f2(2,5) == 30 assert f3(8,1) == 5*7
https://codedump.io/share/ubjQhYh1G3d5/1/how-to-create-a-python-decorator-that-can-be-used-either-with-or-without-parameters
CC-MAIN-2017-04
en
refinedweb
I know that asynchronous programming has seen a lot of changes over the years. I'm somewhat embarrassed that I let myself get this rusty at just 34 years old, but I'm counting on StackOverflow to bring me up to speed. What I am trying to do is manage a queue of "work" on a separate thread, but in such a way that only one item is processed at a time. I want to post work on this thread and it doesn't need to pass anything back to the caller. Of course I could simply spin up a new Thread Queue BlockingCollection Task async await TaskScheduler () => LocateAddress(context.Request.UserHostAddress) LocateAddress To create an asynchronous single degree of parallelism queue of work you can simply create a SemaphoreSlim, initialized to one, and then have the enqueing method await on the acquisition of that semaphore before starting the requested work. public class TaskQueue { private SemaphoreSlim semaphore; public TaskQueue() { semaphore = new SemaphoreSlim(1); }(); } } } Of course to have a fixed degree of parallelism other than one simply initialize the semaphore to some other number.
https://codedump.io/share/wuOEZ11mNzAH/1/best-way-in-net-to-manage-queue-of-tasks-on-a-separate-single-thread
CC-MAIN-2017-04
en
refinedweb
Download presentation Presentation is loading. Please wait. Published byRemington Blye Modified about 1 year ago 1 Priority Queues CSC 172 SPRING 2002 LECTURE 16 2 Priority Queues Model Set with priorities associate with elements Priorites are comparable by a < operator Operations Insert and element with a given priority Deletemax Find and remove from the set the element with highest priority 3 PriorityQueue Interface public Interface PriorityQueue { int size(); boolean isEmpty(); void add(Object element); Object getMin(); Object removeMin(); } 4 Sorted Linked List implementation Deletemax is O(1) – take first element Insert if O(n) – on average, go halfway down the list 5 Partially Ordered Tree A binary tree with elements and priorities at nodes Satisfying the partially ordered tree (POT) property The priority at any node >= the priority of its children 6 Balanced POT A POT in which all nodes exist, down to a certain level, and for one level below that, “extra” nodes may exist, as far left as possible 7 Insertion on a Balanced POT 1. Place new element in the leftmost possible place 2. Bubbleup new node by repeatedly exchanging it with its parent if it has higher priority than its parent Restores the POT property Maintains balanced property O(log n) because an n-node balanced tree has no more than 1 + log 2 n levels 8 Deletemax on Balance POT 1. Select element at root 2. Replace root element by rightmost element at lowest level 3. Bubbledown root by repeatedly exchanging it with the larger of its children, as long as it it smaller than either Restores POT property Preserves balance O(log n) because O(1) at each step along a path of length at most log 2 n 9 Heap Data Structure Represent a balance POT by an array A n nodes represented by A[1] through A[n] A[0] not used Array holds elements level-by-level A node represented by A[k] has parent A[k/2] 10 Alternate Heap Data Structure Represent a balance POT by an array A n nodes represented by A[0] through A[n-1] A[0] is used Array holds elements level-by-level A node represented by A[k] has parent A[(k-1)/2] Children of A[k] are: A[2*k+1] A[2*k+2] 11 Bubbleup on Heap At position k: 1. If k == 1, then done 2. Otherwise, compare A[k] with A[k/2]. If A[k] is smaller, then done Otherwise swap and repeat at k/2 12 Bubbledown on Heap At positition k 1. If 2k > n, then done 2. Otherwise, see if A[k] is smaller than A[2k] or A[2k + 1] (if 2k+1 <= n) If not, done If so, swap with larger and repeat 13 Insert on heap 1. Put new element in A[n+1] 2. Add 1 to n 3. Bubbleup 14 Insert on heap – O(log n) 1. Put new element in A[n+1] O(1) 2. Add 1 to n O(1) 3. Bubbleup O(log n) 15 Deletemax on heap 1. Take A[1] 2. Replace it by A[n] 3. Subtract 1 from n 4. Bubbledown 16 Deletemax on heap – O(log n) 1. Take A[1] O(1) 2. Replace it by A[n] O(1) 3. Subtract 1 from n O(1) 4. Bubbledown O(log n) 17 Heapsort 1. Heapify array A by bubbling down A[k] for: k = n/2,n/2-1,n/2-2,…,1 in order How long does this take? Might seems like O(n log n) but really O(n) 2. Repeatedly deletemax until all are selected Priority is reverse of sorted order n O(log n) each == O(n log n) 18 Big-Oh of Heapify Bubble down n/2,n/2-1,n/2-2,…,1 in order 19 Big-Oh of Heapify Bubble down n/2,n/2-1,n/2-2,…,1 in order 20 Big-Oh of Heapify Bubble down n/2,n/2-1,n/2-2,…,1 in order 21 Big-Oh of Heapify Bubble down n/2,n/2-1,n/2-2,…,1 in order 22 Big-Oh of Heapify Number of swaps bound at each level No swaps At most one At most two 23 Big-Oh of Heapify …3210 n/2 n/4n/8n/16… (n/2)(1/2+2/4+3/8+4/16…..) 24 Big-Oh of Heapify 1/2+2/4+3/8+4/16….. 1/2+(1/4+1/4)+(1/8+1/8+1/8) +….. 1/2+ 1/4 + 1/8 + … = 1 1/4 + 1/8 +….. = 1/2 1/8 +….. = 1/4 ….. = … So, O(2*n/2) = O(n) 25 Comparators class Decreasing implements Comparator{ public int compare(Object o1, Object o2){ return ((Integer)o2).compareTo((Integer)o1); } // compare method }// class Decreasing 26 Comparators class ByLength implements Comparator{ public int compare(Object o1, Object o2){ // positive when o1 longer than o2 return ((String)o1).length() – ((String)o2).length(); } // compare method }// class Decreasing 27 Java Implementation public class Heap implements PriorityQueue{ protected Object[] heap; protected int size; protected Comparator comparator; public Heap(); // for Comparable objects public Heap(Comparator comp);// user supply 28 Using the Comparators Heap myHeap = new Heap(new Decreasing()); Heap heap2 = new Heap(new ByLength()); 29 Compare method private int compare (Object e1, Object e2) { return (comparator == null ? ((Comparable)e1).compareTo(e2) : comparator.compare(e1,e2); }//method compare 30 Constructors public Heap() { final int DEFAULT_INITIAL_CAPACITY = 11; heap = new Object[DEFAULT_INITIAL_CAPACITY]; }// default constructor public Heap (Comparator comp) { this(); comparator = comp; }// constructor with Comparator 31 Add Method public void add(Object element){ if (++size == heap.length) { Object[] newHeap = new Object[2*heap.length]; System.arraycopy(heap,0,newHeap,0,size); heap = newHeap; } // if heap[size-1] = element; bubbleUp(); }// method add 32 bubbleUp method protected void bubbleUp() { int child = size –1, parent; Object temp; while (child >0) { parent = (child-1)/2; if (compare(heap[parent],heap[child]) <= 0) break; temp = heap[parent]; heap[parent] = heap[child]; heap[child] = temp; child = parent; }//while }method bubbleUp 33 getMin method public Object getMin() { if (size == 0) throw new NoSuchElementException( “Priority Queue Empty”); return heap[0]; }//method getMin 34 removeMin method public Object removeMin() { if (size == 0) throw new NoSuchElementException( “Priority Queue Empty”); Object minElement = heap[0]; heap[0] = heap[size –1]; heap[--size] = minElement;// helps for heapsort bubbleDown(0); return minElement; }//method removeMin 35 bubbleDown method public bubbleDown(int start) { int parent = start, child = 2 * parent +1; Object temp; while (child < size) { if (child < size-1 && compare(heap[child],heap[child+1])>0) child++; if (compare(heap[parent],heap[child] <= 0) break; temp = heap[child]; heap[child] = heap[parent]; heap[parent] = temp; parent = child; child = 2 * parent + 1; }// while }//method getMin 36 Heapsort 1. Heapify array A by bubbling down A[k] for: k = n/2,n/2-1,n/2-2,…,1 in order How long does this take? Might seems like O(n log n) but really O(n) 1. Repeatedly deletemax until all are selected Priority is reverse of sorted order n O(log n) each == O(n log n) 37 Heapsort public void heapSort(Object[] a) { Object temp; int length = a.length, i; heap = a; size = length; for ( i = length/2; i>=0;i--) bubbleDown(i); while (size > 0) removeMin(); for (i = 0 ; i Similar presentations © 2017 SlidePlayer.com Inc.
http://slideplayer.com/slide/4035052/
CC-MAIN-2017-04
en
refinedweb
CodePlexProject Hosting for Open Source Software Hi, I have 3 different roles (Administrators, Editors and Users) but I only want the pages widget to appear for 2 of them (Administrators and Editors). Is ir possible to hide this widget from Users and people who are not logged in? Thanks, Phil You can use the System.Security.Principal namespace's objects and methods to detect what type of role the current user is in and act accordingly, here's an example if(Page.User.IsInRole("[ROLENAME]")){ //Code which displays the widget } The only role you're going to be able to programmatically look up the name of is the AdministratorRole, I believe, which you can do using this: if (Page.User.IsInRole(BlogSettings.Instance.AdministratorRole)){ //Code which executes only upon requests from users authenticated as admins... } Are you sure you want to delete this post? You will not be able to recover it later. Are you sure you want to delete this thread? You will not be able to recover it later.
https://blogengine.codeplex.com/discussions/217162
CC-MAIN-2017-04
en
refinedweb
Now Only Llano Journal The 50 ¢ Vol. 10 No. 37 COUNTY Wednesday, March 6, 2013 Llano, Texas Red Top Jail funds in jeopardy BY LYN ODOM HIGHLAND LAKES NEWSPAPERS Lake Country Life: A topaz hunter’s dream Inside State funds to renovate the Llano Red Top Jail may disappear July 1 if foundation work isn’t completed on time, the city council heard Monday night. “Masons started working today and there is a problem with asbestos,” Community Development Director Doris Messer said. “We need to get the underpinnings done or there will be no more grant money after July 1. “Zack Webb with Sparks Engineering is optimistic we will be finished by May, but we could lose money if we don’t get it done on time. Sparks understands the timing and Frank (Rowell) has done a substantial amount of work he needs to get paid for.” The Texas Historical Commission awarded Llano a $25,000 Texas Preservation Trust Fund grant in 2011 to use toward stabilizing the foundation of the jail. The grant requires a bimonthly review, which is due to show expenditures of $29,672.50, the balance expected by the THC, by the end of April. The grant is a 50 percent reimbursement, meaning $50,000 must be spent on the jail project and proven before Friends of the Llano Red Top Jail can be reimbursed the $25,000 grant monies. In order to fulfill the grant’s requests, the city must spend $29,672.50 for foundation repair within the next two months to qualify for the $25,000 award. Messer said they are at risk of losing the reimbursement if they don’t meet the deadlines of the THC. There was no action needed on Messer’s report. Friends of the Llano Red Top Jail attended the city council meeting Monday to update the council on grant monies and to discuss salvaging materials no City, school races draw opponents 2nd time farmers lose water BY JAMES WALKER HIGHLAND LAKES NEWSPAPERS BY LYN ODOM HIGHLAND LAKES NEWSPAPERS Sports: Ladies take second in Blanco Page 1B Kingsland: Wind can change a name Page 2A Seats are being contested on the Llano school board, Llano City Council and Sunrise Beach City Council races to be decided in May. Llano school board candidates include Cody Fly running for Place 2 and Paul Hull and Trevor Dupuy challenging Place 6 incumbent Lyn Jenkins. Letitia McCasland is challenging school board president Ronnie Rudd, who represents Place 7. Current Llano city council members whose terms end in May include Sherry Simpson and Dustin McLeod, as well as Mayor Mike Reagor. Reagor is seeking reelection and for the upcoming term is being challenged by Mikel Virdell. Councilwoman Kelli Tudyk, who resigned during the Feb. 19 meeting, and whose term expires May of 2014, has drawn three candidates, so far. Allan Hopson, Rich Staley and Cheryl Crabtree have filed to run. The filing deadline is March 7. Other Llano residents Elections ... see Page 8A Judge Turns Dealer STAFF PHOTO BY LYN ODOM Usually a judge but Saturday night a black-jack dealer, Gil Jones, seated, was watched over by chamber directors Don Bynam, left, and Bill Biesik at Saturday night’s Lake Buchanan Inks Lake Chamber of Commerce Casino Night fundraiser held at the Hill Country Hall in Buchanan Dam. 2 A.M. Find us on Facebook Stay up to date on the latest news, events and sports in Llano County by becoming a fan of The Llano County Journal page on Facebook. This is just another way for you to get your daily news alerts and stay in the know. So, search for Llano County Journal on Facebook today or visit our webpage and click on the Facebook link. We hope to see you soon. or follow us on Twitter @LlanoJournal Residents of an RV Park in the shadow of the US 281 bridge demolition are trying to escape choking clouds of concrete dust Highland Lakes cities, residents and business owners apprehensive about what another possible parched summer might bring now know that most of the water in the lakes Buchanan and Travis reservoirs, whatever the amount, won’t be sent downstream for rice farmers to use in flooding their fields to kill weeds. When the clock hit 11:59 p.m. March 1 with just under 823,000 acre feet of water in the two reservoirs, it assured that water from the lakes will be cut off for most of the farmers for the second year in a row. The fact that there was less than 850,000 acre feet in the two lakes at the March 1 deadline was the trigger point in an emergency drought relief order requested by the Lower Colorado River Authority and approved by the Texas Commission on Environmental Quality on Feb. 13. Several Burnet and Llano County communities, including Spicewood Water ... see Page 8A RV residents fleeing dust BY ADAM TROXTELL HIGHLAND LAKES NEWSPAPERS Jail ... see Page 8A billowing from the project that is making life difficult for them, especially those who are elderly or suffer with respiratory problems. “We’ve got people moving out left and right,” said RV Bridge... see Page 3A Reader’s Choice results coming This is not a story about how the dog ate our homework. But we’re going to be a little late delivering the 2014 Readers’ Choice Awards magazine with your newspaper. We planned to publish all the award results along with short takes on the lives of many of the winners and advertising information and offers from some of our community’s leading merchants 2013 in February. But we’ve had such an upbeat and enthusiastic response from winning businesses, it’s taken us a little longer than we planned to make sure this year’s issue is as complete as we can make it – the best ever, in fact. Look for all the results in our 14th Anniversary Edition of the Readers’ Choice Magazine the week of March 17. ~The Publisher Llano County Journal READERS’ CHOICE AWARD WINNER 14th Anniversary PHOTO BY TOM SUAREZ Pfulgerville resident Timothy Weicht, center, is removed from his vehicle by Llano County Sheriffs Deputies Marke Burke, left, and Buck Boswell, right, following a 75 mile chase Tuesday night. Tire shot out to end chase BY ALEXANDRIA RANDOLPH HIGHLAND LAKES NEWSPAPERS A high-speed chase Feb. 26th led officers from Bertram PD, Burnet PD and Llano County Sheriffs Department on a 75 mile pursuit on TX 29 ending in Mason after lawmen shot out a tire of a car driven by a Pflugerville man. Timothy Allen Weicht, 49, was arrested on charges of evading arrest, reck- less driving, possession of drug paraphernalia and driving with an invalid license. He was booked in the Burnet County Jail at 10:30 p.m. According to Llano County Sheriff Bill Blackburn, the chase began in Bertram between 8:30 and 9 p.m. and headed west on TX 29 through Burnet and Llano, ending in Mason. Blackburn said that Llano County deputies joined the chase between 9 and 9:30 p.m. after the driver had sped through the city of Llano at roughly 80 miles per hour in a white Honda Civic. Speeds during the chase ranged from 50 to 100 miles per hour, and the average speed was around 80 miles per hour, Blackburn said. Blackburn said the chase began when a BerChase ... see Page 8A Page 2A Wednesday, March 6, 2013 Didn’t know I had that Wind Can Change a Name photo by nola hopkins Kingsland Health and Fitness Center lost the “s” from its sign during last week’s high winds, resulting in this humorous message. Some areas experienced down trees and minor wind damage, officials reported. HCM offers diabetes seminar The Hill Country Memorial Wellness Center will conduct a Be Well with Diabetes program from 9 a.m.-noon Thursday, March 7 at the HCM Wellness Center’s conference room in Fredericksburg. Designed to help those with diabetes manage the disease, the self-management seminar is recognized by the American Diabetic Association and is recommended for anyone with diabetes or for those caring for someone with diabetes. This seminar will provide diabetics the knowledge and skills necessary to help control blood sugars and to keep their bodies healthy. Attendees will have the opportunity to develop an individualization self-care plan. Spouses and significant others are encouraged to attend for no extra charge. For more information on all diabetes-related care issues contact Patsy Glasscock, RN, CDE at 830-997-1357 or Kim Thornton, RDLD, CDE at 830.997.1355. The cost of the seminar is covered by most insurance companies and by Medicare. A doctor’s order is required for insurance coverage. Space is limited and registration is required by calling the HCM Wellness Center at 830.997.1355. Seminar instructor Patsy Glasscock is a registered nurse and a certified diabetes educator. This seminar will cover foot care, dental care, sick day care, travel considerations, glucose-monitoring-asa-diabetes management tool and the role of exer cise and activity in daily diabetes management. Library to host gardener program The Kingsland Library will host the Master Gardener free program, “All You Ever Wanted To Know About Tomatoes/Spring Gardening” on Wednesday March 13 at noon. This is a Green Thumb Program by the Highland Lakes Master Gardners. For more information call 325.388.8849. You could fill a library with all of the things I’ve yet to learn. You could probably fill several libraries if you included subjects like: saying the right thing at the right time, understanding women, or how not to lose one’s hair. Every day is a lesson and the wisest people seem to listen more than they talk. I’m still working on this. So, there I was, sitting in the office of an ENT (ear, nose, and throat doctor) to have my hearing checked. I’d developed a ringing in my ears, which is quite common for dentists and those of us who grew up shooting at dove and deer with no ear protection. Next thing I knew, we were discussing my chronic sinus issues, my sleeping habits, and my undiagnosed ADHD. I left his office with more problems than when I walked in. I felt like a dental patient. After a week or so of nose spray and a schedule of antibiotics, I found myself sleeping better. The bags under my eyes had dissipated and the ringing in my ears was reduced. Unfortunately, my attention span is most likely forever locked Chip Parrish, DDS in the ADHD mode of a six-year-old. SQUIRREL! (This is an Up reference and joke on being easily distracted.) This same scenario occurs quite frequently in our dental office. We routinely spend a good part of each day addressing health problems outside of the mouth. In school, we learned to fix teeth. In practice, we’ve learned to fix people and a lot of dental related problems they never knew they had. The following are a few areas that modern dentistry can help with that might not have occurred to most of us: Headaches/Chronic Pain/TMD (Jaw Joint Pain) – Every week, we see a handful of patients that have been misdiagnosed with some type of pain disorder (fibromyalgia, migraines, etc.) that is actually related to their bite or the way that their jaw works. Many of these patients are taking strong medications that could be avoided with proper dental treatments. Most don’t even know that a dentist could help them. A few years back, we didn’t know we could help them, either. Sleep Apnea /Airway – We are constantly seeing studies about how airway and obstructive sleep apnea (OSA) are serious threats to health. For mild to moderate OSA cases, your dentist can make an appliance that many patients find is more comfortable than that CPAP or fighter pilot mask. I know this because I wear one to help me (and my wife) sleep better each night. The Fountain of Youth – Okay, I am exaggerating a bit…your dentist cannot make you young again. The truth is that we can often make you LOOK younger. Whiter teeth, a cosmetic denture, fuller lips, straighter teeth, less wrinkles around your mouth…these are all things that a dentist can improve. The most fun we have is when we make patients feel good about themselves. Until next week, keep smiling. Questions or comments can be sent to Drs. Parrish at. com. Genealogical Society to meet March 12 Members of the Kingsland/Highland Lakes Genealogical Society will find ”Tips and Tricks for Using the Computer for Genealogical Research” at their next meeting. The session will be Tuesday, March 12, at 2 p.m. at the Kingsland Branch Library, 125 W. Polk. Guest speaker will be researcher John Hemmeken, and visitors are welcome at the program free of charge. The Kingsland group meets the second Tuesday of each month at the library, where members help maintain a large collection of genealogical materials, and also offer Reference Room assistance each Wednesday. For further information, contact Shirley Shaw, 830.385.7070, or Raye Lokey at 830.613.1577. Wednesday, March 6, 2013 Page 3A Llano County Journal News Public poses questions on bridge implosion BY ADAM TROXTELL HIGHLAND LAKES NEWSPAPERS Representatives from the Texas Department of Transportation and its general contractor met with the Marble Falls Daybreak Rotary Club Tuesday morning in the first of two meetings to allay resident concerns about the use of explosives in the US 281 bridge demolition later this month. The presentation included the safety measures TxDOT and general contractor Archer Western will implement to ensure the demolition goes off without a hitch. Questions from Rotarians and non-Rotarians attending the 7 a.m. meeting at River City Grille were also addressed. “I’m not saying things don’t go wrong,” Eric Hiemke, Archer Western project manager, said. “But, I believe we have put the proper safeguards in place.” Howard Lyons, TxDOT area engineer, led the presentation while input was provided by Eric Hiemke and Mike Wood, Archer Western project manager and project superintendent, respectively, on the bridge project. No one representing Omega Demolition Corporation, the subcontractor that will be conducting the implosion, was present. “They reassured me this isn’t their first rodeo in blowing up a bridge,” Daybreak Rotary Club President Jim Warden said. Both meetings open to the public were held after residents of Gateway Park, which sits adjacent to the bridge on the southeast side, requested the city host one with TxDOT, Archer Western, and Omega present so they could ask questions regarding the safety of their homes and properties. Gateway residents were present at the Rotary meeting. One concern the residents have is over any increase in traffic within Gateway when RV Bridge From Page 1A.” Buchanan said of the 49 spaces in the park, 44 have trailers in them that have between two to four people in each. Buchanan said she knows of three children in the park between three and nine years old who are struggling to deal with asthma. However, Bobby Butterfield, who has lived in a trailer at the RV park for four years, said he had not suffered from prior respiratory illnesses when he went to Texas Hills Urgent Care Center in Marble Falls on Sunday because breathing had become difficult. “The doctor told me I had an upper respiratory infection and that it is from the dust in the bridge,” he said. “I know they have a job to do, but people are getting sick, and it just shouldn’t happen.” Meanwhile, Marble Falls police cited workers for violating city noise ordinances after residents of the park complained construction extended deep into the night. Kelli Reyna, public information officer for the Texas Department of Transportation (TxDOT) that is overseeing the bridge project, said the bridge is scheduled to be imploded. Lyons said areas will be appointed for the public to watch the implosion so people won’t wander around looking for the best view and that Gateway residents can work with the city to set up any barriers. “We’ll have Lakeside Pavilion open as a place we’d like people to go and watch the demolition,” Lyons said. “We���re going to do a walkthrough, and we’ll have police officers controlling traffic in the area,” Wood said. “If anyone is in that zone, they will be pulled out.” It is anticipated vehicle traffic will be stopped for between 10 and 30 minutes around the time of detonation, while boat traffic will be stopped four hours prior to and 24 hours after the explosion, during which time enough debris will be removed from the lake to create a channel lake traffic can travel through. The entire bridge truss that will fall into the lake after being imploded will take about six days to extract, according to TxDOT’s schedule. “It will take about 10 minutes to check the demolition was complete, to make sure all the set charges were spent, and to make sure no damage was done to the existing [new bridge],” Lyons said. “We shouldn’t be generating the same kind of vibration as they get by blasting in the Huber mine (in southern Marble Falls).” Hiemke said no damage is expected to occur to the new bridge based on the type of explosives that are being used. “Any debris that might blow into the new bridge structure would not be at sufficient force to do any damage,” he said. Another presentation was given to the Marble Falls City Council Tuesday night at 6 p.m. during their general meeting at City Hall. the agency and general contractor Archer Western are aware of the complaints from the RV park. She said while the claims that the construction dust is causing health problems are not being discounted, research done to monitor effects of concrete dust on workers shows there may be more contributing to the breathing problems. “We have consulted OSHA, U.S. Health Services, and medical journals, and what we have gathered is that to suffer even acute issues, a person would have to have three months or more of intense exposure,” Reyna said. “While we don’t want to dismiss the claims, we believe there may be more at play. If someone has asthma or respiratory issues and there is a strong wind that blows in a lot of dust that would exacerbate the problem.” However, that seems to be exactly what is happening to the park. Even on days with little or no wind, like Sunday when Butterfield went to Urgent Care. On its website, OSHA reports exposure to cement dust can irritate eyes, nose, throat and the upper respiratory system. Reyna said air quality monitors are on both sides of the bridge and “no health related issues have been reported” from Omega workers. Reyna said the only thing construction crews can do is to water down dust piles, which they have been doing throughout the demolition process. However, Buchanan and other observers say large mounds of concrete and debris seem to only be watered down erratically and with little effect. Dust is not the only problem facing RV park residents, as construction workers were given a citation for violating the city’s noise ordinance. Marble Falls Police Capt. Glenn Hanson confirmed a citation was issued for the construction noise after RV park residents complained. The Marble Falls city code states construction work can be cited for a noise nuisance if it happens outside of the appointed hours, 7 a.m. to 8 p.m. “We’ve received several complaints and asked that they knock off the construction except between the hours of 7 a.m. and 8 p.m.,” Hanson said. The citation carries a fine of up to $500, if convicted. Reyna said the removal of the concrete deck, which has been causing the issues for RV park residents, is scheduled to be complete this week. Butterfield is one of about a half-dozen people nearest the bridge who have been asked to move their dwellings when the bridge truss is imploded, tentatively scheduled early Sunday, March 17. Earlier, Reyna had said contractor Archer Western would pay for hotel rooms at the Hampton Inn & Suites across the lake in Marble Falls if they were displaced for about 24 hours or more. Now that the time of displacement has been dialed back to a few hours, she said they may be on their own now. Butterfield said he only knew he would have to move around the time of explosion, but was not told how long it Hochheim Prairie Farm Mutual Insurance/Branch 84 donates to Llano Senior Center. Standing, in back from left to right are branch director J. Elliott Pancoast, Jill Tate, Gwen Brumm (Meals on Wheels director) and Meals on Wheels volunteer Vera Ellis. Seated at the table are, from left, branch director Shirleyan Patterson and Meals on Wheels volunteer Hattie Sagebiel. Hochheim Insurance helps Meals on Wheels The Llano Senior Center has been serving Llano since 1968. That’s 45 years of assisting the citizens of Llano. Hard economic times affect each of us and our Meals on Wheels program is no exception, center director Gwen Brumm says. That’s why this year branch members and directors of Hochheim Insurance donated $2,000 to Llano’s Meals on Wheels Program. Hochheim Insurance gives back to their communities. For each policy issued, $5 is returned to the community. Each donation helps this program continue to provide food and human contact for individuals that can sometimes “slip through the cracks.” would be when he, other residents at the RV park, and Manager Mike Warren met with officials in charge of the bridge construction over a month ago. TxDOT officials have said they will communicate details about the explosion and how and when it will impact RV park residents through Warren. Although Butterfield’s trailer is within 300 feet of the bridge, he said Tuesday afternoon he has not received any written information on TxDOT’s plans such as the list of common questions and answers distributed by the agency at a Rotary Club meeting Tuesday morning. “Our organization was happy to help with this need,” said Branch 84 President Jackie Hatfield. Joey Carney Agency Principal P&C Agent Life Agent Do you know someone who snores? Do you hate your CPAP? This could be a sign of sleep apnea. People with sleep apnea are: • 4 times more likely to have a heart attack • twice as likely to die in their sleep • 7 times more likely to have an automobile accident We have the Allergy Answer! • more likely to have sexual impotence • Allergy Skin Testing & Treatment • more likely to develop diabetes • at increased risk of a stroke • Friendly and Knowledgable Staff • Most Insurances Accepted Saturday, March 9 | 7:30 PM | $15 (830) 693-9127 Lantex Theatre • 325-247-5354 COURTESY PHOTO The Eighth Annual Vernon West Memorial Roping Classic will be at the Llano Events Center Friday, Saturday and Sunday, March 8, 9 and 10, showcasing the talents of calf ropers and barrel racers ages 19 and under. The barrel exhibition begins at 3 p.m. Friday and the weekend’s events honor Jim Bob Altizer, Bud Smith and Vernon West of Del Rio and Mack Yates of Cherokee – all Texas cowboys now deceased. Sherry Ingham of Sonora is Altizer’s daughter who said the four men were the best of friends. “My dad, Jim Bob, was the Professional Rodeo Cowboys Association World Champion calf roper in 1959 and steer roper in 1967 and 1997,” she said. He, Bud, Vernon and Mack were all calf ropers, all loved the sport, loved to watch a good calf roping, loved children, were all ranchers, loved to watch a good match and were really good friends.” West’s grandson Ryan and his father Bud are the men who launched the roping classic in 2008, and on Saturday beginning at 9 a.m. girls breakaway roping and boys tie-down roping is scheduled. Also on Saturday a match junior roping competition between Sam Powers of Sonora and Kody Mahaffey of Sweetwater is scheduled and honors Ray Wharton, who was another friend of the men and is still living in Bandera. Wharton was the 1957 PCRA Champion calf roper. For more information, call Bud West at 325.280.7971. • at a 40% great risk for having depression Llano Country Opry FROM HEE HAW & THE PORTER WAGONER SHOW BUCK TRENT WITH BRANSON’S FINEST VOCALIST KENNY PARROT BY LYN ODOM HIGHLAND LAKES NEWSPAPERS TIRED OF SNEEZING AND COUGHING?!? Call for more information or to set up an appointment Presents Roping classic this weekend Starting March 8th Open Fridays instead of Wednesdays Dr. Gar y Albertson Mustang Plaza • 503 FM 1431, Suite 201 Marble Falls, Texas 78654 (Corner of 1431 and Ave. E) We have your simple Sleep Apnea solution Ask us about our oral appliance instead of CPAP Bee Cave Dental Center (512) 263-3330 Corinne R. Scalzitti, D.M.D., M.A.G.D. 3900 RR 620 South Austin, Texas 78738 Page 4A Llano County Journal Wednesday, March 6, 2013 Obituaries Gail Hopson Smith Feb. 25, 1945 ~ Feb. 26, 2013 Gail Hopson Smith, a lifetime resident of Llano passed away on Tuesday, Feb. 26 after a 15-year battle with carcinoid cancer. Gail was born on Feb. 25, 1945 to Cecil and Dorothy Hopson. She was captain of the school basketball team, head cheerleader, voted Ms. LHS and graduated from Llano High School. She was later inducted in to the LHS Sports Hall of Fame for basketball. Gail met and married Pete Smith on April 16, 1966 and raised three kids. She received an RN degree and worked for the Llano Hospital for many years and was dedicated to raising Smith awareness about carcinoid cancer. She loved traveling, slot machines and attending the PBR National Finals Rodeo in Las Vegas every year. Most of all she loved watching her grandchildren’s school and sports activities. Survivors include her husband of 47 years, Pete Smith of Llano; sister, Shirley Appell of Llano; brother, Don Ray Hopson of Trinity, Texas; son, Pete Smith and wife Annalisa of Llano; daughters, Teresa Smith and Travis Simons of Liberty Hill, Texas, and Sharla Smith and Casey Mosier of Llano; grandchildren Ty Smith, Laramie Smith, Reilly Martinez, Cecelia Overstreet, Riley Mosier, Landry Lancaster, Brooke and Kristina Simons. She was preceded in death by her mother, father and her beloved “Granny,” who raised her. Visitation was held Thursday, Feb. 28 from 5 – 7 p.m. at the Waldrope Hatfield Hawthorne Funeral Home. Graveside services were held Friday, March 1 at Six Mile Cemetery at 3 p.m. with Reverend Jerald Moore officiating. In lieu of flowers, memorials may be made to the Carcinoid Cancer Foundation, 333 Mamaroneck Avenue # 492 White Plains, New York 10605. Funeral arrangements made under the direction of Waldrope-Hatfield-Hawthorne Funeral Homes, Inc. Llano. E-mail condolences may be sent to whhfuneral1@verizon. net or you may log onto for online condolences. Kimberly Kay Williams July 23, 1960. ~ Feb. 28, 2013 Kimberly (Kym) Kay Townsend Williams, 52, of Buchanan Dam passed away on Feb. 28 in Buchanan Dam. Funeral services were at 2 p.m. on Monday, March 4 at Wylie Baptist Church. Brother John Fanning of View Baptist Church officiated. Burial was at 4 p.m. at the Lakeview Cemetery in Winters, Texas. The family received friends from 5 to 6 p.m. on Sunday at The Hamil Family Funeral Home, 6449 Buffalo Gap Road. There will also be a celebration of Kym’s life on Wednesday, March 6 at 4 p.m. at the Chapel of the Hills Baptist Church in Buchanan Dam. Pastor Steve Leftwich will Williams officiate. Kym was born in Abilene, Texas to Evelyn and Irvin Townsend on July 23, 1960. She attended school at Wylie before graduating from TSTC Waco in 1980. Kym worked as a correctional officer for the Burnet County Jail for 3 years. Kym was preceded in death by her father Irv Townsend and her grandparents B.R. Thetford and Francis Williams Thetford. Kym is survived by her mother, Evelyn Townsend; brother Dewey Townsend and his wife Traci; sister Kelly and husband John Collums; nieces and nephews Marcos Reyes, Ali Dockter, Taylor and Cole Townsend, Maddie, Jake, and Gus Davis. She is also survived by many beloved aunts, uncles, and cousins, as well as many dear friends, including her dear friend Ed, who has been a great help to her. Pallbearers were Vance Long, Mike Bounds, Marcos Reyes, John Collums, Gary Linn, Troy Cooper and Dale Doby. In lieu of flowers, the family asks that memorials be given to the Wylie Baptist Church Fund, 6097 Buffalo Gap Road, Abilene, TX 79606; Rescue the Animals, SPCA, 5933 South 1st St., Abilene, TX 79605; or Chapel of the Hills Baptist Church Food Pantry at 19135 TX-29, Buchanan Dam, TX 78609. The guestbook may be signed and condolences submitted online at:. Kathleen ‘Kathy’ Henderson Feb. 21, 1936 ~ Feb. 13, 2013 Kathleen “Kathy” Henderson, 76, of Highland Haven passed away Feb. 13. She was born to Deforest and Ila (Arnet) Ramsey on Feb. 21, 1936 in Westwood, Calif. Kathy married John Shelly in 1951 and they had four In Loving Memory Robert (Bob) Holland 2-11-36 / 2-6-2012 A memorial luncheon was held in Roberts memory by Patricia Holland (wife) at their home on Saturday February 9, 2013. A good fellowship was had by those attending: Merlene Holland, Sallye Baker, Joan and Gene Wright, Mary Lusinger, Ronnie Powell and Bro Max Copeland who was ill and could not attend. Robert you are missed each and everyday. Not anything could take your place. Robert was a wonderful man caring and thoughtful of everyone. I know you are in heaven a peaceful place no pain or worries and in Gods hands. We truly miss you Robert our Love, Patricia Holland, other family members and Friends. In Memory of Roberts birthday February 11, 2013 Patricia Holland and Patti Benker took a basket of flowers to Honey Creek Cemetery, and released balloons for his birthday. An Earth bound Angel whose now in Heaven. children. They divorced and she married James “Jim” Henderson in 1983 and moved to Marble Falls. He passed away in 1997. She is also preceded in death by her parents and a great-grandson, Brandon Shelly. Kathy was a homemaker, charter member of the Highland Singles Club and she was one of the last three charter members, enjoyed bowling leagues, flower gardening, traveling and dancing. She is survived by her companion, Jay Fullmer of Marble Falls; children, John A. and Diane Shelly of Nevada, Ronald W. Shelly of Sheridan, Wyo., Rick A. and Patricia Shelly of Bastrop, Texas, Cynthia M. Dilworth of Maricopa, Ariz.; five grandchildren and 11great-grandchildren. Kathy’s home was always open and her grandchildren remember playing lots of games with her and that she was always there for them. Celebration of life services were held Saturday, March 2, at 1 p.m. at the Putnam Funeral Home. An online guest register may be signed at. There will be a potluck dinner at Kathy and Jays home following the service. Local funeral arrangements made by Putnam Funeral Home, Kingsland. Murray Ellis Feb. 22, 1929 ~ Feb. 25, 2013 Murray Ellis, 84, a lifetime resident of Llano, passed away Monday, Feb. 25. He was born Feb. 22, 1929 in Kingsland to Mattie Estella (Masters) Ellis and Allen Lee Ellis. Murray was a Veteran of the U.S. Marines. He was a member of the Llano Volunteer Fire Department, served as a Deputy Sheriff for Llano County and retired from the Central Texas Electric Co-op. Murray loved fishing, hunting, working on farm equipment, feeding and watching the wildlife, but most of all he loved the company of Ellis his family and friends. Survivors include his daughter, Nickey Lawrence and husband John of Liberty Hill, Texas; son, Mack Ellis and wife Lynn of Liberty Hill; grandchildren, Mark Decker, Kelsey Ellis and Michael Ellis; and one greatgrandchild, Kaylee Ann White. He was preceded in death by his parents. Visitation will be held Friday, March 1 from 6 – 8 p.m. at the Waldrope Hatfield Hawthorne Funeral Home in Llano. Graveside services were held Saturday, March 2 at 11 a.m. at the Llano City Cemetery with Reverend Richard Vandventer officiating. Military Honors will be provided by the Highland Lakes Honor Guard. In lieu of flowers, memorials may be made to the American Heart Association, 7272 Greenville Avenue, Dallas, Texas 75231. Funeral arrangements made under the direction of Waldrope-Hatfield-Hawthorne Funeral Homes, Inc. Llano. E-mail condolences may be sent to whhfuneral1@ verizon.net or you may log onto. com for online condolences. Nicole Andrea Columbus Aug. 17, 1973 ~ Feb. 23, 2013 Nicole Andrea Columbus, 39 of Dallas, formerly of San Angelo, passed away Saturday, Feb. 23. She was born Aug. 17, 1973 in London, Ontario, Canada to Barbara (Weaver) Columbus and Donald Columbus. Nicole graduated from Central High School in San Angelo. She earned a Bachelor’s of Science degree and a Bachelor of Arts degree from Angelo State University. Nicole received a Master’s degree in Science specializing in Kinesiology from the University of North Texas in Denton. She worked for Texas Instruments as their wellness Columbus program director. She then was employed as a Drug Representative with Pfizer Pharmaceuticals, in Dallas. Survivors include her mother, Barbara Columbus of San Angelo; father, Dr. Donald Columbus and wife Debby of Horseshoe Bay; and brother, Benjamin Donald Columbus and wife Felicia of Austin; step-sisters, Rachel Johnson of Andrews, Texas, Mackenzie Tuckness of Junction; and step-brother, Marcus Morriss of Dallas. Memorial services were Friday, March 1 at 11 a.m. at the Waldrope Hatfield Hawthorne Funeral Home Chapel in Llano with family friend Dan Wilson and Reverend James Sanderson officiating. In lieu of flowers, memorials may be made to the Highland Lakes Pregnancy Resource Center, P.O. Box 1524 Kingsland, Tex- as, 78639. Cremation arrangements made under the direction of Waldrope-Hatfield-Hawthorne Funeral Homes, Inc. Llano. E-mail condolences may be sent to whhfuneral1@verizon.net or you may log onto for online condolences. Vaughn D. Sanders Feb. 14, 1928 ~ March 2, 2013 Vaughn D. Sanders passed away peacefully in Llano, Texas, on March 2 at the age of 85 after a lengthy battle with cancer. He was born Feb. 14, 1928 in Milam County, Texas, to B. N. Sanders and Velma Emil Bownds Sanders. He was preceded in death by his parents and his brother, Ed Sanders. He is survived by his wife of 63 years Pat Sanders; daughter Jeanie Larremore and husband Gary; son Mike Sanders and wife Janet; sisters Clara Juricek, Mary Barnett and husband Prentice; sister-in-law Camilla Sanders; Sanders brother and sister-in-law Don and Nancy Godwin; brother-in-law Jack Churchill and wife Louraine; grandchildren Will Larremore and wife Justine, Bart Larremore, Lesa Scott and husband Warren, Natalie Sanders, Geoffrey Sanders, Audrey Wohlrab and husband Jay; four great-grandchildren, and numerous nieces and nephews. Vaughn graduated from Taylor High School and married Patricia Fay Godwin on Sept. 2, 1949 in Austin. They became proud parents of two children, Jeanie and Mike. Vaughn spent most of his life in the floor and window covering business, but his greatest work and love was raising and enjoying his family. He also enjoyed ranching with all classes of livestock and raising horses. He was an avid hunter and also liked to fish. Vaughn was a member of the Cherokee Baptist Church, and also served willingly in the communities where he lived on the school board, chamber of commerce, and other organizations. He was very active in the Masonic Lodge and had been a Master Mason for 38 years, serving as Master of the Lodge in both San Gabriel Lodge No. 89, and in Llano Lodge No. 242, where he also served as District Deputy Grand Master of that district. Vaughn (or Big Daddy to the grandchildren) will be greatly missed by his family and friends, but he leaves us with a wonderful legacy and many good memories. Graveside Services will be held on Wednesday, March 6 at 11 a.m. in the I.O.O.F Cemetery in Georgetown, Texas with Craig Churchill officiating. Pallbearers will be Mike Sanders, Gary Larremore, Will Larremore, Bart Larremore, Geoffrey Sanders, and Shawn Sanders. Serving as Honorary Pallbearers will be Bill Jenkins, Rowe Caldwell, Bill Edmiston, and Sam Center. In lieu of flowers, memorial contributions may be made to the Llano County 4-H Scholarship Fund, Cherokee Baptist Church, or the Llano County Library. Funeral arrangements made under the direction of Waldrope-Hatfield-Hawthorne Funeral Homes, Inc. Llano. E-mail condolences may be sent to whhfuneral1@verizon.net or you may log onto for online condolences. Robert Leander Robertson Robert Leander Robertson, 92, of Horseshoe Bay, passed away Friday, March 1. Robert was born in Waller County on a farm/ranch to Robert Moore Robertson and Jenny Gail (Betka) Robertson. He was a veteran of the U.S. Army. Robert retired from Exxon after 42 years and he and his wife, Kate, moved to the hill country where he resided for 34 years. After 32 years of marriage his wife passed away in 1996. Robert was a big member of the Blue Lake Golf Association. He had numerous friends all over the state of Texas. Robertson Survivors include his brother, James Calvin Robertson of Taft, Texas; son, Thomas Eugene Robertson and wife Linda of Kentfield, Calif.; daughter, Stacey Smedick and Dennis Noey of Normangee, Texas; grandchildren: Sean Robertson of Taft, Le Anne Atwood of Humble, Texas and Michael Robertson of California; and seven great-grandchildren. Visitation was Tuesday, March 5 from 5-7 p.m. at Clements-Wilcox Funeral Home in Marble Falls. Funeral service is Wednesday, March 6 at 10 a.m. at Clements-Wilcox Chapel with Charles Waugh officiating. Graveside service will be at 3 p.m. at Coleman City Cemetery, Coleman, Texas. He was a great man and will be greatly missed by all who knew and loved him. Condolences may be offered to the family at. UNEXPECTED? Floods, drought, car-wrecks, inflation, recession, medical bills, law suits, emergencies are part of our everyday life. They are no longer an occasional occurrence. The result being that many of us can barely pay our bills. No one is immune and yet we live in the greatest country on earth. WHAT CAN YOU DO? You no longer have discretionary income CONCERNED CAPITALIST “Answers for Living” Confrontational Education & Discussion Meeting at Cross Stone Church Commerce & State St., Marble Falls - 512-228-6370 Sat., March 9nd, 11 am 2, pm, 5 pm Following Mon., Tues., Thurs., Fri. at 6 pm Wednesday, March 6, 2013 Page 5A Llano County Journal Calendar of Events Do you have an upcoming event, meeting or fundraiser? Please send information to editorial@burnetbulletin.com, newscopy@highlandernews.com, or newscopy@llanocj.com. Art & Entertainment March 9 •Concert in the Cave5:45 to 8 p.m., Longhorn Cavern, 6211 Park Rd. 4, Burnet. Information and Reservations: or 512.756.4680. March 16 •Welcoming Party at the Train Depot- 11:45 a.m., Burnet Train Depot, 401 E. Jackson, Burnet. Guest country singer Juanita. Information: 512.756.4297. March 23 •15 th Annual Hill Country Lawn & Garden Show9 a.m. to 4 p.m., Burnet Community Center, 401 E. Jackson, Burnet. Information: 512.588.0696 or www. yantislakesidegardens. com/mghome/show. March 30 •@Last Llano Art Studio Tour- A map of this year’s locations will be available at the Llano Visitors Center, 100 Train Station Dr., Llano. Information: 325.247.5354. •Family Easter on the Vineyard- noon to 4 p.m., Fall Creek Vineyards, 1820 CR. 222, Tow. Information: 325.379.5361. Service Clubs For a full list of service clubs in the area, go to w w w. h i g h l a n d e r n e w s . c o m and access the Community menu to the Civic Clubs heading. Through March 19 •Master Gardener New Member Training- Tuesday’s 1 to 5 p.m., Hoppe Room Courthouse Annex, 101 E. Cypress, Johnson City. Information: 830.868.7167. March 7 •Highland Lakes Birding and Wildflower Society Meeting- 9:30 a.m., Marble Falls Library, 101 Main St., Marble Falls. Information: 830.693.0184. March 12 •Kingsland/Highland Lakes Genealogical Society Meeting- 2 p.m., Kingsland Library, 125 W. Polk St., Kingsland. Information: 830.385.7070 or 830.613.1577. •Chamber of Commerce Winter Texan Dinner- 6 p.m., Kingsland Commu- nity Center, 3451 Rose Hill Dr., Kingsland. Information: 325.388.3321. March 13 •Highland Lakes Master Gardener free Green Thumb program- Kingsland Library, 125 W. Polk St., Kingsland. Information: 325.388.8849. •Lambda Nu Monthly Meeting- 9 a.m. to 1 p.m., Kingsland Community Center, 3451 Rose Hill Dr., Kingsland. Information: 325.388.3321. March 14 •Burnet County Republican Women Meeting- 11:30 a.m., Hidden falls Restaurant, 220 Meadowlakes Dr., Meadowlakes. Information: 830.598.1850. March 15 •H/L Flyer Set-up- Kingsland Community Center, 3451 Rose Hill Dr., Kingsland. Information: 325.388.3321. March 16 •Highland Lakes Native Plant Society of Texas Meeting- 1:30 p.m., Marble Falls Library, 101 Main St., Marble Falls. Information: 830.693.3023. Food & Fundraisers Through March 22 •Fish Fry- 5:30 to 7:30 p.m., Every Friday, St. John the Evangelist Parish Activity Center, 105 FM 1431, Marble Falls. Information: 830.265.2505. March 6 •Lunch & Learn Seminar- 11:30 a.m. to 1 p.m., Kingsland Community Park, 710 Williams St., Kingsland. Subway will be providing lunch. Topic will be Resolve Seasonal Business Challenges, Easy Operations & Marketing Tips. Please register by March 4. Information: 325.388.6211. Through March 22 •Fish Fry- 5:30 to 7 p.m., Every Friday, Our Mother of Sorrows Parish Hall, 507 Buchanan Dr., Burnet. Information: 512.756.4410. March 12 •Winter Texan Dinner- 5 p.m., Kingsland Community Center, 3451 Rose Hill Dr., Kingsland. Call to inquire food to bring. Information: 325.388.3321. Events & Meetings Through June 25 •EMT Basic CourseTuesday and Thursday, 6 to 10 p.m., Burnet Airport, 2302 S. Water, Burnet. Information: 512.553.3491 or lraiford@cityofburnet.com. Through March 26 •Non-Credit Basic Conversational French COURTESY PHOTO This cedar bench designed by John Newell of J Craft Company will be raffl ed at the Llano Master Gardener Lawn and Garden Show Lawn & Garden Show coming This free show and sale of native plants and techniques for beauty and survival in the Texas Hill Country takes place from 9 to 2 p.m. March 16 at St. James Lutheran Church, 1401 Ford St. in Llano. Come for demonstrations and informative talks to help your garden and lawn spring back this year. Call the Llano Chamber at 325.247.5354 or go to llanochamber.org for more information. Both the Texas Agrilife Class- 6 to 8 p.m., Lampasas County Higher Education Center(LCHEC), 208 E. Ave. B, Lampasas. Information: or 512.556.8226. Through March 21 •Medical Administrative Assistant Training Class5:30 to 9 p.m., Lampasas County Higher Education Center (LCHEC), 208 E. Ave. B, Lampasas. Information: or 512.556.8226. Through March 28 •A Matter of Balance class- 9:30 to 11:30 a.m., every Tuesday and Thursday in March, Woodlands Active Senior Living Community, 700 Janet St., Burnet. To Register and Information: 512.756.2145. March 7 •English as a Second Language- 10 to 11:30 a.m., Herman Brown Free Library, 100 E. Washington St., Burnet. Information: 512.715.5228. •Home School Book Extension Service and the Llano Master Gardeners sponsor the show. Speakers start At 9:15 a.m. with Keenan Fletcher speaking on Heirloom Bulbs. At 10 a.m. Larry Payne offers Tips on Building an Energy Efficient Greenhouse. At 10:45 a.m. Sheryl Smith-Rodgers’ topic is Windows on the Texas Landscapes. At 11:30 a.m., Bill Luedecke will be speaking on Getting Your Garden Ready for Spring Planting. At 12:15 p.m. Mike ReClub- 2 to 4. Landscape Irrigation Checkup Program- 5 to 7 p.m., Helping Center Garden, 1315 Broadway, Marble Falls. Information: karen48.kw@ gmail.com. March 8 •Genealogy Research Assistance- 10 a.m. to noon, Email burnetcgs@ gmail.com for appointment. Herman Brown Free Library, 100 E. Washington, Burnet. Information: 512.715.5228. BY LYN ODOM HIGHLAND LAKES NEWSPAPERS The Two-Pro Golf Management contract the city entered into in March of 2012 is near ing expiration and the city council Monday discussed whether to renew the contract or make other plans for keeping the city’s Llano River Golf Club up and running. Interim City Manager L ynda Kuder said since having Two-Pro on board, revenues from the course are less than past totals. “We contracted with them 11 months ago for a one year contract,â€? Kuder said. “They haven’t been focused on revenue or in the pro shop. They’ve had two employees there who didn’t work out and another pro startServing the Hill Country with Fast, Friendly Service Since 1945 MAGILL ing March 15 who is supposed to focus on membership. There has been a $180,000 loss for the year under their management.â€? Kuder said a clause in the Two-Pro contract allows either party to terminate the contract, with or without cause, with a 30-day notice. “If they can’t turn it around by July, they need to leave Llano they can’t help us out,â€? Kuder said. Live streaming of city council meetings will be handled in-house and will be available soon, the council decided Monday. City technical coordinator Josh Oebel said he can install video and audio equipment to record and stream council meetings that can be viewed live on the city’s website. The council rejected bid proposals from two private firms. Swagit Productions and Granicus turned in proposals, with Swagit initially charging $23,6470 with a $695 monthly maintenance fee and Granicus charging $11,188 the first year and $6,5880 per year thereafter, plus equipment, labor and “all other items required for cameras.â€? Oebel said he is ready to manage putting video himself on the website for a much lower cost than the two proposals. “It will be one perspective video and can be heard very well,â€? Obel said. Main Street Market Days! Marble Falls Saturday, March 9th 9:00am-4:00pm Arts, Crafts, Food & Live Music! Sponsor: Service Title Company WELL SERVICE Pump Sales & Service Solar Pumps • Windmills Zane Magill, Owner (325) 456-5090 2519 CR 405 • Castell 3rd Generation Est. 1945 License#4168 a.m. to 4 p.m., Between First and Fourth St. and Main, Marble Falls. Information: 830.693.4449. •Ladies Night Out- City Wide Shopping, Burnet. Information: 830.385.50023. March 11 •AARP Tax Aide- 9 a.m. to 1 p.m., Herman Brown Free Library, 100 E. Washington St., Burnet. Information: 512.715.5228. March 12 •Children’s Story Time- March 9 •Oak Haven Ministry9 a.m. to 3 p.m., Kingsland Community Center, 3451 Rose Hill Dr., Kingsland. Information: 325.388.3321. •Main St. Market Day- 9 Golf course Council Streaming goes in-house ‘not up to par’ BY LYN ODOM HIGHLAND LAKES NEWSPAPERS agor will give an Update on Llano Water Planning. Last, but not least, Dave and Inell Franks will present all we need to know about Xeriscape Landscaping and Gardening starting at 1 p.m. The Children’s Booth returns with hands on fun activities. Many plants will be for sale including herbs, vegetables, heirloom tomatoes, native plants, cacti and succulents. There will be also be raffles, door prizes, and fun. 830-693-2815 10:30 to 11:30. March 13 •Low Cost Spay/Neuter Clinic- Pet Pals, 2003 Hwy 1431, Marble Falls. Appointments and Information: 830.598.7729. March 14 •Kona Ice Ribbon Cutting- 4:30 p.m., Burnet Community Center, 401 E. Jackson, Burnet. Information: 512.756.4297. •English as a Second Language- 10 to 11:30 a.m., Herman Brown Free Library, 100 E. Washington St., Burnet. Information: 512.715.5228. •Coffee Talks- 1:30 to 4 p.m., Author Pete Smith, Herman Brown Free Library, 100 E. Washington St., Burnet. Information: 512.715.5228. March 15 •Genealogy Research Assistance- 10 a.m. to noon, Email burnetcgs@gmail.com for appointment, Herman Brown Free Library, 100 E. Washington St., Burnet. Information: 512.715.5228. •Amber Oaks HOA- 5:30 p.m., Herman Brown Free Library, 100 E. Washington St., Burnet. Information: 512.715.5228. March 15 to 17 •World Series Team Roping Qualifier- 10 a.m. Friday, 8 a.m. Saturday and Sunday, Llano Events Center, 220 RR 152, Llano. Information: 325.247.5354. •Texas High School Bass Fishing TournamentNumerous Lakes around Burnet County. Information: 830.693.2815 or bill@marblefalls.org. March 16 •Master Gardeners Lawn & Garden Show- 9 a.m. to 2 p.m., St. James Schorlemmer Hall, 1401 Ford St., Llano. Information: 325.247.5354. Fax 325.388.4356 - 877.496.6585 1006 FM 1431 (across from Wells Fargo Bank) KINGSLAND, TX B I N G O Every Sunday KY`WcaYK]bhYf HYlUbg Every Wednesday warm-up at 2:00 pm warm-up at 6:00 pm regular bingo regular bingo at 3:00 pm at 7:00 pm Doors open 1 hour before warm-ups The Bingo Hall is SMOKE FREE 6&7 0/34 VETERANS DRIVE „ -ARBLE &ALLS Good Food & Good Fun Millers Fun Run!! Paragon Casino Trips Marksville, LA - CASINO 2 nights and 3 days $85.00 per person or $135 single Dates March 13th-15th April 24th-26th May 13th-15th June 12th-14th July 11th-13th July 17th-19th Dates Bus leaves Burnet HEB parking lot 6:00am *Never Too Old to Have FUN!* August 4th-6th August 28th-30th September 25th-27th October 23rd-25th November 18th-20th December 9th-11th Hostess: Emmalea Miller • 512-525-0082 • 512-756-4008 Page 6A Llano County Journal Opinion Wednesday, March 6, 2013 Medicare statement now easier to understand The Llano County Journal is published weekly by Highland Lakes Newspapers, Inc. Periodicals postage paid at Llano, Texas 78643; USPS 025-124. Member of the Suburban Newspapers of America. Offices are located at 507 Bessemer, Suite A, in Llano, Texas 78643. POSTMASTER Send address changes to Llano County Journal P. O. Box 1000 Marble Falls, Texas 78654 Corrections The Llano County Journal will gladly correct any error found in the newspaper. To request a correction or clarification, please call 325.248.0682 and ask the editor. A correction or clarification will appear in the next available issue. Subscriptions Subscription rates for the Llano County Journal are $26 annually for addresses in Burnet and Llano counties; $36 annually in other Texas counties; and $52 annually outside of Texas within USA. Call 325.248.0682 or 830.693.4367 to order by phone. Contact us: Publisher and Editor Roy E. Bode 830.693.4367 x224 Associate Publisher Ellen Bode 830.693.4367 Executive Editor/General Manager Phil Schoch 830.693.4367 x226 Assistant Editor 830.693.4367 x222 Lyn Odom newscopy@llanocj.com 325.248.0682 FAX: 325.248.0621 Sports Editor Mark Goodson mgoodson@llanocj.com 830.693.4367x220 Retail advertising John Young advertising@llanocj.com 325.248.0682 Classified advertising classifieds@llanocj.com 325.248.0682 Business Manager Sharon Pelky 830.693.4367 x217 Circulation Audra Ratliff 830.693.4367 x216 Production advertising@burnetbulletin.com Melanie Hogan Sarah Randle Mark Persyn Eric Betancourt 830.693.4367 x218 For Video Highlights Visit: Medicare has redesigned and simplified the statement it mails to beneficiaries every three months to explain the claims and benefits they’ve recently received. The Medicare Summary Notice, as the statement is called, describes all of the health care services or supplies billed to Medicare on your account, how much of the bill Medicare paid and how much you still may owe the health care providers or suppliers. The Summary Notice goes to everyone enrolled in traditional Medicare -- private Medicare Advantage health plans send out their own claims reports to their members. The notice isn’t a bill, but you still should carefully review it when it comes in the mail. The Medicare statement helps you keep tabs on your medical care and your out-of-pocket expenses. It also helps you detect billing errors and even possible fraud. Until now, though, checking your Summary Notice hadn’t always been easy. If you had had a number of doctor visits or medical procedures during the previous three months, the statement often ran a dozen pages or more. And the form contained many medical terms, codes and abbrevia- Bob Moos Centers for Medicare & Medical Services tions that confused all but health care professionals. No wonder that some people simply ignored the Summary Notice when it arrived. In redesigning the statement, Medicare visited with beneficiaries and asked how to make the notice clearer and more useful. Consumer advocates also weighed in with suggestions. The results of the makeover will land in your mailbox soon. If you’d prefer not to wait, the new statement is already available at Medicare’s secure online service —. gov — where you can create a personal account and track your claims. Either way, you’ll find the new, consumer-friendly format, including: • A clear notice of how to check the form for important facts and potential fraud. • A snapshot of what you’ve paid toward your annual deductible, a list of the health care providers that made claims, and whether Medicare paid those claims. • Simple descriptions, in plain English, for medical procedures. • Definitions of all terms used on the statement. • Larger type to make the notice easier to read. • Information on preventive services available to you. The improved Summary Notice is meant to empower you in a couple of ways. First, you can more easily file an appeal if a claim is denied. The new statement clearly explains what to do if you disagree with a payment decision and how to get help submitting an appeal. There’s also a form attached that you can complete and mail to the address provided. Next, you can more easily spot billing mistakes or potentially fraudulent charges. Improper payments and outright fraud cost Medicare billions of dollars each year and can lead to taxpayers and beneficiaries paying more for health care. So it’s to everyone’s benefit to make sure your Summary Notice doesn’t include questionable charges. To check its accuracy, compare your claims notice with the bills, statements and receipts you’ve received from your health care providers or suppliers during the previous three months. Do the dates, billing codes and descriptions of services match? If you see an entry for services or supplies you didn’t receive, get in touch with the provider and ask about it. It may be a simple billing error the hospital or doctor’s office can correct. The correction will then show up on your next Summary Notice. If you still have questions about your Medicare statement or there’s something you and your health care provider can’t resolve, call Medicare at 1.800.633.4227. Medicare has stepped up its efforts to prevent unscrupulous providers and suppliers from filing false claims, but the best line of defense against fraud remains you – the health care consumer. Bob Moos is the Southwest regional public affairs officer for the Centers for Medicare & Medicaid Services Why Texans celebrate Texas Independence Day On March 2, 1836, some 60 Texan delegates congregated at Washington-on-the-Brazos to sign the Texas Declaration of Independence, a document which was drafted overnight. The brazen doctrine opens with a scathing denouncement of the Mexican government under the despotic rule of Antonio Lopez de Santa Anna. .” Texas was one of seven Mexican territories to revolt under the oppressive rule of Santa Anna, who cultivated the moniker, “the Napoleon of the West.” Troy Fraser State Senator Dist. 24 Texas was the only territory to eventually retain its freedom. Some theorized that Santa Anna’s cruelty against reprisals played a role in his undoing. The New York Post said at the time, .” Davy Crockett, who was already a living legend, had fallen on hard times and had his own reasons for heading west. He lost his bid for re-election to the 24th U.S. Congress in 1835 and was in need of a fresh start. Before departing his native Tennessee, Crockett was quoted as saying “You may all go to hell and I will go to Texas.” Crockett would eventually perish defending the Alamo in San Antonio at the hands of Santa Anna’s army just four days after Texas declared its independence. His manner of death and impact on the battle is widely debated. But two facts are agreed upon; he did die at the Alamo and in doing so, Crockett, along with an estimated 188 other heroes, laid down his life in the name of Texas. Their sacrifice along with those at the Goliad Massacre would serve as a rallying cry in the eventual capture PUZZLES FIND TODAY’S ANSWERS ON PAGE 7B of Santa Anna and obliteration of his army on April 21, 1836 in the Battle of San Jacinto. Led by Sam Houston, Texas’ victory over Mexico at San Jacinto was so decisive, it is still considered one of the most lopsided battles in recorded history. The Republic of Texas would last a decade before joining the U.S. as the 28th state. So we celebrate. From the Gulf Coast, to the Hill Country, to the plains of the Panhandle, we celebrate Texas Independence Day every March 2 with parties, parades, concerts and reenactments. Perhaps Crockett summed up best why Texas is worth fighting for in a letter penned less then two months before his death, “I must say as to what I have seen of Texas it is the garden spot of the world. The best land and the best prospects for health I ever saw, and I do believe it is a fortune to any man to come here.” Wednesday, March 6, 2013 Page 7A Llano County Journal Community What is the Expectation? BY TODD KEELE PRINCIPAL, LLANO JUNIOR HIGH SCHOOL I believe having “wants” is typical of every single junior high and high school student, or anyone for that matter. As a junior high and high school student I had a whole list of “wants.” I wanted to be successful in my classes. I wanted to be on the “A” team as a seventh-grade football player. I wanted to win in the athletic events I competed in. I had numerous friends and classmates that had their own list of “wants” in many different areas. However, simply wanting to (fill in the blank) is never enough. Several factors determine if those wants will actually turn into reality. Hard work, determination, planning, practicing, recovery from mistakes and or failures are all factors that can place that “want” within reach. In my experience, one factor rises above all others in the pursuit of our wants. That key factor is the expectation that is placed on the individual (or the team, the class, the school, the school district) pursuing those wants. Expectations say a lot about an individual or an organization. I was fortunate to have been coached by a group of men that had high expectations, both on and off the field. Those expectations were stated and talked about on a daily basis. The coaches’ expectations became the team’s expectations. Those team expectations became each individual player’s expectations, and the results were evident. We were very successful. We didn’t play to “not lose,” we played to win! We won because that was the expectation. As a school administrator I try to apply that same use of expectations to Lla- no Junior High School. Our expectations are simple and easy to understand. When applied on a daily basis, success follows. One of the main areas that we use expectations is in the behavior of our students. As a staff we developed expectations for our common areas (cafeteria, restrooms, hallways, etc.) and made those known to the students from day one. I believe these expectations have been effective and the reduction of discipline referrals helps support that. In the first six weeks we had 81 percent fewer referrals than the first six weeks of the previous year. In two words … expectations work. As a school district it is important that we have high expectations in all areas. One of the new expectations that we are starting is a reworking of the Advanced Academic Program. I feel that this is an important step in allowing our students to be competitive in today’s world. Another area in which expectations have and are continuing to change is in the realm of our extra-curricular activities. I believe that we are headed back in the right direction and that the expectations are there, both from the community and the staff that is in place. As we as a community pursue the various “wants” that we have for our great school district I ask that you apply another factor essential to success … patience. I heard it said once that “patience is the ability to down-shift when you want to strip your gears.” Being patient and having high expectations are essential to achieving our goals. If you ever have any questions or want to talk about the goals and expectations of Llano ISD, my door is always open. My email is tkeele@llanoisd.org and my office phone is 325.247.4659. Casino night was loads of fun Thank you to all the sponsors and participants of Cadillac Cowboy Casino Night. Our annual fundraiser was a huge hit and loads of fun. We appreciate everyone’s support of the chamber and our community. The chamber welcomes Condor Document Services to the Hill Country. Condor Document Services provides customers in the Texas Hill Country with secure document shredding. The chamber is hosting a ribbon-cutting for Condor at 5 p.m. this Thursday, March 7 at the Chamber. All are welcome. Buck Trent will be at the Llano Country Opry at 7:30 p.m. this Saturday, March 9. Tickets are $15 and are available at the Chamber/Visitor Center. Doors open at 6:30 p.m. for the show. The Chamber and Visitor Center are hosting an appreciation breakfast for the Winter Texans on at 9 a.m. Tuesday, March 12. We appreciate our Winter Texans selecting Llano as their home-away-from home and wish them all safe travels. For more information on Llano and local happenings please go to our website at, or call us at 325.247.5354, and make sure to ‘Like Us’ on Facebook. Hope to see you around town. By Patti Zinsmeyer Executive Director Llano Chamber of Commerce COURTESY PHOTO And the Winner is… Lou Moyer, left, presents the red Kitchen Aid Mixer to winner JoAnn McDugall for the Llano Woman’s Culture Club raffl e for Llano student scholarship. JoAnn’s son, Carl Stewart, bought a ticket for her at the club’s annual Home Cooked Buffet on Feb. 24, and she became the lucky winner. She is pictured in front of the chuckwagon exhibit at the Llano County Historical Museum where she is a hostess. The presentation was made by vice president of the club and event chair Lou Moyer. The buffet fund-raiser and raffl e was a big success. The club wishes to thank to all who participated in this effort to fund scholarships for worthy Llano High School students. Llano’s Special Olympians rock COURTESY PHOTO On Saturday, March 2, 2013, Llano’s Special Olympics team traveled to Texas State University to compete in the Area Basketball Competition. Over 25 teams came from the area to compete in individual skills, team-building skills, medal in their divisions. Caleb Hinton and Hannah DeVault both earned Bronze medals in their divisions. Valerie Ozanne (head coach) and Kerry Harvey (coach), for the Llano Special Olympics Team stated that all the athletes have put forth excellent effort and it has paid off. The team will immediately begin practices this week for the Track and Field Competitions to be held in April. STAFF PHOTO BY LYN ODOM Now 14, my old hound Beej is now retired and spends a great deal of time napping. The native New Yorker, who was given to me by a kid in a cattle auction parking lot, far prefers Texas to the cold East Coast. Making allowances for retired pets In an email from a pet sitting client the other day she told me of changes regarding her dogs that have been made in order to make allowances for one of them who is aging, and that got me to thinking about what we do for our senior pets. It’s not unusual for pet owners to buy softer chows and treats, stairs so the old guys can more easily get on furniture and even diapers for leaky bladders. The biggest issue I’m having right now is my old Beejhound. She is 14 and has all the old dog problems: fewer teeth, worn out hips, less balance, lumps, bumps and warts. I consider Beej retired and I’m working at making her retirement pleasant. At this age I don’t see putting her through any surgeries as long as she is basically being the dog she always has been. I know what the lumps are, but have opted, due to her busy lifestyle with four other 50-pound dogs, not to have surgery wounds on her. Children’s art competition deadline set Art Frog Art Academy will hold its Second Annual Children’s Art Competition this month. Deadline to enter is March 22 at 2 p.m. All children ages 0-18 may enter with one piece of 2 or 3 dimensional art. This year’s theme is “Nature Fest”-exploring Nature’s Power of Beauty. Art will be displayed from March 29-April 8 at First State Bank of Burnet. A reception will be March 29 at 3:30 p.m. sponsored by Cookie Café, Hoovers, State Farm, HEB and other local businesses. Art Frog Academy is a 501c3 nonprofit educational organization offering free arts programs to students of all ages. Call 830.613.0692 for more information on classes and workshops. Lyn Odom Critters & Creatures What is most surprising is that she gets bumped around and knocked down every now and then by the other dogs and she just gets up. No anger, no whimpers of pain, no squeals. She just keeps on keeping on to be a part of the pack. She has always been alpha and that is waning. But she seems to be taking it in stride and still rules the food bowls. And she has altered her own behavior. She knows when everyone is dashing up the stairs that it is best she stand back until the dust settles. She chooses her time to ascend the stairs of the house and that way it is safe passage for her. On occasion she has to wait for me to lift her backend onto the couch. Not often though, and I wonder why sometimes she needs help and not others. Same as old folks I guess, some v days are better than others. I keep watching for the sign she’ll give me that it is time to let her go, but so far I haven’t seen it. And from experience I know the body goes well before the mind, which makes it extremely difficult to make that final decision. I have clients of both beliefs - let pets go naturally versus walking them to the rainbow bridge. Neither is easy to do. I’ve always said one of the hardest things to accept is the relatively short lifespan of pets. Sometimes I expect to come home and find Beej gone, but she continues to come to the gate when I drive up, though at a much slower pace. And I make a lot of allowances for her like pushing other dogs away from her, giving her the choicest treats, throwing her a morsel when I’m cooking and just holding her head and my hands and cooing to her. I’ve taken to calling her “Old Woman,” and she doesn’t seem to mind. At this late date in her life, the one thing that’s working just fine and for sure is her wagging tail. That’s important, and a sure sign the flame of life is still burning. TEXAS-SIZE SAVINGS Call today to see if you can save up to 40% on your auto insurance Kirk Winfrey, Agency Manager Bill Wooten, Agent Elaine Schuessler, Agent Shane Lynch, Agent TEXAS FARM BUREAU INSURANCE AUTO / HOME / LIFE 303 Hwy. 71 E. Llano, TX 78643 325-247-4161 2249 W. R.R. 1431 Kingsland, TX 78639 325-388-6400 Coverage and discounts are subject to qualifications and policy terms, and may vary by situation. © 2012 Farm Bureau Insurance Companies. 1801 King Rd. Marble Falls (Behind Anchor of Hope Church) Visiting Hours on Site M-W-F 2:00-5:30PM Pre-Lease for Special Prices Call Now! “Giving new meaning to life” Opening Soon Page 8A Llano County Journal News Jail From Page 1A longer needed and how those materials might be utilized in renovating the historical jail. Interim City Manager Lynda Kuder presented the council with a request of us- Water From Page 1A courtesy photo From left to right, Kayla Argo, Anne Little, Ann Argo, Angela Dowdle, Jennifer Dickison, Mandy McCary; and center, Aurora Argo and Cheryll Mabray, cuddle up in their “jammies” at the fundraiser benefitting the Highland Lakes Family Crisis Center. Pajama Party aids help groups “Girls Night Out “ pajama party on March 2 in Burnet was a fun event for Highland Lakes area attorneys Cheryll Mabray, Anne Little, Mandy McCary, Angela Dowdle and members of her staff. The event benefited the Highland Lakes Family Crisis Center and the Open Door Recovery House. This was the second year for the fundraiser called “The Fabric of Our Lives” with catchy sponsorships of cashmere, silk, linen, denim and lace, wool and leather and burlap. Many local vendors and indulgence providers such as hair, nails, massage, makeup and gifts were on hand for the Girls Only function. The Highland Lakes Family Center is a sexual assault and domestic crisis center providing a 24-hour hotline, emergency shelter and advocacy, serving Burnet, Llano, Blanco and Lampasas counties. The Open Door Recovery House is a faith-based drug and alcohol recovery house for women. Both organizations provide well needed services for our area. Next year the party is scheduled for Saturday, March 1, 2014, at the Marble Falls Lakeside Pavilion. Make home ‘wildfire ready’ It’s windy. It’s dry. And there seems to be just more of the same for the coming months. Along with these conditions comes an even greater threat of wildfire. Is your home protected? Learn how to make your home a “Wildfire Ready Home” at the March 12 program at the Lakeshore Branch Library at 2:30 p.m. “Ready, Set, Go, Wildfire Action Plan” is a program designed for saving lives and property through advanced planning. Do you know what a hardened home is? Learn how to make your home ready before it is threatened by a wildfire. Mary Leathers with Texas Forest Service will share tips on how to protect your home, property and loved ones during this program. Have you noticed there seems to be a big movement toward “Getting Back to Basics” -- “Living Off The Land” -- “Sustainable Living” -- or whatever name you want to attach to it? Perhaps you have thought about it but just don’t know where to begin. Well, there will be a fiveweek series of workshops at the Llano Library cover- Tommi Myers ing a variety of topics where people from right here in the community will share their knowledge. The programs will start on Tuesday, March 19 at 5:30 p.m. and continue every Tuesday for five weeks. The first program will be Rainwater Harvesting with Jannie Vaught sharing her extensive knowledge on the topic. On March 26, Jannie will team up with Marion Bishop to cover Wind and Solar Power. With the gardening season in high gear, we’ll have Martha Rowlett and Roy Petty, both Master Gardeners, sharing their know-how on growing and harvesting and Marilyn Hale covering preserving in Growing, Harvesting and Preserving Fruits, Herbs and Vegetables on April 2 at 5:30 p.m. For many years, James and Sue Reeves have raised their own beef, and at times chickens, for their home freezer, and they will share their years of hands-on experience in Raising Livestock on a Small Scale. This program will be on April 9, and yes, it will cover processing and packaging meat. To round out this series of workshops, Jannie Vaught will share her many ideas and recipes for Homemade for the Home. Have you ever used soap nuts? Thought about making your own laundry detergents – green, natural detergent – for pennies per load? Learn more at this program. If you would like more information about this series of workshops, contact Tommi at 325.247.5248 or email llanocountylibrary@yahoo.com For more information about any of the programs or events, you may contact the library – Llano Library at 325.247.5248, Kingsland Library at 325.388.3170, and the Lakeshore Library at 325.379.1174. Also, find the Llano County Library System on Facebook to stay up-todate with all of the programs and events. ‘Now, Forager’ shows Friday The Lantex Film Society presents the next Texas Independent Film Network movie, Now, Forager, Friday, March 8 at 7:30 p.m. All seats are $5. The movie is directed by Jason Cortlund & Julia Halperin http:// nowforager.com/ NOW, FORAGER is a film that follows the trials and tribulations of a couple who forage for exotic mushrooms that they sell to high-end restaurants. Beach, have been dealing with drinking and vital household water shortages for more than a year and the situation could worsen dramatically this summer if forecasts of continued drought conditions with little in-flows into the lakes prove to be accurate. LCRA General Manager Becky Motal said little about the debilitating effects the drought has had on Highland Lakes residents and business owners, choosing instead to focus on the hardships the rice farmers will endure because of the water cutoff. “This drought has been tremendously difficult for the entire region, and we know that going without water for the second year in a row will be painful for the farmers and the economies they help support in Matagorda, Wharton and Colorado counties,” Motal said. “This was a difficult decision, but LCRA has to protect the water supply of its municipal and industrial customers during this prolonged drought.” Last year was the first time LCRA cut off Highland Lakes water to farmers. This year marks the second time. The two-year cutoff of water to the farmers comes after Elections From Page 1A who have filed for council seats include incumbent Sherry Simpson, Gail Lang, John Ferguson, Todd Keller and Brian Miller. Bobbie Lou Gray of Llano filed to run, but withdrew Monday. The City of Sunrise Beach has four candidates running for three seats. Incumbents Fred Butler, Tommy Martin and Bill Murphy are seeking re-election and are being challenged by Dr. Bill Cargill. Each position is a twoyear term. Polling places Llano City Council Early Voting will be held at the Llano County Library, 102 E. Haynie St. April 29 through May 7 from 8 a.m. to 4:30 p.m. Chase From Page 1A tram officer pulled over the suspect in a routine traffic stop. “The Bertram officer pulled him over for a traffic stop. He got antsy while the officer was checking his license and he just took off,” Blackburn said. “The Bertram officer went in pursuit, and several Burnet officers Wednesday, March 6, 2013 ing salvaged materials from the jail for the purpose of fundraising. “They have been working on the stabilization of the jail and have removed floor joists that could be made into benches or sold with the money being used in jail renovations,” Kuder said. “They could also be used to create furniture to be sold for fundraising or for the jail to use as furniture in the jail.” Mayor Mike Reagor, along with the other council members, unanimously adopted Resolution 2013-03-04 authorizing the Friends of the Llano Red Top Jail to use the salvaged materials for the aforementioned projects. LCRA’s disastrous decision to send more than 450,000 acre feet of water downstream to the farmers in 2011, which was one of the driest years in Texas history. “That was a terrible decision and the lakes and the Highland Lakes region have never recovered,” Central Texas Water Coalition President Jo Karr Tedder and Lakeside irrigation divisions will not receive any water from the Highland Lakes this year. Farmers in the Garwood Irrigation Division are entitled to about 20,000 acre-feet of Highland Lakes water this year based on the purchase agreement of the Garwood water right. The water flowing into the Highland Lakes, called in-flows, was the lowest on record in 2011 at roughly 10 percent of the historical average. In 2012, in-flows were roughly 32 percent of the historical average. Central Texas Water Coalition last week suggested an alternative approach to preserving water in the lakes, advocating that LCRA considering buying rice farmer lands or their farming rights. LCRA quickly dismissed the idea even though CTWC said the cost for such a plan would be about $130 million as opposed to a cost of $206$250 million for one off-channel reservoir. Extended hours on April 29 are 7 a.m. to 7 p.m. and May 6 from 7 a.m. to 7 p.m. the Lakeshore Library, located at 7346 Ranch Road 261 in Buchanan Dam. * Precincts 203 and 307, the Kingsland Library. * Precinct 108 at the Sunrise Beach City Hall, 124 Sunrise Dr. * Precincts 102 and 109 at the Horseshoe Bay Fire Hall, located at 1 Community Dr., or Highway 2147 in Horseshoe Bay. Llano ISD school board Early voting will be held at the Llano County Library, 102 E. Haynie St. April 29 through May 7 from 8 a.m. to 4:30 p.m. Extended hours on April 29 are from 7 a.m. to 7 p.m. and May 6 from 7 a.m. to 7 p.m. Additional early voting sites include the Kingsland Public Library, 125 W. Polk St., April 29 to May 7 from 9 a.m. to 4 p.m. and at the Horseshoe Bay Property Owners Association, 708 Red Sail, from April 29 to May 7 from 9 a.m. to 3:30 p.m. Polling places on election day May 11 from 7 a.m. to 7 p.m. are: * Precincts 101 and 410, the Llano County Library. * Precincts 204 and 205, followed until they reached the Mason County line. He (the suspect) refused to stop at any type of traffic control,” Blackburn said. “He avoided the stop sticks. We couldn’t get in front of him because he was driving so aggressively.” Blackburn said the driver attempted to hit any police vehicle that came alongside him, and managed to hit a Mason PD vehicle, however there were no injuries. Police and county deputies put up several road-blocks to stop Sunrise Beach Early voting by personal appearance will be conducted each weekday at the Sunrise Beach Civic Center and City Hall, 124 Sunrise Dr., from 9 a.m. until noon April 29 through May 7 with the exception of April 30 and May 7, polls will be open 12 hours, or 7 a.m. to 7 p.m. Applications for ballot by mail must be received by the end of the business day May 3 by City Secretary, 124 Sunrise Dr., Sunrise Beach, Texas, 78643. cross traffic and pedestrians from entering the path of the driver. “We wanted to get him off the road. We were afraid that he would hit a bystander,” Blackburn said. “Everybody (all officers) did a good job and maintained composure.” Blackburn said that after the driver drove through Mason, he reversed course and headed east on the highway. The driver was forced to stop in Mason after his tire was shot out, and Blackburn completed a pit maneuver tactic with his vehicle – bumping the other car to cause it to spin. The driver was forcibly removed from his vehicle by Llano County Sheriffs Deputies and taken into custody by Bertram Police. Blackburn said that the suspect will face compound charges. “He had several traffic violations and was also arrested for assault in Mason because he hit a car, fleeing, and assault of a police officer who he hit. Charges will be filed in Burnet, Llano and Mason counties. There were no injuries to officers or him,” authorities said. “He had hard drug paraphernalia in the car and probably got rid of some of it during the chase.” Weicht remains in Burnet County Jail on a $24,000 bond. Go to: llanocj.com for complete coverage, sports photos and video Sports Page 1B Llano County Journal March 6, 2013 Llano softball takes second at Blanco Lady Jackets gain ground for district By Mark Goodson LCJ Sports Editor The Llano Lady Jackets won four of five games in the Blanco Tournament and will head to Fredericksburg this week to compete in the Billies’ tournament. Llano (7-7) has emerged with a strong-hitting team led by Claire Williams, Cierra Jordan and Courtney Mize. The team has also solidified its pitching with the return of junior Angela Jackson. Llano opens the Fredericksburg Tournament with three games on Thursday. The Lady Jackets open with Comfort at 8 a.m. in the eight-team round robin tournament. The Lady Jackets play St. Anthony’s at 2 p.m. and Fredericksburg at 8 p.m. on Thursday. On Saturday, the Lady Jackets catch St. Anthony’s again at 4 p.m. In the Blanco Tournament, the Lady Jackets rolled into the title game against Blanco before losing 7-6. In the title game, Ashlyn Clough was 3-for-3 with an RBI and Jackson Softball ... see Page 10B Photo By Tom Suarez Lacey Redden, a sophomore, makes a lay behind the plate for the Lady Yellowjackets. The Lady Yellowjackets took second in the Blanco Tournament last week. The team will compete in the Fredericksburg Tournament this weekend. Jackets ride LBJ tourney roller coaster By Jim Goodson LCJ Corresponent JOHNSON CITY – It was the best of tournaments. It was the worst of tournaments. The LBJ Eagle Baseball Tournament welcomed the Llano Yellow Jackets and seven other teams to McKinney Field Feb. 28-March 2. Llano won its first game this season, wiping out a nine-run deficit to defeat Harper High 14-10. But the Jackets lost a tight opener to LBJ High 3-2 then lost their final game 17-10 after leading 10-7 going into the final inning against district foe Lampasas. Infield defense and pitching will surely be high on coach Mike Ridings’ agenda as the 1-9 Jackets prepare for District 8-3A play. “We just made too many mistakes,” the head coach said. “Even in the game we won. We have to tighten things up, but we’re just starting out. The big games are still in front of us.” Llano hosts its Hill Country Baseball Tournament this week, Thursday through Saturday, March 7-9. Llano 14, Harper 10: The best of Johnson City times took place on opening night, when the Jackets rallied to win 14-10 against Harper, a Kerrville-area school from District 29-A. Llano scored 11 runs in its last three innings to nail down its first win. Starting pitcher Holden Simpson excelled at the plate, too. He walked twice and hit a double and two singles, getting on base in five of his six at-bats. The big hit of the game came from senior Rhett Brooks. The centerfielder came to the plate in the fourth inning with the bases loaded and Llano trailing 10-3. On a 3-2 pitch Brooks got all of a high fastball that hit the top of the wall in left centerfield. The triple cleared the bases and, most importantly, set the tone for the rest of the game. Although Llano still trailed 10-6, you got the idea the Jackets were going to win this one. Brooks scored on an error by Longhorn third baseman Drew Vick, who couldn’t handle Llano shortstop Chance Ware’s hard smash down the line. Vick committed four errors in the game. Yellow Jacket Logan Bauer, who came in to pitch in the third, also scored a run in the fourth after singling as Llano pulled to within 10-9. But Bauer’s most important contribution came on the mound. He pitched four scoreless innings after Harper scored 10 runs in its first two innings. He got the win – Llano’s first this season. The Yellow Jackets scored five runs in the sixth. A balk and two more infield errors hurt Harper but doubles by Simpson and right fielder Erich Burch provided the impetus for Llano – a team that was not going to lose on a cold, chilly Thursday night. Harper Llano 190 000 102 605 10-5-6 14-10-2 LBJ 3, Llano 2: The best-played game occurred in the Jackets’ tournament opener when LHS’ Taylor Sorenson and Chance Ware held Lyndon B. Johnson High (District 27-2A) to five hits. They re- Llano Hill Country Classic Thursday, March 7 Field No. 1 11 a.m. Liberty Hill/Bandera 1:30 p.m. Liberty Hill/Canyon Lake 4 p.m. Llano/Fredericksburg 6:30 p.m. Llano/Wimberley Field No. 2 11 a.m. Taylor/Canyon Lake 1:30 p.m. St. Michaels/Wimberley 4 p.m. Taylor/Bandera 6:30 p.m. Fredericksburg/St. Michaels Friday March 8 Field No. 1 11 a.m. St. Michaels/Faith Academy 1:30 p.m. Comfort/Fredericksburg 4 p.m. Wimberley/Cameron Yoe 6:30 p.m. Llano/Bandera corded seven strikeouts. But it was a hit batsman in the bottom of the seventh that did-in Llano. Ninth hitter Dalton Croft scored the winning, unearned run after being plunked. He made it to second base on a ground out, stole third and scored on a catcher’s error. The Jackets scored the game’s first run in the first when leadoff hitter Wil Siegenthaler singled up-the-middle, stole second and scored on an error. In the fourth inning Llano took a 2-1 lead when leftfielder John Winn reached on a fielder’s choice, stole second and scored on an error by Eagle centerfielder Field No. 2 11 a.m. Liberty Hill/Taylor 1:30 p.m. St. Michaels/Canyon Lake 4 p.m. Liberty Hill/Faith Academy 6:30 p.m. Cameron Yoe/Comfort Saturday, March 9 Field No. 1 11 a.m. Taylor/Comfort 1:30 p.m. Liberty Hill/Wimberley 4 p.m. Llano/Faith Academy 6:30 p.m. Llano/Comfort Field No. 2 11 a.m. St. Michaels/Bandera 1:30 p.m. Canyon Lake/Cameron Yoe 4 p.m. Cameron Yoe/Fredericksburg 6:30 p.m. Faith Academy/ Fredericksburg Jacob Wagner. Ware took the hard-luck loss despite pitching well, giving up only one run. Although it was unearned, it was the game winner. Ware struck out four in his four innings of work. Eagle starter Croft got credit for the win. Two years ago, LBJ High won the Class A state championship. The win over Llano, however, was LBJ’s first of the 2013 season against two losses. Llano 100 LBJ 000 100 101 0 – 2-4-4 1 – 3-5-6 Jackets ... see Page 10B Llano track team starts fast at Blanco From Staff Reports Llano’s varsity track team had strong performances in all three relays to lead the team to a fifth place finish in the Blanco Panther Relays, Thursday, Feb. 28. The team of Carter Tatsch, Matt Center, Justin Wyatt and Layton Rabb won both the 4x100 and 4x200. They ran a 44.01 in the 4x100 and clock a 1:32.5 I the 4x200. The varsity boys fin- ished with 90 points. Jake Moore finished third in the long jump with a jump of 19-7, while Baylor Jordan finished fifth. Baylor Jordan also finished fourth in the high jump with a jump of 5-8. Jake Moore finished fifth in the triple jump. Wyatt finished third in the 100 with a time of 11.7. Tatsch finished second in the 200 meters with a time of 23.17. Grayson Freeman finished third in the 1,600 meters with a time of 5:04. The 4x400 Relay Team of Alec Hasty, Center, Mason Ladd and Layton Rabb finished second with a time of 3:37.4. “I was very proud and pleased with our performance in the first meet,’’ Llano coach Jarrett Vickers said. “I love the way we competed. The field event guys did an excellent job adjusting to a shortened event. The sprinters and distance guys did an out- standing job getting loose and staying warm during a very frigid night. We got the first one under our belt, and definitely have something to build on.’’ The JV boys finished fifth with a total of 70 points. Lance Reven placed first in the discus with a throw of 105-3. Justin Long placed fourth in long jump with a jump of 16-8. Airon Layton placed first in pole vault and Cody Miller placed fourth both with a vaults of 9-6. Logan Ashabranner placed fourth in the 3,200 meters with a time of 12:26. The 4x100 relay team of Justin Long, Reven, Hayden Wyatt and Jalen Bauman placed sixth. Dalton Ward placed first in the 110 hurdles with a time of 17.93. The 4x200 relay team of Long, Reven, Wyatt and Bauman placed fifth. Dalton Ward placed first in the 300 hurdles with a time of 47.04. Uriel Bentiez placed sixth in the 200 meters. Ashabranner placed third in the 1,600 meters with a time of 5:32, while Josh Petty placed sixth. The 4x400 relay team of Danny Hansen, Dalton Ward, Hayden Wyatt, and Trevor Penny placed fifth. Llano will compete in the Bandera Relays Thursday and will resume their schedule March 21 with the Jacket Relays at Llano. Page 2B Wednesday, March 6, 2013 Highland Lakes Newspapers Burnet & Llano Counties Highland Lakes Marketplace REACH OVER 35,000 NEWSPAPER READERS THROUGHOUT BURNET & LLANO COUNTIES PLUS THOUSANDS MORE ON THE INTERNET! Call: 830.693.4367 in the Marble Falls area • classifieds@highlandernews.com 20 words or less 512.756.6136 in the Burnet area • classifieds@burnetbulletin.com in ALL our Publications 325.248.0682 in Llano County • classifieds@llanocj.com and on the Internet Deadlines: 2 p.m. Friday for the Tuesday Highlander and the Wednesday Llano County Journal, (non-commercial) and Burnet Bulletin and 2 p.m. Wednesday for the Friday Highlander. Only $16 ANNOUNCEMENTS TRANSPORTATION ANNOUNCEMENTS TRANSPORTATION Lost and Found 2011 SUBARU LEGACY 29k mi., Premium, $19,995 4DR Sedan, Blue. stk BD12295A 2601 S. Water Burnet (512) 756-2128 FOUND: Found a dog, on Saturday morning. Feb. 23rd. Marble Falls, park area. Please call to identify. 830-693-7075. LOST AND FOUND PET ADS ARE FREE! ADS FOR A LOST OR FOUND PET ARE FREE. IF YOU WOULD LIKE TO MAKE A DONATION IN PLACE OF PAYMENT TO THE CHARITY OF YOUR CHOICE, WE WILL RUN YOUR AD FREE OF CHARGE FOR AS LONG AS NEEDED. CHRISTYODER ANIMAL SHELTER/ADOPTION CENTER RECEIVES HUNDREDS OF DOGS/CATS WITH NO ID OR IDENTIFIED OWNERS, BURNET/LLANO COUNTIES. CALL 512-793-5463. ALSO YOU CAN CONTACT M.F. ANIMAL CONTROL, AT 830-693-3611, FOR A LOSTFOUND ANIMAL; OR TO BE ADOPTED. REMEMBER TO KEEP YOUR ANIMALS VACCINATED AND REGISTERED WITH THE CITY. Business Personals 2009 DODGE CHALLENGER 24k mi., SRT8, $30,995 2DR Coupe, Orange. stk 27192508A 2601 S. Water Burnet (512) 756-2128 Llano Nursing & Rehabilitation is currently hiring for Director of Nursing, Weekend RN Supervisor and PT Activities Assistant. Please come by 800 West Hanie St., Llano or call (325) 247-4194. Special Notices Miscellaneous Autos 2011 HYUNDAI ELANTRA $16,995 Sedan, Silver stk OP636 2601 S. Water Burnet (512) 756-2128 We Buy Wrecked, Burned, and Junk Vehicles. Used Parts and Installation Available. 24 hour Towing. Call 830-693-3226. 512-755-1153. Building, Contracting *REMODELING *PAINTING/STAINING *DRYWALL/FENCES *DECKS *PRESSURE WASHING *WEATHER PROOFING *BLOWN INSULATION *MOBILE HOME REPAIR *WINDOWS/DOORS 2601 S. Water Burnet (512) 756-2128 Trucks 2001 Ford Lariat Pickup. 4 Door. 124K. ForSale. Call 830-596-7479. Boats and Water Equipment 19ft. 2000 Promaster, center console. 130HP outboard/ with trailer. $9,500.00 Call 512-755-3616. BUSINESS SERVICES Air Conditioning & Heating Clarkson & Company Heating & Air Conditioning Personal & 42 Years Experience Tx. Lisc. #TACLB 00012349C 2009 DODGE RAM 1500 27k mi., Lone Star, $25,995 Crew Cab, Yellow stk UP8353 SERVICES Ruben Ortiz Concrete Co., Inc. Slabs, Sidewalks, and Patios/Lakefront work. Retaining Walls/Boat Docks/25 Years serving the Highland Lakes! 830-693-3282 512-755-1115. Tree Cutting 2601 S. Water Burnet (512) 756-2128 texasnurseconnection.com Please call Mon.-Fri 8am-5pm Executive Assistant The Texas Housing Foundation is seeking an Executive Assistant to assist Vice President of Affordable Housing Programs. Bachelor’s Degree in Management or Minimum of 3 years’ Experience in Executive Level Position required. Please send resumes to: lweber@txhf.org or fax (830) 798-1036 830-220-2121 2007 CHEVROLET SILVERADO 68k mi., Crewcab, $29,995 4WD, Dually, Automatic, White. stk B12084A All shifts needed (325) 670-0090 Work with abused children in village setting near Kerrville or Leakey. Training, certification, career ladder, benefits, housing. Couple or singles. Background check. Hill Country Youth Ranch. 830-367-6111. EOE. (TOBY SOWERS) P & R Tractor Service. Bush Hog-Pad Sites. Box Blade-RoadsDriveways. Shredding-Lot Cleaning Backhoe Paul Reese 512-585-6571 Woodworking Shop in Bertram looking for help. Construction experience needed. Pay based on experience. Call 4pm-7pm ONLY. 512-529-6117 Professional Parent(s) TSOWERS17@YAHOO.COM Dozer, Tractor Work ments. 155 Hillcrest Ln., Liberty Hill, 78642. Apply in person, 512548-6280. Equal Opportunity Employer. Help Wanted FOR ALL YOUR HANDY NEEDS “FREE ESTIMATES” QUALITY WORK AT AFFORDABLE PRICES DEEP SOUTH WELDING STRUCTURAL ORNAMENTAL FABRICATION DESIGN REPAIRS, REASONABLE RATES. JACK 830-385-3656 EMPLOYMENT EMPLOYMENT HANDYMAN & CONSTRUCTION SERVICE, ETC. 2601 S. Water Burnet (512) 756-2128 EMPLOYMENT T N C NEEDS RN’S LVN’S and C.N.A.’S 2601 S. Water Burnet (512) 756-2128 “TOBY’S” 2010 TOYOTA AVALON 46k mi., XLS , $24,995 V6, Leather, Loaded, Black. stk BP2005 EMPLOYMENT 2007 DODGE RAM 2500 44k mi., Laramie, $28,995 5.9L Diesel, Leather, Gray. stk 26858784A Mark Crouse 830 598-6606 2004 Buick Rendezvous (Suv)-$5,700. Very clean V6, automatic, FWD. Up to 24 mpg. No accidents. 127,700 miles. “CarFax” and owner’s Maintenance Log available. Call (830)-220-2281. 2010 FORD F-350 SUPERDUTY 59k mi., Diesel, $39,995 4WD Crewcab, Maroon. stk B12286F Topsoil, Sand, Rock & Gravel 32 years experience in the stone industry longhorngranite.com 2601 S. Water Burnet (512) 756-2128 SERVICES longhorn granite 2009 DODGE RAM 3500 43k mi., SLT, $34,995 6.7 Diesel, Orange. stk B12061A 1811 N Hwy. 281 Marble Falls ~ 830.693.6594 Burnet ~ 512.756.2579 NOW ENROLLING! Discount on registration fee. Kid’s Connection. Kingsland. Texas. Call 325-388-9000. SERVICES ORTIZ TREE SERVICE Trimming, removal, specializing in Oaks, and Pecan trees. Clear waterfront lots. “Serving the Hill Country since 1978.” (830693-2338). MIsc. Services Equal Opportunity Employer 2004 CHRYSLER PT CRUISER 88k mi., Limited, $6,995 Automatic, Teal. stk B12286F 2601 S. Water Burnet (512) 756-2128 Handyman and honeydo services from small home repais to new construction. Mature degreed professional doing remodels, decks, fencing, painting, pressure washing, plumbing, landscaping and much more. References available upon request. Call for any size job! 512-588-9215. BCMT Services looking for Bobtail Dump Truck Driver; with clean driving record. Class A. Contact Bobby; 512-755-5989. Working in Austin M-F. Want to make finding a job easier and make more money? Take a QuickBooks course to enhance your administrative skills. For more information call Trudy Kelley/CPA. (325) 388-8386. Apartment Maintenance. Maintenance needed immediately, HVAC EPA refrigerant cert, electrical, plumbing, pool, Salary + Benefits, San Gabriel Crossing Apart- Office Help. Office Job-Front Desk Position: Answering Phones and performing all general office duties. Must be fluent in Spanish and English, capable of Multi tasking and knowledge of MS Office and Quick Books a plus. Pre employment screening required. No Phone Calls, send resume to qromex.main@gmail.com or Fax to 830-596-2601. Non-Physician Provider: Austin Heart seeks PA, NP or CNS for Marble Falls Cardiology Clinic to assist in patient care. Part Time position. HCA provides competitive salary, benefits, 401-K and generous PTO. Please email resumes to HCAPS.HRAustinHeart@ hcahealthcare.com or fax to 512-407-1817. Excellent opportunity for the following positions: Full Time Cook, Full Time Housekeeper, Part Time Attendant. Prefer applicants with long-term care experience, but will train the right person. Great working environment. Competitive salary and great benefits. Please come by 605 Gateway Central in Gateway Park (south of the bridge) in Marble Falls to apply. EOE MASTER GARDNER The Flying W Ranch in Burnet has an opening for a Master Gardener. The work involves planting and maintenance of the Fruit Tree Orchards, Vegetable Gardens, management of the Greenhouse, and other duties to be determined. Extensive hands on knowledge with various plant and tree maintenance required. Preemployment drug screening and Criminal Background Check is required. Email a letter & qualifications or resume to stephenmirra@senox.com or you may fax 512-9895968 or call 512-251-3333 ext. 147. 2601 S. Water Burnet (512) 756-2128 Successful OPTICIAN needed. Skilled at considering patient lifestyle to recommend and fit eyewear, and perform adjustments and repairs. Computer skills, knowledge of insurance/vision plans, and self-motivated. No evening/weekend hours. Will work in Burnet and Llano, TX offices. Send resume to burneteyecarejobs@yahoo.com or fax 512-756-7831. 2013-2014 2013-2014 Burnet & Llano Counties EMPLOYMENT Office Staff Needed for busy Chiropractic Practice in Marble Falls. Professional, Caring, Reliable, with insurance billing experience. E-mail Resume to hr@drconnie.com (.doc or .pdf only) Highland Lakes Newspapers Classified MERCHANDISE MERCHANDISE Appliances HELP WANTED. Now taking applications for all positions. Contact Tom: (325) 247-5470 FUMC Bertram, Seeks dependable and responsible person, for Child Care; during Worship hours and Church events, Contact Pastor Chris. 512-355-3210. Experienced Office Help Wanted. Must be proficient in MS Office Programs, knowledge of Quick Books, able to multi task, highly motivated willing to adapt to fast pace environment. Pre-employment Screening Required. No Phone Calls, Fax to 830-596-2601 or email to qromex.main@gmail.com. MERCHANDISE Saling Craft 2001 17FT. ALUMINUM BOAT W/TRLR. 125 HP MERCURY MOTOR. MINNKOTA TROLLING MOTOR, DEPTH FINDER, FISH FINDER, AM/ FM RADIO/2 LIVE WELLS. RECENTLY TUNED UP WITH NEW PROP AND SPARK PLUGS. ALWAYS BEEN SHEDDED. VERY CLEAN. EXCELLENT CONDITION. NICE FAMILY BOAT. PERFECT FOR SKIING, TUBING AND FISHING. CALL EVENINGS 605-280-2406. Spas, Hot Tubs Indoor 2-person Luxury Spa$3,000 (Burnet, Tx.) Husband not able to use spa due to health concerns. Works great, used 1 season, installed indoors-never used outdoors. Have all original paperwork, warranties, excellent condition, steps & custom cover included. New $7,490, excluding tax. Contact: jkcarley@yahoo.com or 512-234-8115. Serious buyers only. Estate Sales Estate Sale March 8, 9, 10 8AM - 4PM J Bar is hiring for FT Class A CDL Driver. Local hauling. No over the road. Download application and submit to jobs@gojbar.com or 512-949-5041 fax. 176 Shore St., Tow Over 90 years of collectibles, antiques, furniture, clothes, tools, appliances, dishes, glassware, books, records and much more. LIVESTOCK Pets Blue Heeler Pups for sale. Females only $50.00 each. Call 830-598-2258. Feed, Supplies Hay for Sale. 3,000 Round Coastal, Gigs, and Blue Stem. $45-$55. Delivery available. Call 713-562-0601. RENTALS Apartments Kingsland Trails Bedroom Furniture for Sale. Dresser, Chest of Drawers, Nightstand, Solid Wood, good condition. $200. Call Jerri. 512-756-2144. Garage Sales WINTER SPECIAL: FREE RENT DEAL. Granite Shoals CampgroundTrailers/Lots for rent. $85 weekly-and up. With utilities & WiFi included. Call 830598-6247. CALL 830-693-4367 in Llano 100 Legend Hills Blvd 1, 2 & 3 bedrooms Income Restricted (325) 247-5825 Granite Countertops Dishwashers • Microwaves Swimming Pool ALL BILLS PAID! Daily-Weekly-Monthly. Huge, fully furnished efficiencies, on Highway 29, near Inks Lake. Free Cablevision and WiFi. Call 512-793-2838. VOTED THE BEST APARTMENT COMMUNITY IN MARBLE FALLS THE VISTAS APARTMENTS 1700 MUSTANG DR., MARBLE FALLS, TX $150 total move-in with 1st month rent free on all 3 bedrooms Call today for availability and to schedule your tour! *Income Restricted pstandly@txhf.org BURNET: Spacious 1BR./ 1BA. garage apartment; great value; great location; small pets considered; $495/ mo; TJM Realty Group; 830-693-1100; tjmrealtygroup.com TJM Realty Group LOGO Now Leasing $99 Move-In Special with 1st Month Free rent Turtle Creek Townhomes 2, & 3 Bedrooms REAL ESTATE SUNRISE BEACH: 3BR./ 2BA. Furnished Waterfront Home; Boat dock and lift; huge deck; yard care included; $1295/mo; TJM Realty Group; 830693-1100; tjmrealtygroup.com ROOM, 1 BATH MOBILE $650. ALL NONSMOKING. 830-7989723. For Lease: 2829 sq. ft. Commercial Building, on Hwy. 281 in Marble Falls. Great location! Daycare ready. Available immediately. 512-755-1619. Courtyard, Laundry Room, Water/Trash paid. No Pets. Townhome, Condo Rentals LAKE LBJ: Efficiency, 1BR. & 2BR. Condos; swimming pool; tennis courts; lake access; water/sewer/trash & basic cable included; from $525/mo; TJM Realty Group; 830-693-1100; tjmrealtygroup.com TJM Realty Group LOGO Duplexes for Rent MARBLE FALLS: 3BR.-2BA. Luxury Duplexes; granite counters; custom cabinets; garage; Move-in Special! $300 Off 1st Month’s Rent with 1Year Lease! $875/mo; Tjm Realty Group; 830-693-1100; tjmrealtygroup.com Tjm Realty group LOGO Horseshoe Bay Area. Large 3BR./3BA. 1 Car garage. Fireplace. Call 512-567-0804. 2 Bedroom-2 bath Duplex, all electric-full size washer/ dryer connections, ceramic tile thru out, large walk in closets. Park like setting near historic downtown Burnet square. Fenced in back yard. $750 monthly. Call 512-755-5930. Houses for Rent HORSESHOE BAY: Stunning 3BR./3.5BA hillside home; 3250 sq. ft; dramatic views of Lake LBJ; 3 car garage; must see; $1800/mo; TJM Realty Group; 830-6931100; tjmrealtygroup.com TJM Realty Group LOGO Oak Ridge, near Fluor plant. 3/3/2, all kitchen appliances. Credit check and deposit. No smokers/pets. $1250. 830-598-4577. 3BR.-Brick-2 lots, fenced yard/ storage shed. $900/monthnew ac/tile floors. Kitty ok/dog/ plus deposit. Stainless smooth-top oven, big fridge. Cottonwood Shores. 512.264.2217-No messages. GRANITE SHOALS: 3BR./ 2BA; approximately 1495 sq. ft; fireplace; 2 blocks from waterfront park; Marble Falls schools; $1195/mo; TJM Realty Group; 830-693-1100; tjmrealtygroup.com TJM Realty Group LOGO House for Rent. Burnet. Large 3BR./2BA, Fireplace, water furnished. Large lot. $850/month. $850/deposit. 512-756-8514. Perfect family home. Large 4/3 with fenced yard. New carpet & paint. Marble Falls schools. $1150 month plus deposit. Walker & Assoc. 830-693-5549. Remodeled one 2BR. Houses, also nice furnished 1BR. trailers. Fenced yards, partial bills paid. Reasonable. Yucca/Burnet. 512-756-0502. 3BED./3BATH. 6 Acres. 3000+sqft. w/huge fenced back yard. Office, 2 living areas. Five minutes from new hospital. Marble Falls RealEstate.com. 830-613-0074. Beautiful home on 6 acres with lots of space. 3bed/3bath. 3000+ sqft. w/ huge fenced back yard. Office, 2 living areas. Five minutes from new hospital. MarbleFallsRealEstate.com 830-613-0074 VARIOUS: SMITHWICK TRIPLEWIDE & HOUSE607 & 605 CR343A, 3 BEDROOM, 2 BATH, 2020 SQ. FT., FIREPLACE, AND 3 BEDROOM, 2 BATH WITH FENCED YARD NEAR SCHAFFER BEND PARK $825 EACH; COTTONWOOD SHORES HOUSE 657 CYPRESS, 2 BEDROOM, 1 BATH, FENCED YARD, WOODBURNING STOVE, TILE FLOOR $665, AND 630 MAGNOLIA, 2 BEDROOM, 1 BATH HOUSE, $650. SPICEWOOD-3 BED- Full Size Washer & Dryer Connections CAR. SELL CLASSIFIEDS. Marble Falls 2 Bedroom 1 Bath $575/mo. $575/dep 830-265-0696 1, 2 and 3 Bedrooms Available FOR THAT NEW AUTO IN THE $99 Move-In + First Month Free Special only for 3 bedroom units! (830)798-8171 325-388-4491 RENTALS TJM Realty Group LOGO TJM RealtyGroup LOGO Southwest Village Garage/Estate Sale. Saturday, March 9th. 8am-4pm. 407 Crestview, in Marble Falls. Lots of misc. 3 Bedrooms, 2 Baths Parkside Apartments Health Products 1 Bedroom Mobility Scooter for sale: Golden Companion 3-wheel scooter for sale. Includes Bruno Power Lift. $1295. Additional medical equipment and supplies available. Call John @ 512-736-3139 or call or text Bill 512-803-9638. Page 3B RENTALS RENTALS MARBLE FALLS: Lone Oak Luxury Apts; 1BR. + bonus room; covered parking; washer-dryer included; Upscale living in downtown Marble Falls; $725/mo; TJM Realty Group. 830-693-1100; tjmrealtygroup.com Reduced rent and deposit on all 1 & 2 bedrooms Call for details Furniture RENTALS New $99.00 Move-in Special MAKE ROOM YOUR OLD Wednesday, March 6, 2013 830-798-8259 B JOIN OUR FAMILY Now hiring for: Aides All Shifts Available Sign-on bonus We offer: Benefits, Competitive Wages, Paid Holidays & Sick Leave Apply in Person 3727 W. RR 1431 Call (325) 388-4538 or 325-247-6206 Fax resume to: 325-388-0465 Email resume to: annette.smith@pcitexas.net E.O.E. Highland Haven 3/2/2 with second 2 car garage, for shop or boat storage. Huge yard, big trees. Neighborhood boat launch and swim area. $1200/month, plus $900/deposit. References required. 830-265-4207 830-613-5555. Country house for rent, in Buchanan Dam. 4BR./3BA. Bonus room, CACH, FP, DW. No smokers or pets. $1200/month, 1st & last advanced + $500/dep. References required. Call 830-798-6726 or email: huntermthomas@yahoo.com NOTICE All rental and real estate ads are subject to the Federal Fair Housing Act which makes it illegal to advertise any preference, limitation or discrimination based on race, color, religion, sex, handicap, or family status or national origin. This newspaper will not knowingly accept any advertising for real estate which is in violation of the law! REAL ESTATE Houses for Sale FSBO. Meadowlakes. Gated community. Unique 2033 sq/ft. home. Three blocks from lake. Recreational amenities include parks, tennis, pool, golf, club house, and restaurant. 830-693-7221. Cottonwood Shores. Newly remodeled 3BR./2BA. 1,100 s.f. Home on 2 lots. New granite, tile and loaded with extras. Quiet street at 742 Driftwood, $92,000. 512-755-0915. Lots, Acreage RV Slips for Rent Mobile Homes for Rent 2BR./2BA. Mobile home for rent. $600/month, $300 deposit. Buchanan Dam. Past rental history required! Dogs welcome under 25lbs. Call 512-793-2105. GOOD BUY. Cemetery lots. Lakeland Hills Cemetery. Garden 8. Lot 125. Space 6. $1,000. Contact Jeff. 512-755-4760 Mobile Homes for Sale Mobile Home for rent. 4BR./2BA. Approx. 1800 sq. ft. Huge yard. Refrigerator, stove, CACH. $875/month plus/deposit. Available March 1st. 512-755-4539. 3BR. Mobile home, on 2.5 acres. Bertram. $750/month, $600/deposit. Horses and pets ok, with a deposit. 254220-9198. Business Rentals WAREHOUSES FOR LEASE 830-798-0000 RECEIVE YOUR PAPER IN THE MAIL!! SUBSCRIBE TODAY! 830-693-4367. Page 4B Wednesday, March 6, 2013 REAL ESTATE STATEWIDES STATEWIDES Taylor; 1-318-245-8800, www. taylormadeauctions.com #836 CDL-A with 1-year tractor-trailer experience required. 1-888-703-3889 or apply online at www. comtrak.com and empty miles. Daily hometime 24/7 dispatch. Great fuel and tire discounts. New, larger facility with free parking for O/Oís. Third party lease purchase program available. CDL-A with 1-year tractor trailer experience required. Call 1-888-703-3889 or apply online at www. comtrak.com Real Estate Wanted USED HOMES NEEDED, Single-wides, double-wides, for the oil patch. Pay top dollar, Cash or Trade. Village Homes. 1-866-899-5394. rbi-3223. STATEWIDES AUCTIONS CATTLE AUCTION March 16th; 1100+ head sell. Hays Bros. Angus ranch Arcadia, LA. Bulls, pairs, breeds. Open regular and commercial. Dusty DEDICATED RUN- We have a great opportunity for 5-drivers to get home nightly! Dedicated run from OK, Mon-Fri, guaranteed daily rate. Must have CDL Class A, 1-year OTR experience, excellent driving record. Must be able to start immediately 1-800-888-0203, DRIVER - DAILY or weekly pay. 1¢ increase per mile after 6-months and 12-months. 3¢ enhanced quarterly bonus. Requires 3- months OTR experience. 1-800-414-9569 DRIVERS - COMPANY DRIVERS $1000 signon bonus. New, larger facility. Home daily. 80% drop and hook loads. Family health and dental insurance. Paid vacation, 401k plan. L/P available. OWNER OPERATORS $5,000 sign-on bonus. Paid FSC on loaded TEAM DRIVERS $2500 Sign-on bonus per driver. Super excellent home time options. Exceptional earning potential and equipment. CDL-A required. Students with CDL-A welcome. Call 1866-955-6957 or apply online at YOU GOT THE DRIVE, we have the direction. OTR drivers, APU Equipped, Pre-Pass, EZ-pass, pass e n g e r p o l i c y. N e w e r equipment. 100% NO touch. 1-800-528-7825 STATEWIDES EDUCATION/TRAINING AIRLINES ARE HIRING Train for hands on aviation maintenance career. FAA approved program. Financial aid if qualified, housing available. Call Aviation Institute of Maintenance, 1-877523-4531 ATTEND COLLEGE ONLINE from home. Medical, Business, Criminal Justice, Hospitality. Job placement assistance. Computer available. Financial aid if qualified. SCHEV authorized. Call 1-888-205-8920, www. CenturaOnline.com CAN YOU DIG IT? Heavy equipment operator training. 3-week hands on program. Backhoes, bulldozers, excavators. Lifetime job placement assistance with National certifications. VA benefits eligible. 1-866-3626497 PUBLIC NOTICE As of March 1, Dr. Jayasankar K. Reddy will no longer be providing nephrology services at Seton Burnet Healthcare Center, 200 County Road 340A, Bldg I, Burnet, TX 78611. Please contact us at (512) 715-3110 if you have any questions, would like to obtain a copy of your patient record, or choose to arrange transfer of your records to another physician. Classifieds are Affordable Call 830-693-4367 to Place Your Ad Burnet & Llano Counties Classified STATEWIDES DRIVERS SUPER SPECIAL. Large 3/2 Double-wide, extra nice; all the goodies, only $299.00 month. 5% Down, 6%. 25 Years. Call 1-866-8995394. rbi-3223. Highland Lakes Newspapers STATEWIDES STATEWIDES STATEWIDES HIGH SCHOOL DIPLOMA from home. 6-8 weeks, accredited, get a diploma, get a job! No computer needed. Free brochure; 1-800-2648330. Benjamin Franklin HS information call 1-830460-8354 housing OK! Guaranteed financing with 10% down. Lots starting as low as $6900, Call Josh, 1-903-878-7265 MEDICAL OFFICE TRAINEES needed! Train to become a medical office specialist at Ayers Career College. Online training gets you a job ready ASAP. Job placement when program completed. 1-888-3681638. WEST TEXAS - mule deer, high desert south of Sanderson, Indian Wells Ranch #53, 173+ acres, $265/acre, low down, owner financed. 1-210-734-4009. www. westerntexasland.com ACREAGE REPO with septic tank, pool, pier, ramp. Owner finance. Granbury 1-210-4223013 AFFORDABLE RESORT LIVING on Lake Fork. RV and manufactured VACATION WEEKEND GETAWAY available on Lake Fork, Lake Livingston or Lake Medina. Rooms fully furnished! Gated community with clubhouse, swimming pool and boat ramps. Call for more information: 1-903-8787265, 1-936-377-3235 or 1-830-460-8354 INTERNET HIGHSPEED INTERNET EVERYWHERE by Satellite! Speeds up to 12mbps! (200x faster than dial-up.)Starting at $49.95/month, Call now and go fast! 1-888-6436102 Miscellaneous SAWMILLS FROM ONLY $3997.00. Make and save money with your own bandmill.Cut lumber any dimension. In stock ready to ship. Free information/DVD,. com 1-800-578-1363 Ext. 300N REAL ESTATE $106 MONTH BUYS land for RV, MH or cabin. Gated entry, $690 down, ($6900/10.91%/7yr) 90days same as cash, Guaranteed financing, 1-936-377-3235 A B S O L U T E LY T H E BEST VIEW Lake Medina/Bandera, 1/4 acre tract, central W/S/E, RV, M/H or house OK only $830 down, $235 month (12.91%/10yr), Guaranteed financing, more RECEIVE YOUR PAPER IN THE MAIL!! for 830-693-4367 JOIN THE BUSINESS AND SERVICE DIRECTORY! ADVERTISE IN ALL HIGHLAND LAKES NEWSPAPERS FOR ONE LOW RATE! 830-693-4367 Burnet & Llano Counties Highland Lakes Newspapers Wednesday, March 6, 2013 Classified Page 5B Support your Local Businesses! Pink Lily Photography Specializing in Kids, Families, Senior Portraits… all Occasions! Call to schedule your Portrait Session TODAY! 830.220.2361 Over 35 Years of Experience Quality Gunite Pools Re ct e l f ing the B es t Owner Designed and Supervised TEXAS RIVER POOLS Kingsland, Texas Office: (325) 388-5500 Cell: (512) 784-6863 Dayton R. Warden, Jr., R.Ph., D.D.S. Ryan J. Robbins, D.D.S. General Dentistry * Cosmetic * Implants TexasRiverPools@DishMail.net Marble Falls has a Place to Host Your Next Event • Capacity for 250 people • Affordable Food and Beverage Packages • Monday-Wednesday 8am-5pm, Thursday 8am-6pm, Friday 7am-2pm 512-355-2115 Care Credit Financing 1220 Hwy 29 W. P.O. Box 444 Bertram, Texas 78605 Great Location 105 Highway 1431 E • Marble Falls TX For More Information: (830) 693-5134 Outsource Financial & Tax Accounting Personal & Business Management Real Estate Management Restaurant Consulting Bill Allen williamallencpa@gmail.com (830) 798-5386 Experienced / Confidential STATE • COUNTY • SATELLITE BONDS 2 4 H r. S e r v i c e • Ha v e B o n d Wi l l Tr a v e l We Ac c e p t C r e d i t C a r d s • Te r m s Av a i l a b l e A-ACTION BAIL B BOND$ INSTANT RELEASE 512-756-8855 w w w. A- Ac t i o n B a i l B o n d s . c o m 903 S. WATER STE. 200 BURNET, TX 78611 brust concrete Concrete Construction Will Brust Foundations - Decorative Concrete Stain & Seal - Epoxy Coatings PO Box 492 Marble Falls, TX 78654 830-265-8133 will@brustconcrete.com 512-755-5399 830-637-0807 Commercial • Residential Free Estimates Don Wilder - Owner All Types of Roofing Get Your Yard Ready “Just in Time” for Spring!! Residential Lawn Care * Grass Cutting * Trimming/Brush Cutting * Lot Clearing * Clean Up * Haul Off * Skid Steer Work AND MUCH MORE!! D.E. Residential * Commercial * Farm and Ranch WILDER Call us today for a FREE Estimate Or send us an email TreyFerguson@live.com ROOFING Texas Land Care 3rd Generation Roofing Company (Est. 1989) Scan for Androids DON’T MISS OUR BUSINESS & SERVICE COUPON SAVINGS! Download our Lake Country Life App for Android & iPhone or at m.lakecountrylife.com & Deals & Steals 512-939-1201 Scan for iPhones Helping Christians to Grow In God’s Word to Become Mature Believers Eph 6:10-15 • II Tim 3:16-17 The Bible Church of the Lakes (830) 596-0100 24101 St Hwy 71 E Horseshoe Bay Located Hwy 71 East 1 Mile West of 2147 Sunday Service 9:30 am Sunday School 11:00 am Wed Bible Study 6:30 pm Fred L. Bates, Pastor Page 6B Wednesday, March 6, 2013 Highland Lakes Newspapers Classified Burnet & Llano Counties JOIN THE BUSINESS AND SERVICE DIRECTORY ADVERTISE IN ALL HIGHLAND LAKES NEWSPAPERS FOR ONE LOW RATE! 830-693-4367 Burnet & Llano Counties Highland Lakes Newspapers Classified Puzzle Answers for March 6, 2013 Wednesday, March 6, 2013 Page 7B Page 8B Wednesday, March 6, 2013 RENTALS RENTALS HOUSE FOR RENT 2BR./1BA. Furnished house on Llano River. CR 103, CACH. Lease $800. No pets. References required. 325-247-2255. STATEWIDES STATEWIDES DRIVER - DAILY or weekly pay. 1¢ increase per mile after 6-months and 12-months. 3¢ enhanced quarterly bonus. Requires 3- months OTR experience. 1-800-414-9569 OWNER OPERATORS $5,000 sign-on bonus. Paid FSC on loaded and empty miles. Daily hometime 24/7 dispatch. Great fuel and tire discounts. New, larger facility with free parking for O/Oís. Third party lease purchase program available. CDL-A with 1-year tractor trailer experience required. Call 1-888-7033889 or apply online at ATTEND COLLEGE ONLINE from home. Medical, Business, Criminal Justice, Hospitality. Job placement assistance. Computer available. Financial aid if qualified. SCHEV authorized. Call 1-888-205-8920, www. CenturaOnline.com CATTLE AUCTION March 16th; 1100+ head sell. Hays Bros. Angus ranch Arcadia, LA. Bulls, pairs, breeds. Open regular and commercial. Dusty Taylor; 1-318-245-8800, #836 DRIVERS DEDICATED RUN- We have a great opportunity for 5drivers to get home nightly! Dedicated run from OK, Mon-Fri, guaranteed daily rate. Must have CDL Class A, 1-year OTR experience, excellent driving record. Must be able to start immediately 1-800-888-0203, Home Repair • Maintenance • Remodeling By Tom Suarez Installations: Storm Doors, Ext. Doors, Cabinets, Laminate, Floors, Faucets, Sinks, Toilets, Ceiling Fans, Appliances and many more. General Maintenance and Repairs. Serving Llano, Kingsland and Horseshoe Bay. Free Estimates in most cases. 20 Years Experience Burnet & Llano Counties Classified/Records STATEWIDES DRIVERS - COMPANY DRIVERS $1000 sign-on bonus. New, larger facilEMPLOYMENT ity. Home daily. 80% drop and hook loads. Family HELP WANTED health and dental insurAlco accepting applications ance. Paid vacation, 401k online at: Alco.com for plan. L/P available. CDL-A Group Manager Positions with 1-year tractor-trailer and Team Leader. experience required. 1-888703-3889 or apply online at STATEWIDES AUCTIONS Llano County Journal TEAM DRIVERS $2500 Sign-on bonus per driver. Super excellent home time options. Exceptional earning potential and equipment. CDL-A required. Students with CDL-A welcome. Call 1-866-955-6957 or apply online at YOU GOT THE DRIVE, we have the direction. OTR drivers, APU Equipped, Pre-Pass, EZpass, passenger policy. Newer equipment. 100% NO touch. 1-800-5287825 EDUCATION/TRAINING AIRLINES ARE HIRING Train for hands on aviation maintenance career. FAA approved program. Financial aid if qualified, housing available. Call Aviation Institute of Maintenance, 1-877-523-4531 CAN YOU DIG IT? Heavy equipment operator training. 3-week hands on program. Backhoes, bulldozers, excavators. Lifetime job placement assistance with National certifications. VA benefits eligible. 1-866362-6497 HIGH SCHOOL DIPLOMA from home. 6-8 weeks, accredited, get a diploma, get a job! No computer needed. Free brochure; 1-800-264-8330. Benjamin Franklin HS MEDICAL OFFICE TRAINEES needed! Train to become a medical office specialist at Ayers Career College. Online training gets you a job ready ASAP. Job placement when program completed. 1-888-368-1638. INTERNET HIGHSPEED INTERNET EVERYWHERE by Satellite! Speeds up to 12mbps! (200x faster than dial-up.)Starting at $49.95/ month, Call now and go fast! 1-888-643-6102 JOIN THE BUSINESS AND SERVICE DIRECTORY! ADVERTISE IN ALL HIGHLAND LAKES NEWSPAPERS FOR ONE LOW RATE! All Work Guaranteed 830.613.0136 325.248.0682 Did you hear the news? HCM again named one of top hospitals Hill Country Memorial Hospital in Fredericksburg has been named one of the nation’s 100 Top Hospitals® by Truven Health Analytics for the second year in a row. “To be a Top 100 US hospital two years in a row clearly recognizes the efforts of the remarkable HCM team as they serve others each and every day,” said Jayne E. Pope, HCM’s newly appointed CEO. “Our remarkable journey began five years ago under the leadership of Mike Williams and this is a result of his, and the HCM team’s leadership in an industry facing enormous challenges,” said Pope. “What we provide at HCM is sourced in the Remarkable HCM Values—Others First, Compassion, Innovation, Accountability and Stewardship. It’s simply the way we serve.” The 100 Top Hospital recognition follows quickly on the hills of HCM’s announcement two weeks ago of their ranking by Centers for Medicare and Medicaid as 1st in Texas and fourth in the nation for Patient Experience. HCM is one of seven hospitals in Texas receiving the 100 Top Hospital recognition. The nearest recipient to HCM is St. David’s in Austin. The Truven Health 100 Top Hospi- tals HCM has been recognized with this honor—2003, 2012 and 2013. Truven Health Analytics is a leading provider of information and solutions to improve the cost and quality of healthcare.. 25 edition of Modern Healthcare magazine. 33rd District Court J. Allan Garrett Presiding Llano, Texas March 7 at 1:30 p.m. CR6636 Daniel Wayne Logan, arraignment hearing, aggravated assault with deadly weapon (second degree felony); CR6637 David Charles Willard, manufacture deliver controlled substance PG1>4G<200G (first degree felony); CR6638 Jimmy Dale Lackey, burglary of habitation (second degree felony); CR639 Heather Teagan McQuatters, aggravated assault with deadly weapon (second degree felony); CR6547 Russell Scott Lewis, status, obstruction or retaliation (third degree felony); CR6578 Russell Scott Lewis, aggravated assault with deadly weapon (second degree felony); CR4051 Adam Morgan McDonald, burglary of habitation (second degree felony); CR6532 Ollie Joe Conely, assault family house member impeded breath/circulation (third degree felony); CR6583 Stephen Franklin Sheppard, evading arrest detention with vehicle (third degree felony); CR6609 Deanna Lynn Handsel, secure execution document deception involving State Medicaid Program >$1,500<$20K (state jail felony); CR6545 Roy Gonzalez, plea, For the abandon endanger child with intent to return (state jail felony); CR6554 Anthony Chance Hutson, possession of controlled substance PG2>1G<4G (third degree felony); CR6592 Adam Morgan McDonald, criminal mischief >$1,500<$20K destroy school (state jail felony); CR6599 Zhivago Kohrs, aggravated sexual assault (first degree felony) and sexual assault (second degree felony); CR5793 Wayne Alton Shaw, motion for early release of probation, injury child elderly disabled with intent of bodily injury (felony unassigned); CR6579 Charles Eugene Finke III, pretrial motion to revoke probation, burglary of building (state jail felony). Record Llano County Abstract of Judgment •James Michael Baird vs Dan Valentine, Plaintiff, $2,106 Assumed Names •Bill W. Roundup, 105-D Lachite, Horseshoe Bay, Terry Newby •Turf Nutrition & Tea, 300 Crest, Kingsland, James Roy Kizer ! e r e h It’s ! w o n Call Llano Weekly Classified Ads Call (325) 248-0682 - 9 am-1 pm to place your Llano classified or garage sale Call (830) 693-4367 -1 pm-5 pm ask the operator for Llano classified ads al i c e Sp ctory u d o r t In rices! P Deeds of Trust •Travis Hinson, 5.00 acres abstract 32, Llano County, Security State Bank & Trust •Brian D. & Rebecca E. Welling, 1100 The Cape #201, Horseshoe Bay, RandolphBrooks Federal Credit Union •Lloyd & Lisa Ruth, 912 Red Sails, Horseshoe Bay, Regions Bank Rodney K. & Sara A. Teague, 404 Sunstone, Horseshoe Bay, Pentagon Federal Credit Union •Derek E. & Susan R. Naiser, 351.348 acres abstract 516, Llano County, Capital Farm Credit •David Barnes, 1007 Broken Hills, Horseshoe Bay, RMC Vanguard Mortgage •George W. Sr. & Kathy Susan Cox, 18.09 acres abstract 707, Llano County, Capital Farm Credit •Luann Scobie and Donna J. Benton, 103 Park Road, Sunrise Beach, Independent Bank •Chaddick & Brandy Bartee, 110 Miradora, Horseshoe Bay, First Capital Bank of Texas •Timeless Texas Investments LLC, Lot 158 Summit Rock, Plainscapital Bank •Wiatrowski Family Trust, Lot 33006 Shoreline Townhomes, Stifel Bank & Trust Warranty Deeds •Joe & Lana Tighe, Lot 15 River Valley Ranch Airstrip, Betty M. Yokie and John O. Yokie Estate •Glenn R. Leduc, Lot 22233, Horseshoe Bay, Sirus Homayun •Derek E. & Susan R. Naiser, 351.348 acres abstract 516, Llano County, Thomas E. & Jane M. Clark •Jerry Dwayne & Connie Lee Newham, part lot 41 River View, Robert A. & Anna Womack •Stephen L. & Cynthia M. Lubke, Lot 15, Sunrise Beach, Christine Langenfeld Byron •Chaddick & Brandy Bartee, Lot 137A Escondido, Post Rail Builder LLC •Timeless Texas Investments LLC, Lot 158, Summit Rock, Summit Rock Communities LLC •R. L. Nobles, Lots 5, 6 Miller Addition, Llano, Charles Willbern & Vicky Ann Robinson •Charles Nave, Lots 19, 20 part 18, 21, Rose Hill, Deutsche Bank National Trust Co. •Dennis R. Johnson, Lots 1, 2, 3, 4, 5, 6, Miller Addition, Llano, Bert S. Jr. & Leann Myers and Kenneth & Delhia Myers •Curtis C. & Julie F. Borho, 986.554 acres and numerous abstracts, Llano County, •Sutton Family Partnership LTD and Sutton Family Limited Partnership •Malone Girls LLC, Lots 331, 332, 333, Buchanan Lake Village, Howard T. & Mary Wilhelm Llano County Jail Log The following have been booked into the Llano County Jail on the dates listed. Their inclusion in this list is not intended to be a judgment of guilt or innocence and should not be construed as such. Feb. 25 Matthew Wayne Manning, 44, of Llano, on a charge of driving while license invalid with previous suspension; released on bond. Aaron Sanchez, 27, of Lampasas, on a charge of driving while license invalid with previous suspension; released on bond. Feb. 26 Jeremy Scott Herron, 30, of Buchanan Dam, on charges of aggravated assault with deadly weapon and assault family/ house member impede breath/ circulation; release information unavailable. Schaneece Illeen Hunsaker, 39, of Rosenberg, on charges of possession of controlled substance PG 1<1G and an order to revoke bond-forgery; release information unavailable. Blaise Ward Paschall, 21, of Kingsland, on a charge of criminal trespass; released on bond. Feb. 27 Charles Jeffrey Brassfield, 31, of Llano, on a charge of sexual exploitation child; security transport. Ashlee Sheanel Knowles, 20, of Llano, on a charge of failure to display driver’s license; released to see judge. Christopher Glen Livers, 28, of Kingsland, on charges of theft of property >$1,500<$20K and possession of drug paraphernalia; released on bond. Feb. 28 Joe Arthur Ramon, 42, of Granite Shoals, on charges of public intoxication and failure to appear burglary of vehicle-Burnet County; released on bond. March 1 Marshall Elijah McDaniel, 35, of Burnet, on a judgment-driving while intoxicated; release information unavailable. Lana Kay Springer, 52, of Liberty Hill, on a judgment-pos- session of controlled substance PG1>200G<400G; release information unavailable. Shiloh Amon Wiggins, 35, of Round Mountain, on a judgment-driving while intoxicated 3rd or more; release information unavailable. March 2 John Anthony Birdwell, 25, of Buchanan Dam, on charges of possession of marijuana < 2oz, possession of drug paraphernalia, public intoxication and no driver’s license on person; released on bond. Robert Mathew Hogeda, 36, of Austin, on a charge of criminal trespass; released on bond. March 3 Daniel Thomas Barner, 24, of Kingsland, on a charge of driving while intoxicated; released on bond. Heather Mickal Cardullo, 30, of Kingsland, on charges of theft of property <$1,500 with two precious convictions and criminal mischief-Burnet County; release information unavailable. Wednesday, March 6, 2013 Page 9B Llano County Journal Religion Quilters have new group project The Texas Hills Quilters met Feb. 18 for its monthly meeting. Seems this group does more than create quilts. Members enjoyed a pot-luck lunch. Besides quilting, we’ve got some very good cooks. Several of the members continue to make lap robes and bags to hang on walkers for the local nursing homes. They also make pillowcases and laundry bags for the Cherokee Children’s Home. Non-perishable food items are collected and delivered to the Llano Food Pantry. The club has a new group project called “Project Linus”. The first Saturday of each month anyone and everyone is encouraged to meet at The Country Quilt Shop, 100 Exchange in Llano to assemble, sew, crochet or knit. You do not have to be a member of the Texas Hills Quilter group to participate. The blankets that are created are donated to seriously ill children or others that are in need within a three county community. Texas Hills Quilters meet the 3rd Monday of each month. Everyone is welcome to come join the fun of learning and sharing the love of quilting. Inquiries, contact 325.248.0300. Llano Church Briefs COURTESY PHOTO Taking a break through the food line are Wilma Holt, Kay Savage, Ron Thomas and Jackie Bertram. Church Services Calendar Bethal Tabernacle United Pentecostal, 401 W. Dallas, Llano 325.247.4680. Sundays: 10 a.m. worship. Wednesdays: 7 p.m. worship. Bible Baptist Church, 700 E. Young, Llano 325.247.5440. Sundays: 10 a.m. Sunday School, 11 a.m. children and adult worship, 6:30 p.m. worship. Wednesdays: 7 p.m. worship and prayer meeting. An Independent Baptist Church that uses the King James Version Bible. Buchanan West Baptist, 850 Lillian Dean, Buchanan Dam 512.793.2190. Sunday: 9:30 a.m. Sunday School, 10:45 a.m. worship, 6:30 p.m. worship. Wednesday: 6:30 p.m. worship. Calvary Apostolic Church, 1010 Ashton Avenue, Llano 325.956.9652. Sundays: 10 a.m. and 6 p.m. worship. Wednesdays: 7:30 p.m. Fridays: Youth 7 p.m. Chapel of the Hills Baptist Church, 19135 E. SH 29, Buchanan Dam 512.793.2453. Sunday: 9:10 a.m. Prayer, 9:30 a.m. Sunday School, 10:45 a.m. worship, 6 p.m. worship and youth program. Fifth Sunday: after service dinner-onthe-grounds Wednesday: 6 p.m. Bible study and youth program, 7 p.m. Praise singers and musicians practice under direction of Dennis Hoover. Third Wednesday: 6 p.m. potluck supper. Thursday: 9 a.m., food & clothes pantry from 9 a.m. to 12 p.m. or phone for appointment. Last Monday: Ladies Bunko Friendship Night at 6 p.m. This is a fun, social, dice game of 100 percent luck-no skills required. A light meal is hosted prior to the game. Please RSVP to 512.793.2453. Christian Worship Center 879 FM 3404, (Slab Road) Kingsland 325.388.4929. Sundays: 10:30 a.m. worship. Wednesdays: 7 p.m. worship. church of Christ 901 Lillian Dean, Buchanan Dam 512.793.2123. Sundays; 9:30 a.m. Sunday school, 10:30 a.m. worship, 6 p.m. worship. Wednesdays: 10 a.m. Ladies Bible Class, 6 p.m. Bible study. Church of Christ, 402 W. Main, Llano 325.247.4426. Sundays: 9:30 a.m. Bible Study, 10:30 a.m. worship, 6 p.m. worship. Wednesdays: 6:30 p.m. Bible study. Wednesdays 9:30 a.m. Ladies Bible Class. Covenant Prayer House 2651 East SH 29, Llano 325.247.7990. Saturdays: 6 p.m. worship. Cross & Spurs Cowboy Church East SH 29, Buchanan Dam (7/10 mile from RR 1431) 325.423.0539. Sundays: 9 a.m. Bible study, 10 a.m. worship. First Assembly of God, 301 SH 71 East, Llano 325.247.5962. Sundays: 9:45 a.m. Sunday school, 10:30 a.m. worship, 6 p.m. worship. Wednesdays: 7 p.m. worship. First Baptist Church of Kingsland, 3435 RR 1431 W., Kingsland 325.388.4507. Sundays: 9 a.m. traditional worship service, early Sunday School. 10:45 a.m. contemporary children’s church (ages 4 years – 6th grade), youth worship (7th – 12th grade), late Sunday School. 6 p.m. Evening worship. Wednesdays: 5 p.m. Family dinner (at cost). 6 p.m. Prayer meeting, Youth hang time, Team KID (ages 4 years – 6th grade), worship choir rehearsal, Mission U-Too Bible Study (adults). 6:30 p.m. Pastor’s Bible Study. Nursery provided for ages birth – 3 years during all services. First Baptist Church of Llano, 107 West Luce, Llano 325.247.4803. Sundays: 8:30 a.m. contemporary worship, 9:45 a.m. Bible study, 11 a.m. traditional worship, 6 p.m. worship and youth band rehearsal & game time. Wednesdays: 6 p.m. family meal, 6:45 p.m. children’s activities, adult Bible study, adult prayer meeting; 7 p.m. youth fusion. First Baptist Church of Sunrise Beach, 606 RR 2233, Sunrise Beach 325.388.4113. Sundays: 9:30 a.m. Sunday school, 10:45 a.m. worship, 6:30 p.m. worship. Wednesdays: 6:30 p.m. Bible study. First Baptist Church of Tow, 16529 RR 2241, Tow 325.379.3918. Sundays: 9:45 a.m. Sunday school, 11 a.m. worship, 6 p.m. worship. Wednesdays: 7 p.m. worship. First Christian Church, 1105 Oatman, Llano 325.247.5309. Sundays: 9:45 a.m. Sunday School, 10:45 a.m. worship. Wednesdays: 5 p.m. Bible study. First Presbyterian, 1306 Ford St., Llano 325.247.4917. Sundays: 9:45 a.m. Sunday School, 11 a.m. worship. Tuesdays: 7 a.m. Prayer in the parking lot. Wednesdays: 7 p.m. Bible Study. Thursdays: 12:05 p.m. Worship at Lunch-includes scripture, prayer and discussion. Second & Fourth Tuesdays: Jamming Session, bring your instrument or voice, everyone welcome. For more information call Jeff White 325.248.4114. Genesis Lutheran, 15946 SH 29, Buchanan Dam 512.793.6800. Sundays: 8:30 a.m. traditional worship, 9:45 a.m. Bible study, 11 a.m. contemporary worship with modern format and music. First & Third Mondays: 10:30 a.m. WINGS. Fourth Mondays: 9:30 a.m. Piecemakers quilting. First & Third Tuesdays: 10 a.m. to 3 p.m. Arts and Crafts Second Wednesdays: 2 p.m. fellowship. Fourth Sundays: after worship service fellowship. Grace Episcopal, 1200 Oatman, Llano 325.247.5276. Sundays: 9:30 a.m. Sunday school, 10:30 a.m. worship. Mondays: 6:30 p.m. AL-ANON; 8 p.m. AA. Thursdays: 8 p.m. AA. Highland Lakes Baptist, 716 RR 2900, Kingsland 325.3883540. Sundays: 9:45 a.m. worship, 10:45 a.m. Sunday school, 6 p.m. worship. Wednesdays: 7 p.m. prayer meeting. Highland Lakes Church of Christ, RR 1431, Kingsland 325.388.6769. Sundays: 9 a.m. Sunday school, 10 a.m. worship, 5 p.m. worship. Wednesdays: 7 p.m. Bible study. Highland Lakes United Methodist Church, 8303 RR 1431, Buchanan Dam 325.388.4187. Sundays: 9:30 a.m. Sunday school, 10:45 a.m. worship, Second Sundays: covered dish. Fourth Sundays: 8:15 a.m. early worship. Tuesdays: 10 a.m. Women Bible study, 1:30 p.m. recycled card workshop. First Wednesdays: 6:30 p.m. game night. Third Wednesdays: 2:30 p.m. United Methodist Women. Thursdays: 1:30 p.m. recycled card workshop. Hi-Way Of Hope Assembly 14201 RR 1431, Kingsland 830.598.2948. Sundays: 10:30 a.m. worship, 6 p.m. Sunday Night Live. Wednesdays: 7 p.m. Fellowship. Holy Trinity Catholic, 708 Bessemer, Llano, 325.247.4481. Saturdays: 5 p.m Vigil. Sundays: 10 a.m. Mass. Wednesdays: 7:30 a.m. Mass, Tuesdays/Thursdays/Fridays: 6 p.m. Mass. Kingsland Community Church, 1136 RR 1431, Kingsland, 325.388.4516. Sundays: 9:45 a.m. Sunday school, 10:45 a.m. worship. First Tuesdays: 9:30 a.m. Get Acquainted Coffee Day. Second Tuesdays: 7:30 a.m Men’s Breakfast. Llano Cowboy Church, SH 29 West, Llano. 325.3609. Sundays: 10 a.m. worship, 6 p.m. Bible study. Tuesdays: 7 p.m. worship. Llano Church of God of Prophecy, 807 Anniston Ave., Llano 325.247.1880. Sundays: 10 a.m. Sunday school, 11 a.m. worship, 6:30 p.m. worship. Wednesdays: 7 p.m. Bible Study. Thursdays: 6:30 p.m. prayer service. Lone Grove Church of Christ, 2625 CR 202, Llano. Sundays: 10 a.m. Bible class, 10:30 a.m. worship, 6 p.m. worship. Wednesdays: 7 p.m. worship Lutie Watkins Memorial United Methodist Church, 800 Wright St., Llano 325.247.4009. Sundays: 8:45 a.m. praise, 9:40 a.m. Sunday School, 5 p.m. Youth Fellowship, 7 p.m. choir. First Sundays: 9 a.m. Sunday school, 10 a.m. worship, 11:15 a.m. meal. Wednesdays: 6 p.m. youth. Our Lady of the Lake Catholic, RR 2233, Sunrise Beach Village 325.388.3742. Saturdays: 4 p.m. Mass. Packsaddle Fellowship, 508 RR 2900, Kingsland 325.388.8202. Sundays: 9:30 a.m. Adult and Children Worship Service, 10:45 a.m. Bible study for all ages. Pittsburg Avenue Baptist Church, 709 Pittsburg Ave., Llano 325.247.4042. Sundays: 9:45 a.m. Sunday school, 11 a.m. worship, 5 p.m. how to study the Bible, 6 p.m. casual service. Wednesdays: 6:30 p.m. how to study the Bible/nursery provided. Angel Food Orders: at the Fuel Coffee House deadline is second Monday of the month, pick up is the last Saturday of month at the church. Providence Reformed Baptist Church, 516 Juniper, Kingsland 830.265.0538. Sundays: 10:30 a.m. Sunrise Beach Federated Church, 105 E. Lakeshore Drive, Sunrise Beach 325.388.6835 or 325.388.3685. Sundays: 9 a.m. Worship, followed by a fellowship hour with refreshments. Third Sundays: Discussion after fellowship. Third Fridays: 6 p.m. Potluck Supper. St. Charles Catholic, 205 Trinity Dr., Kingsland 325.388.3742. Tuesdays-Fridays 8 a.m. Mass. Saturdays: 5:30 p.m. Mass. Sundays: 9 a.m. Mass. St. James Lutheran, 1401 Ford, Llano 325.247.4906. Sundays: 9 a.m. Christian Education, 10 a.m. Worship (Holy Communion on first & third Sundays). Last Sunday of each month: 10 a.m. less liturgy and more hymns. Fifth Sundays: 10 a.m. Super Singing with hymn requests and special music by Les Hartman. Third Wednesdays: 6:30 p.m. Family game night, bring finger foods. St. John’s Lutheran, RR 152, Castell 325.247.3115. Sundays: 9 a.m. worship. Following worship: Children’s Sunday school. River of Life Fellowship 2651 East SH 29, Llano 325.247.7990. Sundays: 10 a.m. Tow Church of Christ, CR 2241, Tow. Sundays: 10 a.m. Sunday school; 10:45 a.m. worship, 4 p.m. worship. Wednesdays: 6 p.m. worship. Trinity United Methodist Church, 142 Old Schoolhouse Lane, Castell 325.247.4238. Sundays: 9:30 a.m. worship, 10:45 a.m. Sunday school. Casual Dress. United Methodist Church, CR 408-D, Valley Spring (off SH 71 West). Sundays: 9 a.m. worship Valley Spring Primitive Baptist Church CR 407 (off SH 71 West), Llano. Second & Fourth Sundays: 10:30 a.m. Song service, 11 a.m. worship. Covered Dish follows. Cross and Spurs Cowboy Church will host Mike & Suzy Gayle as special singers on Sunday, March 10. Bobby G. Rice will sing on March 17. Watch for more news on the first arena event with Paul Daily from Wild Horse Ministries April 27 & 28. Highland Lakes United Methodist Church will observe weekly Lenten Lunches except Good Friday. A light lunch of soup and bread will be served at 11:30 a.m. followed by a short lesson. This year The Forty Days of Love Spiritual Adventure will be observed: each week perform a special task for someone else instead of or in addition to giving up something for Lent. March 11 will be lunch and learn with Dorothy Clark teaching crochet at 11:30 a.m. Bring your sack lunch. Genesis Lutheran Church will observe Lent with Wednesday weekly lessons through March 20.The weekly Lenten services begin at 5:30 p.m. with a light meal and lesson begins at 6:30 p.m. The “Celebrate Green” fundraiser for the building remodel will be a dinner and silent auction March 15. Tickets are on advance sale now, $12; will not be sold at the door. Menu is Irish stew, cornbread, salad, desserts. Proceeds will be used for expanding the church structure at the community area. Please contact the church at 512.793.6800, for more information or to purchase tickets. Lutie Watkins Memorial United Methodist Church will observe weekly services through Lenten season. Exercise classes have changed time back to 9 a.m. on Monday, Wednesday and Friday. First Baptist Church Llano will host AWANA Wednesday at 6:30 p.m. It is Spring Forward, Leaps & Bounds Night and Grand Prix Cars will be handed out. AWANA is dismissed March 13 for spring break. There will be combined worship March 10 at 11 a.m. Betty Sue Hoy will lead the Dave Ramsey Financial Peace class on Tuesdays from March 19- May 14 at 6:30 p.m. Bless Llano Now Outreach Center is now open at 911 Bessemer. The Center will distribute free supplies (as available) to citizens of Llano. This ministry will survive only by donations. Please make a donation today. Our Lady of the Lake Catholic Church, Sunrise Beach, will hold Easter Mass Sunday, March 31 at 11 a.m. There will be no Saturday Mass on Easter weekend. St. Charles Catholic Church, Kingsland, will observe Holy Thursday Mass at 7 p.m. and Holy Hour after Mass until midnight. Veneration of the Cross will be on Good Friday at 7 p.m. Saturday Easter Vigil will be at 8:30 p.m. Easter Sunday Mass will be at 9 and 10:30 a.m. Prayer on the Square, Interdenominational group of Christians meeting in the gazebo on the square in Llano every Thursday, announce the time has changed to 6 p.m. for prayer and also at Prosperity Bank in Kingsland at 6 p.m. For prayer anytime call 325.247.1880. 700 Springs Ranch tour set The 700 Springs Ranch located southwest of Junction will welcome visitors Saturday, March 9 who want to see the beautiful springs that feed the South Llano River with crystal clear water. Llano City citizens rely on these and other springs in that area as the most important source of their municipal water supply. The tour will begin promptly at 10 a.m. at the Kimble County Courthouse in Junction with a 20 minute motorcade to the ranch. Visitors may bring a picnic lunch to enjoy after informative program which will include the history of the ranch and surrounding area. Additionally you may wish to include some comfortable chairs for yourself and your passengers. Please note that the ranch owner does not allow any fishing or swimming in the River on his property that day. Mrs. Frederica Wyatt, Chairperson of the Kimble County Historical Commission will be pleased to answer any questions you have about the tour. She can be contacted at 325.446.4219 or 325.446.2477. We’ve Moved! We invite you to visit us at our new location 701 Ave G • Marble Falls Granite Wealth Management, LLC Wells Fargo Financial Network, LLC 830.693.6760 Page 10A Llano County Journal Wednesday, March 6, 2013 Sports Llano’s special Olympians compete at Texas State event On Saturday, March 2, Llano’s special olympics team traveled to Texas State University in San Marcos to compete in the area basketball competition. Over 25 teams came from the area to compete in individual skills, team-building, medals in their divisions. Caleb Hinton and Hannah DeVault both earned bronze medals. Valerie Ozanne (head coach) and Kerry Harvey (coach), for the Llano Special Olympics Team said that all the athletes have put forth excellent effort and it has paid off. The team will begin practices this week for the Track and Field Competitions to be held in April. Jackets: Collects first victory From Page 1A Lampasas 17, Llano 10: The loss that eliminated Llano was as different as nightand-day from Thursday night’s stirring win over Harper. Llano took a 10-7 lead into the seventh inning but five pitchers gave up 10 runs to hand the Badgers a 17-10 win. Starter Isaac Hutto pitched well Saturday afternoon in what was the tournament’s consolation championship game. Hutto pitched five and two-thirds innings, yielding seven runs on six hits. He walked four and hit two Badger batters. Hutto, who plays all three team sports for the Yellow Jackets, threw 120 pitches before being lifted in the sixth inning for Eli Tiffin. Hutto, who had three base hits, moved to first base. Llano led by three at the time. Relievers Tiffin, Storey Tatsch, Chance Ware and Holden Simpson yielded four hits, three walks and hit two batters. Three passed balls also hurt the Jackets’ cause. After the Badgers scored 10 runs, coach Ridings turned to senior Rhett Brooks. Brooks dominated, getting two quick outs to end the uprising, although the damage had been done. Taylor Sorenson was the hitting starof-the-game for the Yellow Jackets. His triple in the fifth plated three LHS runs. Catcher Will Siegenthaler reached base four times. It was a bitter pill to swallow, but this early-season tournament will be long forgotten by the time district play begins at 7 p.m. March 19 at – yep – Lampasas. Fourteen teams – including Faith Academy – will participate in the Hill Country Baseball Tournament March 7-8-9 at Llano High. The Yellow Jackets play at 4 p.m. Thursday against Fredericksburg. Lampasas Llano 101 102 302 340 10 – 17-9-1 0-10-12-2 Softball: Prepares for district From Page 1A bashed a homer to lead the way. Other hitters in the title game were Lacey Redden (2-for-3), Claire Williams (2-for-3) and Jessica Wunderlich (1-for-3, two RBI). Llano opened the tournament with a 5-4 win over the San Marcos JV. Williams, Clough and Mize all had two hits and Clough and Mize batted in runs. In Game 2, the Lady Jackets shut out Lytle, 5-0, with Jackson pitching. Jordan, Williams and Wunderlich had two hits each. Jordan homered and drove in two. Wunderlich also drove in two runs. Llano 8, Harper 0: Catcher Redden was 3for-4 with two triples and an RBI. Williams drove in three runs on three hits. Jackson picked up the pitching victory. Llano 8, Johnson City 4: Jordan belted a 3-run homer and Mize was 2for-2 with an RBI. Clough also added three hits to the attack. The Lady Jackets will open District 8-3A action March 15 against Liberty Hill at Llano. The district opener starts at 2:45 p.m. Time for a Spring Cleaning? Advertise with us! GARAGE SALE We can help you have a successful sale! Plus 1 x 1.5 Classified Display Ad Packages Starting at 2995 $ Get Your Garage Sale Kit And Make Your Event a Success! Each Kit Includes: • 2 Fluorescent 11” x 14” All-weather Signs • 140 Bright Pre-Priced Labels • Successful Garage Sale Tips • Pre-Sale Checklist • Sales Record Form • E-Z Stake Assembly Kit including: (2) 24” Wooden Sign Stakes (2) Assembly Bands Upgrade Your Kit to include other garage sale tools! • Calculator • 2-pocket money apron • No Parking/Pay Here SIgns • Permanent marker • Oversize Price Tags for larger items For more information contact: Christa Delz 830.693.4367 PHOTO BY TOM SUAREZ Courtney Karnowski hits a drive during the Llano Tournament last week. The Llano girls golf team will be at the Comfort Tournament march 6. Faith Academy loses in state semifinals BY MARK GOODSON LCJ SPORTS EDITOR Faith Academy couldn’t make free throws and failed to deliver in crunch time Friday in the TAPPS Division II state semifinals at Mansfield’s Lake Ridge High School. Sherman Texoma Christian knocked off the Lady Flames, 46-43, despite neither team scoring in the final 1:24 of the wide-open game. Shooting wasn’t pretty by either team throughout the contest, especially at the free throw line where Faith Academy shot 43 percent, making just 16 of 37 attempts. Texoma Christian, which has won four of the last five TAPPS Division II state titles, was seven of 12 from the free throw line. Texoma Christian relied on its post player Hannah Bentson to score 23 points and lead the effort for the Lady Eagles, who have beaten the Flames in the Final Four before. Faith Academy’s Bailey Brinkley led the Lady Flames with 18 points, followed by Juliette Fisher’s 11 points. “You can’t beat any good team and shoot free throws like that,’’ Faith Academy coach Jerry English said. “It was tough for us to try to get back into the game.’’ Faith Academy trailed by as many as seven (13-6 with 1:20 left in the first quarter), but took the lead and were in command after Brinkley made a strong move down the right side of the lane for an old-fashioned 3-pointer to give Faith Academy a 1413 lead just 30 seconds into the second quarter. Fisher had a strong game on defense and pulled down 10 rebounds. Nine of her rebounds were on defense. “We could have forced them to the baseline more,’’ Fisher said. The Lady Flames shot 38 percent from the floor (13 of 34) with their 43 percent free-throw shooting. “We rushed it a little bit (on the free throws),’’ Fisher said. “This was good for us, it’ll make work that much harder to get back again.’’ The Lady Flames held the advantage throughout the second quarter but couldn’t capitalize on free throw attempts. The game was knotted at 21 apiece at halftime. The game stayed close throughout the rest of the game. In the final 36 seconds, Faith Academy had two good opportunities to cut into the 46-43 lead. Bentson had given the Lady Eagles the cushion with an inside shot against the Faith Academy defense. Faith Academy’s JoAnna Piatek, one of the team’s best passers, managed to get a 12-footer off with 30 seconds left and Juliette Fisher had a medium-range jumper with less than 10 seconds for the team’s best chances to win. Both shots were just off the mark, but were good shots. With four seconds left, there was a chance for the Lady Flames to get the ball back for one last shot. The officials took time to rule on a possession and the ball was given to the Lady Eagles. Texoma Christian inbounded the ball and ran out the clock. Faith Academy played solid defense and rebounded well, but the team’s shooting was a far cry from what they had shown all season. Two seniors finished their career. Brinkley hit five of nine shots from the floor and seven of 11 from the line in her final game for the Lady Flames. Rebecca Graham scored three points and added three rebounds as the other senior on the team. Returning starters are Taylor Denton, Piatek, Kristen Cherry and Fisher. Brinkley finished her senior season with 651 points (16.7 average) and made 50 percent of her field goals for the season. She led the team in free throw shooting, was second in rebounds, first in blocks, first in steals and second in assists. 3&/55)&1"-.4 #&"$))064&GPS EJTDPVOUFE XJOUFSSBUFT Port PortAransas, Aransas,TX TX Rent Rent nightly nightly or or 33 nights nights get get the the 4th 4th night night FREE! FREE! Early check in and late check outs... at no extra cost! 830.265.1234 March 6 - 12, 2013 Volume 7, No. 45 Lake Country Life A publication of the Highland Lakes Newspapers: Burnet Bulletin, The Highlander and The Llano County Journal ~ Cover Story, Page 9 Get our App! Put Highland Lakes businesses, events and coupons at your fingertips! Now for iPhones & Androids! or see us online at m.lakecountrylife.com SPECTACULAR VIEWS! VOTED BEST BUILDER 6 YEARS IN A ROW! THIS PROPERTY IS GORGEOUS 3.95+- ACRES WITH AN EXEMPLARY 4157 SQ FT CUSTOM STONE HOME WITH ALL OF THE BELLS AND WHISTLES, FABULOUS ENCLOSED POOL, AND A DETACHED 2000 SQ FT BUILDING HOSTING AN OFFICE, STORAGE, FULL BATH, RV STORAGE AND CONNECTIONS, AND A WORKSHOP; EASY GUESTHOUSE OPTIONS. THIS PROPERTY WAS DESIGNED TO OFFER THE GREATEST CONVENIENCES IN ONE ALL-INCLUSIVE PACKAGE. COUNTRY LIVING AT ITS FINEST! Winner mes Best of Category 1 Best Craftsmanship Best Interior Design Lake Country Events W W W . L A K E C O U N T R Y L I F E . C O M Ongoing ~Inks Lake State Park - 8 a.m. to 10 p.m., Park Road 4, Burnet. Scenic vistas overlook granite hills and 803-acre Inks Lake. Park offers camping, fishing, picnicking, wildlife observation, swimming, boating, and water sports. Campsite reservations, call 512.389.8900. ~Eagle Eye Observatory Star Viewing, Live Music & Programs by the Traveling Naturalist - times, Canyon of the Eagles, 16942 R2341 on NE Lake Buchanan, Burnet. For dates and times of appearance, call 512.334.2070 or visit canyonoftheeagles.com. ~Geology Tours - Longhorn Cavern State Park, Park Road 4, Burnet. For reservation, call 512.756.4680. ~Weekend Tours at Westcave Preserve - 10 a.m., noon, 2 p.m. and 4 p.m., Saturdays and Sundays, 24814 Hamilton Pool Rd., Round Mountain. Open to the public. Visit westcave.org or call 830.825.3442. Through March 22 ~Knights of Columbus Fish Fry - Fridays, 5:30 p.m., Mother of Sorrows Parish Hall, 507 Buchanan Dr. Burnet. The Knights of Columbus 8935 will hold a Fish Fry every Friday during Lent. Money raised goes to support the Scholarship Program for students at Burnet High School. For information, 512.756.4410. March 9 ~Ladies Play Day - Burnet’s Historic Square and all around town, Burnet. Presented by Burnet Association of Merchants. Over 30 participating businesses. Door prizes everywhere, discounts and fun. Come and have a good time. For information, 830.385.5002. ~The Painterly Approach - noon to 5 p.m., Marta Page 2 Lake Country Life Stafford Fine Art, 112 Main St., Marble Falls. An informal come and go invitation to watch an artist at work. Featuring Bob Rohm will also have a book signing. For information, 830.693.9999. March 9, 10 ~Race Across America - Shoreline of Lake Marble Falls and the Colorado River, Marble Falls. Come ride for fun, challenge yourself or race against the top endurance racers in the world. Bring your friends. Come for a day or the entire weekend. Register online at raamchallenge.com or on March 8, 4 p.m. For information, director@souleventsusa. com. March 9, 11 - 15 ~Spring Break at the Pioneer Museum - 10 a.m. to 5 p.m. daily. Closed Sunday, March 10, Pioneer Museum, 325 West Main St., Fredericksburg. Activities include making rope, quilting, spinning, hammering at the forge, and more. Re-enactors will help experience the days of Mountain Men, Buffalo Soldiers, and more. For information, 830.990.8441. March 15 - 17 ~State High School Bass Fishing Championship - Marble Falls will be the home for the Texas High School Bass Fishing State Championship for the next two years. We expect many of the over 300 High School anglers from around the state to bring their families to town for the tournament weekend. For information, 830.693.2815. ~World Series Team Roping - Friday, 10 a.m., Saturday and Sunday, 8 a.m., Llano Events Center, 2259 Ranch Road 152, Llano. Join us at this new venue for Shelley Productions World Series roping. For information, 325.247.5354 or www. shelleyproductions.net. Zina Rodenbeck Ho 2012 Parade of RE/MAX of Marble Falls 808 9th Street • Marble Falls, TX 78654 zinasells@gmail.com • 830-265-0310 • March 16 ~Welcoming Party at the Train Depot - 11:30 a.m., Austin Steam Train, Burnet. The Austin Steam Train will be hosting a South African television production starring country singer Juanita. There will be a train robbery, gunfight, Frito pies and chili, and a Petting Zoo for the kids. For information, 512.477.8468 or. ~Master Gardener’s Lawn and Garden Show - 9 a.m. to 1 p.m., St. James Lutheran Church, 1401 Ford St., Llano. Join the Master Gardeners as they demonstrate and have informative talks to help your garden and lawn spring back this year. For information, 325.247.5354 or llanochamber.org. ~Texas History Day - 7 p.m., Pioneer Museum, 325 W. Main St. Fredericksburg. Old West Rangers, Joaquin Jackson, other field reenactments, Duke Davis, and Austin Ladd Roberts will be some of the additional re-enactors. Dinner at 7 p.m. and concert by Mike Blakely at 8 p.m. For information, 830.990.8441. March 23 ~Annual Chamber Banquet - 5:30 p.m., Lakeside Pavilion, 307 Buena Vista, Marble Falls. We are excited about coinciding the Annual Banquet with the opening of the Visitor’s Center. We will run shuttles from the Pavilion to the Visitor’s Center prior to and after the banquet. For information, 830.693.2815. ~15th Annual Hill Country Lawn and Garden Show - 9 a.m. to 4 p.m., Burnet Community Center, 401 E. Jackson St., Burnet. Vendors feature plants for every garden, lawn and garden equipment, and yard décor are also available for purchase. There will be informative speakers, demonstrations, and a special children’s area. For information, 512.756.4297. ~Texas Topaz Day - 10 a.m., Historic Mason Square, 126 Ft. McKavitt St., Mason. Meet gem cutters in person, see live cutting demonstrations, and join a rock-hounding guided tour. Kids can spend the day as a quest of the International Gemologist Apprentice Program. For information, 325.347.0475 or. March 24 ~Annual Faceting Seminar - 9 a.m. to 5 p.m., Wildlife Ranch Lodge, 100 West TX 377, Mason. Join us for the Annual Faceting Seminar presented by the Texas Faceter’s Guild. Gem faceters and enthusiasts have the opportunity to learn from Texas gem cutters, and enjoy lectures by industry experts. For information, 325.347.0475,. March 30 ~Family Easter on the Vineyard - noon to 4 p.m., Fall Creek Vineyards, 1820 County Rd. 222, Tow. Easter Basket lunches on the patio, live Gospel music by the Lake Bottom Jazz Trio, wine toss for adults, egg hunt for the kiddos and a special appearance by the Easter Bunny. For information,. ~Easter Fires of Fredericksburg Pageant - 8 p.m., Gillespie County Fair Grounds, 530 Fair Dr., Fredericksburg. A story of bunnies and Indians, history and legend, and the Gillespie County Fair and Festivals Association, Inc., is rekindling the telling of that chapter in local history with the presentation of the Easter Fires of Fredericksburg Pageant. For information, 830.997.2359. ~@Last Llano Art Studio Tour 2013 - 10 a.m. to 6 p.m., Llano. “We live here, we create here.” Come join our local artists in their studios and homes for showings, sales, and demonstrations of various media such as clay, fabric, glass, iron, leather, metal, paint, photography, and wood. For information, 325.247.5354 or llanochamber. org. April 5, 6 ~Mason Art Walk - Friday until 7 p.m. , Saturday until 5 p.m., Mason. Art Walk starts Friday and continues through Saturday allowing members to welcome in art lovers. New lighting was installed around the Square lights up our sidewalks for safety of patrons. For information, 325.347.0475. March 6 - 12, 2013 Dining with gluten awareness For many diners, gluten versus gluten free foods are a significant, daily dietary concern. Gluten, with roots in the Latin word “gluten,” meaning glue, is a protein composite found in food that is processed from wheat or a related grain. Gluten is found in breads, baked goods and starchy meals. Gluten free options in restaurants are highly beneficial to people who cannot have products with gluten in them. People who have celiac disease or gluten intolerance cannot eat gluten. Some studies show that those with autism may see behavioral improvement when practicing a gluten free diet. In addition to clinical reasons, many choose gluten free foods based on general wellness. “One in five people normally have intolerances to gluten,” said Janet Wilson, owner of Tea Thyme Café on US 281 in Marble Falls. Wilson stands by the gluten-free diet, and offers a completely gluten free menu to patrons. “We don’t bring in anything with gluten, so there’s no contamination,” she said. “Gluten is like glue. It clogs up the intestinal pores and your body starts starving.” Wilson said that a gluten free diet can cure common ailments. “People start to feel better when they’ve gone off gluten,” she said. “They have more energy, no aches and pains, no more sniffles.” Tea Thyme Café offers many types of tacos, DINING several gluten free pasta options and is made to order. Pecan Street Brewing Company on Pecan Avenue in Johnson City has Americana offerings minus the gluten. “We have gluten free pizza and rolls for burgers and sandwiches,” said owner Patty Elliott. “Our salad of course is gluten free, as well as our mashed potatoes, chicken breast and steaks. We can also substitute oil and vinegar for STAFF PHOTO BY ALEXANDRIA any dressings that include gluten.” RANDOLPH The brewery also plans to add gluten free This gluten free panini can be found on the menu beer to their menu. Russo’s Texitally Restaurant on Steve at Saucy’s Restaurant in Cottonwood Shores. Hawkins Parkway in Marble Falls can also spinach pizza, garbanzo bean salad, a hummus make gluten free substitutions for patrons. sandwich, and many other breakfast and lunch “We sometimes substitute zucchini options, all gluten free. Prices range from $6.75 spaghetti for pasta,” said Kristal Ontiveros, to $8.95. assistant manager, “or we can make something Saucy’s Restaurant and Catering on US simple like ravioli with (gluten free) tomato 281 in Cottonwood Shores is another location sauce. We’re also working on getting gluten free to find gluten free food for lunch or dinner. breads.” They offer chicken marsala with mushrooms, Francesco’s Italian Restaurant and jalapeno lime tilapia, rustic sausage and polenta Pizzaria on US 281 in Marble Falls has pizza, and options from their salad and pasta bar that spaghetti, and several meals options without are gluten free. Prices range from $10 to $16. gluten, including grilled eggplant and a pesce “We try to have a little something for griglia – grilled salmon. everybody,” said Luciana McKeown, owner It’s All Good Bar-B-Q on TX 71 in of Saucy’s. “We try to have at least one item Spicewood offers “Meat by the Pound.” Several in every category – one dessert, one appetizer of their sides including pinto beans and green – that is gluten free.” beans are naturally gluten free. McKeown said that the pasta bar features Although not every restaurant will have gluten free options, there are ways to eat out and still be gluten free. Meals with no wheat products and plates with meat and fresh vegetables are usually gluten free. Avoid sauces, many of which have gluten in them. For salad dressings, substitute oil and vinegar. ~Alexandria Randolph Gluten Free Barbecue Sauce 1/2 cup gluten free ketchup 1/2 cup tomato sauce 1/2 cup apple cider vinegar 1/2 cup pineapple juice 1/2 cup brown sugar 1/4 cup gluten-free Worcestershire sauce 1/4 cup water 1 medium chopped onion 1/4 teaspoon celery powder (not salt) 1/2 teaspoon paprika Combine all ingredients in a heavy, medium sized saucepan. Bring to a boil and simmer on low for about 10 minutes. Cool and store in the refrigerator. Yields 1.5 pints of sauce. Serve over grilled chicken, ribs, pulled pork, burgers or brats. It is also a good dipping sauce for gluten free chicken tenders. ~ Recipe courtesy of Teri Gruss from Gluten Free Cooking at About.com Enjoy Weekends on the Waterfront! Affordable Dining, Drinks & Fun! Deck Opens This Saturday March 9 • 11 am Afterward Fridays 6 pm Sat. & Sun. 11 am E “The Best Family Traditions Start in the Kitchen.” March 6 - 12, 2013 Hwy. 281 Bridge at 700 First St. • Marble Falls, Texas Lake Country Life Page 3 W W W . L A K E C O U N T R Y L I F E . C O M E Owned & Operated by the Nguyen Family since 1987 Open: Sun-Thurs 11:00 am - 9:00 pm Fri & Sat 11:00 am - 10:00 pm CHINA KITCHEN 830-693-2575 Order online at: Dine In or Take Out!! Orders to Go! 705 First Street, Suite 102 Marble Falls, TX Across from the Hampton Inn Lake Country Life Published weekly by Highland Lakes Newspapers: The Highlander, Burnet Bulletin, The Llano County Journal Sat • March 30th Noon to 2 pm W W W . L A K E C O U N T R Y L I F E . C O M Easter Basket Lunch Call to Reserve Now! 325.379.5361 Enjoy Easter at Fall Creek! • Noon to 4 pm ~ Live Gospel music Lake Bottom Jazz Trio • Photos with the Easter Bunny • 2:30 pm ~ Children’s Easter Egg Hunt • 3 pm – 4 pm ~ Adults toss rings for wine Headquarters: 304 Gateway Loop Marble Falls, TX 78654 Subscriptions: 830.693.4367 or visit our website: For Advertising, please ask for a sales consultant at 830.693.4367 Please send news and calendar items to: lclife@highlandernews.com Editor & Publisher: Roy E. Bode Associate Publisher: Ellen Bode Editor: Phil Schoch Advertising: Tina Mullins, Lora Cheney, Sally McBryde, John Young Designers: Melanie Hogan, Sarah Randle, Mark Persyn and Eric Betancourt 1820 CR 222 Tow, TX 78672 Open: M-F 11-4 • Sat 11-5 • Sun 12-4 Find more information on Fall Creek at: Page 4 Lake Country Life Cover: This scene is something that every topaz hunter dreams about when headed for the annual Celebration of Texas Topaz Day in Mason. Cover story, page 9. Cover photo illustration by Melanie Hogan. Lake Country Restaurants Bertram El Rancho, 535 TX 29, 512.355.3759 Good Graz’in Café, 240 W. TX 29, 512.355.9340 Hungry Moose Family Restaurant, 360 W. TX 29, 512.355.3855 Las Rosas Mexican & American, 102 Castleberry Court, 512.355.3542 Young Guns Pizza and Cafe, 525 I TX 29, 512.355.2432 Buchanan Dam Area Bluffton Store, RR 2241 and RR 261, Bluffton, 325.379.9837 Hoover’s Valley Country Cafe, 7203 Park Road 4 W., 512.715.9574 Reverend Jim’s Dam Pub, Great food, good views and cold beer, 19605 E. TX 29, 512.793.3333 Rolling H Cafe´, 318 CR 222, 325.379.1707 Tamale King, 15405 E. TX 29, 512.793.2677 The Dam Grille, Always fresh, always good, 15490 E. TX 29, 512.793.2020 Burnet Aranya Thai Restaurant, 1015 E. Polk St., 512.756.1927 Burnet Feed Store BBQ Restaurant, 2800 S. Water St., 512.715.9227 The Overlook at Canyon of the Eagles, spectacular lakeside dining & resort, 16942 RR 2341, 800.977.0081 Café Twenty-Three Hundred, Great homestyle food at affordable prices, 2300 West TX 29, 512.756.0550 Chicken Express, 1510 S. Water St., 512.756.9191 Crazy Gal’s Café, 414 Buchanan Drive., 512.715.8040 Dairy Queen, 502 S. Water St., 512.756.2161 Don Pedro’s Mexican Food, 609 E. Polk St., 512.756.1421 El Rancho, 608 E. Polk St., 512.715.0481 Gude’s Bakery & Deli, 307 W. Polk St., 512.715.9903 Hacienda El Charro No. 2, 306 Water St., 512.756.7630 Highlander Restaurant & Steakhouse, 401 W. Buchanan Dr., 512.756.7401 Las Palmas, 200 S. West St., 512.234.8030 Juanes Mexican Restaurant, 504 Buchanan Dr., 512.715.0415 Las Comadres, 1001 S. Water St., 512.715.0227 Longhorn Cavern Grill, 6211 Park Road 4 (Longhorn Caverns), 512.756.4680 McDonald’s, 200 N. Water St., 512.715.0066 Payne’s BBQ-Shack, 616 Buchanan Dr., 512.8BBQ(227) Pizza Hut, 701 Buchanan Dr., 512.756.6918 Post Mountain BBQ, 310 S. Main St., 830.613.1055 Shangra-Lai, 1001 S. Water St., 512.156.7800 Sonic Drive-In, 904 Buchanan Dr., 512.756.8880 Storm’s, 700 N. Water St., 512.756.7143 Subway, 804 E. Polk St., 512.715.9430 Tea-Licious, 216 S. Main St., 512.756.7636 Texas Pizza Co., 903 Water St., Suite 400, 512.715.8070 The Cookie Café & Bakery, 107 E. Jackson St., 830.613.0199 The Maxican, 3401 S. US 281, 512.756.1213 Whataburger, 402 E. Polk St., 512.756.1507 Granite Shoals Autenticamente El Mexicano Taqueria Restaurant, 4110 Valleyview Lane, 830.596.1699 El Tapatio Mexican Restaurant, 6924 W. RR 1431, 830.598.2394 Farm House, 8037 W. RR 1431, 830.598.2934 La Cabana Mexican Food Restaurant, 7005 Hwy. 1431, 830.598.5462 Horseshoe Bay & Cottonwood Shores Hole in 1 Sports Bar and Grill, 7401 West FM 2147, 512.731.5320 Julie’s Cocina, 4119 W. RR 2147, Plaza del Sol, 830.265.5804 Lantana Grill & Bar, 200 Hi Circle N. 830.598.8600 On the Rocks, 4401 Cottonwood Dr. 830.637.7417 Pizza Mia, 4119 RR 2147, Ste. 3. Plaza del Sol, 830.693.6363 Rick’s Cowtown Bar-B-Q, 3803 RR 2147 West, 512.755.3963 Saucy’s Restaurant, Catering and Cooking Classes 4005 Hwy 2147, A, 830-693-4838 Subway, 4823 W. RR2147, 830.693.7799 Kingsland Alfredo’s Mexican Restaurant, 4139 RR 1431, 325.388.0754 El Bracero, 1516 RR 1431. 325.388.0022 Dairy Queen, 2000 W. RR 1431, 325.388.3160 Grand Central Cafe, 1010 King Court, 325.388.6022 Kingsland Coffee Co., 1907 RR 1431, 325.270.0863 Lighthouse Grill and Lounge, 118 Club Circle Dr., 325.388.6660 Mr Gatti’s Pizza, RR 2900, 325.388.6888 Mosca’s, 1640 RR 1431, 325.388.6486 Sonic, 1605 RR 1431, 325.388.2021 Spyke’s Bar-B-Que, 14601 W. RR 1431, 325.388.6996 Sweet Things Bakery, 3003 RR 1431, 325.388.3460 Subway, 1133 RR 1431, 325.388.2433 March 6 - 12, 2013 Marble Falls Arby’s, 1301 US 281 N., 830.693.9602 Bella Sera, 1125 US 281, 830.798.2661 Blue Bonnet Cafe, 211 US 281, 830.693.2344 Brothers Bakery, 519 US 281, 830.798.8278 Charlie’s Country Store and CafĂŠ, 1406 S. US Hwy 281, 830.693.5922 Chicken Express, 2408 US 281, 830.693.3770 Chili’s, 702 First St., 830.798.1298 China Kitchen, a Marble Falls tradition for Chinese, 705 First St., 830.693.2575 Chuspy’s Burritos, 1808 US 281 N, 830.693.1407 Dairy Queen, 915 RR 1431, 830.693.4912 Darci’s Deli, 909 Third St., 830.693.0505 Doc’s Fish Camp & Grill, Best seafood! Live music, Thurs-Sat, 900 RR 1431 W. and US 281, 830.693.2245 March 6 - 12, 2013 Double Horn Brewing Company, 208 Ave. H, 830.693.5165 El Rancho, 2312 N. US 281, 830.693.4030 Francesco’s Italian Restaurant & Pizzaria, Mama Mia! A local favorite for traditional Italian, 701 US 281, 830.798.1580 Ginger & Spice, 909 Second St., 830.693.7171 Golden Chick, 1507 W. RR 1431, 830.693.4459 Grand Buffet, 1208 RR 1431 830.693.7959 Hidden Falls, 220 Meadowlakes Dr., 830.693.4467 Houston’s Depot, 307 Main St., 830.637.7282 Inman’s Ranch House Bar-B-Que, 707 Sixth St., 830.693.2711 It’s Tea-Thyme, 2304 Hwy 281 N., 830.693.5273 Janie’s, 710 Ave. N, 830.693.7204 Janie’s Tacos & Deli, 909 Avenue H. 830.798.8226 Ken’s Catfish BBQ & Bakery, 1005 Main St., 830.693.5783 KFC Long John Silvers, US 281 & N. Ollie 830.798.2532 Lake Country Lanes, 112 North Ridge Rd., 830.693.4311 Main Street Coffee, 108 Main St., 830.613.5054 Margarita’s Mexican Restaurant & Cantina, 1205 W. RR 1431, 830.693.7434 McDonald’s, 1605 W. RR 1431, 830.693.4911 Noon Spoon CafĂŠ, 610 Broadway, 830.798.2347 Papa Murphy’s, 1008 US 281, 830.693.9500 Peete Mesquite BBQ, 2407 US 281, 830.693.6531 R Bar and Grill, Third & Main, 830.693.2622 Real New Orleans Style Restaurant, 1700 W. RR 1431, 830.693.5432 River City Grille, 700 First St., fabulous food, affordable prices, views and entertainment, 830.798.9909 Russo’s Texitally Cafe, a little Italy, a little Texas, 602 Steven Hawkins Pkwy., 830.693.7091 Schlotzsky’s Deli, 2410 US 281 N, 830.798.9333 Sonic Drive In, 1405 US 281, 830.693.5234 Sportsman’s Cafe, 14426 RR 1431, 830.693.0605 Storms Drive In, 1408 W. RR 1431, 830.693.0012 Subway, 318 US 281, 830.693.8980 Super Taco, 2200 US 281, 830.693.4629 Tea Thyme CafĂŠ, 2108 C US 281, 830.637.7787 The Taco Lady, 2101 RR 1431, 830.693.6090 Taco Bell, 1510 W. RR 1431, 830.693.2345 Taco Casa, 1400 W. RR 1431, 830.693.7789 Thai Niyom, 909 US 281, 830.693.1526 Tree House Bar & Grill, 806 Main, 512.755.7640 Wendy’s Old Fashioned Hamburgers, 1309 Mormon Mill Rd., 830.693.1304 Whataburger, 1204 US 281, 830.693.9149 Spicewood Area Angel’s Icehouse, 21815 TX 71, 512.264.3777 Down Under Deli & Eatery, 21209 TX 71 West, 512.264.8000 It’s All Good Bar-B-Q, 22112 TX 71 W., 512.264.1744 J5 Steakhouse, 21814 Hwy 71 West, 512.428.5727 La CabaĂąa, 21103 TX 71, 512.264.0916 Lee’s Almost by the Lake, Pace Bend & Bee Creek Rd., 512.264.2552 Little Country Diner, 22000 TX 71 W., 512.264.2926 Moonriver Bar & Grill, 2002 N. Pace Bend Road, 512.264.2064 Opie’s BBQ, 9504 Hwy 71 E, 830.693.8660 Poodie’s Hilltop Bar and Grill, 22308 TX 71, 512.264.0318 R.O.’s Outpost, 22518 W TX 71, 512.264.1169 Spicewood General Store, casual cafe, Hollingsworth Corner, 9418 TX 71, 830.693.4219 Willie’s Burgers & BBQ., 512.264.8866 Sunrise Beach Boater’s Bistro, 667 Sandy Mountain Dr., 325.388.9393 Mosca’s, 106 Sunrise Dr., 325.388.4774 Sunrise Cove Lakeside Grill, 218 Skyline Dr., 325.248.1505 GREAT THINGS ARE HAPPENING AT R E S TA U R A N T 10%Off Dinner Services Must present coupon at time of service. LUNCH HOURS Tuesday – Friday 11am – 2pm Saturday 11am – 3pm WINTER DINNER HOURS Thursday – Saturday 5:30pm – 8:30pm Italian, American & Southwest Cuisine! 4005 Hwy. 2147 Cottonwood Shores 830-693-4838 Quality Food Affordable Prices Lunch & Dinner 7 days/week ,)6% -53)# 4HURS -ARCH "ILLY "AHAMA 2ENEE &RI -ARCH *AYCE *OHNSON 3AT -ARCH *AMIE (ANKS*ODIE 0ROCTOR 4HURS -ARCH "ILLY "AHAMA 2ENEE Hwy 1431 West at Hwy 281 • Marble Falls, TX 830.693.2245 Lake Country Life Page 5 W W W . L A K E C O U N T R Y L I F E . C O M Llano Badu House Wine Pub, Amazing appetizers, wonderful wines, lunch, Monday-Wednesday, 601 Bessemer, 325.247.2238 Berry Street Bakery, 901 Berry St., 325.247.1855 Burger Bar Cafe, 608 Bessemer St., 325.247.4660 Castell General Store, 19522 TX 152 at Castell, 325.247.4100 Chicken Express, 200 E. TX 71 #b, 325.248.0900 China Wok, 103 E. Grayson St., 325.247.5522 Chrissy’s Homestyle Bakery, 501 Bessemer St., 325.247.4564 Coopers Old Time Pit Bar-B-Que, 604 W. Young (TX 29), 325.247.5713 Dairy Queen, 400 W. Young (TX 29), 325.247.5913 EL Patron, 102 W. Dallas, 325.247.5012 Fuel Coffee House, 106 E. Main, 325.247.5272 Inman’s Kitchen & Catering, 809 W. Young, 325.247.5257 Joe’s Bar and Grill, 107 Main St., 325.247.5500 Laird’s BBQ & Catering, 1600 S. Ford (TX 16 & 71), 325.247.5234 Pizza Hut, 308 W. Young (TX 29), 325.247.4032 Rosita’s Mexican Restaurant, 101 E. Grayson St. 325.247.3730 Sonic Drive In, 505 W. Young (TX 29), 325.247.1206 Stonewall’s Pizza Wings & Things, 101 W. Main St., 325.248.0500 Sweet Home Cookin’, 303 E. Young, 830.613.7893 Subway, 800 Bessemer Ave., 325.247.2141 Taco Bell, 309 W Young (TX 29), 325.247.1376 The Acme Cafe, 109 W. Main, 325.247.4457 150 acres - Burnet County Road 113 Hickey Branch Ranch - 168 acres Great Hill Country ranch with creeks, granite outcroppings, and long distance views. There is excellent cover, including live oaks and post oaks for the abundant game. This is a well shaped tract with plenty of road frontage on FM 2341. City water is available to this property. This ranch is conveniently located 5 miles west of Burnet, just off Hwy. 29. $6,500 per acre The Ranch at Eagle’s Nest - 81 acres Beautiful land with panoramic Hill Country views. This property will make for an ideal small ranch on the edge of town. This is that rare property that affords all of the benefits of land ownership in a growing area. Access is through the Eagleʼs Nest development. Owner Financing Available. $639,900 W W W . L A K E C O U N T R Y L I F E . C O M 220 acres - Park Road 4 Located on scenic Park Rd. 4, this beautiful Hill Country Ranch offers excellent home sites with spectacular views of the Colorado River Valley. Dramatic waterfalls, rock bluffs, good Rarely is everything good about the Hill Country available in one place. Dramatic views in all directions from a hilltop provide multiple building sites. Beautiful granite and sandstone outcrops are visible throughout much of the ranch. The hill slopes down to an incredible creek bottom featuring over a mile of Council Creek. Lush fields and huge oaks along both sides of the creek create quite a contrast to the more rugged portions of this exquisite Hill Country hideaway. A rustic home and barn are also included. The ranch is very secluded on a lightly traveled country road in an area of large ranches -- it would make a great hunting ranch. All of this in one of the most scenic parts of the Hill Country just 10 minutes northwest of Burnet and within an hour of Austin. $825,000 Home on 15.5 acres Burnet County Road 332 - Custom built country home. 2,175 sq. ft. with 3 bedrooms and 2 baths located five miles from Bertram and seven miles east of Burnet. Included is a large barn (1,500 sf.) with 2,500 sf. of canopy on the south side for lots of equipment parking, a round pen and 15 unrestricted acres. $395,000. Lake Travis Property - 40 acres. A fantastic pool and hot tub just off the back porch provide plenty of relief during the hot Texas summers while the multi layered view extends across a Bermuda field towards the shoreline. The gently sloping property provides excellent beach access for 893 feet of lake frontage. In addition to the main home, other improvements include a lakeside guest house, a second guest house, detached 2,400 sq. ft. four car garage, barn with stables and tack room, pens and cross fences, 6 hole putting green and much more. $1,985,000 cover and lots of game round out this rare offering in one of the most desirable areas of the county. Centrally located within 10 miles of Marble Falls or Burnet. Will Divide into 114 acre or 106 acre tracts. Page 6 Lake Country Life March 6 - 12, 2013 Premier Homes for Every budget. . . 301 Lighthouse Dr., Big open water views and gentle breezes epitomizes waterfront living at the Bay! This home has many upgrades: mesquite fl oors, chefs kitchen, new 4th bathroom, fl ood stoppers, two custom fl oor to ceiling fireplaces, faux finishes, & more. Great outdoor living with 2 large patios, one covered with kitchen & TV, the other on the Lake with fire pit. Spider-be-gone, new 2010 roof, boat house and 2 jet ski docks. With location, amenities & quality of life, this luxury property offers your clients an opportunity to have it all. $2,150,000 201 Tarbet Trail, Hill Country contemporary home in Bay Country designed by Ron Bradshaw. Four bedrooms, 4.5 baths, a 3 car garage and 3 living areas. This home has many extra features! Not just an ordinary barn, designed to match the home, includes an office with heat & a/c with other extras! 5.3 acres of scenic land. This home is designed to bring the outdoors in, beautiful windows with open views all around. MLS ##121690. $1,890,000 218 Buffalo Peak, Quality custom Tuscan-style home overlooking pond with magnificent panoramic view of Applerock Golf Course. Distinctive exterior design and exquisite interior details include natural stone and wood fl oors, alder cabinets and 8-ft. doors. Elegant master bath with air-jet tub and spacious shower. Classic kitchen with granite countertops, 36” Wolf dual-fuel range, builtin appliances and stone accents. Home includes 3-car garage, above-average storage throughout and is very energy efficient. MLS #121420. $1,298,500 107 Birdie, Beautiful large home on #5 Slick Rock Golf Course. Spectacular outdoor living. Kitchen has ice maker, gas stove with 3 ovens including convection and warming drawer. Tons of storage. Travertine throughout. Master suite has his and her bathrooms and large master closet. This is not your typical HSB home. Wrought iron fence. MLS #105502. $449,900 606 B Port, Nestled in the heart of Horseshoe Bay, newly remodeled for today’s market, a 3 bedroom, 3 bath, 2 car garage. Walking distance to The Marriott & Club. Covered patio at house & a second open for entertaining down by the lake. Double boat slip & 2 wave runner ramps. Extensive landscaping & beautiful tree’s. Ready for a weekend or a lifetime! MLS #107850. $895,000 203 Plenty Hills, Prestigious Tuscan home in a private setting on # 18 green of Ram Rock GC. Open fl oor plan w/rock fireplace in living area. Distinctive kitchen that has granite countertops, upgraded appliances w/gas cook stove. Upgraded fixtures through-out. Master bedroom has large bath, large closet & a delightful patio. Patio on south side has summer kitchen &beautiful views of golf course. Patio on north side of living and dining area has unobstructed views of Lake LBJ & Hill Country. Outdoor patio is surrounded by large trees and pond with waterfalls. MLS #118382. $799,000 322A & 323A Pack Saddle Dr., Open water front lots with approximately 237’ of water. These are a nice lots with 194 feet of street frontage, huge mature oak, pine, and cypress trees, and concrete bulkhead in place. MLS #101251. $1,175,000 1425 & 1427 Hi Circle North, New construction built in 2011, very nice. Each side of duplex has 2 bedrooms and 2 baths. Good builder with good history of building in Horseshoe Bay. MLS #121737. $ 230,000 412 HSB N Blvd., A slice of Venice in Horseshoe Bay. Totally updated condo. All granite, stainless appliances, all new fixtures. Full bath on main level, each bedroom has own bath and walk in closet. Being sold furnished, just bring your toothbrush. Boat is negotiable. Check it out! You will love it! Easy to show. MLS #121812. $415,000 103 Dawn, #5, Gazebo #5 is a single level townhouse with attached garage. Shady corner location with fenced side yard. Open fl oor plan with 2 bedrooms, 2 baths & loft. Some furniture and washer/dryer set to convey. Quiet, totally owner-occupied of nine units. Affordable quarterly HOA fee of $250. Perfectly suited for primary residency as well as second home living. MLS #120074. $114,000 Bay Center Sales Office located on Hwy 2147 at “Bay Center” March 6 - 12, 2013 830-598-2553 Lake Country Life Page 7 W W W . L A K E C O U N T R Y L I F E . C O M Painting a new path ARTISTS spacious, gated community along Park Road 4 south of Burnet for the past four years, which matches perfectly with what For most of her life, Deborah Itschner Itschner enjoys painting the most. “Floral and animal pieces are my was dedicated to serving the favorite, but I can do about military and raising her only anything,” she said. “With son, Kevin. flowers, I like the bright But now she has been able colors and the movement to, quite literally, paint a new you can portray.” way in life by using a talent Recently, some of that always brings her joy. her animal paintings “I’ve always had an took Itschner to another art talent,” Itschner said. “I level as an artist when always found myself doing she received two awards something with art. I probably – two of them juried by a should’ve been doing this my STAFF PHOTO BY panel of art experts – at the whole life, but you just get ADAM TROXTELL Fifteenth Annual Stars of caught up with other things.” Itschner said she has Deborah Itschner stands by Texas Juried Art Exhibit been painting most of her life, her three award winning in Brownwood, Texas. Her even when she served in the paintings, from left to right, painting of a longhorn, Air Force between 1972 and “Proud Millie,” “Sooo re- called “Sooo relaxed,” won a $500 Merit Award, while 1978 and worked at the VA laxed,” and “First Date.” “First Date”, a piece with hospital in Albuquerque until she retired in 1998. But, it wasn’t until her two cardinals sitting on a tree branch, won son, Kevin, got married and had to move a People’s Choice Award. Another piece, away that painting became a vital part of “Proud Millie,” won a People’s Choice Award in the non-juried portion of the her life. A native Texan, Itschner started same exhibit. Itschner’s works can be found at the painting regularly about five years ago while she and her husband, Roy, still lived Highland Art Guild in Marble Falls and in New Mexico. They have lived in in a Prellop Fine Art Gallery in Salado. BY ADAM TROXTELL HIGHLAND LAKES NEWSPAPERS ! n u F k e a r Spring B Smoked Ribs and Homemade Tamales* Kid-Size Kits for Cakes & Cookies 6-in.Boboli Pizza Kits * On Saturdays Mon. – Sat. 10 am – 7 pm / Closed Sun. LIKE US ON 830.598.6300 9710 W. 2147 W W W . L A K E C O U N T R Y L I F E . C O M (Just East of the Horseshoe Bay post office) Showcasing work by national and Bob Rohm, “Full Sail” oil, 36”x36” regional artists, our collection combines figurative works, impressionistic landscapes and representational imagery with contemporary expressionism. We are Pleased to Offer the Work of …. Kaye Franklin Qiang Huang Mitch Caster John Austin Hanna Richard Prather All of whom have been chosen to exhibit in The National Oil Painters of America Show this Spring. 112 Main St., Marble Falls • 830.693.9999 • martastaffordfineart.com Page 8 Lake Country Life March 6 - 12, 2013 Panning for the Texas Topaz “There are going to be gem cutters and faceting demonstrations and still others with microscopes showing the characteristics of The Texas Topaz – there aren’t many the topaz.” Rock-hounding tours on local ranches Texans who don’t know what the lovely will be offered and cost $15 per person. native gem is. The topaz is the official state gem, and There will also be an apprentice program for in all it’s glory can be faceted to have the the kids hosted by the International School Star of Texas cut into the bottom of the gem, of Gemology out of San Antonio. Other items available will include rockwhich shines through when viewing the finished piece, and is referred to by faceters working supplies and general facet roughs (rocks). There will also be finished topaz, as the Lone Star Cut. both loose and set in jewelry, including necklaces, earrings and rings. “And I have an incredible tennisstyle bracelet that has 19 carats of topaz set in it,” Eames said. Eames said the “4C’s” apply to topaz, just as they apply to diamonds. “Carat, color, clarity, and cut apply to topaz with color, especially, applying more than it would to diamonds,” Eames said. Topaz occurs naturally in granite outcroppings and the granite sand. In the small Texas towns of Mason, Streeter, Grit and Katemcy, topaz can be found in creeks and ravines, and even on top of the ground. The Lone Star Cut in this blue topaz is traditionally “And any color of topaz is a state what people ask for when buying topaz jewelry. The gem, as long as it comes out of Texas sterling ring holds a six-carat rectangular star-cut blue Texas topaz cut by Mason Gemologist Diane dirt,” Eames said. “And the majority of topaz found is clear.” Eames. According to Eames, topaz resembles quartz and ranges in color The topaz is going to be celebrated in from clear to brown to yellow to sky blue. the only Texas County they can be found, The early settlers called the topaz stones that they found “desert ice” because of which is mostly around Mason. The 6th Annual Celebration of Texas the frosted surface, which is produced by Topaz Day will be held March 23 for the tumbling in streams and rivers for eons. public and March 24 for members of the Topaz in Texas is not mined commercially, so the only way to obtain stones is to buy Texas Faceters’ Guild. Professional gemologist Diane Eames them or find them on private ranches - with is the founder of Texas Topaz Day and permission. There are two Mason ranches said the events on Saturday centers on her opened year-round for topaz hunting, the store, Gems of the Hill Country Lapidaries Seaquist Ranch and the Lindsay Ranch, but and Jewelers, located at 126 Fort McKavitt for Texas Topaz Day, several other privately St., and will spill out onto the sidewalk and owned properties allow guided rockdown to other storefronts, located around the hounding tours. Reservations are required for tours starting at Eames shop, and leaving courthouse square. “Rock shops from throughout the Hill at 10 a.m. on March 23. “Finding topaz isn’t that easy,” said Country are going to be there,” Eames said. Deloris Lindsay of the Lindsay Ranch. “You need tools such as gloves, rakes and shovels. You get to keep what you find and we photograph what is found. We also have screens you can use for going through dirt and sand.” According to Lindsay, the Llano Uplift is a mineral rich region and an island of volcanic rock that is some of the oldest exposed rock in Texas. It includes pink crystalline granite, schist and marble. Topaz and quartz Mason gemologist Diane Eames and Dalan Hargrave, a wellcan both be found among the known gem cutter from Boerne, search for naturally made rocks’ foundations, low-lying topaz, which can only be found in the Mason area. areas and streambeds. Topaz hunting is for those COVER BY LYN ODOM HIGHLAND LAKES NEWSPAPERS March 6 - 12, 2013 COURTESY PHOTOS who can hike, dig and kneel. Sometimes a rock hound may have to dig in upwards of three feet down. It is a get-dirty, sometimes even muddy or wet hobby. But the reward is topaz rough that can be faceted into lovely gemstones. Value varies when trying to figure out how much a topaz is worth. According to a gem value scale at, topaz is valued anywhere from $11 to nearly $500 per carat. It all depends on the 4C’s. And beware the synthetics. It is not unusual for gems of many types to be created in a lab. Your best bet is to have a certified Professional Gemologist of Mason, Diane Eames, cuts a three-carat topaz at her shop, Gems of the Hill Country Lapidary and Jewelers. gemologist check out any gem purchase you are about to make to ensure you’re getting the real thing. Most jewelry shops have a certified gemologist on staff. Topaz can also be found in Russia, Brazil, Madagascar, Mexico and Pakistan in many other colors that can’t be found in the United States. Colors from other countries include red, brown, green, pink and deep yellow. The Brazilian Imperial Topaz is mainly a peach colored gem with some cut gemstones weighing in at more than 40 carats. Prices soar into the thousands of dollars, but then so do some that are cut much closer to home. Eames said Mason has been known for clear topaz for a long time and that there were many hobbyist cutters for years. “They are all gone now,” she said. “I’ve been a jeweler forever-and-a-half and I also learned to cut gems. I’m a Native Texan, a longtime jeweler and gemologist, and many years ago followed an aunt to Mason. Opening a shop and offering what I know just seemed to make sense.” Eames said the March 23 event would give people an opportunity to meet gem cutters in person and see live demonstrations. Sunday’s event, from 9 a.m. to 5 p.m., includes a faceting seminar from notable gem cutters and gemologists and is presented by the Texas Faceters’ Guild and will be held at the Wildlife Ranch lodge that is just off the Mason square. “It is for members only, but nonmembers can attend for $20,” Eames said. For more information about either event, log on to or call Eames at 325.347.0475. Lake Country Life Page 9 W W W . L A K E C O U N T R Y L I F E . C O M A beautifully cut topaz gem sits atop what is referred to as “rough,” named ‘Sugar Daddy,’ that weighs just under a pound. The cut gem is from that particular rough, was cut by Dalan Hargrave and sits in Mason Gemologist Diane Eames’ wedding ring. Everything turnin’ up Cajun BY ALEXANDRIA RANDOLPH HIGHLAND LAKES NEWSPAPERS For Blanco resident and Louisiana native Harvey Trahan, Cajun is not only a heritage, it’s style of expression. Trahan is the rubboard player and vocalist for the band Zydeco Blanco, a Blanco based group that will be performing at Pecan Street Brewing Company on Pecan Avenue in Johnson City on March 9 at 7 p.m. The band, with a total of six members, saw humble beginnings in 1997 when Trahan learned of accordion player Leif Oines at a party in Blanco. Trahan made contact with Oines, and the two met to play together for fun. “At first I started with the harmonica and some conga hand drums, but we weren’t clicking very well,” Trahan said. “Then Leif pulled out a rubboard and I started playing on that.” Oines and Trahan had their first performance in Blanco just a few months after they met. Occasionally they would host a party where they cooked gumbo and played music. “Then we started playing at the Feed Mill (Café) in Johnson City,” Trahan said. “That’s where we met Steve Lamphier and Brandon Aly and we invited them to come jam with us.” The group now consists of Trahan on rubboard, Oines on the accordion, Justin Murray on guitar, Lamphier on bass, Aly on drums and Brad Houser on the saxophone. Although the group has not done any major tours, they have played at a large number of venues in the Hill Country and have traveled to Dallas, San Antonio and San Angelo for performances. The group sees a busy season during March and April, in which Cajun culture comes to light for Mardi Gras. “We don’t really need to (tour),” Trahan said. “All of us have day jobs and we’re just having fun with it.” Trahan said that the style of Zydeco comes from Black American culture in Lousiana. Trahan grew up watching Zydeco performed live and enjoyed seeing the intricate steps of the dancers. “The signature instruments of Zydeco are the accordion and the rubboard,” he said. “They have steps to go with the music. It’s really beautiful to watch… Different people took the beat, but Zydeco is more of a Carribean style.” Trahan said that at one time, he had joked with his band mates that “everything COURTESY PHOTO is turning up Cajun,” Zydeco Blanco is a six man Zydeco band who play Cajun style music meaning that the Cajun label was on almost with Caribbean flavor at various venues in the Hill Country. A Rare Public Opportunity W W W . L A K E C O U N T R Y L I F E . C O M When you play Everybody wins! One special course open to everyone. Play the Course of a Lifetime! 30th Annual Rotary Club of Marble Falls Charity Golf Tournament Only $195 per player ! Includes Green Fees, Carts, Lunch, Fairway Drinks, Snacks, Prizes & Accessories. Tom Fazio Course Helps Dozens of Local Charities Limited to 112 Golfers - Sign up Now! Monday, April 29, 2013 See us at our website or contact Tom Meier at 512.264.1892. Deadline for entry is April 22. Page 10 Lake Country Life LIVE Showtimes March 6 •No Bad Days - Open Mic hosted by Mark Allan Atwood - 8 p.m., Poodie’s Hilltop Bar & Grill, 22308 SH 71 West, Spicewood. 512.264.0318 •Ben Beckendorf - 5 p.m., Luckenbach, 412 Luckenbach Loop, Fredericksburg. 830.997.3224 •john Arthur martinez - 7 p.m., River City Grille, 700 First St., Marble Falls. 830.798.9909 •Wednesday Night Live Sessions hosted by Debbie Walton and Donnie Price - 9 p.m., AJ’s Piano Bar, 909 Third St., Ste. C, Marble Falls. 830.693.6699 •Over the Hump Wednesday with Eddie Shell & the Not Guilties - 6:30 p.m., Cadillac Bar at Dos Conchas Ranch, 1375 FM 1855, Marble Falls. 830.385.4745 COURTESY PHOTO Harvey Trahan, a born and raised Cajun residing in Blanco, sings and plays the rubboard for the band Zydeco Blanco. everything these days. For Trahan, Zydeco music is true Cajun culture. “They know how to do it in Louisiana,” he said. “It (Zydeco) would bring all kings of ages and races together… I used to get more of a rise out of people when I played Cajun songs than any other song. This music has an energy that makes people giddy.” •Spitfire - 7:30 p.m., Angel’s Icehouse, 21815 SH 71 West, Spicewood. 512.264.3777 •Billy Bright & Geoff Union - 8 p.m., Badu. 325.247.2238 •Biscuit Grabbers - 9 p.m., Beach Club, TX 29 West and CR 301, Buchanan Dam. 512.793.2725 March 7 •Brianna Lea Pruett - 4 p.m., Poodie’s. 512.264.0318 •Kem Watts - 6 p.m., Poodie’s. 512.264.0318 •Down Home - 8:30 p.m., Poodie’s. 512.264.0318 •The Texas KGB - 9:30 p.m., Poodie’s. 512.264.0318 •Brigitte London - 5 p.m., Luckenbach. 830.997.3224 •Texas Music Thursdays - 7 p.m., Badu House, 601 Bessemer Ave., Llano. 325.247.2238 •Bahama Billy & Renee - 7 p.m., Doc’s Ice Chest (inside Doc’s Fish Camp & Grill), 900 FM 1431 and US 281 South, Marble Falls. 830.693.2245 March 9 •Bobby Bridger - 7 p.m., Poodie’s. 512.264.0318 •Rodney Parker & 50 Peso Reward - 9 p.m., Poodie’s. 512.264.0318 •The Hang - 11 p.m., Poodie’s. 512.264.0318 •Zydeco Blanco - 7 p.m., Pecan Street Brewing, 106 East Pecan Dr., Johnson City. 830.868.2500 •The Rankin Twins, Brennen Leigh & Noel McKay - 1 p.m., Luckenbach. 830.997.3224 •Roger Creager + Micky & the Motorcars - 9 p.m., Luckenbach. 830.997.3224 •The Square Gloves - 7:30 p.m., Angel’s. 512.264.3777 •The Ledbetters Bluegrass Concert - 5:45 to 8 p.m., Longhorn Cavern, 6211 Park Rd. 4, Burnet. 512.756.4680 •Texas Terraplanes - 9 p.m., Beach Club. 512.793.2725 •Informal jam session - 6:30 p.m., Fuel Waystation, 106 E Main St., Llano, 325.247.5272 •Bill Evans - Alaska’s Timber Tramp - 7 p.m., VFW Post No. 10376, 1001 Veterans Avenue, Marble Falls. 830.693.2261 •Llano Country Opry: Buck Trent and Kenny Parrot - 7:30 p.m., LanTex Theater, 113 W. Main St., Llano. 325.247.5354 March 8 •Jerry Kirk - 6 p.m., Poodie’s. 512.264.0318 Chad Johnson Band - 9 p.m., Poodie’s. 512.264.0318 •Brandon Bolin - 11:30 p.m., Poodie’s. 512.264.0318 •Jake Martin - 4 p.m., Luckenbach. 830.997.3224 •Rodney Hayden CD Release - 8 p.m., Luckenbach. 830.997.3224 March 10 •Sons of Fathers & Mike Stinson - 1 p.m., Luckenbach. 830.997.3224 •Jason Boland & The Stragglers + Six Markey Blvd - 8 p.m., Luckenbach. 830.997.3224 •Tessy Lou & The Shotgun Stars - 4 p.m., Poodie’s. 512.265.0318 •Bracken Hale - 7:30 p.m., Poodie’s. 512.264.0318 March 6 - 12, 2013 Rescue a Shepherd PETS BY LYN ODOM HIGHLAND LAKES NEWSPAPERS Storing stogies TREASURE Hudson has Bucks Cigar humidor at his space in Finds! located at 901 S. Water St. on the east side of the road. Hudson’s humidor is 18 inches high and If you are a fan fine cigars and like to get ever bit of pleasure and potential possible from 27 inches wide with a depth of 20 inches and your expensive stogie, chances are you have or likely goes back to the late 19th century or would like to have a humidor. early 20th century, he said. And if you want something more than a “Most of the ones I seen like it go back to common desktop or tabletop 12-cigar capacity about 1910,” Hudson said. “I bought it in the humidor and have an appreciation for antiques Waco area and it probably spent time in an oldthat can be functional in the modern world, John time cigar shop.” Hudson at the Finds! antique and collectibles The humidor, which has an outer skin store in Burnet might have just what you are of tin and an inner tin liner over a wood looking for. base, has a capacity of up to 150 cigars, according to a research of information on the internet. It is inscribed with the words “Buck Cigars King of the Range” at the top of its front side and “Made Good Always Good” at the bottom. “It’s in its original condition,” Hudson said. Hudson has a $750 price tag on his humidor and says that’s a bargain. “I know a guy who had one just like it but not in as good a shape and he sold it for $2,450,” Hudson STAFF PHOTO BY JAMES WALKER said. This Bucks Cigar humidor is owned by John Hudson and The phone number for Finds! is can be seen at the Finds! antique and collectibles store in 512.756.7400 and Hudson’s phone Burnet. number is 512.431.2154. BY JAMES WALKER HIGHLAND LAKES NEWSPAPERS Beverly Gainer is glad she didn’t listen to some discouraging words when she started to rescue German Shepherds 18 years ago. “I was discouraged by friends and family to get in to rescue,” she said. “And so the more determined I became.” Gainer is the founder and president of German Shepherd Rescue of Central Texas, in Dripping Springs. The nonprofit’s mission statement is “to save as many German Shepherd and German Shepherd mixes from neglect, abuse and premature death as possible.” “I initially got started doing this when I got my first purebred Shepherd after my daughter grew up,” Gainer said. “I got involved with a German Shepherd club and became a trainer and then director of the club. I was approached by a woman who rescued large breeds and suggested I take on the German Shepherds.” Gainer agreed and started going to shelters in hopes of finding dogs that could be rehomed. Volunteers, who play, socialize, walk and hike with the dogs also groom them, teach them tricks and do many things to make the dogs more adoptable. “Volunteers can pick up a dog in the morning, spend the day with the dog and bring him back that evening,” Gainer said. “Many of these same people foster the dogs too.” COURTESY PHOTO German Shepherds of all ages, sizes and temperaments are available for adoption and fostering. At the German Shepherd Rescue of Central Texas, volunteer staff first assess dogs before they’re ready for adoption. The rescue’s website has photos of adoptable dogs as well as success stories. There is also Paypal for those wanting to make a monetary donation. “The perfect person for a German Shepherd is a strong person who will be in charge. Shepherds are intelligent, so you have to be kind and assertive or they will take over.” For more information about adopting or fostering dogs from Gainer’s rescue, log on to. Please Join us for Ladies Play Day Saturday, March 9th, 2013 Burnet, Texas On the Square and All Around Town Door Prizes Everywhere, Discounts & Fun Come to Burnet and Have a Good Time Participating Merchants: All Mixed Up Boutique, gifts & more on the square American Patriot Gold Inexpensive vintage jewelry 211 S. Water Anything Goes Antiques & Collectables 205 S. Water Beautiful Reflections Skin care & Massage 207 S. Water Burnet Antique Mall Antiques & Collectables On the Square Burnet Cleaners Dry cleaning/alterations On the Square March 6 - 12, 2013 Burnet Flea Market Indoor/outdoor Flea Market 2791 W. Hwy 29 Cafe Twenty Three Hundred The Eating Place 2300 W. Hwy 29 Crazy Gals Café Home-cooked food 414 Buchanan Dr. Designers Market & Antiques Antiques, collectables & More 416 Buchanan Dr. Dragon Tails Children’s Consignment 120 S. Main Highlander Restaurant Buffet/Menu/Steaks 401 Buchanan Dr. Hummingbird Hollow Gifts & apparel plus 516 Buchanan Dr. Jamie’s Clothes, jewelry & Gifts On the Square JB’s Quick Shop Toys, Crafts & Supplies 602 S. Water St. Jimmy’s Antiques Antiques & Collectables 104 Washington Junk Sisters Antiques & Vintage Stuff 201 S. Boundary St. Kaleidiscope Coffee & Collectables Coffee & Gifts 106 Washington Knot Hole Indoor Flea Market 604 S. Water St. Las Palmas Mexican Restaurant Authentic Mexican Food 2100 S. West Looking Good Salon Keeps you looking good 1001 S. Water St. Maxican Restaurant Authentic Mexican Food 3401 S. Hwy 281 Picker’s Paradise Collectables/Furniture & more 103 Silver St. Post Mountain BBQ Authentic Texas BBQ 310 S. Main Rave Studio Vintage Unique Ladies Apparel On the Square Sage General Store Apparel, dry goods, food On the Square Salem’s Jewelry Jewelry, gifts & watches 101 S. Water St. Sassy Ann’s Clothes, jewelry & more 300 S. Main Shangri La Restaurant Best Chinese Food 1001 S Water St. Tea-Licious Restaurant Dining, gifts & gourmet pickles On the Square Texas Pizza Sicilian-style Pizza 903 S. Water St. The Bar-B-Q Shak Good Food 616 Buchanan The Grapevine Gifts and Home Furnishings On the Square Uptown Salon Full service hair/nails 105 Jackson St. Lake Country Life Page 11 W W W . L A K E C O U N T R Y L I F E . C O M Burnet Association of Merchants FOR SALE! Llano County’s Luxury Bed & Breakfast Property... Located on 6 acres with W W W . L A K E C O U N T R Y L I F E . C O M Llano River Frontage • Turn-key operation • Positive cash flow • 16 guest accommodations • Kitchen & dining facilities • Pool and hot tub • Manager accommodations Dennis Kusenberger GRI, CRS Re/Max Town & Country 116 E. Austin • Fredericksburg Office: 830-990-8708 Cell: 830-456-6327 ~ $1.69 million Page 12 Lake Country Life March 6 - 12, 2013
https://issuu.com/milessmith8/docs/lcj03.06.13
CC-MAIN-2017-04
en
refinedweb
22 March 2010 By clicking Submit, you accept the Adobe Terms of Use. All You can create reusable components by using ActionScript, and reference these components in your applications as MXML tags. Components created in ActionScript can contain graphical elements, define custom business logic, or extend existing Flex components. Flex components are implemented as a class hierarchy in ActionScript. Each component in your application is an instance of an ActionScript class.. This article covers: Note:SimpleASSimpleAS> to refer to the component. Tip: In real-world applications, you may see custom components placed in a package structure that uses a reverse domain name structure (for example,.ComponentSimple.mxml --> <s:Application xmlns: <custom:CountryComboBoxSimpleAS/> </s:Application> The CountryComboBoxSimpleAS class extends the ComboBox class, so you can reference all of the properties and methods of the ComboBox control within the MXML tag that initializes your custom component, or in the ActionScript specified within an <fx:Script> tag. The following example specifies an event listener for the close event for your custom control:ComponentInheritanceAS <s:Label </s:Panel> </s:Application> blue border color, sets rounded corners, and hides the drop shadow. Using this custom component is simpler—that is, results in less code and has no redundancy—than setting style properties every time you use a Panel component.Styling.mxml --> <s:Application xmlns: <custom:PaddedPanel <s:Label </custom:PaddedPanel> </s:Application>'s value. In the following example, the CountryComboBoxAS Flash® Builder™.. Flex Builder uses the public/protected/private/static, etc. identifiers. Note: There is more to creating advanced components in ActionScript than this introductory Quick Start can describe. For a more comprehensive review of the subject, see Create Advanced Spark Visual Components in ActionScript. components/CountryComboBoxAS.as // components/CountryComboBoxAS.as package components { import flash.events.Event; import mx.collections.ArrayList; import spark.components.ComboBox; public class CountryComboBoxAS extends ComboBox {; } } } Main application MXML file <?xml version="1.0" encoding="utf-8"?> <!-- ASAS <!-- Use data binding to display the latest state of the property. --> <s:Label <s:controlBarContent> <s:Button </s:controlBarContent> <s:controlBarLayout> <s:HorizontalLayout </s:controlBarLayout> </s:Panel> </s:Application> ContainerGoup class. In the NumericDisplay component, you create a TextInput control and a TileGroup container and add these components as children of your container. Because the VGoup class contains built-in logic to layout its children vertically, the TileGroup // components/NumericDisplay.as package components { import flash.events.Event; import flash.events.MouseEvent; import mx.events.FlexEvent; import spark.components.Button; import spark.components.TextInput; import spark.components.TileGroup; import spark.components.VGroup; public class NumericDisplay extends VGroup { private var display:TextInput; private var buttonsTile:TileGroup = new TileGroup(); // Expose the _numButtons property to the // visual design view in Flex Builder 2. [Inspectable(defaultValue=10)] private var _numButtons:uint = 10; public function NumericDisplay() { super(); addEventListener(FlexEvent.INITIALIZE, initializeHandler); } // numButtons is a public property that determines the // number of buttons that is displayed [Bindable( <custom:PaddedPanel <custom:NumericDisplay </custom:PaddedPanel> </s:Application>
https://www.adobe.com/devnet/flex/quickstarts/building_components_in_as.html
CC-MAIN-2017-04
en
refinedweb
The primary focus of this book is not on creating your own web services?there are numerous other texts that describe how to do this (including several from O'Reilly)?but on using existing web services in productive and useful ways. However, it is helpful to understand how to create a web service because it illustrates some of the complexities when consuming a web service. Here, we will create both a client and a server for a simple "Hello World" web service using Tomcat and Axis. In addition to the learning value, this exercise tests out the tools mentioned, ensuring you're ready for the more complex examples in the rest of the book. First, copy the contents of the axis-1_1\webapps\axis directory to your Tomcat webapps directory, as shown in Figure 3-3. After copying the files, launch Tomcat. After Tomcat launches, point your browser at. You should see the status message shown in Figure 3-4. Clicking the link to "Validate" your installation, you will find that you are missing three needed components, activation.jar, mail.jar, and xmlsec.jar. The first, activation.jar, is required to get Axis off the launchpad. The second, mail.jar, is required for examples later in this book that use the SMTP. Links to these libraries (available from Sun Microsystems) are displayed directly in the error message. These two libraries are used by many web applications; you are best off placing them in the Tomcat shared/lib directory, as shown in Figure 3-5. You must restart Tomcat to load the libraries. After restarting, reload the URL,, to verify the needed libraries are now successfully loaded (as shown in Figure 3-6). The third optional library, xmlsec.jar (XML Security), digitally signs XML documents?a potentially useful capability but (sadly) one that is not supported by any of the web service offerings covered in this text. Let's create a simple SOAP-based web service using Axis. Axis supports a method of web services generation similar to the way you create a web page using a JSP: an ordinary Java source file, when given the .jws extension and placed in an Axis-enabled web application, is automatically converted into a Java web service (hence the .jws extension). Create a file called CurrentDate.jws in the Axis webapp directory. The results should be as shown in Figure 3-7. The contents of CurrentDate.jws are shown in Example 3-1. As you can see, a .jws file is essentially an ordinary Java source file. Axis automatically compiles and wraps this file as a web service when requested by an appropriate client (such as a web browser or a SOAP client). The dynamic compilation process is similar to that provided by JSP; you can modify the JWS file, switch to your browser, hit refresh, and the file is automatically recompiled and made available as a web service?no need for manual compilation or a complex deployment process. public class CurrentDate { public String now( ) { return new java.util.Date( ).toString( ); } public int add(int a, int b) { return a + b; } } Public classes and methods in .jws files wrapped and exposed by Axis are automatically made available to inquiring web services. Pointing our web browser at, notice that Axis is aware of the *.jws file at this location, as shown in Figure 3-8. The *.jws file isn't actually compiled until you request either the service or the WSDL file. Compilation errors are reported in the web browser, similar to JSP. Clicking on the link for the WSDL shows the automatically generated WSDL file, as seen in Figure 3-9. The automatically generated WSDL file shows that the CurrentDate.jws has been compiled and is now available for use by clients. By a dynamic SOAP client, I mean that the information about the web service is constructed at runtime, as opposed to being precompiled. This runtime construction of a web service request allows for very rapid development when initially connecting to a web service: simply cut and paste the relevant elements, such as the URL and the SOAP method names into your code. Example 3-2 shows a simple dynamic SOAP example, which accesses the SOAP Server as described earlier. package com.cascadetg.ch03; // axis.jar import org.apache.axis.client.Call; import org.apache.axis.client.Service; // jaxrpc.jar import javax.xml.namespace.QName; public class DynamicCurrentDateClient { public static void main(String[] args) { try { // Create a Axis client Service service = new Service( ); // Create a method call object Call call = (Call)service.createCall( ); // Point to the web service call.setTargetEndpointAddress( new java.net.URL( "")); // First, let's call the now method call.setOperationName(new QName("now")); System.out.print( "According to the web service, it is now "); System.out.println((String)call.invoke(new Object[0])); // Let's reuse the same method object, but this time, // call the add method call.setOperationName("add"); // Create the parameters to pass to the add method. // Notice that we are creating Integer objects, which // are automatically bound to the xsd:int data type Object[] params = { new Integer(3), new Integer(4)}; // Now, we call the add method, passing in the two // integers. System.out.print("The result of 3+4 is "); System.out.println((Integer)call.invoke(params)); } catch (Exception e) { System.err.println(e.toString( )); } } } Unfortunately, as you can see in Example 3-2, the code contains a lot of bookkeeping (in particular, potentially dangerous casting). Also, there is no way for an IDE to assist with the development of dynamic client code; remote method names are passed as hardcoded strings. Most programming languages support accessing SOAP-based web services in this fashion. A static SOAP client uses a provided WSDL to generate a set of corresponding Java classes. This makes it easier to build and maintain an application, allowing for stricter type information and less bookkeeping. It also makes it easier for an IDE to assist you in your development (e.g., providing automatic code completion). To generate client-side Java objects that can access the remote SOAP service we just created, run the Axis WSDL2Java tool as shown in Example 3 -p localhost The command shown in Example 3-3 should be entered as a single line, and if successful there will be no visible output (make sure Tomcat is running before running this command). Four files are generated as shown in Figure 3-10. You'll want to copy the localhost directory to your source directory. Make sure you don't manually make changes to these source files; they are machine-generated, and they should be regenerated using WSDL2Java if changes to the WSDL are made (you may wish to automate this process using an Ant task: see). Example 3-4 shows a simple static client that uses the bindings generated by WSDL2Java. Notice the lack of casting and the natural passing of parameters to the add( ) method. The flow of the code is straightforward and common to other Axis-generated bindings used in this book; a Locator object is used to retrieve a Service. Methods called on the returned Service actually make requests to the remote service. Consider the underlying work being performed by the myDate.add(3, 4) line in the example shown: the parameters are bundled, transformed into a SOAP message, and sent; and the result is retrieved, converted back into a Java object, and returned. package com.cascadetg.ch03; import localhost.*; public class StaticCurrentDateClient { public static void main(String[] args) { CurrentDateService myService = new CurrentDateServiceLocator( ); try { CurrentDate myDate = myService.getCurrentDate( ); System.out.print( "According to the web service, it is now "); System.out.println(myDate.now( )); System.out.print("The result of 3+4 is "); System.out.println(myDate.add(3, 4)); } catch (Exception e) { e.printStackTrace( ); } } } In the remainder of this book, we access SOAP-based web services using the static method because it provides for a safer, easier method of access. You may be wondering how applications using statically generated bindings are affected when changes occur to the WSDL. Generally speaking, they work as you would expect; removing methods causes problems for applications that uses the methods, changes to signatures cause the problems you would expect, and new methods aren't called if not needed. Applications can respond to WSDL changes much more effectively if they use generated bindings. Consider a WSDL file that adds a method: if the new method is needed, a new set of binding can instantly be generated. If a WSDL removes a method or changes a signature when the bindings are regenerated, the now-broken usage of that method appears as a compilation error. If dynamic bindings are used, there is no way to know that a failure was waiting until runtime, and finding out there is a problem on a production server is far worse than a compile-time error.
http://etutorials.org/Programming/web+services/Chapter+3.+Development+Platform/3.2+Test+Drive/
CC-MAIN-2017-04
en
refinedweb
Selection Sort Algorithm and its Implementation Selection sort is the most fundamental, simple and most importantly an in-place sorting algorithm. This algorithm divides the input list into two sub arrays- - A sub array of sorted elements which is empty at the beginning and keep on increasing with each item added to it. - An unsorted sub array of remaining elements. This is equal to the input size in the beginning and its size reduces to zero as we reach the end of algorithm. The basic idea is that in each iteration of this algorithm we pick an element (either largest or smallest, this depends on the sorting scenario) and appends it to the sorted element list, reducing the size of unsorted list by one. Let’s understand it with an example- Yellow – Sorted part of the list Red – Key selected from the unsorted list, which is the smallest element of unsorted subarray in this case Blue – This element in unsorted subarray, which is compared with the selected key (in red). Double headed Arrow – Show the elements swapped So in the above picture we can see that in each iteration we choose the smallest element from the unsorted sub array and swaps it with the first element of unsorted sub array due to which sorted sub array keeps on increasing in size until complete array is sorted. Iterative Pseudocode: selectionSort(array a) //Search for the minimum element and adds to the sorted sub array for i in 0 -> a.length - 2 do minIndex = i //Find minimum element in the remaining sub array and update the minIndex for j in (i + 1) -> (a.length - 1) do if a[j] < a[minIndex] minIndex = j //Swap the minimum value find with the first element of unsorted subarray swap(a[i], a[minIndex]) Asymptotic Analysis Since this algorithm contains two nested loops and none of the iteration depends on the value of the items in the array it is easy to find the complexity of this algorithm. To find first element it requires (n-1) comparisons, for second element (n-2) and so on. So for (n − 1) + (n − 2) + … + 2 + 1 = n(n − 1) / 2 ∈ Θ(n2) comparisons. Summarizing all this – Data structure used -> Array Time Complexity (Best, worst and average case) -> O(n2) Worst case space complexity -> O(n) total space, O(1) Auxiliary space ITERATIVE Implementation of Selection sort in C programming language // C program for implementation of selection sort #include <stdio.h> void swap(int *x, int *y) { int temp = *x; *x = *y; *y = temp; } void selectionSort(int arr[], int n) { int i, j, minIndex; // After every iteration size of sorted array increases by one for (i = 0; i < n-1; i++) { // Find the minimum element in unsorted array minIndex = i; for (j = i+1; j < n; j++) if (arr[j] < arr[minIndex]) minIndex = j; // Swap the found minimum element with the first element swap(&arr[minIndex], &arr[i]); } } // Function to print an array void printArray(int arr[], int size) { int i; for (i=0; i < size; i++) printf("%d ", arr[i]); printf("\n"); } // Main function to test above implemented methods int main() { int arr[] = {9, 4, 2, 3, 6, 5, 7, 1, 8, 0}; int n = sizeof(arr)/sizeof(arr[0]); selectionSort(arr, n); printf("Sorted array: \n"); printArray(arr, n); return 0; } Output:- Sorted array: 0 1 2 3 4 5 6 7 8 9 Implementation of Selection Sort Algorithm in Java package com.codingeek.algorithms.sorting; public class SelectionSortAlgorithm { //Main method to launch program public static void main(String a[]) { int[] arr1 = { 9, 4, 2, 3, 6, 5, 7, 1, 8, 0}; doSelectionSort(arr1); printArray(arr1); } // This method sorts the input array public static void doSelectionSort(int[] arr) { for (int i = 0; i < arr.length - 1; i++) { int index_min = i; // Search for the minimum element in unsorted array for (int j = i + 1; j < arr.length; j++) { if (arr[j] < arr[index_min]) { index_min = j; } } //Swap minimum element with element at index i swapNumbers(arr, i, index_min); } } //Swap numbers in the given array private static void swapNumbers(int[] arr, int i, int index) { int smallerNumber = arr[index]; arr[index] = arr[i]; arr[i] = smallerNumber; } //prints array private static void printArray(int[] arr2) { System.out.println("Sorted Array - "); for (int i : arr2) { System.out.print(i + " "); } } } Output:- Sorted array: 0 1 2 3 4 5 6 7 8 9 Comparison to other O(n2) sorting Algorithms While comparing to Bubble Sort, Selection sort is better in every scenario as it has same number of comparison but write operations are lot less as compared to Bubble Sort. While considering Insertion Sort it is observed that insertion sort generally makes half number of comparisons as made by the selection sort as Insertion sort scans only the required number of elements to find the correct place for the required element whereas Selection sort scans completely every time. But this behavior of Insertion sort makes is less stable and its time changes greatly with the order of input. Moreover if performing swap (or memory operation) operation is more costly than comparison than Selection sort outperforms Insertion as Insertion sort requires O(n2) swaps where as Selection sort requires O(n) swaps. Do share the wisdom and share this to motivate us to keep writing such online tutorials for free and do comment if anything is missing or wrong or you need any kind of help. Keep Learning.. Happy Learning.. - Mayur Patil - hiteshgarg21 - hiteshgarg21
http://www.codingeek.com/algorithms/selection-sort-algorithm-and-its-implementation/
CC-MAIN-2017-04
en
refinedweb
Frame With Multiple Popup Options - Online Code Description This code shows how to create frames with multiple popup actions Source Code import java.awt.BorderLayout; import java.awt.Color; import java.awt.Component; import java.awt.Container; import java.awt.Graphics; import java.awt.GridLayout; import java.awt.Polygon; import java.awt.event.Acti... (login or register to view full code) To view full code, you must Login or Register, its FREE. Hey, registering yourself just takes less than a minute and opens up a whole new GetGyan experience.
http://www.getgyan.com/show/2269/Frame_with_Multiple_Popup_Options
CC-MAIN-2017-04
en
refinedweb
Counting cancer cells with computer vision for time-lapse microscopy April 21, 2016 byArtem Kaznatcheev 7 Comments Some people characterize TheEGG as a computer science blog . And although (theoretical) computer science almost always informs my thought, I feel like it has been a while since I have directly dealt with the programming aspects of computer science here. Today, I want to remedy that. In the process, I will share some Python code and discuss some new empirical data collected by Jeff Peacock and Andriy Marusyk. [1] Together with David Basanta and Jacob Scott, the five of us are looking at the in vitro dynamics of resistance to Alectinib in non-small cell lung cancer. Alectinib is a new ALK-inhibitor developed by the Chugai Pharmaceutical Co. that was approved for clinical use in Japan in 2014, and in the USA at the end of 2015. Currently, it is intended for tough lung cancer cases that have failed to respond to crizotinib . Although we are primarily interested in how alectinib resistance develops and unfolds, we realize the importance of the tumour’s microenvironment, so one of our first goals — and the focus here — is to see how the Alectinib sensitive cancer cells interact with healthy fibroblasts. Since I’ve been wanting to learn basic computer vision skills and refresh my long lapsed Python knowledge, I decided to hack together some cell counting algorithms to analyze our microscopy data. [2] In this post, I want to discuss some of our preliminary work although due to length constraints there won’t be any results of interest to clinical oncologist in this entry. Instead, I will introduce automated microscopy to computer science readers, so that they know another domain where their programming skills can come in useful; and discuss some basic computer vision so that non-computational biologists know how (some of) their cell counters (might) work on the inside. Thus, the post will be methods heavy and part tutorial, part background, with a tiny sprinkle of experimental images. [3] I am also eager for some feedback and tips from readers that are more familiar than I am with these methods. So, dear reader, leave your insights in the comments. Automated microscopy A few years ago, I wrote about droids taking academic jobs from a typical theorist’s focus on automating literature review and writing. However, the real automation breakthroughs are on the experimental side. There used to be a time when people would have to look through microscopes by eye, count cells manually, and sketch what they see. I think I even did that at some point in school. Now, Jeff can prepare a 96-well plate and put it in a machine that will take precise photos of the same exact positions in each well, every 2 hours. [4] Six days later, I can ssh into the same machine and get about 51 thousand images — 40 GB of TIFs — to analyze. However, even under hundredfold magnification, cells are hard to see and tell apart. So a number of physical and biological tricks are used before the images are even recorded. The first of these is phase-contrast. In my grade-school experience with microscopes, I had a light source under the sample, it would shine through it, with some of the light being absorbed (or otherwise obstructed) by the cell, and thus creating darker patches for me to identify as life. This is called bright-field microscopy , and only takes advantage of the amplitude of the light — that which makes things brighter or darker to our eyes and cameras. But light carries a second channel: phase. Phase is shifted by passing through the different media of the cell membrane and other components, with the amount of shift depending on thickness and refractive index of the object. This information is then captured by the digital camera, combined with the standard bright field information, and converted into an optical thickness for each pixel. The result is a phase-contrast image that is much clearer than standard bright-field and that allows us to see details that would normally only be visible through staining of dead cells. But even if you can see the cells, it is still often impossible to tell them apart because different cell types can be morphologically identical. To overcome this, a biological engineering trick can be combined with fluorescence imaging. We can take our separate cell lines — in our case: parental cancer cells, resistant cancer cells, and fibroblasts — and add an extra piece of DNA to code for a fluorescing (but otherwise non-functional) protein. Then, during image capture, we shine a light of the excitatory wavelength that is absorbed by the protein and then re-emitted at a longer wavelength which is recorded (after the source wavelength is physically filtered out) by the digital camera. Thus, instead of looking at obstructions (and phase-shift) of an external light source, we end up using the protein itself as the light source and recording where that light is brightest. By giving different cell lines different proteins that are excited by and emit different wavelengths, we can then tell those cells apart by the wavelength of their fluorescence. The practical result is that for each snapshot of part of a well, [5] we end up with three photos: one phase-contrast, and one color channel for each of the two fluorescent proteins. The programming problem becomes transforming these 3 photos into a measurement of size for the populations of cancer cells and fibroblasts. An example of the 3 channels that are recorded from microscopy. From left to right: phase contrast, GFP, mCherry. All data is intensity, but I used the green and red colour space for the 2nd and 3rd image to make them easier to distinguish at a glance. In our experimental system, we created two variants of each of our cancer and fibroblast cell lines by inserting the genes coding for GFP (absorption 395 nm , emission 509 nm ) into one and mCherry ( 587/610 nm ) into the other. These genes are inserted throughout the genome, and so we expect them to be expressed and thus to fluoresce whenever proteins are being transcribed. Further, our fluoroproteins are not confined to any specific part of the cell and distribute themselves haphazardly throughout the cell. This allows the fluorescent area to serve as a good unit of size for our populations . Conveniently — and not at all by coincidence — fluorescent area is much easier to measure than the number of cells. Mostly, this is due to the tendency of the cancer cells to clump, making segmentation difficult, and the lanky and faint fibroblasts making it hard to find their boundaries. [6] Of course, this means that as we eventually transform this into replicator dynamics, we will have bemindful of our units being fluorescent area and not cell count, but that choice of units makes no difference to the mathematics. Basic image processing “If you want to make an apple pie from scratch,” said Carl Sagan in Cosmos (1980), “you must first invent the universe.” This is equally true for programming. If I want to build my analysis code ‘from scratch’ then I can go arbitrarily far back to progressively more basic building blocks. At some point, if I don’t want to ‘invent the universe’, I have to pick a set of packages to start from. I decided to start from the Python equivalent of store-bought crust and canned apple pie filling: the OpenCV package (and numpy, of course). [7] import numpy as np import cv2 OpenCV makes file operations with images extremely simple: def AreaCount(col_let, row_num, fot_num, dirName = '', ret_img = True): #load the images head = dirName + col_let + str(row_num) + '-' + str(fot_num) rdImage = cv2.imread(head + '-C1.tif',cv2.IMREAD_UNCHANGED) gnImage = cv2.imread(head + '-C2.tif',cv2.IMREAD_UNCHANGED) if ret_img: grImage = cv2.imread(head + '-P.tif',cv2.IMREAD_UNCHANGED) In this case, the outputs of the cv2.imread functions are simply two-dimensional arrays of intensities. Although the microscope itself creates different kinds of TIFs for the phase-contrast and fluorescent photos, so the output arrays require a conversion into a common 8-bit format: [8] #switch to 8bit with proper normalizing if np.amax(rdImage) > 255: rdImage8 = cv2.convertScaleAbs(rdImage, alpha = (255.0/np.amax(rdImage))) else: rdImage8 = cv2.convertScaleAbs(rdImage) if np.amax(gnImage) > 255: gnImage8 = cv2.convertScaleAbs(gnImage, alpha = (255.0/np.amax(gnImage))) else: gnImage8 = cv2.convertScaleAbs(gnImage) To go beyond file operations, thought, I need a brief detour into the real apples of OpenCV. Straight out of the box, OpenCV gives me tools like histogram equalization for enhancing image contrast. Histogram equalization for contrast works on the premise that what makes an image high contrast is a maximization of information conveyed by the intensity of each pixel. In other words, we want to stretch out the intensity histogram into the highest-entropy version: a uniform distribution. This is a relatively straightforward process, just take the cumulative distribution for the pixels, and then create a function such that a pixel of value with maps to a new value , that way in the new image . Not a hard function to implement for yourself, but with OpenCV it is just one line: img_hicon = cv2.equalizeHist(img) . An example of histogram equalization that makes obvious the colour gradient present in the mCherry channel. On the right of each image is the histogram for their intensity distribution, with the raw histogram in red and the cumulative distribution as a black line. For the equalized image, I am using a different color map with blue corresponding to low intensity and red to high intensity. In the bottom histogram, I am also including bigger bins to make it clearer how the distribution is moved towards a uniform one. What you might notice right away is the colour gradient radiating outwards. This is not an issue in the phase-contrast image, but the fluorescence images have a glow around their outside, like reverse vignetting , [9] that is not apparent to the eye until you turn up the contrast (or try to threshold). With this problem and tool in mind, let me walk through the simple function FluorescentAreaMark that sits at the base of my current image processing. To get rid of the above glow I use a variant of the method we used to notice it: contrast limited adaptive histogram equalization. def FluorescentAreaMark(img): #1: contrast limited adaptive histogram equalization to get rid of glow clahe = cv2.createCLAHE(clipLimit=50.0, tileGridSize=(20,20)) clahe_img = clahe.apply(img) mCherry channel after processing with adaptive histogram equalization. On the left is the image with blue corresponding to low intensity and red to high intensity, and on the right is the histogram of intensities with the cumulative distribution in black. The algorithm works by dividing the image into 400 tiles (20 tiles by 20 tiles), and normalizing the contrast in each one. Each pixel is then given a value by bilinear interpolation between the 4 tiles nearest it. This gives a sharp distinction of the cells from their background, and we can then detect them with a simple threshold, followed by some noise canceling [10] : #2: threshold and clean out salt-pepper noise ret, thresh_img = cv2.threshold(clahe_img,127,255, cv2.THRESH_BINARY) kernel = np.ones((3,3),np.uint8) clean_img = cv2.morphologyEx(thresh_img, cv2.MORPH_OPEN, kernel, iterations = 2) return clean_img The threshold in line 11 simply converts the 8bit image to a binary one, by setting all pixels with intensity lower than 127 in the original image to 0 in the final image and those with intensity of 127 and higher to 1. The noise filtering step in line 15 is morphological opening — a fancy name but a straightforward algorithm. Its two basic components are erosion and dilation , both require a kernel that defines a local neighbourhood. In our case the kernel in line 13 is the one-distance Moore neighbourhood . Erosion is an AND operation on the binary mask: for a focal pixel, if every pixel in its local neighbourhood is 1 in the original image than that pixel is a 1 in the final image, else it is zero. Dilation is an OR operation: if at least one pixel in the local neighbourhood of the focal pixel is 1 in the original image then the focal pixel is a 1 in the final image. Morphological opening cycles erosion followed by dilation a number of times given by iterations — two in our case. Thus, lonely pixels that are most likely due to noise are eliminated, but the overall area of the non-noise pixels is not decreased. The returned clean_img on line 17 is then a binary two dimensional array with a ‘1’ for each fluorescent pixel and ‘0’ otherwise. If we want the totalfluorescent area then we just sum this array, outside a small buffer area around the edges which tends to contain more mistakes in correcting the edge glow. If desired, we can also combine the raw phase-contrasts images with the two processed fluorescent areas for a final visualization of the distribution of fluorescence throughout the spatially structured population: #get the area masks rdFA = FluorescentAreaMark(rdImage8) gnFA = FluorescentAreaMark(gnImage8) ign_buf = 30 #how big of an edge do we ignore? rd_area = (rdFA[ign_buf:-ign_buf,ign_buf:-ign_buf] > 0) gn_area = (gnFA[ign_buf:-ign_buf,ign_buf:-ign_buf] > 0) #create image to save img_out = [] if ret_img: bW = 0.85 img_out = cv2.merge(( cv2.addWeighted(grImage,bW,rdFA,1- bW,1), cv2.addWeighted(grImage,bW,gnFA,1 - bW,1), cv2.addWeighted(grImage,bW,np.zeros_like(grImage), 1 - bW, 1))) return [np.sum(rd_area),np.sum(gn_area)], img_out Distribution of fluorescent area. Non-small cell lung cancer cells are marked with mCherry in red, and non-cancerous fibroblasts are marked GFP in green. Colours in this image are a binary mask after the processing from this post. In the near future, I plan to use this sort of spatial distribution information to learn about operationalizing the local environment . For the next post, however, I will just look at the overall population dynamics estimated by this method and what we can learn from them. Although the basics of computer vision that I hacked together in this post are sufficient for our purposes with this project, it does violate my ambitions as a computer scientist. If there is time and interest then I will also write some future posts to refine this code to minimize categorization errors and introduce some proper cell segmentation. Notes - I was also inspired by Jan Poleszczuk’s Compute Cancer blog to start posting actual code snippets to allow for easier collaboration and engagement with programmers interested in biology, or biologists looking to start programming. I recommend checking out his blog for many code snippets and tricks across various languages. For example, if you want to oppose my choice of Python in this post then check out his speed comparison of Java, C++, MATLAB, Julia or Python for cellular automatons post for ammunition. Also, let me know what you think of interspersing code within this post. If there is a lot of interest in this format then I will try to include it in more future posts. - I am, of course, reinventing the wheel. A lot of great packages and programs already exist for processing and analyzing microscopy data, and we even have some in-house code here at the IMO. However, I wanted to go through the motions of this as a learning exercise. This is to make sure that I understood exactly what the algorithm is doing, and to make sure that it is a good fit for the particulars of our experimental set up. - Having several target audiences means that the first section on microscopy is probably too basic for biologists, and the second section on computer vision is probably too basic for programmers. So feel free to jump around. As with many of my posts, this is also meant as a tutorial for future Artem, for when I return to this experiment and code after a few months and have no idea what it does, why it exists, and how to get it running. I will try to confine the more pedantic parts of this documentation to footnotes. - In fact, as automation continues its perpetual march forward, even the preparation step is being eliminated. Labs like the Foundry at Imperial College London are automating the whole bench biology pipeline . I wonder if this will soon allow us to program in vitro experiments from basic components and then run them in remote automated labs, much like how we currently send in silico simulations to clusters. My optimistic hope is that the communication with machines forced by automation will also encourage more and more experimental biology graduate students to learn more and more computer science. Maybe that exposure to CS will allow for a flourishing ofalgorithmic biology. My cynical side, however, wonders at the disconnect between programming — a skill that everyone, from any field, should pick up — and the more niche aspects of algorithmic thinking or theoretical computer science. - At our magnification in this post, the field of view for a single photo is about 1/5th of the diameter of the well. The microscope takes photos of three different positions — the same positions at every 2-hour time step — in each well. - These limitations of not segmenting single cells can be overcome with more work. I will have to return to them in future blogposts if I want to look at single-cell tracking and lineage tracing. - Python has packages for almost anything, and usually in many versions, so there are plenty of alternatives to OpenCV. I chose it due to its great documentation, and from chatting with Chandler Gatenbee — the IMO expert on computer vision — who also builds his code with this package. Chandler’s warning was that the hardest part of OpenCV is getting it installed. I use the Anaconda distribution of Python 3, and it is pretty easy to get it with the built-in package manager from binstar: conda install -c opencv. It is also a good idea to have a separate environment for opencv to work in, to avoid breaking dependencies. This can be done with conda create -n opencv numpy scipy scikit-learn matplotlib python=3and then accessed later from command line (before running jyputer notebook, for example) with source activate opencv. For a slightly more detailed tutorial, see this . The code I discuss in this post is available on my GitHub , and the numbered lines of code correspond to the numbers in the first commit FluorescentArea.py. The images through out this post were generated from the jyputer notebook in which I hacked the code together and paint. It is straightforward to generate these images with OpenCV, but if you want the details then ask me in the comments and I will post some more code snippets. - The function cv2.converScaleAbs tends to clip-down the intensities, setting every pixel above 255 to 255 , so if there are intensities above 255 in the 16-bit images, it is better to first renormalize (as is done in lines 30,35 by setting alphato something other than 1 ) to make the brightest pixel 255 . Note that this renormalization means that I cannot make physical sense of the raw intensity numbers and have to rely on contrast. I could use a consistent renormalization to allow using raw intensity numbers, but since the rest of the processing focuses on contrast, there is no reason to worry about it. - We are not sure what physical process causes this glow. The apparent circular symmetry makes me think it has something to do with the lens. Andriy suspects it has more to do with partial fluorescence from the well walls, and to remedy this, we are using a different type of 96 well plate in follow up experiments.
http://126kr.com/article/129zv0wcxi
CC-MAIN-2017-04
en
refinedweb
Encrypting a PDF for individual users A PDF can be protected in a number of ways. Most of the time it's sufficient to add a password - anyone who knows the PDF password can open it, and (if the document author allowed it) can print, copy text and so on. However if you want to allow only some users to print, this requires a different aproach. The simplest approach is to use Public Key Cryptography. Anyone familiar with public key crypto may raise an eyebrow at the use of the word "simple" there, but our API hides a lot of the complexity. Read on... Generating a Key The first thing Public Key cryptography requires is a key pair. This consists of one public key and one private one - the private you keep to yourself for decryption, and the public one you give out, usually in the form of an X.509 Certificate. The first way to do this is to use the Java keytool program. At it's simplest you could do something like the following: keytool -genkeypair -alias jamesbond What is your first and last name? [Unknown]: James Bond What is the name of your organizational unit? [Unknown]: What is the name of your organization? [Unknown]: MI6 What is the name of your City or Locality? [Unknown]: London What is the name of your State or Province? [Unknown]: What is the two-letter country code for this unit? [Unknown]: GB Is CN=James Bond, OU=Unknown, O=MI6, L=London, ST=Unknown, C=GB correct?That will create a new keypair in the Java Keystore - you now need to export the X.509 certificate for that key, which you can do with the command [no]: yes Enter key password for jamesbond (RETURN if same as keystore password): keytool -exportcert -alias jamesbond -file jamesbond.cer Alternatively, our PDF viewer can easily create and export a new key. The clip below shows how to create a new keypair and export the X.509 certificate to a file. You can also do the same operation in Acrobat. Here's how to create a new identity and export the certificate file in Acrobat 9. Encrypting a PDF with a public keyNow you have the X.509 certificate file containing the details of the public key. To encrypt a PDF so that it can be opened by the owner of that key you'd need something like the following code: import org.faceless.pdf2.*; import java.security.*; import java.security.cert.*; import java.io.*; public void encrypt(PDF pdf, String file) throws Exception { CertificateFactory cf = CertificateFactory.getInstance("X.509"); InputStream in = new FileInputStream(file); X509Certificate cert = (X509Certificate) cf.generateCertificate(in); in.close(); PublicKeyEncryptionHandler handler = new PublicKeyEncryptionHandler(5); handler.addRecipient(cert, StandardEncryptionHandler.PRINT_HIGHRES, StandardEncryptionHandler.CHANGE_ALL, StandardEncryptionHandler.EXTRACT_ALL); pdf.setEncryptionHandler(handler); } Note we're adding a recipient to the handler - the PublicKeyEncryptionHandler can encrypt a document for more than one recipient, with different permissions if required. So it's quite possible to create a document which allows one recipient to print it but not another. Just call addRecipient once for each certificate you are encrypting for. Opening the PDF Opening the PDF in question couldn't be easier, in both Acrobat and our PDF Viewer. Simply open the PDF as normal - if the user has one or more private keys in their keystore which can decrypt the PDF they will be prompted to select one. If you want to open the PDF using our API, this is easily done too. If you know which key you're going to use, then you can simply specify it like so: EncryptionHandler handler = new PublicKeyEncryptionHandler(keystore, "myalias", "password".toCharArray()); PDF pdf = new PDF(new PDFReader(new File("file.pdf"), handler));If you want to choose the key dynamically, then you can override the chooseRecipientmethod on the PublicKeyEncryptionHandlerclass. This is given a list of recipients the PDF was encrypted for, and you can choose an appropriate key and call setDecryptionKeyto decrypt the PDF. The viewer has an example of this called PublicKeyPromptEncryptionHandler.java, and the source code for that is included with the viewer.
http://bfo.com/blog/2009/11/12/public_key_encryption_and_pdf/
CC-MAIN-2017-04
en
refinedweb
JDOM is fully namespace aware Namespaces are represented by instances of the Namespace class rather than by attributes or raw strings Always ask for elements and attributes by local names and namespace URIs Elements and attributes that are not in any namespace can be asked for by local name alone Never identify an element or attribute by qualified name
http://www.cafeconleche.org/slides/acmdc/xmlandjava/284.html
CC-MAIN-2017-04
en
refinedweb
Generic code to execute any stored procedure/batch of stored procedures with different number of parameters and data types Latest C# Articles - Page 99. Access Newly Available Network Information with .NET 2.0 A new namespace in the upcoming 2.0 release of the Microsoft .NET Framework adds support for some very useful network-related items. Explores some of these new items and how you can use them to your advantage. Batched Execution Using the .NET Thread Pool The .NET thread pool's functionality for executing multiple tasks sequentially in a wave or group is insufficient. Luckily, a Visual C++.NET helper method that uses other types within the System.Threading namespace provides this batch-execution model. Asynchronous Socket Programming in C#: Part II Second part of the C# asynchronous socket example, showing more features in socket programming..
http://www.codeguru.com/csharp/csharp/588/
CC-MAIN-2017-04
en
refinedweb
Today we explore how to do this in ASP.NET MVC using KnockoutJS to give our application a fast responsive feeling. The Master Details Use CaseWe will pick a simple master-detail style use case where we have a User (master) with multiple addresses (details) and we need to create an AddressBook application. The application will have the following 1. It will have the default scaffolding for doing CRUD operations on Users and Address. 2. It will have a dedicated ‘AddressBook’ page for showing the master details relationship. This page will show the list of all Users in the system and on selecting a User, it should show the addresses associated with the user in an adjacent panel. With that laid out, let’s start building our application. Building the ASP.NET MVC Master Details Application The project LayoutStep 1: We start off with an ASP.NET MVC 4 project using the Internet Template. Step 2: Next we update the jQuery, jQuery UI and Knockout packages from Nuget PM> update-package jquery -version 1.9.1 PM> update-package jquery.validation PM> update-package jquery.ui.combined PM> update-package knockoutjs Notice the version number for jquery. This is because jQuery 2.0 is now available in Microsoft’s CDN however it is incompatible with Microsoft’s Unobtrusive Validator plugin for jQuery. As a result you cannot install jQuery 2.0 as of now! This should be fixed pretty quickly. Also install the Knockout.Mapping plugin using the following command PM> install-package knockout.Mapping We’ll be using the Mapping plugin to map our server side data into Knockout View Models. The ModelToday we are focusing on the UI implementation, so we’ll simply add models classes to the Model folder and build an EF DBContext out of it instead of an elaborate repository pattern implementation. The Model is simple, it has a User object that contains a list of Addresses. Models\User.cspublic class User { public int Id { get; set; } public string Name { get; set; } public string Email { get; set; } public List<Address> Addresses { get; set; } } Models\Address.cspublic class Address { public int Id { get; set; } public string Street { get; set; } public string House { get; set; } public string City { get; set; } public string State { get; set; } public string Country { get; set; } public string Zip { get; set; } public int UserId { get; set; } [ForeignKey("UserId")] public User ParentUser { get; set; } } Scaffolding the ControllersWith the model laid out, we’ll build the application and scaffold a controller and the default views for the User entity. Right click on the Controller folder and Select ‘New Controller’ to bring up the wizard. Select the Option to use an MVC Controller with read/write actions and view, using EF. This will scaffold the controller and the views. We repeat the same for the Address entity and call it AddressController. So we now have two Controllers that can take care of the CRUD operations for User and Address Entities. Time to create an empty controller called AddressBook controller. This will contain the action methods that we call using AJAX to fetch data for the Master/Details view. We can go ahead and add some test data using the generated UI. We add Two Users and two addresses for each. The User Index looks like this The default Address Index is as follows Though fully functional, this is not how the end users would like to see data represented. Moreover when the data volumes are higher and things are not so nicely ordered, finding all addresses for a particular user via the above screen will be rather painful. To fix this, we will build the Address book page that will show a List of Users and on selecting a particular user, it will show all the addresses associated with that user! Let’s prepare the AddressBookController to return data that our UI will need. We have the default Get method for Index. We add a Post Method that returns JsonResult. In the method, we use MasterDetailsViewsContext to load Users and Include the Addresses while loading. Word of Caution: In a real world app we should be paginating and getting only a small set of Data. public class AddressBookController : Controller { private MasterDetailsViewsContext db = new MasterDetailsViewsContext(); public ActionResult Index() { return View(); } [HttpPost] public JsonResult Index(int? var) { List<User> users = db.Users.Include("Addresses").ToList(); return Json(new { Users = users }); } } As we can see, the list of users returned is wrapped in an anonymous object and used to create the JsonResult that is sent back to the client. However, we have one minor issue here. Remember our Address Entity has a ‘ParentUser’ entity. This causes a circular reference and crashes the Json Serializer. To fix it, we’ll simple tag the ParentUser property such that it’s not used for serialization. This is done as follows [ForeignKey("UserId")] [ScriptIgnore] public User ParentUser { get; set; } The ViewModel and the ViewNow if we run the application, we will be able add users and addresses, however we don’t have one place to view Users with their associated addresses. This is what we are going to build. Step 1: Since we created an empty controller for AddressBook, there are no views associated. So let’s add an ‘AddressBook’ folder under ‘View’ in our project. To the ‘View’ folder, add an Index page. Use the ‘New View’ dialog to create the page. Step 2: Our first step is to use the data coming from server to build a Knockout ViewModel and start rendering the UI accordingly. To do that, let’s start with the Knockout ViewModel. Let’s add a JavaScript file called addressbook-vm.js. We start off with the following ViewModel /// <reference path="_references.js" /> var viewModel = function (data) { this.SelectedUser = ko.observable({ Addresses: ko.observableArray([]), Name: ko.observable("") }); this.Users = ko.observableArray(data); } It has two simple elements, - SelectedUser is the user whose address details we want to see. Initially it is empty. - Users is the list of Users in our database. To populate the ViewModel, we need to do an Ajax call to get the data from the Server. From a best-practices point of view, we’ll write the client side logic in a new JavaScript file called addressbook-client.js. But first let’s bind our ViewModel to the view. Step 3: The AddressBook/Index The Index page primarily is split into Left panel and Right panel. The Left panel will list all the users whereas the right panel will have the addresses. The markup for the Left Panel is as follows <div class="left-aligned-section"> <ul data- <li> <div class="user-item-border"> <div> <label data-</label> </div> <div> <label data-</label> </div> <div> <a href="#" class="show-details">Show Details</a> </div> </div> </li> </ul> </div> As we can see, we have used KO foreach binding to bind the <ul> to the list of Users() in our ViewModel. As a result, KO will treat everything in the <li> as a template and will repeat it. We simply bind the Name and Email of the user to two labels first. The third element is an anchor tag to which we’ll attach a click event handler, and the click even handler will set the current user as the selected user. Next we see the markup of the Right pane. This will display all the addresses for that particular user. We have used the KO ‘with’ binding to scope the binding. Everything inside the div that is bound to the ‘SelectedUser’ assumes this new scope. So data-bind to the Name field actually looks for the Name in the SelectedUser ViewModel. Rest of the binding is pretty similar, we have a <ul> that’s bound the Addresses for the selected user. <div class="right-aligned-section" data- <div class="address-header"> <div class="left-aligned-div"><strong>Address for </strong></div> <div class="left-aligned-div" data-</div> </div> <ul data- <li> <div class="address-item-border"> <div> <div class="address-label">Street: </div> <div style="font-weight: bold" data-</div> </div> <div> <div class="address-label">House: </div> <div style="font-weight: bold" data-</div> </div> <div> <div class="address-label">City: </div> <div style="font-weight: bold" data-</div> </div> <div> <div class="address-label">State: </div> <div style="font-weight: bold" data-</div> </div> <div> <div class="address-label">Country: </div> <div style="font-weight: bold" data-</div> </div> <div> <div class="address-label">Zip: </div> <div style="font-weight: bold" data-</div> </div> </div> </li> </ul> </div> Now that our ViewModel and Databinding is all set, we have to actually get the data and assign it to the ViewModel. Let’s see the code in our addressbook-client.js. Once the document load is completed, we POST a query to the AddressBook’s Index action. This returns us a JSON object with one property called Users. Once data is returned, we use the Knockout Mapping Plugin to convert the JSON object into a KO ViewModel. In our case, we get an array of Objects that have ko.observable properties. Knockout by default converts everything it finds in the JSON array to either ko.observable or if it’s an array, into a ko.observableArray. This Auto Mapping plugin helps us avoid the manual task of looping through each object received from the server and converting it into a ko view model object. In our case, the data.Users array (which contains each user along with each of their addresses) is first mapped and then passed into the ‘constructor’ of our ViewModel. vm = new viewModel(userslist); Once the ViewModel is ready, we use ko.applyBindings(…) to do the actual DataBind. Since on load nothing will be selected, we hide the right pane. NOTE: Remember, when you return JSON as an array, it is considered a ‘valid’ JavaScript. This can potentially lead to script hacks. Hence it is always advisable to return JSON arrays wrapped in an object. /// <reference path="addressbook-vm.js" /> $(document).ready(function () { $.ajax({ url: "/AddressBook/Index/", type: "POST", data: {}, success: function (data) { var userslist = ko.mapping.fromJS(data.Users); vm = new viewModel(userslist); ko.applyBindings(vm); } }); $(".right-aligned-section").hide(); }); $(document).delegate(".show-details", "click", function () { $(".right-aligned-section").fadeIn(); var user = ko.dataFor(this); vm.SelectedUser(user); }); Now that the data has been bound, the anchor tag that said ‘Show Details’, it’s ‘click’ event is bound via the jQuery delegate. We add a little bit of style by fading in the Right hand pane. And then use ko.dataFor(this) to extract the user for which select was clicked. We assign this user as the Selected User. This kicks in the databinding for the right hand side pane and it updates it with all the Addresses for the selected user. Our final view looks like this. Download the Entire Source Code here (Github) Will you give this article a +1 ? Thanks in advance 2 comments: Suprotim, I really appreciate this article you put together as it should answer a whole lot of questions for me. But, I find that when I run the AddressBook/Index view, it doesn't seem to call the [HttpPost] Index function in the controller. I copied the source files, including the scripts, from your source file. I verified that the $(document).ready() function activated, but nothing comes back and when I set breakpoints on the controller code, nothing breaks. I am using VS2013 Express. Any ideas? when i run it (with my own data) i get "Error: The argument passed when initializing an observable array must be an array, or null, or undefined." and "ReferenceError: vm is not defined"
http://www.devcurry.com/2013/04/master-details-knockout-aspnet-mvc.html
CC-MAIN-2017-04
en
refinedweb
Game.Tournament Contents Description Tournament construction and maintenance including competition based structures and helpers. This library is intended to be imported qualified as it exports functions that clash with Prelude. import Game.Tournament as T The Tournament structure contain a Map of GameId -> Game for its internal representation and the GameId keys are the location in the Tournament. Duel tournaments are based on the theory from. By using the seeding definitions listed there, there is almost only one way to generate a tournament, and the ambivalence appears only in Double elimination. We have additionally chosen that brackets should converge by having the losers bracket move upwards. This is not necessary, but improves the visual layout when presented in a standard way. FFA tournaments use a collection of sensible assumptions on how to optimally split n people into s groups while minimizing the sum of seeds difference between groups for fairness. At the end of each round, groups are recalculated from the scores of the winners, and new groups are created for the next round. = Int Computes both the upper and lower player seeds for a duel elimiation match. The first argument is the power of the tournament: p :: 2^num_players rounding up to nearest power of 2 The second parameter is the game number i (in round one). The pair (p,i) must obey p > 0 && 0 < i <= 2^(p-1). The location of a game is written as to simulate the classical shorthand WBR2, but includes additionally the game number for complete positional uniqueness. A Single elimination final will have the unique identifier let wbf = GameId WB p 1 where 'p == count t WB'. Constructors Instances data Elimination Source Duel Tournament option. Single elimation is a standard power of 2 tournament tree, wheras Double elimination grants each loser a second chance in the lower bracket. Constructors Instances The bracket location of a game. For Duel Single or FFA, most matches exist in the winners bracket ( WB) , with the exception of the bronze final and possible crossover matches. Duel Double or FFA with crossovers will have extra matches in the loser bracket ( LB). Score a match in a tournament and propagate winners/losers. If match is not scorable, the Tournament will pass through unchanged. For a Duel tournament, winners (and losers if Double) are propagated immediately, wheras FFA tournaments calculate winners at the end of the round (when all games played). There is no limitation on re-scoring old games, so care must be taken to not update too far back ones and leaving the tournament in an inconsistent state. When scoring games more than one round behind the corresponding active round, the locations to which these propagate must be updated manually. To prevent yourself from never scoring older matches, only score games for which safeScorable returns True. Though this has not been implemented yet. gid = (GameId WB 2 1) tUpdated = if safeScorable gid then score gid [1,0] t else t TODO: strictify this function TODO: better to do a scoreSafe? call this scoreUnsafe count :: Tournament -> Bracket -> IntSource Count the number of rounds in a given bracket in a Tournament. TODO: rename to length once it has been less awkwardly moved into an internal part scorable :: GameId -> Tournament -> BoolSource keys :: Tournament -> [GameId]Source Get the list of all GameIds in a Tournament. This list is also ordered by GameId's Ord, and in fact, if the corresponding games were scored in this order, the tournament would finish, and scorable would only return False for a few special walkover games. TODO: if introducing crossovers, this would not be true for LB crossovers => need to place them in WB in an 'interim round'
http://hackage.haskell.org/package/Tournament-0.0.1/docs/Game-Tournament.html
CC-MAIN-2017-04
en
refinedweb
1 /*2 * QInputStream.java3 * Copyright (C) 2002 The Free Software Foundation gnu.mail.util;21 22 import java.io.*;23 24 /**25 * Provides RFC 2047 "B" transfer encoding.26 * See section 4.2:27 * <p>28 * The "Q" encoding is similar to the "Quoted-Printable" content-29 * transfer-encoding defined in RFC 2045. It is designed to allow text30 * containing mostly ASCII characters to be decipherable on an ASCII31 * terminal without decoding.32 * <ol>33 * <li>Any 8-bit value may be represented by a "=" followed by two34 * hexadecimal digits. For example, if the character set in use35 * were ISO-8859-1, the "=" character would thus be encoded as36 * "=3D", and a SPACE by "=20". (Upper case should be used for37 * hexadecimal digits "A" through "F".)38 * <li>The 8-bit hexadecimal value 20 (e.g., ISO-8859-1 SPACE) may be39 * represented as "_" (underscore, ASCII 95.). (This character may40 * not pass through some internetwork mail gateways, but its use41 * will greatly enhance readability of "Q" encoded data with mail42 * readers that do not support this encoding.) Note that the "_"43 * always represents hexadecimal 20, even if the SPACE character44 * occupies a different code position in the character set in use.45 * <li>8-bit values which correspond to printable ASCII characters other46 * than "=", "?", and "_" (underscore), MAY be represented as those47 * characters. (But see section 5 for restrictions.) In48 * particular, SPACE and TAB MUST NOT be represented as themselves49 * within encoded words.50 *51 * @author <a HREF="mailto:dog@gnu.org">Chris Burdess</a>52 */53 public class QInputStream54 extends QPInputStream55 {56 57 private static final int SPACE = 32;58 private static final int EQ = 61;59 private static final int UNDERSCORE = 95;60 61 /**62 * Constructor.63 * @param in the underlying input stream.64 */65 public QInputStream(InputStream in)66 {67 super(in);68 }69 70 /**71 * Read a character.72 */73 public int read()74 throws IOException75 {76 int c = in.read();77 if (c==UNDERSCORE)78 return SPACE;79 if (c==EQ)80 {81 buf[0] = (byte)in.read();82 buf[1] = (byte)in.read();83 try84 {85 return Integer.parseInt(new String (buf, 0, 2), 16);86 }87 catch (NumberFormatException e)88 {89 throw new IOException("Quoted-Printable encoding error: "+90 e.getMessage());91 }92 }93 return c;94 }95 96 }97 Java API By Example, From Geeks To Geeks. | Our Blog | Conditions of Use | About Us_ |
http://kickjava.com/src/gnu/mail/util/QInputStream.java.htm
CC-MAIN-2017-04
en
refinedweb
The following is the explanation to the C++ code to create a single colored blank image in C++ using the tool OpenCV. Things to know: (1) The code will only compile in Linux environment. (2) To run in windows, please use the file: ‘blank.o’ and run it in cmd. However if it does not run (problem in system architecture) then compile it in windows by making suitable and obvious changes to the code like: Use <iostream.h> in place of <iostream>. (3) Compile command: g++ -w blank.cpp -o blank `pkg-config –libs opencv` (4) Run command: ./article Before you run the code, please make sure that you have OpenCV installed on your system. Code Snippet: // Title: Create a coloured image in C++ using OpenCV. // highgui - an easy-to-use interface to // video capturing, image and video codecs, // as well as simple UI capabilities. #include "opencv2/highgui/highgui.hpp" // Namespace where all the C++ OpenCV // functionality resides. using namespace cv; // For basic input / output operations. // Else use macro 'std::' everywhere. using namespace std; int main() { // To create an image // CV_8UC3 depicts : (3 channels,8 bit image depth // Height = 500 pixels, Width = 1000 pixels // (0, 0, 100) assigned for Blue, Green and Red // plane respectively. // So the image will appear red as the red // component is set to 100. Mat img(500, 1000, CV_8UC3, Scalar(0,0, 100)); // check whether the image is loaded or not if (img.empty()) { cout<<"\n Image not created. " "You have done something wrong. \n"; return -1; // Unsuccessful. } // first argument: name of the window // second argument: flag- types: // WINDOW_NORMAL If this is set, the user can // resize the window. // WINDOW_AUTOSIZE If this is set, the window size // is automatically adjusted to fit // the displayed image, and you cannot // change the window size manually. // WINDOW_OPENGL If this is set, the window will be // created with OpenGL support. namedWindow("A_good_name", CV_WINDOW_AUTOSIZE); // first argument: name of the window // second argument: image to be shown(Mat object) imshow("A_good_name", img); waitKey(0); //wait infinite time for a keypress // destroy the window with the name, "MyWindow" destroyWindow("A_good_name");.
http://126kr.com/article/6y74flahw0p
CC-MAIN-2017-04
en
refinedweb
The Samba-Bugzilla – Bug 447 make fails Last modified: 2004-02-17 08:45:40 UTC Trying to build SAMBA 2.2.8a on AIX 5.2, using IBM's C v6.0 compiler results in the error: Linking bin/smbd ld: 0711-317 ERROR: Undefined symbol: .SAFE_FREE ld: 0711-317 ERROR: Undefined symbol: .VA_COPY ld: 0711-345 Use the -bloadmap or -bnoquiet option to obtain more information. make: 1254-004 The error code from the last command is 8 Upon investigation I discovered that the problem is with snprintf.c - whenever HAVE_PRINTF, HAVE_VSNPRINTF and HAVE_C99_VSNPRINTF are all defined (as they are in my environment) then the SAFE_FREE and VA_COPY macros never get defined, even though they are used later on in the code (e.g. vasprintf procedure). It may be that the #endif prior to the vasprintf procedure specification is misplaced. I just ran a normal configuration with no options set. Sorry, but the 2.2 is not under development any longer. If you can reproduce this bug against the latest 3.0 release, please reopen this bug and change the version in the report. Thanks.
https://bugzilla.samba.org/show_bug.cgi?id=447
CC-MAIN-2017-30
en
refinedweb
.xml.initializer;23 24 /**25 * Container.26 * 27 * @author <a HREF="adrian@jboss.com">Adrian Brock</a>28 * @version $Revision: 40495 $29 */30 public class Container31 {32 private Object value;33 34 public Object getValue()35 {36 return value;37 }38 39 public void setValue(Object value)40 {41 this.value = value;42 }43 }44 Java API By Example, From Geeks To Geeks. | Our Blog | Conditions of Use | About Us_ |
http://kickjava.com/src/org/jboss/test/xml/initializer/Container.java.htm
CC-MAIN-2017-30
en
refinedweb
Hi Brett, Here are some comments on your proposal. Sorry this took so long. I apologize if any of these comments are out of date (but also look forward to your answers to some of the questions, as they'll help me understand some more of the details of your proposal). Thanks! > Introduction > /////////////////////////////////////// [...] > Throughout this document several terms are going to be used. A > "sandboxed interpreter" is one where the built-in namespace is not the > same as that of an interpreter whose built-ins were unaltered, which > is called an "unprotected interpreter". Is this a definition or an implementation choice? As in, are you defining "sandboxed" to mean "with altered built-ins" or just "restricted in some way", and does the above mean to imply that altering the built-ins is what triggers other kinds of restrictions (as it did in Python's old restricted execution mode)? > A "bare interpreter" is one where the built-in namespace has been > stripped down the bare minimum needed to run any form of basic Python > program. This means that all atomic types (i.e., syntactically > supported types), ``object``, and the exceptions provided by the > ``exceptions`` module are considered in the built-in namespace. There > have also been no imports executed in the interpreter. Is a "bare interpreter" just one example of a sandboxed interpreter, or are all sandboxed interpreters in your design initially bare (i.e. "sandboxed" = "bare" + zero or more granted authorities)? > The "security domain" is the boundary at which security is cared > about. For this dicussion, it is the interpreter. It might be clearer to say (if i understand correctly) "Each interpreter is a separate security domain." Many interpreters can run within a single operating system process, right? Could you say a bit about what sort of concurrency model you have in mind? How would this interact (if at all) with use of the existing threading functionality? > The "powerbox" is the thing that possesses the ultimate power in the > system. In our case it is the Python process. This could also be the application process, right? > Rationale > /////////////////////////////////////// [...] > For instance, think of an application that supports a plug-in system > with Python as the language used for writing plug-ins. You do not > want to have to examine every plug-in you download to make sure that > it does not alter your filesystem if you can help it. With a proper > security model and implementation in place this hinderance of having > to examine all code you execute should be alleviated. I'm glad to have this use case set out early in the document, so the reader can keep it in mind as an example while reading about the model. > Approaches to Security > /////////////////////////////////////// > > There are essentially two types of security: who-I-am > (permissions-based) security and what-I-have (authority-based) > security. As Mark Miller mentioned in another message, your descriptions of "who-I-am" security and "what-I-have" security make sense, but they don't correspond to "permission" vs. "authority". They correspond to "identity-based" vs. "authority-based" security. > Difficulties in Python for Object-Capabilities > ////////////////////////////////////////////// [...] > Three key requirements for providing a proper perimeter defence is > private namespaces, immutable shared state across domains, and > unforgeable references. Nice summary. > Problem of No Private Namespace > =============================== [...] > The Python language has no such thing as a private namespace. Don't local scopes count as private namespaces? It seems clear that they aren't designed with the intention of being exposed, unlike other namespaces in Python. > It also makes providing security at the object level using > object-capabilities non-existent in pure Python code. I don't think this is necessarily the case. No Python code i've ever seen expects to be able to invade the local scopes of other functions, so you could use them as private namespaces. There are two ways i've seen to invade local scopes: (a) Use gc.get_referents to get back from a cell object to its contents. (b) Compare the cell object to another cell object, thereby causing __eq__ to be invoked to compare the contents of the cells. So you could protect local scopes by prohibiting these or by simply turning off access to func_closure. It's clear that hardly any code depends on these introspection featuresl, so it would be reasonble to turn them off in a sandboxed interpreter. (It seems you would have to turn off some introspection features anyway in order to have reliable import guards.) > Problem of Mutable Shared State > =============================== [...] > Regardless, sharing of state that can be influenced by another > interpreter is not safe for object-capabilities. Yup. > Threat Model > /////////////////////////////////////// Good to see this specified here. I like the way you've broken this down. > * An interpreter cannot gain abilties the Python process possesses > without explicitly being given those abilities. It would be good to enumerate which abilities you're referring to in this item. For example, a bare interpreter should be able to allocate memory and call most of the built-in functions, but should not be able to open network connections. > * An interpreter cannot influence another interpreter directly at the > Python level without explicitly allowing it. You mean, without some other entity explicitly allowing it, right? What would that other entity be -- presumably the interpreter that spawned both of these sub-interpreters? > * An interpreter cannot use operating system resources without being > explicitly given those resources. Okay. > * A bare Python interpreter is always trusted. What does "trusted" mean in the above? > * Python bytecode is always distrusted. > * Pure Python source code is always safe on its own. It would be helpful to clarify "safe" here. I assume by "safe" you mean that the Python source code can express whatever it wants, including potentially dangerous activities, but when run in a bare or sandboxed interpreter it cannot have harmful effects. But then in what sense does the "safety" have to do with the Python source code rather than the restrictions on the interpreter? Would it be correct to say: + We want to guarantee that Python source code cannot violate the restrictions in a restricted or bare interpreter. + We do not prevent arbitrary Python bytecode from violating these restrictions, and assume that it can. > + Malicious abilities are derived from C extension modules, > built-in modules, and unsafe types implemented in C, not from > pure Python source. By "malicious" do you just mean "anything that isn't accessible to a bare interpreter"? > * A sub-interpreter started by another interpreter does not inherit > any state. Do you envision a tree of interpreters and sub-interpreters? Can the levels of spawning get arbitrarily deep? If i am visualizing your model correctly, maybe it would be useful to introduce the term "parent", where each interpreter has as its parent either the Python process or another interpreter. Then you could say that each interpreter acquires authority only by explicit granting from its parent. Then i have another question: can an interpreter acquire authorities only when it is started, or can it acquire them while it is running, and how? > Implementation > /////////////////////////////////////// > > Guiding Principles > ======================== > > To begin, the Python process garners all power as the powerbox. It is > up to the process to initially hand out access to resources and > abilities to interpreters. This might take the form of an interpreter > with all abilities granted (i.e., a standard interpreter as launched > when you execute Python), which then creates sub-interpreters with > sandboxed abilities. Another alternative is only creating > interpreters with sandboxed abilities (i.e., Python being embedded in > an application that only uses sandboxed interpreters). This sounds like part of your design to me. It might help to have this earlier in the document (maybe even with an example diagram of a tree of interpreters). > All security measures should never have to ask who an interpreter is. > This means that what abilities an interpreter has should not be stored > at the interpreter level when the security can use a proxy to protect > a resource. This means that while supporting a memory cap can > have a per-interpreter setting that is checked (because access to the > operating system's memory allocator is not supported at the program > level), protecting files and imports should not such a per-interpreter > protection at such a low level (because those can have extension > module proxies to provide the security). It might be good to declare two categories of resources -- those protected by object hiding and those protected by a per-interpreter setting -- and make lists. > Backwards-compatibility will not be a hindrance upon the design or > implementation of the security model. Because the security model will > inherently remove resources and abilities that existing code expects, > it is not reasonable to expect existing code to work in a sandboxed > interpreter. You might qualify the last statement a bit. For example, a Python implementation of a pure algorithm (e.g. string processing, data compression, etc.) would still work in a sandboxed interpreter. > Keeping Python "pythonic" is required for all design decisions. As Lawrence Oluyede also mentioned, it would be helpful to say a little more about what "pythonic" means. > Restricting what is in the built-in namespace and the safe-guarding > the interpreter (which includes safe-guarding the built-in types) is > where security will come from. Sounds good. > Abilities of a Standard Sandboxed Interpreter > ============================================= > [...] > * You cannot open any files directly. > * Importation > + You can import any pure Python module. > + You cannot import any Python bytecode module. > + You cannot import any C extension module. > + You cannot import any built-in module. > * You cannot find out any information about the operating system you > are running on. > * Only safe built-ins are provided. This looks reasonable. This is probably a good place to itemize exactly which built-ins are considered safe. > Imports > ------- > > A proxy for protecting imports will be provided. This is done by > setting the ``__import__()`` function in the built-in namespace of the > sandboxed interpreter to a proxied version of the function. > > The planned proxy will take in a passed-in function to use for the > import and a whitelist of C extension modules and built-in modules to > allow importation of. Presumably these are passed in to the proxy's constructor. > If an import would lead to loading an extension > or built-in module, it is checked against the whitelist and allowed > to be imported based on that list. All .pyc and .pyo file will not > be imported. All .py files will be imported. I'm unclear about this. Is the whitelist a list of module names only, or of filenames with extensions? Does the normal path-searching process take place or can it be restricted in some way? Would it simplify the security analysis to have the whitelist be a dictionary that maps module names to absolute pathnames? If both the .py and .pyc are present, the normal import would find the .pyc file; would the import proxy reject such an import or ignore it and recompile the .py instead? > It must be warned that importing any C extension module is dangerous. Right. > Implementing Import in Python > +++++++++++++++++++++++++++++ > > To help facilitate in the exposure of more of what importation > requires (and thus make implementing a proxy easier), the import > machinery should be rewritten in Python. This seems like a good idea. Can you identify which minimum essential pieces of the import machinery have to be written in C? > Sanitizing Built-In Types > ------------------------- [...] > Constructors > ++++++++++++ > > Almost all of Python's built-in types > contain a constructor that allows code to create a new instance of a > type as long as you have the type itself. Unfortunately this does not > work in an object-capabilities system without either providing a proxy > to the constructor or just turning it off. The existence of the constructor isn't (by itself) the problem. The problem is that both of the following are true: (a) From any object you can get its type object. (b) Using any type object you can construct a new instance. So, you can control this either by hiding the type object, separating the constructor from the type, or disabling the constructor. > Types whose constructors are considered dangerous are: > > * ``file`` > + Will definitely use the ``open()`` built-in. > * code objects > * XXX sockets? > * XXX type? > * XXX Looks good so far. Not sure i see what's dangerous about 'type'. > Filesystem Information > ++++++++++++++++++++++ > > When running code in a sandboxed interpreter, POLA suggests that you > do not want to expose information about your environment on top of > protecting its use. This means that filesystem paths typically should > not be exposed. Unfortunately, Python exposes file paths all over the > place: > > * Modules > + ``__file__`` attribute > * Code objects > + ``co_filename`` attribute > * Packages > + ``__path__`` attribute > * XXX > > XXX how to expose safely? It seems that in most cases, a single Python object is associated with a single pathname. If that's true in general, one solution would be to provide an introspection function named 'getpath' or something similar that would get the path associated with any object. This function might go in a module containing all the introspection functions, so imports of that module could be easily restricted. > Mutable Shared State > ++++++++++++++++++++ > > Because built-in types are shared between interpreters, they cannot > expose any mutable shared state. Unfortunately, as it stands, some > do. Below is a list of types that share some form of dangerous state, > how they share it, and how to fix the problem: > > * ``object`` > + ``__subclasses__()`` function > - Remove the function; never seen used in real-world code. > * XXX Okay, more to work out here. :) > Perimeter Defences Between a Created Interpreter and Its Creator > ---------------------------------------------------------------- > > The plan is to allow interpreters to instantiate sandboxed > interpreters safely. By using the creating interpreter's abilities to > provide abilities to the created interpreter, you make sure there is > no escalation in abilities. Good. > * ``__del__`` created in sandboxed interpreter but object is cleaned > up in unprotected interpreter. How do you envision the launching of a sandboxed interpreter to look? Could you sketch out some rough code examples? Were you thinking of something like: sys.spawn(code, dict) code: a string containing Python source code dict: the global namespace in which to run the code If you allow the parent interpreter to pass mutable objects into the child interpreter, then the parent and child can already communicate via the object, so '__del__' is a moot issue. Do you want to prevent all communication between parent and child? It's not obvious to me why that would be necessary. > * Using frames to walk the frame stack back to another interpreter. Could you just disable introspection of the frame stack? > Making the ``sys`` Module Safe > ------------------------------ [...] > This means that the ``sys`` module needs to have its safe information > separated out from the unsafe settings. Yes. > XXX separate modules, ``sys.settings`` and ``sys.info``, or strip > ``sys`` to settings and put info somewhere else? Or provide a method > that will create a faked sys module that has the safe values copied > into it? I think the last suggestion above would lead to confusion. The two groups should have two distinct names and it should be clear which attribute goes with which group. > Protecting I/O > ++++++++++++++ > > The ``print`` keyword and the built-ins ``raw_input()`` and > ``input()`` use the values stored in ``sys.stdout`` and ``sys.stdin``. > By exposing these attributes to the creating interpreter, one can set > them to safe objects, such as instances of ``StringIO``. Sounds good. > Safe Networking > --------------- > > XXX proxy on socket module, modify open() to be the constructor, etc. Lots more to think about here. :) > Protecting Memory Usage > ----------------------- > > To protect memory, low-level hooks into the memory allocator for > Python is needed. By hooking into the C API for memory allocation and > deallocation a very rough running count of used memory can kept. This > can be used to prevent sandboxed interpreters from using so much > memory that it impacts the overall performance of the system. Preventing denial-of-service is in general quite difficult, but i applaud the attempt. I agree with your decision to separate this work from the rest of the security model. -- ?!ng
https://mail.python.org/pipermail/python-dev/2006-September/068673.html
CC-MAIN-2017-30
en
refinedweb
to get the record, and dereference it to get the object reference. def remove(self, recordID): myRecord = MyObject.selectBy(id=recordID) if myRecord.count(): myRecord = list(myRecord)[0] myRecord.destroySelf() else: #Could not find record with that id. If there are dependencies, you will have to delete the dependent records first (if at all.) If there is a better way to do this, please update.
http://trac.turbogears.org/wiki/SimpleDelete?version=2
CC-MAIN-2017-30
en
refinedweb
Disclaimer Please note that this is an example on how to do geocoding and Virtual Earth integration. There are probably a lot of different ways to do the same, and this might not work in all areas of the world. I do however think that there is enough information in the post that people can make it work anywhere and the good thing about the samples and demos I post in my blog is, that they are free to be used as partners and customers see fit. My samples doesn’t come with any warranty and if installed at customer site, the partner and/or customer takes full responsibility. Following this sample also does not release you from following any license rules of the products used in the blog (note that I don’t know these rules) Mappoint or Virtual Earth As you probably know there already is an integration to Mappoint in NAV 2009. This integration makes it possible to open a map for a given customer (or create route description on how to get there). The way it works is, that when you request an online map, a URL is created which will open a map centered on the requested customer. It is a one way integration – meaning that we can see a map and/or calculate routes. But… – it is a one way integration. Wouldn’t it be cool if we could request NAV for all customers in a range of 10 miles from another customer – display all customers on a map and have information included directly on Virtual Earth with phone numbers, orders and other things. That is what this is all about… But in order to do that, we need more information on our customers than just an address, a city and a country, as this information is hard to query. How would NAV know that Coventry is near Birmingham – if we don’t tell it. The idea is off course to add geocode information to all customers in our customer table. We can do this the hard way (typing them in), the other “easy” way (create an automation object which does the trick for you). Latitude and Longitude I am not (and I wouldn’t be capable of) trying to describe in details what Latitude and Longitude is – if you want this information you should visit and Not that it necessarily helps a lot, but there you have it. A simpler explanation can be found on, which is also, where this image is from Click the image to take you directly to the description. In this map, coordinates are described as degrees, minutes and seconds + a direction in which this is from the center (N, S, E, W). There are different ways to write a latitude and a longitude – in my samples I will be using decimal values, where Latitude is the distance from equator (positive values are on the northern hemisphere and negative values are on the southern) and Longitude is the distance from the prime meridian (positive values are going east and negative values are going west). This is the way Microsoft Virtual Earth uses latitude and longitude in the API. Underneath you will find a map with a pushpin in 0,0. Another location (well known to people in the Seattle area) is Latitude = 47.6 and Longitude = -122.33, which on the map would look like: Yes – the Space Needle. I think this is sufficient understanding to get going. Preparing your customer table First of all we need to create two fields in the customer table, which will hold the geocode information of the customer. Set the Decimalplaces for both fields to 6:8 and remember to create a key, including the two fields (else your searches into the customer table will be slow) You also need to add the fields to the Customer Task Page, in order to be able to edit the values manually if necessary. Virtual Earth Web Services In order to use the Microsoft Virtual Earth Web Services you need an account. I do not know the details about license terms etc., but you can visit for signing up and/or read about the terms. I do know that an evaluation developer license is free – so you can sign up for getting one of these – knowing of course that you probably cannot use this for your production data – please contact maplic@microsoft.com for more information on this topic. Having signed up for a developer account you will get an account ID and you will set a password which you will be using in the application working with the Virtual Earth Web Services. This account ID and Password is used in your application when connecting to Web Services and you manage your account and/or password at You will also find a site in which you can type in your Account ID and password to test whether it is working. On these sites there are a number of links to Mappoint Web Services – this is where the confusion started for me… I wrote some code towards Mappoint and I quickly ran into problems having to specify a map source (which identifies the continent in which I needed to do geocoding). I really didn’t want this, as this would require setup tables and stuff like that in my app. After doing some research I found out that Virtual Earth exposes Web Services to the Internet, which are different from Mappoint (I do not have any idea why). A description of the Virtual Earth Web Services API can be found here: and a description of the geocode service can be found here: and yes, your newly assigned account and password for Mappoint services also works for Virtual Earth Web Services. The way it works is, that when connecting to Virtual Earth Web Services you need to supply a valid security token. This token is something you request from a different web service and when requesting this token, you specify the number of minutes the token should be valid (Time-To-Live). Confused? Creating a COM automation object for geocoding addresses If you think you have the grasp around the basics of the Virtual Earth Web Services – let’s get going… First of all – fire up your Visual Studio 2008 SP1 and create a new Class Library (I called mine NavMaps). Add the CLSCompliant(true) to the AssemblyInfo.cs file (I usually to this after the ComVisible(false) line). // Setting ComVisible to false makes the types in this assembly not visible // to COM components. If you need to access a type in this assembly from // COM, set the ComVisible attribute to true on that type. [assembly: ComVisible(false)] [assembly: CLSCompliant(true)] After this you need to sign the assembly. Do this by opening properties of project, go to the Signing TAB, check the “Sign the assembly” checkbox and select new – type in a filename and password protect the key file if you want to (I usually don’t). Next thing is to create the COM interface and Class – the interface we want is: [ComVisible(true)] [Guid("B1F26FE7-0EA0-4883-BD6A-0398F8D2B139"), InterfaceType(ComInterfaceType.InterfaceIsDual)] public interface INAVGeoCode { string GetLocation(string query, int confidence, ref double latitude, ref double longitude); } For the implementation we need a Service Reference to: called GeocodeService Note that the configuration of this Service Reference will be written in app.config – I will touch upon this later. The implementation could be: [ComVisible(true)] [Guid("9090DF4C-FB24-4a4b-9E49-3924353A6040"), ClassInterface(ClassInterfaceType.None)] public class NAVGeoCode : INAVGeoCode { private string token = null; private DateTime tokenExpires = DateTime.Now; /// <summary> /// Geocode an address and return latitude and longitude /// Low confidence is used for geocoding demo data - where the addresses really doesn't exist:-) /// </summary> /// <param name="query">Address in the format: Address, City, Country</param> /// <param name="confidence">0 is low, 1 is medium and 2 is high confidence</param> /// <param name="latitude">returns the latitude of the address</param> /// <param name="longitude">returns the longitude of the address</param> /// <returns>Error message if something went wrong</returns> public string GetLocation(string query, int confidence, ref double latitude, ref double longitude) { try { // Get a Virtual Earth token before making a request string err = GetToken(ref this.token, ref this.tokenExpires); if (!string.IsNullOrEmpty(err)) return err; GeocodeService.GeocodeRequest geocodeRequest = new GeocodeService.GeocodeRequest(); // Set the credentials using a valid Virtual Earth token geocodeRequest.Credentials = new GeocodeService.Credentials(); geocodeRequest.Credentials.Token = token; // Set the full address query geocodeRequest.Query = query; // Set the options to only return high confidence results GeocodeService.ConfidenceFilter[] filters = new GeocodeService.ConfidenceFilter[1]; filters[0] = new GeocodeService.ConfidenceFilter(); switch (confidence) { case 0: filters[0].MinimumConfidence = GeocodeService.Confidence.Low; break; case 1: filters[0].MinimumConfidence = GeocodeService.Confidence.Medium; break; case 2: filters[0].MinimumConfidence = GeocodeService.Confidence.High; break; default: return "Wrong value for confidence parameter"; } GeocodeService.GeocodeOptions geocodeOptions = new GeocodeService.GeocodeOptions(); geocodeOptions.Filters = filters; geocodeRequest.Options = geocodeOptions; // Make the geocode request GeocodeService.IGeocodeService geocodeService = new ChannelFactory<GeocodeService.IGeocodeService>(new BasicHttpBinding(), new EndpointAddress("")).CreateChannel(); GeocodeService.GeocodeResponse geocodeResponse = geocodeService.Geocode(geocodeRequest); if (geocodeResponse.Results.Length == 0 || geocodeResponse.Results[0].Locations.Length == 0) { return "No locations found"; } latitude = geocodeResponse.Results[0].Locations[0].Latitude; longitude = geocodeResponse.Results[0].Locations[0].Longitude; return ""; } catch (Exception ex) { return ex.Message; } } } Before we add the last function – GetToken – I would like to draw attention to the line: GeocodeService.IGeocodeService geocodeService = new ChannelFactory<GeocodeService.IGeocodeService>(new BasicHttpBinding(), new EndpointAddress("")).CreateChannel(); This isn’t normally the way you would instantiate the service class. In fact normally you would see: GeocodeService.GeocodeServiceClient geocodeService = new GeocodeService.GeocodeServiceClient(); which is simpler, looks nicer and does the same thing – so why bother? Which configuration file to use? The primary reason is to avoid using the configuration file. The standard way of instantiating the Service Client is looking for a number of settings in the appSettings section in the config file – and you wouldn’t think that should be a problem – but it is. The problem is that it uses the application configuration file – NOT the DLL config file, and I couldn’t find any way to make it read the DLL config file for these settings. So if my NavMaps.dll should be accessible from the classic client, I would have to create a finsql.exe.config with the right configuration. Microsoft.Dynamics.Nav.Client.exe.config would be the configuration file for the Roletailored Client (if we are running the automation client side) and I would have to find a way to merge the config settings into the service tier configuration if we are running the automation server side. So, to avoid all that crap (and severe deployment problems), I instantiate my WCF Client manually through code – meaning that no app.config is necessary. Requesting a Virtual Earth Security Token First you need to add a Web Reference to called TokenWebReference The code for the GetToken could look like this: /// <summary> /// Check validity of existing security token and request a new /// Security Token for Microsoft Virtual Earth Web Services if necessary /// </summary> /// <param name="token">Security token</param> /// <param name="tokenExpires">Timestamp for when the token expires</param> /// <returns>null if we have a valid token or an error string if not</returns> private string GetToken(ref string token, ref DateTime tokenExpires) { if (string.IsNullOrEmpty(token) || DateTime.Now.CompareTo(tokenExpires) >= 0) { // Set Virtual Earth Platform Developer Account credentials to access the Token Service TokenWebReference.CommonService commonService = new TokenWebReference.CommonService(); commonService.Credentials = new System.Net.NetworkCredential("<your account ID>", "<your password>"); // Set the token specification properties TokenWebReference.TokenSpecification tokenSpec = new TokenWebReference.TokenSpecification(); IPAddress[] localIPs = Dns.GetHostAddresses(Dns.GetHostName()); foreach (IPAddress IP in localIPs) { if (IP.AddressFamily == System.Net.Sockets.AddressFamily.InterNetwork) { tokenSpec.ClientIPAddress = IP.ToString(); break; } } // Token is valid an hour and 10 minutes tokenSpec.TokenValidityDurationMinutes = 70; // Get a token try { // Get token token = commonService.GetClientToken(tokenSpec); // Renew token in 1 hour tokenExpires = DateTime.Now.AddHours(1); } catch (Exception ex) { return ex.Message; } } return null; } Note that I am giving the token a TTL for 70 minutes – but renew after 60 – that way I shouldn’t have to deal with tokens being expired. <your account ID> should be replaced by your account ID and <your password> should be replaced with your password. Using the geocode automation object Having build the assembly we need to put it to play. After building the assembly we need to place it in the “right” folder – but what is the right folder? It seems like we have 4 options - In the Classic folder - In the RoleTailored Client folder - In the Service Tier folder - Create a Common folder (next to the Classic, RoleTailored and Service Tier folders) and put it there Let me start by excluding the obvious choice, the Service Tier folder. The reason is, that if you have multiple Service Tiers, they will all share the same COM automation object and you don’t really know which directory the assembly is read from (the one where you did the last regasm). You could accidently delete the Service Tier in which the registered assembly is located and thus end up having denial of service. RoleTailored Client folder also seems wrong – there is absolutely no reason for running this object Client side – it should be on the Service Tier and there might not be a RoleTailored Client on the Service Tier. The same is case with the Classic folder – so I decided to create a Common folder and put the DLL there. Another reason for selecting the Common folder is, that you can install it on the Client (for being able to run it in the Classic Client) in a similar way as on the server. Your situation might be different and you might select a different option. After copying the DLL to the Common folder, we need to register the DLL and make it available as a COM automation object. C:\Windows\Microsoft.NET\Framework\v2.0.50727\regasm NAVMaps.dll /codebase /tlb Is the command to launch. Creating a codeunit for geocoding your customers What I normally do in situations like this is, to create a codeunit, which initializes itself when you Run it. In this case Running the codeunit should run through all customers and geocode them. OnRun() IF cust.FIND('-') THEN BEGIN REPEAT err := UpdateLatitudeAndLongitude(cust); IF (err <> '') THEN BEGIN IF NOT CONFIRM('Customer '+cust."No."+ ' - Error: '+err + ' - Continue?') THEN EXIT; END; UNTIL cust.NEXT = 0; END; Whether or not you want an error here or not is kind of your own decision. The function, that does the job looks like: UpdateLatitudeAndLongitude(VAR cust : Record Customer) error : Text[1024] CREATE(NavMaps, TRUE, FALSE); country.GET(cust."Country/Region Code"); query := cust.Address+', '+cust."Address 2"+', '+cust.City+', '+', '+cust."Post Code"+', '+country.Name; error := NavMaps.GetLocation(query, 2, cust.Latitude, cust.Longitude); IF (error = '') THEN BEGIN cust.MODIFY(); END ELSE BEGIN query := cust.City + ', ' + country.Name; error := NavMaps.GetLocation(query, 0, cust.Latitude, cust.Longitude); IF (error = '') THEN BEGIN cust.MODIFY(); END; END; Local variables looks like this Now we can discuss whether this is the right way to go around it – problem for me is, that the demodata are not real addresses – meaning that the street names are fine, city names are fine – but the street and number doesn’t exist in the city. So – what I do here is to start building a query with Address, City, Post code, Country Name – and tell Virtual Earth to make a High confidence search – this will return the location of correct addresses. In the demo database, this returns absolutely nothing. Next thing is to just give me the location of the city in the country – whether you want to do this for your customer table is kind of your own decision, but it works for the demo data. BTW – i tried this with addresses in Denmark (my former) and the US (my current) – and the format seems to suit Virtual Earth in these countries, but I didn’t try a lot of other addresses. The advantage of using the query is really to give Virtual Earth the freedom to interpret and look at the address – and it does a pretty good job. If anybody experiences problems with this in certain countries – please let me know (with a suggestion as to what should be changed) and I will include this. Can’t make it work? If (for some reason) this code doesn’t work for you – but you really want to get on with the next two parts of this walkthrough, I have included a table here with Latitudes and Longitudes for all the customers in the W1 database. Copy the entire table into a clean excel spreadsheet and use Edit In Excel to copy the data from the two columns to the two columns in your customer table in one go and save it back to NAV. Note that the Latitude and Longitude is only available in Excel if you - Have added the fields to the customer card - Updated the Web Reference in the Edit In Excel project to the customer card after doing step 1 (if you already had Edit In Excel running) If you don’t have the Edit In Excel – you can find it here Note that some of the customers have the same latitude, longitude – this is due to the demo data. Next steps Ok admitted – that was a long post – but hopefully it was helpful (and hopefully you can make it work) As usual – you can download the objects and the Visual Studio solution – and please do remember that this is NOT a solution that is installable and will work all over – you might have issues or problems and I would think the best place to post questions is mibuso (where my answer will get read by more people). In the next post I will create a website, which connects to NAV Web Services to get customer location information – and in the third post I will show how to create an action in the customer List and Card to open up an area map centered around a specific customer. So… – after step 2 you will see this Stay tuned! The forth post in this series is a surprise – and will require NAV 2009 SP1 in order to work – so stay tuned for this one… Enjoy and good luck Freddy Kristiansen PM Architect Microsoft Dynamics NAV Greetings Freddy, I have a question about fine tuning the geo-coding service query. I have used it to geo-code my real address info for a NAV database in the UK. As with your demo data, I have found duplicate longitude, latitude coordinates, actually many. Have you found a way to fine tune the query so the coordinates come back more unique? Best Regards, Robert The reason for the duplicate coordinates is this code: error := NavMaps.GetLocation(query, 2, cust.Latitude, cust.Longitude); IF (error = ”) THEN BEGIN cust.MODIFY(); END ELSE BEGIN query := cust.City + ‘, ‘ + country.Name; error := NavMaps.GetLocation(query, 0, cust.Latitude, cust.Longitude); IF (error = ”) THEN BEGIN cust.MODIFY(); END; END; Which basically says that if it cannot find a perfect match it goes for the mapping the city in the country. You could try changing the 2 (confidence level) to 1 or even 0, to see whether this gives you better results. You could also remove the "extreme relaxed" approach and leave the customers it cannot geocode with 0,0 – and then wait for part 4 (out of 4) which will include a way for you to point out the location of a customer (you would then need to edit the customers with no geocode information manually). You could also send me a couple of sample addresses (ones that doesn’t geocode), which I could try out. Thanks for the quick response. I went ahead and brought the confidence factor to 1 on the initial query and modified the lat/long to if nothing was found, ignoring the second query. The records I am concerned with still have an issue so let me give you some samples. Remember, the business entered these in not me. 🙂 Cust rec 1 name=King’s College London, Address = Biomedical and Health Sciences Stores, Address 2 = New Hunts House, City= London, Post Code= SE1 1UL, country-=UNITED KINGDOM. Rec 2 = Name=Queen Mary University of London, Address = Molecular Endocrinology, Ist Floor North, Address2= John Vane Science Building, City= London, Post Code= EC1M 6BQ, country-=UNITED KINGDOM. Perhaps I should change which fields are sent in the first query to name, post code, and country? Hi, I have COM DLL written in Visual Basic and its working fine with Navision 2009 classic client but when i try to use the same with RTC client of 2009 it doesnt work at all. i have changed the createobject function but it doesnt seem to work. Regards, Rajan You should post a question like that on Mibuso together with a description of what your COM DLL does? any visual elements won’t work. This is a GREAT article!! I had been battling an issue for several days, and the following code saved me!! "(new BasicHttpBinding(), new EndpointAddress("")).CreateChannel(); " Thanks for writing about VE! Jay staging.common.virtualearth.net/…/common.asmx looks like the webservice is broken! Hi, Thanks for the great article.. I have an issue. I have integrated the map in my site all controls but map displays in the div. Even the pushpins along with the data displays but the map do not display. Is there a way to identify where might be the issue?
https://blogs.msdn.microsoft.com/freddyk/2009/03/18/integration-to-virtual-earth-part-1-out-of-4/
CC-MAIN-2018-26
en
refinedweb
Most web developers know that HTML5 is available and is already supported (though not completely supported in all its features) in the leading browsers. See a summary of new tags and features in HTML5. Internet Explorer 8, Firefox 3.6+, and the latest versions of Safari and Chrome all have strong support for HTML5. Upcoming releases, including IE9, will improve the level of support. For this reason, in Microsoft’s recent beta release of the WebMatrix development platform, the templates for creating a new HTML page or a new ASP.NET page that uses the Razor syntax (pages with a .cshtml or .vbhtml file extension), all use the new HTML5 doctype. Often when I develop web pages, I don’t think much about the doctype that is used; I want to get onto the “good stuff” and start writing code. However, using the new HTML5 doctype and understanding it is worthwhile for the following reasons: - HTML5 offers some significant new tags and improved options for rendering content. - Using the HTML5 doctype declaration results in much simpler and cleaner code in your pages, as you’ll see. - Using the HTML5 doctype forces most browsers to render your pages in a “standards” mode (which is a good thing if you want to use new HTML features and validate your pages—see this article for more on browser modes). HTML5 Versus Previous DocTypes Let’s start by comparing a plain HTML page declared with an earlier doctype to one that is declared using the HTML5 doctype. The following block of code is the template for a new blank HTML page in Visual Studio 2010. VS uses this same doctype for creating HTML pages and also ASP.NET pages (the ASP.NET pages add some additional ASP.NET-related tags, but the doctype and HTML tags are otherwise the same). Visual Studio Page Template <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" ""> <html xmlns=""> <head> <title></title> </head> <body> </body> </html> Notice that the doctype declaration element specifies the XHTML 1.0 Transitional doctype, and (on the second line) a URI for the doctype declaration file. This doctype has been used for some time now in the VS templates for HTML and ASP.NET web pages. Also, note that the <html> tag specifies a default namespace that includes the XHTML 1.0 elements. WebMatrix Page Template <!DOCTYPE html> <html> <head> <title></title> </head> <body> </body> </html> Now in contrast, look at the above template that WebMatrix uses for HTML and ASP.NET pages with the Razor syntax. You can see right away how much cleaner and simpler this is: there’s no long DTD specifier or URI in the doctype, and you do not have to declare a namespace on the root <html> tag. This is really all you need to start taking advantage of HTML5 tags and content in your web pages. As mentioned, most of the leading browsers already support HTML5 so this doctype will validate and the majority of HTML5 tags should render properly in leading browsers. One More Item: Character Encoding One detail that the new page templates do not include is a <meta> tag that specifies character encoding. To be clear, this is not strictly required and you can validate pages without it. The above markup for the new page template validates at the W3 validation service even without the encoding declaration (although it does mention in a warning: “No character encoding declared at document level”). However, it is a really good idea, even a best practice, to declare a character encoding in your pages. Here are some possible consequences of not declaring a character encoding: - It can affect the rendering of your content. If you use certain characters, and no encoding is specified, a browser may render them in some odd way. - It can create security vulnerabilities. Without a character encoding, for example, users (whether intentionally or unintentionally) could enter characters in a form which would become active HTML (like a <script> tag) that would then be capable of executing. Ultimately, you should both validate user input and declare a character encoding, to minimize security vulnerabilities of this type. To declare a character encoding in an HTML5 page is very simple. In previous doctypes it required a fairly long, detailed <meta> tag with several attributes. But in HTML5 pages, all you need is a <meta> tag with a charset attribute, like the following examples which declares a utf-8 encoding: <meta charset="utf-8" /> If you add an encoding declaration like this one to the standard page template in WebMatrix, you’ll have a validating doc with the character encoding declared, which is a good thing. The following modified template is what I’ve been using to create my new HTML and .cshtml pages in WebMatrix, given that I use a utf-8 character encoding: WebMatrix Page Template with Character Encoding Declared <!DOCTYPE html> <html> <meta charset="utf-8" /> <head> <title></title> </head> <body> </body> </html> This updated form of the template also validates on the W3 validation service, and it gives you a clean, straightforward way to declare an encoding in your pages. The Bottom line The use of the new HTML5 doctype in the WebMatrix page templates gives you cleaner, simpler pages, and lets you take advantage of the new features in HTML5. It’s a good idea to add to the template a <meta> tag that declares the character encoding you will use in your web pages, because it gives you more consistent rendering and helps to reduce security vulnerabilities. Shouldn't the <meta> tag be placed inside of the <head> section? Awesome post! I cannot wait till HTML 5 is supported cross browser. You have cleared up many issues and uncertainties I had with HTML5. I was unaware that it could be used already. Thanks for the post.
https://blogs.msdn.microsoft.com/timlee/2010/08/06/working-with-html5-pages-in-webmatrix-and-the-razor-syntax/
CC-MAIN-2018-26
en
refinedweb
Hey guys. I am trying to take a normal queue and change it into a cyclic queue, which basically means that the enqueue operation will make the elements shift to make space if the rear pointer is pointing at the end of the queue. I am getting an error about a "Missing External Link," but I also am not too sure if I am implementing the class correctly as this is my first program using OOP. If you guys have any suggestions that would be awesome. Thanks :) Also, if you guys could let me know how to add line numbers to my code that would be huge, as I'm sure its annoying to try and help without them. #include <iostream> using namespace std; class CirQueue { private: int queue_size; int n; protected: int *buffer; int front; int rear; public: CirQueue(void) { front=0; rear=0; queue_size=10; buffer= new int[queue_size]; } CirQueue (int n) { front=0; rear=0; queue_size=n; buffer=new int[queue_size]; } ~CirQueue(void) { delete buffer; buffer= NULL; } void enqueue(int v) { if (rear<queue_size) buffer[rear++]=v; else if(rear==n) if(front>0) rear=0; else if(compact()) buffer[rear++]=v; } int dequeue(void) { if(front<rear) return buffer[front++]; else { return -1; } } private: bool compact(void); }; void main() { CirQueue Q1(5); Q1.enqueue(2); Q1.enqueue(8); int x = Q1.dequeue(); int y = Q1.dequeue(); }
https://www.daniweb.com/programming/software-development/threads/94451/cyclic-queue
CC-MAIN-2018-26
en
refinedweb
Overview Atlassian Sourcetree is a free Git and Mercurial client for Windows. Atlassian Sourcetree is a free Git and Mercurial client for Mac. Intro This project is based on the concept I used to aggregate Links, Posts, Episodes, etc for This Week In Django's rss feed. I initially implemented a more "reusable" version of it on my old, personal blog howiworkdaily. Compared to what I used for TWiD. Since I'm soon migrating off my own custom blog engine, I thought it would be a good idea to clean up the code, refactor, and make it available for reuse. Next up is making it further extensible for custom ProxyBase child classes. Current implementation example: from django.db import models from django_proxy.signals import proxy_save, proxy_delete ... - class Post(models.model): - - STATUS_CHOICES = ( - (1, _('Draft')), (2, _('Public')), ) title = models.CharField(max_length=150) body = models.TextField() tag_data = TagField()status = models.IntegerField(_('status'), choices=STATUS_CHOICES, default=2) - class ProxyMeta: - title = 'title' description = 'body'tags = 'tag_data' active = {'status':2} signals.post_save.connect(proxy_save, Post, True) signals.post_delete.connect(proxy_delete, Post)
https://bitbucket.org/danfairs/django-proxy
CC-MAIN-2018-26
en
refinedweb
Red Hat Bugzilla – Bug 55867 7.1 installer failure (after repeated attempts) Last modified: 2007-04-18 12:38:06 EDT Description of Problem: I have been trying to install redhat 7.1 on my Pentium200 class system for many days, and it invariably fails during part of the install. I now have an error message which I am instructed to report to bugzilla.redhat, and so I am following directions. Sometimes the failure occurs as anaconda begins to run, sometimes later in the process. Twice it has occured while partitioning the swap file, and this is the third occurance where I have been instructed to 'save to disk and report.' I thought maybe I should listen this time. Version-Release number of selected component (if applicable): 7.1 How Reproducible: after ~15 tries, have never completed the installation process. Steps to Reproduce: 1. insert 7.1 CD. turn on computer. ;) 2. install fails both in and out of text mode, but text seems to proceed father, more regularly. 3. Actual Results: Traceback (innermost last): File "/usr/bin/anaconda", line 520, in ? intf.run(todo, test = test) File "/var/tmp/anaconda-7.1//usr/lib/anaconda/text.py", line 1126, in run 409, in readCompsFile file = urllib.urlopen(filename) File "/usr/lib/python1.5/site-packages/urllib.py", line 59, in urlopen return _urlopener.open(url) File "/usr/lib/python1.5/site-packages/urllib.py", line 159, in open return getattr(self, name)(url) File "/usr/lib/python1.5/site-packages/urllib.py", line 330, in open_file return self.open_local_file(url) File "/usr/lib/python1.5/site-packages/urllib.py", line 334, in open_local_file import mimetypes, mimetools, StringIO File "/usr/lib/python1.5/site-packages/mimetools.py", line 5, in ? import rfc822 EOFError: EOF read where object expected Expected Results: Additional Information: I will attach the 'system state' that was printed. Created attachment 36801 [details] system state report Are you sure that the cd's are good? Random failures are almost always due to bad cd's or flaky hardware. Did you check the md5sums of the ISOs? I do not know how to check the md5sums, but I will check if someone tells me. I burned the CDs from iso's that I d/l'd from the redhat ftp site. I do have 2 HDs in the computer, one is 2 GB that I wanted to install Linux on (and in fact, had a successful 5.2 redhat on there, but hadn't played with it much, so I decided to put on 7.1 before I 'really got into Linux') and the other is an 8 GB WIN drive that I didn't want to format b/c it has a lot of mp3s that I still want access to. Could this unformatted WIN32 drive be throwing a wrench in the works? Last night I thought I should try an install without that drive connected, to see if it succeeded. The Win partition shouldn't cause the install to fail. Is the machine with the ISOs on it a Linux machine or a Windows machine? I can tell you how to check the md5sums on either one, but the procedure is a little different depending on which OS you run. the isos are on my WinME machine, to which I moved the MP3 drive earlier today (I don't know why I didn't consider that before), but moving that drive off the linux box has not helped the 7.1 install procedure. Ok, if you are on a Windows machine, go to and download md5sumer. You can use that program to check the md5sums of the ISO images. The values should match the values in the MD5SUM file on the ftp sites. For 7.1, the vaules are: If the numbers you get don't match those, then something got mixed up during the download. Does that help? Any more info here? Closing due to inactivity. Please reopen if you have more information.
https://bugzilla.redhat.com/show_bug.cgi?id=55867
CC-MAIN-2018-26
en
refinedweb
Android API Demos for Studio When learning to program Android apps nothing beats seeing working example code. Unfortunately rapid changes in the Android Operating System (OS), and the Google preferred tools, means that it is sometimes hard to find good working samples. (Hence the free Android Studio example projects provided here on Tek Eye.) Google does provide plenty of sample projects and code, even though the right piece of code can sometimes be hard to find. This article is about running the old Android API demos sample app in Android Studio. What Ever Happened to the Old Android Sample Projects Earlier versions of the Android Software Development Kit (SDK) shipped samples that could be loaded and run in the preferred Integrated Development Environment (IDE) of the time, the Eclipse IDE. Each new release of the SDK would usually introduce new sample projects, mainly to demonstrate a new Application Programming Interface (API). When Google switched to the new Android Studio IDE the Eclipse samples were moved to a legacy folder and new Gradle based samples were added. With the release of Android Nougat (API 23) the samples moved online. The legacy samples are useful because they cover many of the basic programming functions that new Android developers need to know about. In particular the API Demos legacy sample provides demos and code for many of the fundamental built in Android classes. The API Demos were even installed by default in some Android Virtual Devices (AVDs). What if you want to run the legacy API Demos app? It can be done, but it is a bit painful. You need to get hold of Android samples from Android Jelly Bean MR1 (API 17). The legacy folder has the API Demos Eclipse Project. Then you can import the Eclipse project. Sort out the Gradle syncing, rename a file, and begin working through a list of errors reported in Studio. Plus refactor the API Demo namespace if you don't want to remove the existing API Demos app in an AVD. Alternatively just use the Studio compatible API Demos project that is available here on Tek Eye. Running the Android API Demos in Studio The Android API Demos project for Studio will load and run. It is not perfect due to deprecated code and other Android changes over the years. But at least it will compile and run, and most of the demos work. This is great if you want to see some of the fundamental Android classes in action. Download the API Demos zip file. Extract the code to a directory on the PC (it will remain in that location). Then use the Android Studio Import Project option to load it up. (Selecting the build.gradle file.) Wait for Studio and Gradle to do it's thing. Click OK on the message Gradle settings for this project are not configured yet. (Unless you want to configure Gradle manually). Accept the sync message if it appears. Use the status messages at the bottom of the Studio window to monitor Gradle progress. Once loaded the API Demos should run on a suitable AVD, or an Android device configured for development (otherwise resolve any errors listed). Known Issue On some versions of Android, after API 19 (Android KitKat), the API Demos app will exception (error) on loading with java.lang.RuntimeException: Package manager has died. The error reported is !!! FAILED BINDER TRANSACTION !!! when a call is made to queryIntentActivities in the ApiDemos.java file. It appears a change in later versions of Android (APIs 21 to 23, Lollipop and Marshmallow) causes a limit to the amount of Activities that can be defined in AndroidManifest.xml. The manifest file for the API Demos app is over 3000 lines long. If about half the defined activites are removed from AndroidManifest.xml the app will run on APIs 21 to 23. Fortunately, as well as Anndroid KitKat and earlier versions, API Demos will load and run in Android Nougat and Oreo, APIs 24 and later. A few of the many demos in the API Demos app will also exception, due to chnages to the Android API over the years. The project was a straight conversion to a Studio project from the original Eclipse code. Some messaging (MMS) code examples were commented out due to later SDK incompatibilites, but no other code changes have been made. At least the project loads, compiles and runs in Studio (on the right API AVD or device), providing access to lots of API demo code. See Also - See the other Android Studio example projects to learn Android app programming. - For a full list of the articles on Tek Eye see the full site Index Author:Daniel S. Fowler Published:
https://tekeye.uk/android/examples/android-api-demos-for-studio
CC-MAIN-2018-26
en
refinedweb
Red Hat Bugzilla – Bug 7600 installation of 6.1 quits afer "running installer" & "running /sbin/loader" Last modified: 2015-01-07 18:39:58 EST Here are, in column format, are the screen messages leading up to the Signal 11: Uniform CDROM driver Revision :2.56 Floppy drive(s) : fd0 is 1.44M FDC0 is a post 1991 82077 md driver 0.90.0 MAX_MD_DEVS=256, MAX_REAL=12 raid5: measuring checksumming speed 8regs: 86.487.MB/sec 32regs: 63.627 MB/sec using fastest function: 8regs (86.487 MB/sec) scsi: 0 hosts. scsi: detected total md.c: sizeof(mdp_super_t) = 4096 Partition check: hda: hda1 hda2 <hda5 hda6> RAMDISK: Compressed image found at block 0 VFS: Mounted root (ext2 filesystem). Greetings. Red hat install init version 6.0 starting mounting /proc filesystems... done mounting /dev/pts (unix89 pty) filesystem....done checking for NFS root filesystem....no trying to remount root filesystem read write....done checking for writeable /tmp..... yes running install.... running /sbin/loader {at this point, my system went into a steady state with the cdrom light on steady and the harddisk light on steady. More text follows.... Traceback (Inermost last): File /usr/bin/anaconda, line 121 , in ? from todo import ToDo File /usr/lib/python1.5/site-package/dtodo.py, line 11 in ? from mouse importe Mouse File /usr/lib/python1.5/site-packages/mouse.py, line 3 in ? from snack import * File /usr/lib/python1.5/snack.py, line 7, in ? import _snack Import Error: libslang.so.1: cannot reead file data: Input/output error install exited abnormally sending termination signals .... done sending kill signals...done unmounting filesystems.... /mnt/sourde /dev/pts /proc you may safely reboot your system Very truly, John F. Kohler 217 Fairlawn Avenue Daly City, CA jkohler2@earthlink.net (650)756-7191 I have found this failure to be repeatable. I can still install Red Hat 5.2 and Red Hat 6.0 successfully. My system is a pentium -90 with 96 mbytes ram and 1.2 gig hard drive. I am assuming that you are performing a cdrom installation. Where did the install media come from? Is this a cdrom that was burned or one that was shipping in the boxed set? It appears that files are either missing or the permissions on the libraries are incorrect. The CD-ROM was from a Red Hat istallation package I purchased by mail order. Careless handling of the disk left finger marks on the readable surface, that prevented continuing installation to a successful end. Cure for the problem was soap and water and a dry lint free cloth After cleaning, the installation was flawless. Sorry for the bug report. There was no bug. John Kohler
https://bugzilla.redhat.com/show_bug.cgi?id=7600
CC-MAIN-2018-26
en
refinedweb
Indicates the progress of a page load #include <WebNotificationDelegate.h> virtual void progressNotification(WebFrame* frame ) = 0; This method is called by the WebView instance each time progress is made in loading the page. This can be called (using the estimateProgress() method) several times for the same WebView instance. startLoadNotification(), finishedLoadNotification()
https://www.qnx.com/developers/docs/6.4.1/webkit/dev_guide/progressnotification.html
CC-MAIN-2018-26
en
refinedweb
Overview Atlassian Sourcetree is a free Git and Mercurial client for Windows. Atlassian Sourcetree is a free Git and Mercurial client for Mac. This project is now DEPRECATED. Currently there are better alternatives, please use those in new projects. GlorpenCompassConnectorBundle The better Compass integration for Symfony2. For forking and other funnies: - - What problems is it solving? This bundle: - adds bundle namespace for compass files - so you can do cross bundle imports or use assets from other bundle - ... and it should enable distributing bundles with compass assets - you don't need installed assets in your_app/web - connector uses files from eg. SomeBundle/Resources dir - assets recompiling/updating when any of its dependencies are modified - be it another import, inlined font file or just width: image-width(@SomeBundle:public/myimage.png); - for referencing files inside app/Resources dir use just @somefile.png (sprites, inline images, scss imports) How to install - first, you need to install ruby connector gem: gem install compass-connector - add requirements to composer.json: { "require": { "glorpen/compass-connector-bundle": "*" } } - enable the bundle in your AppKernel class app/AppKernel.php <?php class AppKernel extends AppKernel { public function registerBundles() { $bundles = array( ... new Glorpen\Assetic\CompassConnectorBundle\GlorpenCompassConnectorBundle(), ... ); } } - add assetic filter config in config.yml assetic: filters: compass_connector: resource: "%kernel.root_dir%/../vendor/glorpen/compass-connector-bundle/Glorpen/Assetic/CompassConnectorBundle/Resources/config/filter.xml" #apply_to: ".scss$" # uncomment to auto-apply to all scss assets If you are running into following error: Scope Widening Injection detected: The definition "assetic.filter.compass_connector.resolver" references the service "templating.asset.default_package" which belongs to a narrower scope. Generally, it is safer to either move "assetic.filter.compass_connector.resolver" to scope "request" or alternatively rely on the provider pattern by injecting the container itself, and requesting the service "templating.asset.default_package" each time it is needed. In rare, special cases however that might not be necessary, then you can set the reference to strict=false to get rid of this error. ... or if you just want to change generated assets url (for eg. CDN). You need to add following configuration to you project (remember to change urls): framework: templating: assets_base_urls: http: [""] ssl: [""] Usage There are five kind of "paths": - app: looks like @MyBundle:public/images/asset.png - app global: cannot be converted to URL, looks like @data/image.png and will resolve to app/Resources/data/image.png - absolute: starts with single /, should be publicly available, will resolve to web/ - vendor: a relative path, should be used only by compass plugins (eg. zurb-foundation, blueprint) - absolute path: starts with /, http:// etc. and will NOT be changed by connector Some examples: @import "@SomeBundle:scss/settings"; /* will resolve to src/SomeBundle/Resources/scss/_settings.scss */ @import "foundation"; /* will include foundation scss from your compass instalation */ width: image-size("@SomeBundle:public/images/my.png"); /* will output image size of SomeBundle/Resources/public/images/my.png */ background-image: image-url("@SomeBundle:public/images/my.png"); /* will generate url with prefixes given by Symfony2 config */ @import "@SomeBundle:sprites/*.png"; /* will import sprites located in src/SomeBundle/Resources/sprites/ */ This bundle uses Assetic and CompassConnector filter name is compass_connector. Confguration You can change default configuration by setting following DIC parameters: parameters: assetic.filter.compass_connector.plugins: "zurb-foundation": ">4" assetic.filter.compass_connector.imports: ["/some/path"] assetic.filter.compass_connector.cache_path: %kernel.root_dir%/cache/compassConnector assetic.filter.compass_connector.compass_bin: /usr/bin/compass assetic.filter.compass_connector.resolver.output_dir: %kernel.root_dir%/../web/compass assetic.filter.compass_connector.resolver.vendor_prefix: vendors As for assetic.filter.compass_connector.plugins you can provide arguments as a list eg. ["zurb-foundation"] or array with required gem version: {"zurb-foundation":">=4"}
https://bitbucket.org/glorpen/glorpencompassconnectorbundle
CC-MAIN-2018-26
en
refinedweb
This page is a snapshot from the LWG issues list, see the Library Active Issues List for more information and the meaning of New status. Section: 20.15.4.3 [meta.unary.prop] Status: New Submitter: Hubert Tong Opened: 2015-05-07 Last modified: 2015-08-17 Priority: 3 View other active issues in [meta.unary.prop]. View all other issues in [meta.unary.prop]. View all issues with New status. Discussion: I do not believe that the wording in 20.15.4.3 [meta.unary.prop] paragraph 3 allows for the following program to be ill-formed: #include <type_traits> template <typename T> struct B : T { }; template <typename T> struct A { A& operator=(const B<T>&); }; std::is_assignable<A<int>, int> q; In particular, I do not see where the wording allows for the "compilation of the expression" declval<T>() = declval<U>() to occur as a consequence of instantiating std::is_assignable<T, U> (where T and U are, respectively, A<int> and int in the example code).Instantiating A<int> as a result of requiring it to be a complete type does not trigger the instantiation of B<int>; however, the "compilation of the expression" in question does. Proposed resolution:
https://cplusplus.github.io/LWG/issue2496
CC-MAIN-2019-51
en
refinedweb
Addressing many LEDs with a single Arduino A fun little side project of mine is Arduino C/MRI, a library that lets you easily connect your Arduino projects up to the JMRI layout control software, by pretending to be a piece of C/MRI hardware. Hence the name. A common problem when using Arduino C/MRI is dealing with lots of inputs and outputs. As an example, lets wire up a simple non-CTC crossing loop here in New Zealand. It is about as simple as you can get: Each end consists of: - A turnout. We'll need 1 digital output to drive that. - A route indication signal on each leg of the turnout. We'll need an LED for red, and one for green (technically it'd be blue here in NZ). That's 3 pairs of outputs = 6 more. - A push button to control the turnout. That's 1 digital input. That's 8 pins right there, doubled for the other end of the loop, makes 16 pins. That's nearly an entire Arduino dedicated to just one piece of track! Naturally we'll be having more than just a single crossing loop on our railway, yet we have no more Arduino pins left. What are we to do? Expanding outputs The answer comes in the form of a 74 series logic chip, the 74HC595. This is a serial-in, parallel-out device. We send it the state of each pin using 3 data pins, and it updates each of its 8 pins. So already using 3 pins we're able to drive 8 output pins. But the best part? They can be daisy chained. That means with 3 data pins, we can control an unlimited number of 74HC595 devices! Suddenly our job just got a whole lot easier. The schematic below demonstrates how one might do this: Notice how the Q7' pin is daisy-chained to the next device, while the ST_CP and SH_CP pins are shared. Now using 3 data pins we're addressing 16 outputs. Fantastic. What does the code to deal with this look like? #include <CMRI.h> #define LATCH 8 #define CLOCK 12 #define DATA 11 CMRI cmri; // defaults to a SMINI with address 0. SMINI = 24 inputs, 48 outputs void setup() { Serial.begin(9600); // make sure this matches your speed set in JMRI pinMode(LATCH, OUTPUT); pinMode(CLOCK, OUTPUT); pinMode(DATA, OUTPUT); } void loop() { // 1: main processing node of cmri library cmri.process(); // 2: update output. Reads bit 0 of T packet and sets the LED to this digitalWrite(LATCH, LOW); shiftOut(DATA, CLOCK, MSBFIRST, cmri.get_byte(0)); digitalWrite(LATCH, HIGH); } You can see we're using a new method here, cmri.get_byte(n). Rather than inspecting a single bit, this returns an entire byte, which we then shift out to the 74HC595 using the shiftOut method. Toggling the LATCH pin is how we tell the 74HC595 that we're busy sending it data; it only updates the output pins once we take the LATCH pin high. More inputs That was pretty easy, but what if we have a massive CTC panel and want dozens and dozens of inputs? Or we have gone a little crazy with occupancy detectors? Can we do something similar? Luckily we can, using the CD4021 "8-Stage Static Shift Register". It's just the opposite of what we've seen above. The schematic is a little messier because of all the pulldown resistors, but you get the idea: 3 data lines to the Arduino. The code is a little more complex, but only slightly: (note: untested code) #include <CMRI.h> #include // pins for a 168/368 based Arduino #define SS 10 #define MOSI 11 #define MISO 12 /* not used */ #define CLOCK 13 CMRI cmri; // defaults to a SMINI with address 0. SMINI = 24 inputs, 48 outputs void setup() { Serial.begin(9600); // make sure this matches your speed set in JMRI SPI.begin(); } void loop() { // 1: main processing node of cmri library cmri.process(); // 2: toggle the SS pin digitalWrite(SS, HIGH); delay(1); // wait while data CD4021 loads in data digitalWrite(SS, LOW); // 3: update input status in CMRI, will get sent to PC next time we're asked cmri.set_byte(0, SPI.transfer(0x00 /* dummy output value */)); } The connections from the above schematic are: - dataPin -> MISO (12) - latchPin -> SS (10) - clockPin -> CLOCK (13) We're using a new method again here, the cmri.set_byte(n, b) which sets the given byte to the value read in from the CD4021. Putting it together Using a combination of the 74HC595 and CD4021, you should be able to easily address dozens of inputs and outputs from a single Arduino, while using only half a dozen pins. This leaves other pins free for more interesting tasks. Suddenly wiring up your entire goods yard is not only possible, but quite easy.
http://www.utrainia.com/45-addressing-many-leds-with-a-single-arduino
CC-MAIN-2019-51
en
refinedweb
How to Create A Web App With External API Access using Wix Code Subscribe On YouTube Wix.com is a cloud-based website builder that makes it easy to create beautiful, professional website. With the Wix Editor, users are able to arrange and customize any element on their site. Traditionally Wix.com has targeted people who want to create their web presence without coding. Now Wix has added a big new feature to their platform: Wix Code. With Wix Code, you can set up database collections to store all your site info, add dynamic pages and repeating layouts and create custom forms to gather user input. Take full control of your site by adding your own JavaScript and connecting to Wix and third-party APIs. They’ve taken care of stuff like security, hosting and maintenance so you don’t have to and everything you create is SEO compatible. In this tutorial we’ll take a first look at Wix Code and go through a practical real-world sample application. You can just follow along by creating your free Wix.com account and follow the step-by-step instructions included. Visit the Wix Code Resources page to learn more about its features, read articles, watch and video tutorials, view sample code, check out API references and more. To get started with Wix Code you first need to enable the Developer Tools for your Wix Project. You can do so by selecting option Developer Tools from the Tools menu like you can see in the following: Wix Code Core Functionality In A Glimpse Wix Code offers a wide range of functionalities allowing you to take full control of your website. Let’s take a quick look at the core features: Database Collections With the database functionality you can set up collections and use them to store visitor details, product info and more. You choose how and where to use this data on your site, and control who can add, edit and view it. Repeating Layout Create a single list or grid layout that automatically updates site pages with unlimited content from your database. Feature all your news stories, business listings and more. Dynamic Pages With dynamic pages you can use a single design style and create 100s of new pages, each with a custom URL and unique content. Organize by category and update content easily from your database. User Input Create application forms, review sections, quizzes and more without writing a single line of code. All the data you collect will be automatically stored in your database and can be used anywhere on your site. Using APIs Build any type of website with a little JavaScript and Wix Code APIs. This combination gives you full control over your site’s functionality, from Wix elements to your databases to backend files, including fetching & routing. Access 3rd party APIs to further enhance you website with dynamic data. Practical Use Case: Creating A Crypto Currency Information App With External API Access To get a first impression of what you can do with Wix Code let’s create a real-word sample application. Let’s build a web application which lets you retrieve up to date information for any crypto currency. The application enables the user to type in a crypto currency name and then retrieve information about the currency by clicking on a button in the website. In the background a 3rd party REST API will be used to retrieve the needed information. By using Wix Code we’re initiating the web service call and we’re making sure the the result is displayed to the user on the website. Let’s start … Creating The User Interface First, let’s design the user interface using the Wix Editor. We’ll drag and drop all the page elements we need and customize their look and feel. Of course you’re free to design the user interface by yourself. You should make sure that at least the following controls are available on your site, so that the code we’re going to implement in the next steps is working. - User input element, input settings type Text, ID #currencyInput - Button element, ID #button1 - Text control with ID #result (this is the text element right underneath the input element) Implementing The Backend Module To retrieve crypto currency data we’re going to use the free REST API which is provided by Coinmarketcap.com. An overview of the API can be found at. For our use case the relevant API endpoint is[CurrencyName]. In order to access this API endpoint and retrieve information about a specific crypto currency we’re implementing a backend module for our website. To create a backend module, go to the Site Structure panel (to the left of the Wix Editor tools), go to Backend and click on the + symbol to add a new .js file. Let’s name it serviceModule.jsw. See below: Add the following code into that new file: import {fetch} from 'wix-fetch'; export function getCryptoCurrencyInfo(currency) { const url = '' + currency + '/'; console.log("Url: " + url); return fetch(url, {method: 'get'}) .then(response => response.json()) } With the first import statement we’re making sure that fetch is imported from the wix-fetch package. The fetch method is then used within the service method getCryptoCurrencyInfo to initiate a HTTP GET request to the API endpoint. Therefore we need to pass two parameters to the method call: - The first parameter is a string containing the URL of the API endpoint. - The second parameter is a configuration object in JSON format. In our specific case this object is containing the property method which is set to string ‘get’ to define that we’re initiating a HTTP GET request. The build up the URL string which is passed into the fetch method call the service method getCryptoCurrencyInfo is expecting to get the name of the crypto currency as an input parameter. The complete URL is built by concatenating the base URL endpoint ‘’ with the currency name. Using The Backend Module To make use of the service method getCryptoCurrencyInfo in the website open the site’s code editor and insert the following import statement first at the very top: import {getCryptoCurrencyInfo} from 'backend/serviceModule'; In the next step we’re ready to create a click event handler method for the button’s click event. Insert the following code in the event handler method: export function button1_click(event, $w) { //Add your code for this event here: console.log("Retrieving information for " + $w("#currencyInput").value); $w("#result").text = "Retrieving information for " + $w("#currencyInput").value; getCryptoCurrencyInfo($w("#currencyInput").value) .then(currencyInfo => { $w("#result").text = "Name: " + currencyInfo[0].name + "\n" + "Symbol: " + currencyInfo[0].symbol + "\n" + "Rank: " + currencyInfo[0].rank + "\n" + "Price (USD): " + currencyInfo[0].price_usd + "\n" + "Market Capitalization (USD): " + currencyInfo[0].market_cap_usd + "\n" + "Percent Change 1h: " + currencyInfo[0].percent_change_1h + "\n" + "Percent Change 24h: " + currencyInfo[0].percent_change_24h + "\n" + "Percent Change 7d: " + currencyInfo[0].percent_change_7d; }); } The integrated code editor makes it easy to keep the overview: Here we’re calling the backend method getCryptoCurrencyInfo and passing in the name of the of the crypto currency the user has provided in the input field currencyInput. To read out the current value of that input field the following code is used: $w("#currencyInput").value. If you have been using jQuery before this syntax should look very similar to you. The getCryptoCurrencyInfo method is returning a Promise so that we’re able to connect the call of the then method to wait for the response from the external API. Once the result is available and the function which is passed into then is executed we’re making sure that the output is done by setting the text property of the result control. Trying It Out Now we’re ready to go! Let’s try out the final result by publishing the application. Hit the Publish button in the upper right corner of the Wix website and you should be able to see a popup windows: The application is made available in the Internet and you can access the URL by clicking on button View Site. Clicking on this button takes you directly to the application’s website: Now you can enter the name of a crypto currency in the input field, e.g. “Bitcoin” and hit the button Retrieve Information. The result is presented right underneath the input field like you can see in the following screenshot: Conclusion With Wix Code, you get advanced functionality that lets you add your own JS code along with the stunning design features of the Wix Editor. With a built-in database, JavaScript backend and IDE – all hosted in the secure Wix cloud – you have one-click deployment. All you need to start creating is your front-end or backend code, giving you more time to focus on your clients and their websites. With Wix Code you can start coding immediately. You don’t need to care for the infrastructure (like configuring a web server) or setting up a development environment by yourself. Everything is in place and you can start implementing right away. As Wix Code is using JavaScript you’re able to apply your existing skills directly. No new language, no new framework to learn! Combining the power of Wix Code with the traditional functionality of the Wix Editor creates lots of possibilities for the web. Without hassle, you can now create content-rich websites and impressive web Applications with Wix. Set up database, build forms, add custom interactions and change site before with Wix APIs, third-party APIs (like in our demo) and your own JavaScript. You can try Wix Code today. This post is sponsored by Wix.com. Thanks a lot for supporting CodingTheSmartWay.com! ONLINE COURSE: The Complete JavaScript Course Check out the great The Complete JavaScript Course: Build a Real-World Project with thousands of students already enrolled: The Complete JavaScript Course
https://codingthesmartway.com/how-to-create-a-web-app-with-external-api-access-using-wix-code/
CC-MAIN-2019-51
en
refinedweb
a version resource or put a TensorFlow SavedModel in a Cloud Storage location that your project can access. - If you choose to use a version resource for batch prediction, you must create the version with the mls1-c1-m2machine type. Set up a: -. For batch prediction, the version must use the mls1-c1-m2machine type. If you provide a Model URI (see the following section), omit these fields. - Model URI You can get predictions from a model that isn't deployed on AI Platform by specifying the URI of the SavedModel you want to use. The SavedModel must be stored APIs Client Library for Python, you can use Python dictionaries to represent the Job and PredictionInput resources. Format your project name and your model or version name with the syntax used by the AI Platform_name, input_paths, output_path, model_name, region, data_format='JSON',. You can use the Google APIs Client Library for Python to call the AI Platform Training and Prediction API without manually constructing HTTP requests. Before you run the following code sample, you must set up authentication. How to] import googleapiclient.discovery as discovery project_id = 'projects/{}'.format(project_name) ml = discovery.build('ml', 'v1') Console: Go to the AI Platform Jobs page in the Google Cloud Console: Go to the Cloud.
https://cloud.google.com/ml-engine/docs/tensorflow/batch-predict?hl=no
CC-MAIN-2019-51
en
refinedweb
General Information Desktop Web Controls and Extensions Mainteinance Mode Frameworks and Libraries General Information Desktop Web Controls and Extensions Mainteinance Mode Frameworks and Libraries BootstrapChartSeriesCollection Class Namespace: DevExpress.Web.Bootstrap Assembly: DevExpress.Web.Bootstrap.v19.2.dll Declaration public class BootstrapChartSeriesCollection : BootstrapCoordinateSystemChartSeriesCollection Public Class BootstrapChartSeriesCollection Inherits BootstrapCoordinateSystemChartSeriesCollection Remarks A series represents a grouping of related data points. The most important characteristic of a series is its type, which determines a particular visual representation of data. You can find more details on each series type in the Series Types help topic. To create a series, add an object defining the series to the BootstrapChartSeriesCollection array. In the series' object, specify the series type, data source fields, the appearance of the series points and other options. If you need to set similar values to properties of several series, use the BootstrapChartCommonSeries class. It exposes the properties that can be specified for all series at once and for all series of a particular type at once. Note that the values specified for a series individually (in the BootstrapChartSeriesCollection array) override the values that are specified for all series (in the BootstrapChartCommonSeries class). Inheritance Any other suggestions? Tell us here.
https://docs.devexpress.com/AspNetBootstrap/DevExpress.Web.Bootstrap.BootstrapChartSeriesCollection
CC-MAIN-2019-51
en
refinedweb
How to track recently used files and folders (XAML) [This article is for Windows 8.x and Windows Phone 8.x developers writing Windows Runtime apps. If you’re developing for Windows 10, see the latest documentation]. This: using Windows.Storage; using Windows.Storage.Pickers; FileOpenPicker openPicker = new FileOpenPicker(); openPicker.ViewMode = PickerViewMode.Thumbnail; openPicker.SuggestedStartLocation = Windows.Storage.Pickers.PickerLocationId.PicturesLibrary; openPicker.FileTypeFilter.ReplaceAll([".png", ".jpg", ".jpeg"]); // Open the picker for the user to pick a file StorageFile pickedFile = await openPicker.PickSingleFileAsync(); When the asynchronous call returns, PickSingleFileAsync in the example, the file that the user picked is returned as a StorageFile that is stored in the pickedFilevariable. If you are letting your user pick a folder, instead of a file, your code uses PickSingleFolderAsync. The picked folder is then returned as a StorageFolder. Add the picked file to the most recently used list (MRU) by adding a line of code like this after your asynchronous call: String mruToken = Windows.Storage.AccessCache.StorageApplicationPermissions.MostRecentlyUsedList.Add(pickedFile, "profile pic"); MostRecentlyUsedList.Add is overloaded. In the example, we use Add(IStorageItem, String)(IStorageItem). code should now look similar to this: // Open the picker for the user to pick a file StorageFile pickedFile = await openPicker.PickSingleFileAsync(); // Add picked file to MRU String mruToken = Windows.Storage.AccessCache.StorageApplicationPermissions.MostRecentlyUsedList.Add(pickedFile, pickedFile.Name);: using Windows.Storage; using Windows.Storage.AccessCache; String mruFirstToken = StorageApplicationPermissions.MostRecentlyUsedList.Entries.First.Token; StorageFile retrievedFile = await StorageApplicationPermissions.MostRecentlyUsedList.GetFileAsync(mruFirstToken);: using Windows.Storage.AccessCache; AccessListEntryView mruEntries = StorageApplicationPermissions.MostRecentlyUsedList.Entries; if (mruEntries.Size > 0) { foreach (AccessListEntry entry in mruEntries) { String mruToken = entry.Token; // Continue processing the MRU entry } } else { // Handle empty MRU } The AccessListEntryView lets you iterate entries in the MRU., the oldest item (the item that was last accessed the longest time ago) is automatically removed..
https://docs.microsoft.com/en-us/previous-versions/windows/apps/hh972344%28v%3Dwin.10%29
CC-MAIN-2019-51
en
refinedweb
So for many weeks I've debated about what to post here. My colleagues and I have worked pretty hard since the release of Exchange 2000 to document the SMTP Transport stack. So I figured by now everyone knew everything there was to know about setting up SMTP domains for inbound and relay. We covered the topic in the Greenbooks (), web casts, KB articles (,), and more. But the truth is, it is complex, and is difficult to troubleshoot without understanding how it all works. So here's everything you ever wanted to know on how to configure SMTP domains for inbound and relay. First, you have to know what inbound and relay domains are. Clearly, it would be bad for Exchange to accept email for every SMTP domain. Either one of two things would happen, we'd either be generating a lot of non-delivery reports (NDRs) for emails we never should have accepted, or we'd be forwarding email to a lot of different domains around the world. Thus, we limit what we accept for inbound and relay. Inbound domains (authoritative) are domains that Exchange accepts mail for, and if there is no match in the directory (AD), we generate an NDR. Relay domains are domains that we accept mail for, check to see if there is a match in our directory, and if not, forward it using either DNS or to a smart host. So, out of the box, Exchange accepts email for just one authoritative inbound domain, your default domain. The Exchange Metabase Update agent (also known as DS2MB) replicates this domain from the default recipient policy object into the IIS metabase, where SMTP reads it. A good chunk of the SMTP related configuration is replicated in this manner. So this means that you may have to wait on the local Config_DC to complete replication, but shortly thereafter, DS2MB will notice the change and replicate them to the metabase. Even better (especially those who remember 5.5), few changes require any sort of restart of services. New installations will have the default domain of their AD namespace, which is typically not what you want to leave it as. You can of course change this default domain to be anything you wish, whether it resolves in DNS or not. In fact, that is one way that you can for all practical purposes prevent inbound email. Next you may want to configure it such that Exchange accepts email for other domains. Simply make sure that SMTP domain appears in a recipient policy. It need not be the default, although it can be, and you need not use the Recipient Update Service (RUS) to stamp all user accounts. For example, if you only had two users that need to accept email for the secondary domain, then you would simply create a new recipient policy with a blank LDAP filter (assuming you put the addresses on the two accounts manually) or create a filter that selects just those two users. In either case, the RUS would not stamp the rest of your accounts. Yes, there is no "reroute" functionality like Exchange 5.5, but this provides nearly identical results. You can add up to about 1000 domains total before you need to start thinking about performance and our "hosted Exchange" solution. Speaking of Exchange 5.5, if you are in a mixed site/AG, although you can add SMTP domains to any recipient policy, if you DO want the RUS to stamp addresses on the users, you will need to use the special recipient policy for that site. Yes, this policy supports multiple SMTP domains. Just make sure you're running the latest service pack and you should be able to do this. Now let's say you want a relay or non-authoritative domain. Since relay is considered mail not destined for the local organization, you do not configure this on the recipient policy. What you do is create a new SMTP connector with a specific address space (e.g., "myrelaydomain.msft") and check the box for "Allow messages to be relayed to these domains" on the same tab. You do not ever want to do this on a connector with address space of "*" because that would open you up to allow relay to all domains, which would clearly be bad. Selection of the bridgehead of this connector should be done with care. This checkbox is what allows you to receive mail for this domain, so this is one of the only places where a connector not only serves an outbound purpose, but an "inbound" (well, relay) as well. On the topic of RUS and relay/non-authoritative domains: you may wish to stamp SMTP addresses of the non-authoritative domain on users in your directory. This is where the checkbox for "This Exchange Organization is responsible for all mail delivery to this address" on the recipient policy comes in. This check box tells DS2MB not to replicate this domain to SMTP, not to accept the domain as inbound (thus limiting acceptance to the bridgehead with the connector mentioned in the previous paragraph), and also not to generate NDRs if no match is found for a particular address in the directory. A few words of warning here: First, never configure an authoritative domain for relay by adding a connector. This makes no sense to DS2MB and it will probably warn you with an event in your event log. You may also get unpredictable results. Secondly, ALL recipient policies should match on this setting. Complex? Sure, though the most common scenarios are actually easier than in 5.5 where you had to go to both Site Addressing AND the IMC Routing tab. In addition to getting into trouble by not following my words of warning above, there is at least one other major limitation. Allow me to help this be less complex as well - nothing related to this should ever have you modifying your SMTP VS settings. Just your recipient policies for inbound, and SMTP connectors for relay. <sidebar>By the way, under very few circumstances would I recommend using the setting "Forward unresolved recipients to smart host", even though we've presented it as an option in KB321721 (). Certainly, you cannot mix the "Method 1" with "Method 2" as they are mutually exclusive. </sidebar> Troubleshooting this comes down to looking at the NDR you're getting back: 5.7.1 means that you haven't either accepted the domain as inbound or relay 5.4.6 means loop -- that the server is configured non-authoritative for this domain, but then has no place to forward the message and is trying to deliver it back to itself (particularly if you are using contacts where the target SMTP address is a shared domain and/or DNS is resolving the domain back to the sending server) 5.1.1 means that the server couldn't find a match for the user in AD, and the server is authoritative for this particular domain It is certainly possible, on a server that ran Exchange 2000 prior to SP3 (even if it has SP3 or later now) that DS2MB is confused and cannot replicate. Usually you get events in your event log from MSExchangeMU and you're getting NDRs, even though you're positive that you configured this correctly. Optionally, you can perform the following steps: 1. Backup the metabase 2. Stop the SMTP service 3. Use Metaedit or MBExplorer to delete all subkeys under LM/SMTPVS/X/Domain 4. Use the same tool to delete the LM/DS2MB key 5. Restart the Exchange System Attendant & SMTP services If you aren't sure if this is your issue, or if you need help, please call Support Services and do not attempt this on your own. Editing the metabase is just like editing the registry and you should only proceed if you know what you are doing. It should take much less time to pick up the phone than it would to reinstall and re-patch Exchange and IIS. In conclusion, if you would like to pick my brain on other Transport related topics, please post your suggested topics feedback here:.
https://techcommunity.microsoft.com/t5/Exchange-Team-Blog/Configuring-SMTP-Domains-for-Inbound-and-Relay/ba-p/609935
CC-MAIN-2019-51
en
refinedweb
51877/python-splitting-numpy-2-d-array For splitting the 2d array,you can use two specific functions which helps in splitting the NumPy arrays row wise and column wise which are split and hsplit respectively . 1. split function is used for Row wise splitting. 2. hsplit function is used for Column wise splitting . The simplest way of splitting NumPy arrays can be done on their dimension. A 3 * 4 array can easily be broken into 3 sets of lists for Rows and 4 set of lists using Columns. Here is how we can do the same : import numpy as np arr = np.array([[1,2,3,4], [5,6,7,8], [9,10,11,12]]) arr.shape # Displays (3,4) #Split Row Wise np.split(arr,3) #Splitted as --> [1,2,3,4] | [5,6,7,8] | [9,10,11,12] #Split Column Wise np.hsplit(arr,4) #Splitted as --> [1,5,9] | [2,6,10] | [3,7,11] | [4,8,12] Slicing is basically extracting particular set of ...READ MORE Hey @abhijmr.143, you can print array integers ...READ MORE If you have matplotlib, you can do: import ...READ MORE Here's a generator that yields the chunks ...READ MORE code from This uses the L1 distance ...READ MORE Hi, it is pretty simple, to be ...READ MORE Hi. Refer to the below command: import pandas ...READ MORE You can do it by specifying index. ...READ MORE Good question, glad you brought this up. I ...READ MORE Python doesn't have a native array data ...READ MORE OR Already have an account? Sign in.
https://www.edureka.co/community/51877/python-splitting-numpy-2-d-array?show=51878
CC-MAIN-2019-51
en
refinedweb
This reply is a little bit late (just got back from holiday) but I think it's an important topic and I was wondering if anybody has given this more thought in the meantime. With some minor adaptions I think your general description of the usage of XML namespaces sounds very good. > -----Original Message----- > From: Nicola Ken Barozzi [mailto:nicolaken@apache.org] > Sent: Freitag, 6. September 2002 23:54 > To: Ant Developers List > Subject: Re: xml namespace support in ant2 > > So instead we could say that > 1) all ant: namespace elements are core ant. > Well, what the namespace prefix is in the buildfile is really up to the user. It's just the URI that matters. So all of thw following should work: <project xmlns="urn:ant-namespace"> <target name="test"/> </project> <xyz:project xmlns: But specifying it as the default namespace as in the first example or using "ant" as the prefix will probably be the most common usages. When you say "core ant" I assume you're talking about <project>, <target>, all core and optional tasks and types in the distribution. > 2) different namespaced elements have to be defined before use as > taskdefs, typedefs or antlibs, and ant treats them accordingly > I think saying it the other way around would be more correct. I.e. Custom tasks and types have to be declared to use a different XML namespace. > 3) all namespaced tags that are not declared as in (2) must > be declared > beforehand as <namespacedef namespace="nms" handler="..."/>, > so that the > Ant ProjectHelper can use that particular Handler to process the > extension. A convenient nullHandler is created when the > handler is not > specified. > What would this be used for? Isn't it enough with task and type extenstions? Other extensions would certainly raise confusion... The primary benefits I see of using XML namespaces the described way are: - Mechanism to avoid name clashes of different tasks and types - Schemas (grammars) could be written for Ant buildfiles covering the Ant core elements and external tasks and types. The schemas in turn would allow the validation of tasks and types to take place externally and at an earlier stage. With externally I mean that the tasks and types wouldn't need code to do the validation themselves. At a later stage the schemas could maybe even be the basis for the documentation. Also, the different usages of a task could maybe be bound to the different processing modes somehow. Playing some with RELAX NG and Sun's MSV validator I found that it was quite easy to specify grammars for the different Ant tasks and types. Even writing grammars that support other namespaces isn't very hard. That way a grammar which validates buildfiles using other namespaces for external tasks and types can be constructed easily. It should also be quite easy to allow external tasks to specify their own grammar for validation. Any thoughts? Cheers, -- knut
http://mail-archives.eu.apache.org/mod_mbox/ant-dev/200209.mbox/%3C36E996B162DAD111B6AF0000F879AD1A76BF9F@nts_par1.paranor.ch%3E
CC-MAIN-2019-51
en
refinedweb
1 /* 2 * Fimex, UnitsConverterDecl.h 3 * 4 * (C) Copyright 2019, met.no 5 * 6 * Project Info: 7 * 8 * This library is free software; you can redistribute it and/or modify it 9 * under the terms of the GNU Lesser General Public License as published by 10 * the Free Software Foundation; either version 2.1 of the License, or 11 * (at your option) any later version. 12 * 13 * This library is distributed in the hope that it will be useful, but 14 * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY 15 * or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public 16 * License for more details. 17 * 18 * You should have received a copy of the GNU Lesser General Public 19 * License along with this library; if not, write to the Free Software 20 * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, 21 * USA. 22 */ 23 #ifndef UNITSCONVERTERDECL_H_ 24 #define UNITSCONVERTERDECL_H_ 25 26 #include <memory> 27 28 namespace MetNoFimex { 29 30 class UnitsConverter; 31 typedef std::shared_ptr<UnitsConverter> UnitsConverter_p; 32 33 } /* namespace MetNoFimex */ 34 35 #endif /* UNITSCONVERTERDECL_H_ */
https://fossies.org/linux/fimex/include/fimex/UnitsConverterDecl.h
CC-MAIN-2019-51
en
refinedweb
Porting Dojo Methods to Flash - Part 2 of 3 May 2nd, 2008 at 12:01 am by Mike Wilcox Adobe recently announced their new Open Screen project, which opens the licensing of the Flash Player and much more. We’re celebrating this event with a three part series on Dojo and ActionScript and previewing some of the work by the Dojo team. In part 1 of our series, we implemented dojo.hitch into AS. Now we can work on bringing the benefits of dojo.connect to ActionScript through our lang.connect function, which has a dependency on lang.hitch. This will be more challenging than the lang.hitch implementation, as Flash object-events work very differently from HTML object-events. We’re going to be porting the concept here more than actual Dojo code. The connection will take the source and event, and make a broadcaster and a listener. When the listener “hears” the broadcast of the event, it will trigger the callback: Let’s start by adding connect to our lang namespace: _global.lang = { hitch: function(scope, method){ ... }, connect: function(source:Object, event:String, target, callback):String{ // } } As in Dojo, the source’s event is what we listen to. Unlike Dojo, where the event can be either a string or a function, we can only accept strings due to a limitation in Flash. The data types in the previous example demonstrate what is expected in these arguments. The first two arguments must be an Object and a String, and the third and fourth are not typed, to allow for flexibility. lang.connect returns a String, which is the connection id used for removing the connection later, if desired. To use this functionality in the timeline or pre-AS2, simply remove the data typing. The first thing we’ll do is create a closure from the target and callback. Then we need a vanilla object to use as a listener to which we’ll attach the event: connect: function(source, event, target, callback){ var hitched = this._hitch(target, callback); var listener = new Object(); listener[event] = function(args){ //ex: listener.onRelease() hitched(); } } Now we need our source object to tell the listener when its event has been fired. Flash components can broadcast events, but natively, Flash objects and MovieClips do not. We use the AsBroadcaster singleton to inject the object with the necessary methods ( addListener, broadcastMessage, and removeListener). But we want to be able to connect to the source/event multiple times, so we only need do this once. Then we will add the listener to it: if(!source.broadcasters) { source.broadcasters = {}; // a hash map of our event types AsBroadcaster.initialize(source) } source.addListener(listener); Now we need to broadcast when our event fires. If we’ve already attached this type of event to the source, we don’t want to do it twice: if(!source.broadcasters[event]){ source[event] = function(){ source.broadcastMessage(event, arguments); } source.broadcasters[event] = true; } Note what we have done here and how it differs from dojo.connect(). We’ve written the method directly to the source object. We’re not really listening to it. Jumping ahead a little bit, here’s how we could break this code: On line 1 we successfully connected to the button’s onRelease event, but in line 2, we overwrote our connection. This is a great example of the power of dojo.connect() - using similar code, this would not overwrite the connection in a Dojo JavaScript file. The reason for this limitation in Flash is as I noted earlier, most Flash objects don’t natively broadcast events. Some do, like Mouse, Key, Select, and Flash components. If this code was written to target just those objects, we could avoid the overwrite limitation. We can finish up our method by returning an id that references this connection. The code will create an indexed id and add the listener and the source to a hash map: __conListeners: {}, __id: 0, ... var id = "connect_"+this.__id++; this.__conListeners[id] = {listener:listener, source:source}; return id; Our final code, including the lang.disconnect method: _global.lang = { __conListeners:{}, __id:0, connect: function(source, event, target, callback){ var hitched = this._hitch(target, callback); var listener = new Object(); listener[event] = function(args){ hitched(); } if(!source.broadcasters) { source.broadcasters = {}; AsBroadcaster.initialize(source) } source.addListener(listener); if(!source.broadcasters[event]){ source[event] = function(){ source.broadcastMessage(event, arguments); } source.broadcasters[event] = true; } var id = "connect_"+this.__id++; this.__conListeners[id] = {listener:listener, source:source}; return id; } disconnect: function(id){ var con = this.__conListeners[id]; if(con){ con.source.removeListener(con.listener); con = null; } } } And here is our test case: var object = { doFrame: function(){ trace("scoped onEnterFrame, 2nd connection"); } }; var count = 0; var conId = lang.connect(_root, "onEnterFrame", function(){ trace("onEnterFrame: " + count); count++; if(count>3){ lang.disconnect(conId); } }); lang.connect(_root, "onEnterFrame", object, "doFrame"); // outputs: scoped onEnterFrame onEnterFrame: 1 scoped onEnterFrame onEnterFrame: 2 scoped onEnterFrame testScope: dojo onEnterFrame: 3 scoped onEnterFrame scoped onEnterFrame .... We’ve successfully ported hitch and connect into Flash. More can be done on our lang.connect - a little error checking can speed up development by checking that the objects and methods exist. And our implementation is not forwarding any events. Since we pass the event as a string, these could easily be checked and extended: in this case, the frame number of the timeline could be returned, a connection to a button can return it’s event type. Connecting to the Mouse object could return the x/y coordinates of the mouse in that scope. Also without too much work, this code can be modified to work in an AS2 class file, and compiled with MTASC. This is another example of how Dojo is not limited to JavaScript nor even a browser, and how you can port your favorite Dojo functionality to other languages. In the final part of our series, we will explore connecting JavaScript objects to Flash objects. Tags: Flash ActionScript
http://www.sitepen.com/blog/2008/05/02/porting-dojo-methods-to-flash-part-2-of-3/
crawl-001
en
refinedweb
This chapter provides a set of design guidelines and techniques you can use to ensure that your Swing GUIs perform well and provide fast, sensible responses to user input. Many of these guidelines imply the need for threads, and sections 11.2, 11.3, and 11.4 review the rules for using threads in Swing GUIs and the situations that warrant them. Section 11.5 describes a web-search application that illustrates how to apply these guidelines. Note: Sections 11.2 and 11.3 are adapted from articles that were originally published on The Swing Connection. For more information about programming with Swing, visit The Swing Connection online at. Design work tends to be time-consuming and expensive, while building software that implements a good design is relatively easy. To economize on the design part of the process, you need to have a good feel for how much refinement is really necessary. For example, if you want to build a small program that displays the results of a simple fixed database query as a graph and in tabular form, there's probably no point in spending a week working out the best threading and painting strategies. To make this sort of judgment, you need to understand your program's scope and have a feel for how much it pushes the limits of the underlying technology. To build a responsive GUI, you'll generally need to spend a little more time on certain aspects of your design: The following sections describe four key guidelines for keeping your distributed applications in check: In a distributed application, it's often not possible to provide results instantaneously. However, a well-designed GUI acknowledges the user's input immediately and shows results incrementally whenever possible. Your interface should never be unresponsive to user input. Users should always be able to interrupt time-consuming tasks and get immediate feedback from the GUI. Interrupting pending tasks safely and quickly can be a challenging design problem. In distributed systems, aborting a complex task can sometimes be as time-consuming as completing the task. In these cases, it's better to let the task complete and discard the results. The important thing is to immediately return the GUI to the state it was in before the task was started. If necessary, the program can continue the cleanup process in the background. One way to do this is to use explicit notifications. For example, if the information is part of the state of an Enterprise JavaBeans component, the program might add property change listeners for each of the properties being displayed. When one of the properties is changed, the program receives a notification and triggers a GUI update. However, this approach has scalability issues: You might receive more notifications than can be processed efficiently. To avoid having to handle too many updates, you can insert a notification concentrator object between the GUI and the bean. The concentrator limits the number of updates that are actually sent to the GUI to one every 100 milliseconds or more. Another solution is to explicitly poll the state periodically-for example, once every 100 milliseconds. Imagine a program that displays the results of a database query each time the user presses a button. If the results don't change, the user might think that the program isn't working correctly. Although the program isn't idle (it is in fact performing the query), it looks idle. To fix this problem, you could display a status bar that contains the latest query and the time it was submitted, or display a transient highlight over the fields that are being updated even if the values don't change. Event processing in Swing is effectively single-threaded, so you don't have to be well-versed in writing threaded applications to write basic Swing programs. The following sections describe the three rules you need to keep in mind when using threads in Swing: repaintand revalidatemethods on JComponent.) invokeLaterand invokeAndWaitfor doing work if you need to access the GUI from outside event-handling or drawing code. SwingWorkeror Timer.For example, you might want to create a thread to handle a job that's computationally expensive or I/O bound. Realized means that the component's paint method has been or might be called. A Swing component that's a top-level window is realized by having setVisible(true), show, or (this might surprise you) pack called on it. Once a window is realized, all of the components that it contains are realized. Another way to realize a Component is to add it to a Container that's already realized. The event-dispatching thread is the thread that executes the drawing and event-handling code. For example, the paint and actionPerformed methods are automatically executed in the event-dispatching thread. Another way to execute code in the event-dispatching thread is to use the AWT EventQueue.invokeLater method. There are a few exceptions to the single-thread rule: initmethod. Existing browsers don't render an applet until after its initand startmethods have been called, so constructing the GUI in the applet's initmethod is safe as long as you never call showor setVisible(true)on the actual applet object. JComponentmethods can be called from any thread: repaint, revalidate, and invalidate. The repaintand revalidatemethods queue requests for the event-dispatching thread to call paintand validate, respectively. The invalidatemethod just marks a component and all of its direct ancestors as requiring validation. add<ListenerType >Listenerand remove<ListenerType >Listenermethods. These operations have no effect on event dispatches that might be under way. mainthread. For example, the code in Listing 11-1 is safe, as long as no Componentobjects (Swing or otherwise) have been realized. public class MyApplication { public static void main(String[] args) { JPanel mainAppPanel = new JPanel(); JFrame f = new JFrame("MyApplication"); f.getContentPane().add(mainAppPanel, BorderLayout.CENTER); f.pack(); f.setVisible(true); // No more GUI work here } } Constructing a GUI in the mainthread In this example, the f.pack call realizes the components in the JFrame. According to the single-thread rule, the f.setVisible(true) call is unsafe and should be executed in the event-dispatching thread. However, as long as the program doesn't already have a visible GUI, it's exceedingly unlikely that the JFrame or its contents will receive a paint call before f.setVisible(true) returns. Because there's no GUI code after the f.setVisible(true) call, all GUI processing moves from the main thread to the event-dispatching thread, and the preceding code is thread-safe. invokeLaterand invokeAndWaitmethods to cause a Runnableobject to be run on the event-dispatching thread. These methods were originally provided in the SwingUtilities class, but are part of the EventQueue class in the java.awt package in J2SE v. 1.2 and later. The SwingUtilties methods are now just wrappers for the AWT versions. invokeLaterrequests that some code be executed in the event-dispatching thread. This method returns immediately, without waiting for the code to execute. invokeAndWaitacts like invokeLater, except that it waits for the code to execute. Generally, you should use invokeLaterinstead., it doesn't wait for the event- dispatching thread to execute the code. Listing 11-2 shows how to use invokeLater. Runnable doWork = new Runnable() { public void run() { // do some GUI work here } }; SwingUtilities.invokeLater(doWork); Using invokeLater invokeAndWaitmethod is just like the invokeLatermethod, invoked code is running. Listing 11-3 shows how to use invokeAndWait. Listing 11-4 shows how a thread that needs access to GUI state, such as the contents of a pair of JTextFields, can use invokeAndWait to access the necessary information. void showHelloThereDialog() throws Exception { Runnable doShowModalDialog = new Runnable() { public void run() { JOptionPane.showMessageDialog(myMainFrame, "HelloThere"); } }; SwingUtilities.invokeAndWait(doShowModalDialog); } Using invokeAndWait void printTextField() throws Exception { final String[] myStrings = new String[2]; Runnable doGetTextFieldText = new Runnable() { public void run() { myStrings[0] = textField0.getText(); myStrings[1] = textField1.getText(); } }; SwingUtilities.invokeAndWait(doGetTextFieldText); System.out.println(myStrings[0] + " " + myStrings[1]); } Using invokeAndWaitto access GUI state Remember that you only need to use these methods if you want to update the GUI from a worker thread that you created. If you haven't created any threads, then you don't need to use invokeLater or invokeAndWait. Timerclasses: one in the javax.swingpackage and the other in java.util. Nearly every computer platform has a timer facility of some kind. For example, UNIX programs can use the alarm function to schedule a SIGALRM signal; a signal handler can then perform the task. The Win32 API has functions, such as SetTimer, that let you schedule and manage timer callbacks. The Java platform's timer facility includes the same basic functionality as other platforms, and it's relatively easy to configure and extend. //DON'T DO THIS! while (isCursorBlinking()) { drawCursor(); for (int i = 0; i < 300000; i++) { Math.sqrt((double)i); // this should chew up time } eraseCursor(); for (int i = 0; i < 300000; i++) { Math.sqrt((double)i); // likewise } }Busy wait loop A more practical solution for implementing delays or timed loops is to create a new thread that sleeps before executing its task. Using thefinal Runnable doUpdateCursor = new Runnable() { boolean shouldDraw = false; public void run() { if (shouldDraw = !shouldDraw) { drawCursor(); } else { eraseCursor(); } } }; Runnable doBlinkCursor = new Runnable() { public void run() { while (isCursorBlinking()) { try { EventQueue.invokeLater(doUpdateCursor); Thread.sleep(300); } catch (InterruptedException e) { return; } } } }; new Thread(doBlinkCursor).start(); Thread sleepmethod to time a delay works well with Swing components as long as you follow the rules for thread usage outlined in Section 11.4 on page 176. The blinking cursor example could be rewritten using Thread.sleep, as shown in Listing 11-6. As you can see, the invokeLatermethod is used to ensure that the drawand erasemethods execute on the event-dispatching thread. Using the Thread sleepmethod The main problem with this approach is that it doesn't scale well. Threads and thread scheduling aren't free or even as cheap as one might hope, so in a system where there might be many busy threads it's unwise to allocate a thread for every delay or timing loop. javax.swing.Timerclass allows you to schedule an arbitrary number of periodic or delayed actions with just one thread. This Timerclass is used by Swing components for things like blinking the text cursor and for timing the display of tool-tips. The Swing timer implementation fires an action event whenever the specified interval or delay time passes. You need to provide an Action object to the timer. Implement the Action actionPerformed method to perform the desired task. For example, the blinking cursor example above could be written as shown in Listing 11-7. In this example, a timer is used to blink the cursor every 300 milliseconds. Action updateCursorAction = new AbstractAction() { boolean shouldDraw = false; public void actionPerformed(ActionEvent e) { if (shouldDraw = !shouldDraw) { drawCursor(); } else { eraseCursor(); } } }; new Timer(300, updateCursorAction).start(); Blinking cursor The important difference between using the Swing Timer class and creating your own Thread is that the Swing Timer class uses just one thread for all timers. It deals with scheduling actions and putting its thread to sleep internally in a way that scales to large numbers of timers. The other important feature of this timer class is that the Action actionPerformed method runs on the event-dispatching thread. As a result, you don't have to bother with an explicit invokeLater call. java.utilpackage. Like the Swing Timerclass, the main java.utiltimer class is called Timer. (We'll call it the "utility Timerclass" to differentiate from the Swing Timerclass.) Instead of scheduling Actionobjects, the utility Timerclass schedules instances of a class called TimerTask. The utility timer facility has a different division of labor from the Swing version. For example, you control the utility timer facility by invoking methods on TimerTask rather than on Timer. Still, both timer facilities have the same basic support for delayed and periodic execution. The most important difference between javax.Swing.Timer and java.util.Timer is that the latter doesn't run its tasks on the event-dispatching thread. The utility timer facility provides more flexibility over scheduling timers. For example, the utility timer lets you specify whether a timer task is to run at a fixed rate or repeatedly after a fixed delay. The latter scheme, which is the only one supported by Swing timers, means that a timer's frequency can drift because of extra delays introduced by the garbage collector or by long-running timer tasks. This drift is acceptable for animations or auto-repeating a keyboard key, but it's not appropriate for driving a clock or in situations where multiple timers must effectively be kept in lockstep. The blinking cursor example can easily be implemented using the java.util.Timer class, as shown in Listing 11-8. final Runnable doUpdateCursor = new Runnable() { private boolean shouldDraw = false; public void run() { if (shouldDraw = !shouldDraw) { drawCursor(); } else { eraseCursor(); } } }; TimerTask updateCursorTask = new TimerTask() { public void run() { EventQueue.invokeLater(doUpdateCursor); } }; myGlobalTimer.schedule(updateCursorTask, 0, 300); Blinking the cursor with java.util.Timer An important difference to note when using the utility Timer class is that each java.util.Timer instance, such as myGlobalTimer, corresponds to a single thread. It's up to the program to manage the Timer objects. Timerclass is preferred if you're building a new Swing component or module that doesn't require large numbers of timers (where "large" means dozens or more). The new utility timer classes give you control over how many timer threads are created; each java.util.Timer object creates one thread. If your program requires large numbers of timers you might want to create several java.util.Timer objects and have each one schedule related TimerTasks. In a typical program you'll share just one global Timer object, for which you'll need to create one statically scoped Timer field or property. The Swing Timer class uses a single private thread to schedule timers. A typical GUI component or program uses at most a handful of timers to control various animation and pop-up effects. The single thread is more than sufficient for this. The other important difference between the two facilities is that Swing timers run their task on the event-dispatching thread, while utility timers do not. You can hide this difference with a TimerTask subclass that takes care of calling invokeLater. Listing 11-9 shows a TimerTask subclass, SwingTimerTask, that does this. To implement the task, you would then subclass SwingTimerTask and override its doRun method (instead of run). abstract class SwingTimerTask extends java.util.TimerTask { public abstract void doRun(); public void run() { if (!EventQueue.isDispatchThread()) { EventQueue.invokeLater(this); } else { doRun(); } } } Extending TimerTask Cross-fade animation This animation is implemented using the java.util.Timer and SwingTimerTask classes. The cross-fade is implemented using the Graphics and Image classes. Complete code for this sample is available online,1 but this discussion concentrates on how the timers are used. A SwingTimerTask is used to schedule the repaints for the animation. The actual fade operation is handled in the paintComponent method, which computes how far along the fade is supposed to be based on the current time, and paints accordingly. The user interface provides a slider that lets the user control how long the fade takes-the shorter the time, the faster the fade. When the user clicks the Fade button, the setting from the slider is passed to the startFade method, shown in Listing 11-10. This method creates an anonymous subclass of SwingTimerTask (Listing 11-9) that repeatedly calls repaint. When the task has run for the allotted time, the task cancels itself. public void startFade(long totalFadeTime) { SwingTimerTask updatePanTask = new SwingTimerTask() { public void doRun() { /* If we've used up the available time then cancel * the timer. */ if ((System.currentTimeMillis()-startTime) >= totalTime) { endFade(); cancel(); } repaint(); } }; totalTime = totalFadeTime; startTime = System.currentTimeMillis(); timer.schedule(updatePanTask, 0, frameRate); } Starting the animation The last thing the startFade method does is schedule the task. The schedule method takes three arguments: the task to be scheduled, the delay before starting, and the number of milliseconds between calls to the task. It's usually easy to determine what value to use for the task delay. For example, if you want the cursor to blink five times every second, you set the delay to 200 milliseconds. In this case, however, we want to call repaint as often as possible so that the animation runs smoothly. If repaint is called too often, though, it's possible to swamp the CPU and fill the event queue with repaint requests faster than the requests can be processed. To avoid this problem, we calculate a reasonable frame rate and pass it to the schedule method as the task delay. This frame rate is calculated in the initFrameRate method shown in Listing 11-11. public void initFrameRate() { Graphics g = createImage(imageWidth, imageHeight).getGraphics(); long dt = 0; for (int i = 0; i < 20; i++) { long startTime = System.currentTimeMillis(); paintComponent(g); dt += System.currentTimeMillis() - startTime; } setFrameRate((long)((float)(dt / 20) * 1.1f)); } Initializing the frame rate The frame rate is calculated using the average time that it takes the paintComponent method to render the component to an offscreen image. The average time is multiplied by a factor of 1.1 to slow the frame rate by 10 percent to prevent minor fluctuations in drawing time from affecting the smoothness of the animation. For additional information about using Swing timers, see How to Use Timers in The Java Tutorial.2 If it might take a long time or it might block, use a thread. If it can occur later or it should occur periodically, use a timer. Occasionally, it makes sense to create and start a thread directly; however, it's usually simpler and safer to use a robust thread-based utility class. A thread-based utility class is a more specialized, higher-level abstraction that manages a worker thread. The timer classes described in Section 11.3 are good examples of this type of utility class. Concurrent Programming in Java3 by Doug Lea describes many other useful thread-based abstractions. Swing provides a simple utility class called SwingWorker that can be used to perform work on a new thread and then update the GUI on the event-dispatching thread. SwingWorker is an abstract class. To use it, override the construct method to perform the work on a new thread. The SwingWorker finished method runs on the event-dispatching thread. Typically, you override finished to update the GUI based on the value produced by the construct method. (You can read more about the SwingWorker class on The Swing Connection.4) The example in Listing 11-12 shows how SwingWorker can be used to check the modified date of a file on an HTTP server. This is a sensible task to delegate to a worker thread because it can take a while and usually spends most of its time blocked on network I/O. final JLabel label = new JLabel("Working ..."); SwingWorker worker = new SwingWorker() { public Object construct() { try { URL url = new URL(""); return new Date(url.openConnection().getLastModified()); } catch (Exception e) { return ""; } } public void finished() { label.setText(get().toString()); } }; worker.start(); // start the worker thread Checking the state of a remote file using a worker thread In this example, the construct method returns the last-modified date for java.sun.com, or an error string if something goes wrong. The finished method uses SwingWorker.get, which returns the value computed by the construct method, to update the label's text. Using a worker thread to handle a task like the one in the previous example does keep the event-dispatching thread free to handle user events; however, it doesn't magically transform your computer into a multi-CPU parallel-processing machine. If the task keeps the worker thread moderately busy, it's likely that the thread will absorb cycles that would otherwise be used by the event-dispatching thread and your program's on-screen performance will suffer. There are several ways to mitigate this effect: The example in the next section illustrates as many of these guidelines and techniques as possible. It's a front end for web search engines that resembles Apple's Sherlock 2 application5 or (to a lesser extent) Infoseek's Express Search application.6 These types of user interfaces push the limit of what works well in the HTML-based, thin-client application model. Many of the operations that you might expect to find in a search program, such as sorting and filtering, can't easily be provided under this dumb-terminal-style application model. On the other hand, the Java platform is uniquely suited for creating user interfaces for web services like search engines. The combination of networking libraries, HTTP libraries, language-level support for threads, and a comprehensive graphics and GUI toolkit make it possible to quickly create full-featured web-based applications. Search Party application The Search Party application, shown in Figure 11-2, provides this kind of Java technology-based user interface for a set of web search engines. It illustrates how to apply the guidelines and techniques described in this chapter to create a responsive GUI. You can download the complete source code for the Search Party application from. Search Party allows the user to enter a simple query that's delivered to a list of popular search engines. The results are collected in a single table that can be sorted, filtered, and searched. The GUI keeps the user up-to-date on the search tasks that are running and lets the user interrupt a search at any time. Worker threads are used to connect to the search engines and parse their results. Each worker thread delivers updates to the GUI at regular intervals. After collecting a couple hundred search hits, the worker thread exits. If the user interrupts the search, the worker threads are terminated. The following sections take a closer look at how the worker threads operate. Thread.MIN_PRIORITY.The thread-priority property allows you to advise the underlying system about the importance of scheduling the thread. How the thread- priority property is used depends on the JVM implementation. Some implementations make rather limited use of the priority property and small changes in thread priority have little or no effect. In other JVM implementations, a thread with a low priority might starve (never be scheduled) if there are always higher-priority threads that are ready to run. In the Search Party application, the only thread we're concerned about competing with is the event-dispatching thread. Making the worker threads' priorities low is reasonable because we're always willing to suspend the worker threads while the user is interacting with the program. When the Thread.interrupt method is called, it just sets the thread's interrupted boolean property. If the interrupted thread is sleeping or waiting, an InterruptedException is thrown. If the interrupted thread is blocked on I/O, an InterruptedIOException might be thrown, but throwing the exception isn't required by the JVM specification and most implementations don't. Search Party's SwingWorker subclass, SearchWorker, checks to see if it's been interrupted each time it reads a character from the buffered input stream. Although the obvious way to implement this would be to call Thread.isInterrupted before reading a character, this approach isn't reliable. The isInterrupted flag is cleared when an InterruptedException is caught or when the special "interrupted" test and reset method is called. If some code that we've implicitly called happens to catch the InterruptedException (because it was waiting or sleeping) or if it clears the isInterrupted flag by calling Thread.interrupted, Search Party wouldn't realize that it's been interrupted! To make sure that Search Party detects interruptions, the SwingWorker interrupt method interrupts the worker thread and permanently sets the boolean flag that is returned by the SwingWorker method isInterrupted. What happens if the interrupted worker thread is blocked on I/O while waiting for data from the HTTP server it's reading from? It's unlikely that the I/O code will throw an InterruptedIOException, which means there's a potential thread leak. To avoid this problem, SearchWorker class overloads the interrupt method. When the worker is interrupted, the input stream it's reading from is immediately closed. This has the nice side effect of immediately aborting any pending I/O. The SearchWorker implementation catches and ignores the I/O exception that results from closing the thread's input stream while a read was pending. You can download the code for this and other examples from Mary Campione and Kathy Walrath, The Java Tutorial: Object-Oriented Programming for the Internet, Second Edition. Addison-Wesley, 1998.3 Doug Lea, Concurrent Programming in Java: Design Principles and Patterns, Second Edition. Addison-Wesley, 1999.4 Visit The Swing Connection online at For more information about Sherlock, see For more information about Express Search, see © 2001, Sun Microsystems,Inc.. All rights reserved.
http://java.sun.com/docs/books/performance/1st_edition/html/JPSwingThreads.fm.html
crawl-001
en
refinedweb
. You can download the bits here . The "value-add" package is now called "Futures" and it's available as Fox nous a TRES brievement annoncer la mis en ligne de la beta 2 de Microsoft Ajax Extension : ASP.NET Jon: so the other frameworks can use short aliases, but we can't? All $ aliases are shortcuts for a properly namespaced function, as the post explains. That means that if any of those collide, you can still use the fully-qualified name and include the conflicting library after ours (that's why the aliases are not used from within the core library's code). It seems like we're the only ones to actually care about not colliding with existing aliases from competing frameworks, and we don't even get some credit for that? Jon: fair enough, but what other (short) prefix could we have used (MS is not an option for legal reasons)? It's becoming a common convention that such aliases start with $ (which is the only special character allowed with underscore which is more used to denote internal/private) to avoid most collisions with global functions defined by the page developer. Anyway, we checked that we didn't collide with any known alias in the known major frameworks. For the future, nothing can really guarantee that you don't have collisions, even if you use namespaces like we do, save for having a standardization group that coordinates everybody's efforts. That's exactly what OpenAjax is trying to do. Introduction Microsoft Ajax defines a component as a class that inherits from the base Sys.Component Jon: it's not just Prototype who's using the $ prefix. It's just about everyone. Leland: Were we really that late? What about Outlook Web Access, arguably the first major Ajax application? What about inventing XmlHttpRequest in the first place? What about callbacks in ASP.NET? If we were "opposed to" open source like you claim, how do you explain the the AJAX toolkit is fully open-source (and already includes external contributions)? How do you explain CodePlex as a whole? Anyway, *none* of the frameworks you mention are nicely integrated with ASP.NET. The Microsoft AJAX Library not only is very well integrated with ASP.NET, it can also be used without it, say with PHP or Java. And by the way, Microsoft *has* used Java in the past, but Sun didn't see it this way. In the same way, Microsoft wanted to include PDF support into Office, but again Adobe didn't see it this way so now it's a separate (free) download from Microsoft. So what does Atlas bring to the party? UpdatePanel, xml-script, tight integration with ASP.NET to name a few. Our customers seem to like it. Jon: I completely see your point and it's something we've always taken seriously, as the $get rename attests. We've had lots of discussions on this very subject, both internally and with early adopters. There are tradeoffs we have to make between keeping the majority of our users happy and reducing the risks of collisions with other frameworks. As a matter of facts, when we asked, the majority of our users told us that they preferred to keep the $ function even if there was a known collision with just about any other existing framework. We actually went against that feedback and did the rename to $get. Similarly, we went against the convenience of extending the Array prototype and moved everything to Array statics. By the way, the built-in type extensions have a much higher risk of collision than the $ functions, but in the same way, we think that if we (and other frameworks) are going to keep an acceptable level of usability, the only way to avoid stepping on each other's toes in the future is just to talk to each other. OpenAjax is proof that other framework authors feel the same way. Again, we take these considerations very seriously, so thanks for the feedback. Sort of a simplistic question after the discussion above; but does the MouseButton enumeration actually work? I'm currently developing an app that when run in Firefox works fine, when run in IE6 the buttons are mixed up, this occurs when I do something in a mouseup event, in a click event of course it returns left all the time (presumably because in IE window.event.button is always 0 in the click event.) Regards, Chad. Chad: this is a known problem that is fixed in the RTM version. Thanks for reporting it. Hmm, RTM version eh? So you guys are pretty close then :) This helped clear a few things up I am looking forward to the article on AJAX class events since that is where I am stuck. Hi, I have a file upload control in an iFrame and I try to add two event handlers in JS code run from teh parent window for the keyup and propertychange events. I have noticed from Ajax Beta 2 source that $get supports a document parameter: var g_UploadIFrame = $get("iframeFileUpload"); var g_FileUploadTextBox = $get("FileUpload", g_UploadIFrame.contentWindow.document); but then the following fails: $addHandler(g_FileUploadTextBox, "keyup", doSomething); $addHandler(g_FileUploadTextBox, "propertychange", doSomething); Any help would be appreciated. Jacques: sorry, $get takes two parameters. One is the id, the second, optional parameter is the parent element. Passing it an element that is in a different window is an abuse of the API and something that's entirely unsupported. The reason is mainly that $addHandler not only abstracts the browser differences but also helps components dispose of these events, which depends on the window where it happens. How exactly is it failing? Does it just not work or do you get any kind of error message? I'll make sure this is documented. I think you need to hook up your events yourself and handle dispose yourself, not using $addHandler in this case. Thx for your response. Internet Explorer reports the following JS error on the page: Error: object required Code: 0 I think this is a serious limitation and I am not sure I understand its justification considering: 1) the following works in IE (addEventListener works too) g_FileUploadTextBox.attachEvent("onkeyup", updateFileName); g_FileUploadTextBox.attachEvent("onpropertychange", updateFileName); 2) In the Ajax framework, $addHanlder/$removeHandler maintains an array of events which help the dispose process but I am not sure why this depends on the "window where it happens". 3) I think scripting components in an IFrame is a fairly common Ajax scenario, epecially for file uploads, and you this should have been considered. Jacques: the problem is that it would be very complex to subscribe to the unload event of all windows you attached events in and dispose of them properly. We just couldn't ensure the level of functionality that we can ensure in the same window. If you're using an iframe, what I recommend is that it has its own copy of the AJAX library and pretty much works in isolation, with just the level of communication you need between JavaScript components (which you can achieve by exchanging delegates between components in different windows. Avoid DOM-based communication like you're trying because that will be a nightmare to clean up and avoid memory leaks. If you are using an iframe, that's usually because you want the iframe to be able to post and navigate independantly, which means that the dispose logic will need to be taken care of. That will be a lot easier with the iframe working as a full page, with its own copy of the framework. Is what your doing is making AJAX work even if the browser has JavaScript turned off? It looks like it is not going as well as you had hoped. Does that mean that your idea is not going to make it out here in the real world? Stephen: yes, potentially, unobtrusively adding events like described here enable you to build a web page that could work without JavaScript. It requires some care but it absolutely can be done and it is the direction more and more people are taking. But to be perfectly clear, it doesn't "make Ajax work" if JavaScript is off, it makes the page work reasonably if it's off. By definition, Ajax only works with JavaScript on. I don't understand your question though. What idea are you referring to? What is not going as well as I had hoped? Are you still planning to do a post about Class events or can you point me at a good resource? Peter: yes, absolutely. I'll try to get that done next week. When building Ajax applications, you basically deal with two kinds of events. First, there are DOM events > "It seems like we're the only ones to actually care about not colliding with existing aliases from competing frameworks, and we don't even get some credit for that?" Actually, jQuery does a very good job of preventing namespace collisions; it only uses "jQuery" in the global namespace and you can tell it to not use "$" if someone else is using it. docs.jquery.com/Using_jQuery_with_Other_Libraries Dave: If I'm not mistaken, that jQuery feature appeared after that comment was written. It is true that jQuery does an excellent job at keeping to its namespace, as does Dojo.
http://weblogs.asp.net/bleroy/archive/2006/11/06/DOM-events-in-the-Microsoft-AJAX-Library.aspx
crawl-001
en
refinedweb
This document is also available in these non-normative formats: Single XHTML file, PostScript version, PDF version, ZIP archive, and Gzip'd TAR archive. Copyright ©2006 may also contain an xsi:schemaLocation attribute that associates this namespace with the XML Schema at the URI.. On introducing a fallback attribute PR #7723 State: Open Resolution: None User: None Notes: [XHTML 2] Conforming documents and meta properties PR #7777 State: Open Resolution: None User: None Notes: Fw: [XHTML 2] Section 3.1.1 PR #7791 State: Open Resolution: None User: None Notes: State: Open Resolution: None User: None Notes: Change XHTML 2.0 namespace to PR #7808 State: Open Resolution: None User: None Notes: Namespace versioning problem in XHTML 2 PR #7818 State: Open Resolution: None User: None Notes: Referance XML 1.1 and example with XML 1.1 declaration PR #7819 State: Open Resolution: None User: None, q, (parsed): Fw: [XHTML 2] Section 5.5 quality values. PR #7799 State: Open Resolution: None User: None Notes: Fw: [XHTML 2] Section 5.5 intersection of mime-types PR #7800 State: Open Resolution: None User: None Notes:,
http://www.w3.org/TR/xhtml2/xhtml2.html
crawl-001
en
refinedweb
What are the disadvantages of lazy evaluation? Give an example of an expression that evaluates with both eager and lazy evaluation, but takes much longer to evaluate with eager evaluation. What expressions take longer to evaluate with lazy evaluation than with eager evaluation? class Thunk: def __init__(self, expr, env): self._expr = expr self._env = env self._evaluated = False def value(self): if not self._evaluated: self._value = forceeval(self._expr, self._env) self._evaluated = True return self._value def forceeval(expr, env): value = meval(expr, env) if isinstance(value, Thunk): return value.value() else: return value def evalApplication(expr, env): subexprvals = map (lambda sexpr: meval(sexpr, env), expr) return mapply(subexprvals[0], subexprvals[1:]) def evalApplication(expr, env): ops = map (lambda sexpr: _______________________, expr[1:]) return mapply(_________(expr[0], env), ops). From In Praise of Idleness, by Bertrand Russell (co-author of Principia Mathematica), 1932
http://www.cs.virginia.edu/~evans/cs150/classes/class30/notes30.html
crawl-001
en
refinedweb
The QTextBrowser class provides a rich text browser with hypertext navigation. More... #include <qtextbrowser.h> Inherits QTextEdit. List of all member functions. This class extends QTextEdit (in read-only mode), adding some navigation functionality so that users can follow links in hypertext documents. The contents of QTextEdit is set with setText(), but QTextBrowser has an additional function, setSource(), which makes it possible to set the text to a named document. The name is looked up in the text view's mime source connetion to the sourceChanged() signal. QTextBrowser provides backward() and forward() slots which you can use to implement Back and Forward buttons. The home() slot sets the text to the very first document displayed. The linkClicked() signal is emitted when the user clicks a link. By using QTextEdit::setMimeSourceFactory() you can provide your own subclass of QMimeSourceFactory. This makes it possible to access data from anywhere, for example from a network or from a database. See QTextBrowser interprets the tags it processes in accordance with the default style sheet. Change the style sheet with setStyleSheet(); see QStyleSheet forSimpleRichText or QLabel. See also Advanced Widgets, Help System, and Text Related Classes. This signal is emitted when the user clicks-2005 Trolltech. All Rights Reserved.
http://doc.trolltech.com/3.3/qtextbrowser.html
crawl-002
en
refinedweb
A static member class (or interface) is much like a regular top-level class (or interface). For convenience, however, it is nested within another class or interface. Example 3-8 class or interface is defined as a static member of a containing class, making it analogous to the class fields and methods that are also declared static. Like a class method, a static member class is not associated with any instance of the containing class (i.e., there is no this object). A static member class does, however, have access to all the static members (including any other static member classes and interfaces) of its containing class. A static member class can use any other static member without qualifying its name with the name of the containing class. A static member class has access to all static members of its containing class, including private members. The reverse is true as well: the methods of the containing class have access to all members of a static member class, including the private members. A static member class even has access to all the members of any other static member classes, including the private members of those classes. Since static member classes are themselves class members, a static member class can be declared with its own access control modifiers. These modifiers have the same meanings for static member classes as they do for other members of a class. In Example 3-8, the Linkable interface is declared public, so it can be implemented by any class that is interested in being stored on a LinkedStack. A static member class cannot have the same name as any of its enclosing classes. In addition, static member classes and interfaces can be defined only within top-level classes and other static member classes and interfaces. This is actually part of a larger prohibition against static members of any sort within member, local, and anonymous classes. In code outside of the containing class, a static member class or interface is named by combining the name of the outer class with the name of the inner class (e.g., LinkedStack.Linkable). You can use the import directive to import a static member class: import LinkedStack.Linkable; // Import a specific inner class import LinkedStack.*; // Import all inner classes of LinkedStack Importing inner classes is not recommended, however, because it obscures the fact that the inner class is tightly associated with its containing class.
http://docstore.mik.ua/orelly/java-ent/jnut/ch03_09.htm
crawl-002
en
refinedweb