The QStrListIterator class is an iterator for the QStrList and QStrIList classes. #include <qstrlist.h> Inherits QPtrListIterator<char>. This class is a QPtrListIterator<char> instance. It can traverse the strings in the QStrList and QStrIList classes. See also Non-GUI Classes. This file is part of the Qt toolkit. Copyright © 1995-2003 Trolltech. All Rights Reserved.
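A minimal usage sketch based on the Qt 3 API described above (my own illustration, not from the page itself):

#include <qstrlist.h>
#include <cstdio>

void printAll()
{
    QStrList list;                 // list of char* strings (deep copies by default)
    list.append("alpha");
    list.append("beta");

    QStrListIterator it(list);     // traverse without modifying the list
    for (char *s; (s = it.current()) != 0; ++it)
        std::printf("%s\n", s);    // prints "alpha", then "beta"
}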
http://doc.trolltech.com/3.2/qstrlistiterator.html
crawl-002
en
refinedweb
Annotating a class with the @Configuration annotation indicates that the class will be used by JavaConfig as a source of bean definitions. @Configuration public class DataSourceConfiguration { // bean definitions follow } Because the semantics of the attributes to the @Configuration annotation are 1:1 with the attributes to the <beans/> element, this documentation defers to the beans-definition section of Chapter 3, IoC from the Core Spring documentation. @Bean is a method-level annotation and a direct analog of the XML <bean/> element. The annotation supports most of the attributes offered by <bean/>, such as init-method, destroy-method, autowiring, lazy-init, dependency-check, depends-on and scope. @Configuration public class AppConfig { @Bean public TransferService transferService() { return new TransferServiceImpl(); } } The above is exactly equivalent to the following appConfig.xml: <beans> <bean name="transferService" class="com.acme.TransferServiceImpl"/> </beans> Both will result in a bean named transferService being available in the BeanFactory/ApplicationContext, bound to an object instance of type TransferServiceImpl: transferService => com.acme.TransferServiceImpl See Section 4.3, "JavaConfigApplicationContext" for details about instantiating and using an ApplicationContext with JavaConfig. *Aware interfaces such as BeanFactoryAware, BeanNameAware, MessageSourceAware, ApplicationContextAware, etc. are fully supported. Consider an example class that implements BeanFactoryAware: public class AwareBean implements BeanFactoryAware { private BeanFactory factory; // BeanFactoryAware setter (called by Spring during bean instantiation) public void setBeanFactory(BeanFactory beanFactory) throws BeansException { this.factory = beanFactory; } public void close() { // do clean-up } } Also, the lifecycle callback methods are fully supported. A feature unique to JavaConfig is bean visibility. JavaConfig uses standard Java method visibility modifiers to determine whether the bean ultimately returned from a method can be accessed by an owning application context / bean factory. Consider the following configuration: @Configuration public abstract class VisibilityConfiguration { @Bean public Bean publicBean() { Bean bean = new Bean(); bean.setDependency(hiddenBean()); return bean; } @Bean protected HiddenBean hiddenBean() { return new Bean("protected bean"); } @Bean HiddenBean secretBean() { Bean bean = new Bean("package-private bean"); // hidden beans can access beans defined in the 'owning' context bean.setDependency(outsideBean()); return bean; } @ExternalBean public abstract Bean outsideBean(); } Let's bootstrap the above configuration within a traditional XML configuration (for more information on mixing configuration strategies see Chapter 8, Combining configuration approaches). The application context being instantiated against the XML file will be the 'owning' or 'enclosing' application context, and will not be able to 'see' the hidden beans: <beans> <!-- the configuration above --> <bean class="my.java.config.VisibilityConfiguration"/> <!-- Java Configuration post processor --> <bean class="org.springframework.config.java.process.ConfigurationPostProcessor"/> <bean id="mainBean" class="my.company.Bean"> <!-- this will work --> <property name="dependency" ref="publicBean"/> <!-- this will *not* work --> <property name="anotherDependency" ref="hiddenBean"/> </bean> </beans> As JavaConfig encounters the VisibilityConfiguration class, it will create three beans: publicBean, hiddenBean and secretBean.
All of them can see each other; however, beans created in the 'owning' application context (the application context that bootstraps JavaConfig) will see only publicBean. Both hiddenBean and secretBean can be accessed only by beans created inside VisibilityConfiguration. Any @Bean annotated method which is not public (i.e. with protected or default visibility) will create a 'hidden' bean. Note that due to technical limitations, private @Bean methods are not supported. In the example above, mainBean has been configured with both publicBean and hiddenBean. However, since the latter is (as the name implies) hidden, at runtime Spring will throw: org.springframework.beans.factory.NoSuchBeanDefinitionException: No bean named 'hiddenBean' is defined ... To provide the visibility functionality, JavaConfig takes advantage of the application context hierarchy provided by the Spring container, placing all hidden beans for a particular configuration class inside a child application context. Thus, the hidden beans can access beans defined in the parent (or owning) context, but not the other way around. JavaConfig makes available each of the four standard scopes specified in Section 3.4, "Bean Scopes" of the Spring reference documentation. The DefaultScopes class provides string constants for each of these four scopes. SINGLETON is the default, and can be overridden by supplying the scope attribute to the @Bean annotation: @Configuration public class MyConfiguration { @Bean(scope=DefaultScopes.PROTOTYPE) public Encryptor encryptor() { // ... } } Spring offers a convenient way of working with scoped dependencies through scoped proxies. The easiest way to create such a proxy when using XML configuration is the <aop:scoped-proxy/> element. JavaConfig offers the @ScopedProxy annotation as an alternative, which provides the same semantics and configuration options. If we were to port the XML reference documentation's scoped proxy example (see link above) to JavaConfig, it would look like the following: // an HTTP Session-scoped bean exposed as a proxy @Bean(scope = DefaultScopes.SESSION) @ScopedProxy public UserPreferences userPreferences() { return new UserPreferences(); } @Bean public UserService userService() { UserService service = new SimpleUserService(); // a reference to the proxied 'userPreferences' bean service.setUserPreferences(userPreferences()); return service; } As noted in the Core documentation, method injection is an advanced feature that should be used comparatively rarely. When using XML configuration, it is required in cases where a singleton-scoped bean has a dependency on a prototype-scoped bean. In JavaConfig, however, it is a (somewhat) simpler proposition: @Bean public MyAbstractSingleton mySingleton() { return new MyAbstractSingleton(myDependencies()) { public MyPrototype createMyPrototype() { // or alternatively, return myPrototype() -- some @Bean or @ExternalBean method return new MyPrototype(someOtherDependency()); } }; } By default, JavaConfig uses a @Bean method's name as the name of the resulting bean. This functionality can be overridden, however, using the BeanNamingStrategy extension point: <beans> <bean class="org.springframework.config.java.process.ConfigurationPostProcessor"> <property name="namingStrategy"> <bean class="my.custom.NamingStrategy"/> </property> </bean> </beans> For more details, see the API documentation on BeanNamingStrategy.
For more information on integrating JavaConfig and XML, see Chapter 8, Combining configuration approaches. JavaConfigApplicationContext provides direct access to the beans defined by @Configuration-annotated classes. For more information on the ApplicationContext API in general, please refer to the Core Spring documentation. Instantiating the JavaConfigApplicationContext can be done by supplying @Configuration-annotated class literals to the constructor, and/or strings representing packages to scan for @Configuration-annotated classes. Each of the class literals supplied to the constructor will be processed, and for each @Bean-annotated method encountered, JavaConfig will create a bean definition and ultimately instantiate and initialize the bean. JavaConfigApplicationContext context = new JavaConfigApplicationContext(AppConfig.class); Service service = context.getBean(Service.class); JavaConfigApplicationContext context = new JavaConfigApplicationContext(AppConfig.class, DataConfig.class); Service service = context.getBean(Service.class); Base packages will be scanned for the existence of any @Configuration-annotated classes. Any candidate classes will then be processed much as if they had been supplied directly as class literals to the constructor. JavaConfigApplicationContext context = new JavaConfigApplicationContext("**/configuration/**/*.class"); Service service = (Service) context.getBean("serviceA"); JavaConfigApplicationContext context = new JavaConfigApplicationContext("**/configuration/**/*.class", "**/other/*Config.class"); Service service = (Service) context.getBean("serviceA"); When one or more classes/packages are used during construction, a JavaConfigApplicationContext cannot be further configured. If post-construction configuration is preferred or required, use the no-arg constructor, configure by calling setters, and then manually refresh the context. After the call to refresh(), the context will be 'closed for configuration': JavaConfigApplicationContext context = new JavaConfigApplicationContext(); context.setParent(otherConfig); context.setConfigClasses(AppConfig.class, DataConfig.class); context.setBasePackages("**/configuration/**/*.class"); context.refresh(); Service service = (Service) context.getBean("serviceA"); JavaConfigApplicationContext provides several variants of the getBean() method for accessing beans. The preferred method for accessing beans is the type-safe getBean() method: JavaConfigApplicationContext context = new JavaConfigApplicationContext(...); Service service = context.getBean(Service.class); If more than one bean of type Service had been defined in the example above, the call to getBean() would have thrown an exception indicating an ambiguity that the container could not resolve. In these cases, the user has a number of options for disambiguation. Like Spring's XML configuration, JavaConfig allows for specifying a given @Bean as primary: @Configuration public class MyConfig { @Bean(primary=Primary.TRUE) public Service myService() { return new Service(); } @Bean public Service backupService() { return new Service(); } } After this modification, all calls to getBean(Service.class) will return the primary bean: JavaConfigApplicationContext context = new JavaConfigApplicationContext(...); // returns the myService() primary bean Service service = context.getBean(Service.class); JavaConfig also provides a getBean() variant that accepts both a class and a bean name for cases just such as these.
JavaConfigApplicationContext context = new JavaConfigApplicationContext(...); Service service = context.getBean(Service.class, "myService"); Because bean ids must be unique, this call guarantees that the ambiguity cannot occur. It is also reasonable to call the getBeansOfType() method in order to return all beans that implement a given interface: JavaConfigApplicationContext context = new JavaConfigApplicationContext(...); Map matchingBeans = context.getBeansOfType(Service.class); Note that this latter approach is actually a feature of the Core Spring Framework's AbstractApplicationContext (which JavaConfigApplicationContext extends) and is not type-safe, in that the returned Map is not parameterized. Beans may be accessed via the traditional string-based getBean() API as well. Of course this is not type-safe and requires casting, but avoids any potential ambiguity entirely: JavaConfigApplicationContext context = new JavaConfigApplicationContext(...); Service service = (Service) context.getBean("myService");
http://static.springsource.org/spring-javaconfig/docs/1.0.0.m3/reference/html/creating-bean-definitions.html
crawl-002
en
refinedweb
Some Context This post is essentially a reply to a reply to a post. Context Processors Context processors in Django are functions that run just before a template is rendered. A template is given a bag of data (the context) and context processors can add to that bag of data. A context processor is only given the request object. Django also provides the ability to provide extra context processors directly to the RequestContext object. Sometimes this is used to run a particular context processor for a single view or a small subset of views. The drawback (unless you step back for a minute) is that you have to specify these extra processors each time. Well, no. You don't. Writing a Replacement RequestContext You can write a small factory that builds a RequestContext subclass which always runs the processors you want. Drop this into a file somewhere:

from django import template

def make_request_context(extra_processors):
    # Convert up front so tuple arguments work with list concatenation below.
    extra_processors = list(extra_processors)

    class MyRequestContext(template.RequestContext):
        def __init__(self, *args, **kwargs):
            processors = list(kwargs.pop('processors', ()))
            super(MyRequestContext, self).__init__(
                    processors=extra_processors + processors, *args, **kwargs)

    return MyRequestContext

Now, whenever you want a RequestContext-like class that always runs a specific set of context processors, you call make_request_context(), passing it a list of the extra processors. For example, if you have one file with all your views in it, you could put this at the top:

MY_CONTEXT_PROCESSORS = (
    super.secret.special.processor.stuff,
    ...
)
RequestContext = make_request_context(MY_CONTEXT_PROCESSORS)

and your code will work transparently (that's why I called it RequestContext here, so that existing code won't require any change. Slightly contradicts the above advice and I wouldn't personally do this, but it's your call to make). Aside: Why Not Use a Metaclass? Now, it's possible to write make_request_context() using a metaclass and a __new__() method. But why bother? That just adds extra complexity because you have to remember the signature for __new__(). See the description of the __new__() method in the Python docs: it was originally created so that it was possible to subclass built-in types like int and str. Everything else was already possible. Again, sometimes a metaclass makes things a little neater. Sometimes it's a line ball. Often, it's just showing off. Conclusion: Put It In Django Core?! [Pure personal opinion here. I do not have my Django maintainers hat on, outside of the first sentence of the next paragraph.] Topics: software/django/tips
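For completeness, a usage sketch of the factory in a view (my own illustration, not from the post; the view and template name are hypothetical), using the Django API of that era:

from django.shortcuts import render_to_response

# 'RequestContext' here is the class returned by make_request_context() above.
def my_view(request):
    data = {'name': 'world'}
    # MY_CONTEXT_PROCESSORS run automatically, in addition to the
    # processors configured in settings.TEMPLATE_CONTEXT_PROCESSORS.
    return render_to_response('hello.html', data,
                              context_instance=RequestContext(request))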
http://www.pointy-stick.com/blog/2008/02/12/django-tip-application-level-context-processors/
crawl-002
en
refinedweb
Friday Java Quiz: Know Your java.lang Classes This week's Friday Java Quiz is provided by Brian Coyner. Q: What does the following Java program print: public class Foo { public static void main(String[] args) { System.out.println(Math.min(Double.MIN_VALUE, 0.0d)); System.out.println(Math.min(Integer.MIN_VALUE, 0)); } } Like always, answer the question without the help of actually compiling and running the class or looking anything up in a book or on the Google. Bonus Q: Is -1 an integer literal?
http://www.weiqigao.com/blog/2007/12/07/friday_java_quiz_know_your_java_lang_classes.html
crawl-002
en
refinedweb
Monday C++ Quiz: We Are All Confused There is a whole bunch of super smart C++ types in the office talking about a strange little C++ program, which I'm turning into an emergency quiz. Q: Should the following C++ program compile? Link? Run without core dumps? And if so, what does it output? namespace A { class F { public: static F f; F() {} void bar() { } }; }; namespace { A::F::F f; } void foo() { A::F &myF = A::F::f; myF.bar(); } int main() { return 0; } Re: Monday C++ Quiz: We Are All Confused Ok, maybe that was unfair... I'm going to guess that it won't compile, because f is never defined. Did you mean for the code inside the anonymous namespace to be "A::F::f" or even "A::F A::F::f"? That still doesn't make sense, because f lives inside namespace A, so it doesn't make sense to try to define it inside an anonymous namespace. Can you use "inline" with fields? Oh well, if I missed the point, I chalk it up to not having really used C++ for quite a while now. Re: Monday C++ Quiz: We Are All Confused Try it with GCC and you'll find a surprise... I verified with the Comeau online compiler that this is not valid C++. Interestingly, the initial code fragment not only compiled in GCC but also had the same overall effect as what the author intended -- it created an anonymous-namespace object which could be referenced the same way the class-scoped static could. Re: Monday C++ Quiz: We Are All Confused A simple query on the gcc mailing list leads to this gcc bug report. Re: Monday C++ Quiz: We Are All Confused The anonymous namespace has ___nothing___ to do with this, since it is invalid C++. But let's get to it; first, with your example: g++ 4.3.2: /tmp/ccIYjJxr.o: In function `foo()': coredumper.cc:(.text+0xc): undefined reference to `A::F::f' collect2: ld returned 1 exit status. icc 10.1.018: compiles & runs (that's an error, Intel guys, you don't do it right). That bug report is against g++ 3.3.2, and there is code to make your example "pass" with 4.3.2 as well, but it is off topic. Take a look at this example and see why you don't have namespace issues: #include <cstdio> namespace A { class F { public: static F f() { return F(); }; // static function signature F() { printf("hello\n");}; void bar() const { printf("you called bar..\n"); }; }; } namespace { A::F f() { return A::F(); } } void foo() { A::F const& myF = f(); // const reference myF.bar(); } int main() { foo(); return 0; } Compiles and runs under g++ 4.3.2, icc 10.1.018 and Visual Studio 2008 C++. Online Comeau C/C++ 4.3.10.1 test passed. The anonymous namespace has nothing to do with this (we are in the same translation unit); the ill-defined struct and the erroneous use of references are to blame. You got my email, see you :)
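For reference, here is a minimal sketch (my own, not from the thread) of what a well-formed version of the original program looks like once the class-scoped static is actually given a definition:

namespace A {
    class F {
    public:
        static F f;   // declaration only; a static data member also
        F() {}        // needs exactly one out-of-class definition
        void bar() {}
    };

    F F::f;           // the missing definition, at namespace scope
}

void foo() {
    A::F &myF = A::F::f;   // now compiles *and* links
    myF.bar();
}

int main() { foo(); return 0; }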
http://www.weiqigao.com/blog/2008/10/06/monday_c_quiz_we_are_all_confused.html
crawl-002
en
refinedweb
Programming/J2ME From Wikibooks, the open-content textbooks collection Introduction Java 2 Micro Edition (J2ME) is a collection of technologies found on more than a billion small devices worldwide. Small devices are less powerful than larger, desktop computers, and a full Java Standard Edition (JavaSE) installation would be unworkable on many small devices. As well as this, there is a demand for devices with more rigorous security models than found in the JavaSE. To cope with this demand, two different subsets of JavaSE were created. These are called the CDC (Connected Device Configuration) and the CLDC (Connected Limited Device Configuration). Some of the missing capabilities may confuse developers used to the JavaSE: the CLDC, for example, does not by itself allow a Java program to access the file system. To add functionality useful for specific target use-cases, extensions to the CDC or CLDC APIs were developed. The best-known of these for the CLDC is the Mobile Information Device Profile (MIDP). The CDC (Connected Device Configuration) runs on 32-bit hardware with a few megabytes of memory. The current version is defined in JSR 218 and has the full name CDC 1.1.2. It includes a fully functional Java Virtual Machine. The CLDC/MIDP (Connected Limited Device Configuration with Mobile Information Device Profile) runs on 16- or 32-bit hardware with hundreds of kilobytes of memory. The current version of CLDC is 1.1, defined in JSR 139. It includes a reduced Java Virtual Machine called the K Virtual Machine. Before developing a J2ME application, it is necessary to know which Java technologies are supported on the target platforms. This book will explore the CLDC/MIDP platform as found on hundreds of millions of mobile phones worldwide. Overview What is Java and why? In this section I will introduce the Java language with some history as to why Java has the form it has. If you are already versed in Java, feel free to jump to the next chapter. A brief introduction to programming languages The central processing unit of a computer and the main memory are orderly collections of gates. A gate is here a term from electronics and means a small electronic component with a number of inputs and outputs, where the state of the outputs (usually +5 volts or 0 volts, sometimes +3.3 volts and 0 volts) depends in some way upon the state of the inputs. These gates allow a physical representation of binary logic and discrete mathematics. Put simply, it is possible to arrange these gates in such a way that they can store certain states, or add two states together, or do a number of interesting things. Most central processing units available support these basic operations. More directly put, if you set the data inputs of a SPARC processor to be the right combination of high (5V) and low (0V), it will take the last 32 highs and lows and store these states in a register on the chip (a register is a small memory very close to the chip which can usually be read from or written to in one clock cycle). If you set a different combination of highs and lows, the chip will do something different, right up to complicated things like "jump over the next 10 numbers to be entered if a number stored in a certain register is larger than the number stored in another register". This is the machine code the chip actually performs. This 32-long sequence of highs and lows can be called a 32-bit number or code. The allowed forms of these codes are called the op-codes for the processor.
Programming in this manner is done rarely nowadays, and machine code is referred to as a first generation programming language. It is very error prone: one has to remember the different op-codes and the addresses in the program one may wish to jump to. To help solve these problems, second generation programming languages were developed. If 10000100 always means "add", why not write "add" and then use a computer program to translate "add" into "10000100"? This is the main addition to computing brought by second generation programming languages. Second generation programming languages can not be run directly by a computer but first have to be translated into a first generation programming language. Usually this is not particularly difficult. A translation program for this (called an assembler) is usually quite small and error-free. This allows relatively legible code like the following: add %g0, 5, %l0 add %g0, 7, %l1 add %l0, %l1, %l2 Although this is still not particularly legible, it is nevertheless a vast improvement on the machine code: 100000000010010000100000000010110000000001001000110000000001111010010000100100000000000010001 What does it do? After the third instruction has been performed, the local register l2 will contain the number 12. Let us examine the following mathematical expression: s = ½at² + vt Processing this expression for different values of a, v, and t with a second generation programming language would require a lot of programming. The expression is however well defined, and it should be possible to write a program to automatically generate the machine code needed to evaluate this expression. Another consideration which drove the next step in programming technology was the different types of number which can be processed by a chip. Because this is still relevant to modern programming languages like Java, I will go into this in some depth. The basic question is: what is the best way to store numbers as a combination of high and low states (bits)? When it comes to positive integers, the answer is fairly obvious: one just converts the decimal number to its binary representation. Provided that the number isn't too large (32 bits allow numbers up to about 4 thousand million), the number can be stored with no loss of accuracy. As mentioned above, it is possible to build logic gates to add numbers stored in this manner which work very quickly (often in one clock cycle). Finding a good way to store negative integers is a bit more difficult, and various methods were tried out before a satisfactory solution was found. Generally chips offer very fast methods of processing integers, though multiplying, dividing and finding remainders will still take appreciably longer than one clock cycle. The question as to how to store floating-point numbers like 0.25 or π is however much more difficult, and there is no single right way of doing this. As well as this, the existing circuitry for performing operations on integers would not work with these numbers, so different op-codes must be introduced. Nevertheless a standard had to be found, otherwise exchanging data between two different architectures would have been made much more difficult, and computations performed on one architecture may not have reached the same result as computations performed on another. IEEE 754 specifies a number of ways of storing floating-point numbers, two of which are available to Java programmers. These are the "float" (32 bit precision) and the "double" (64 bit precision).
Much as in scientific notation, where every number can be expressed as minus or plus one times a number between one and ten multiplied by a power of ten (for example 123 = +1 * 1.23 * 10*10 = +1 * 1.23 * 10²), a floating-point number is defined as minus or plus one times a number between one and two multiplied by a power of two (for example 5 = +1 * 1.25 * 2 * 2 = +1 * 1.25 * 2²). In binary, 1.25 = 1.01. Because the first digit for all numbers recorded in this manner is necessarily 1 (the number is between 1 and 2), it is not recorded, so the 1.01 in this example only requires 2 bits to store. "float" allows an accuracy of 23 bits (which is about 6 decimal places) and "double" 52 bits (about 16 decimal places). The rest of the space contains the sign and the power of two for the multiplication. More information can be found in the above-mentioned article, and only one thing has to be noted here: using first and second generation programming languages, the programmer has to make sure that they use the right op-codes for the right numbers. Calling an integer add on a float and an integer results in the wrong answer. It should be possible to write a program which checks the code to make sure that the programmer hasn't inadvertently tried to subtract a double from an integer and, if this is the case, warn the programmer (or automatically add a line transforming the integer into a double beforehand). These considerations resulted in third generation programming languages. A program which translates a text in a third generation programming language to a second or first generation programming language is called a compiler. No commonly available computer chip can run a third generation programming language natively, so this compilation step is always necessary. Third generation programming languages have a number of advantages. The program itself does not rely on a particular architecture. This means that if a compiler is available for an architecture, the program can be compiled to run on it. Java is different here. Why Java is different To understand why Java is different, one must first understand what Java is. In general a computer language consists of a set of rules and a list of words with which it is possible to write valid texts or programs in this language. For a language to be used, a compiler for this language has to exist and this compiler has to generate machine code for an architecture which exists. This means that if you wish to run a program on a number of architectures (standard 32-bit Intel, AMD64, PowerPC, SPARC, etc.), you need to compile the program text for each architecture individually and make sure the right architecture gets the right program. This approach is not necessary with Java because the Java compiler creates machine code to be performed by another program called a virtual machine. Before a Java program can be run, a Java virtual machine must be available on the computer. Sun Microsystems provides Java virtual machines for a number of architectures and operating systems, while IBM and others provide Java virtual machines for other architectures (the op-codes for the Java virtual machine are freely available, so anybody with enough programming skill and free time can write a Java virtual machine for a particular architecture and operating system). The first thing the budding Java programmer has to do is make sure that a Java virtual machine is installed on their system. Java, like many other third generation programming languages, is object oriented.
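To make the float layout concrete, here is a small sketch in standard Java (my own illustration, not from the book) that picks apart the three IEEE 754 fields of a float:

public class FloatBits {
    public static void main(String[] args) {
        // 5.0f = +1 * 1.25 * 2^2, and 1.25 is 1.01 in binary
        int bits = Float.floatToIntBits(5.0f);
        int sign = bits >>> 31;               // 1 bit: 0 means positive
        int exponent = (bits >>> 23) & 0xFF;  // 8 bits, stored with a bias of 127
        int fraction = bits & 0x7FFFFF;       // 23 bits; the leading 1 is implied
        System.out.println(sign);             // 0
        System.out.println(exponent - 127);   // 2, the power of two
        // fraction is 01000...0 in binary: the "01" after the implied "1."
        System.out.println(Integer.toBinaryString(fraction));
    }
}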
This is a further programming abstraction which has proven itself useful for writing code of high quality. CLDC/MIDP Programming Setting up an environment There are two straightforward ways to set up an environment for developing and testing applications for the CLDC configuration with the MIDP extension. - Download the Sun Java(TM) Wireless Toolkit for CLDC from Sun's website. The Sun Java(TM) Wireless Toolkit for CLDC contains the MIDP/CLDC libraries, an emulator and debugging/profiling software. You will need to use your own text editor to create the .java files to be built. - Download the NetBeans IDE from the NetBeans website and install the Mobility Plugin using the Plugin Manager (Tools->Plugins). At time of writing, the Mobility plugin is not supported on SPARC/Solaris. The NetBeans IDE is an Integrated Development Environment offering a large number of features for creating and running Java technologies. Both options require a working Java SDK. The latest version of Sun Microsystems' implementation can be downloaded from the Sun website. It is recommended that you acquaint yourself with both possibilities. NetBeans offers many options to help programmers, but these options also make it more difficult for the beginner to understand how a midlet works. To install and configure the software, follow the relevant instructions on the Sun Microsystems site. Getting Started In this section we will write a very simple midlet to be run using the Wireless Toolkit. A midlet is the name for a small Java program run using the J2ME technologies. Since CLDC/MIDP offers a small subset of the features of JavaSE (the version usually found on desktop computers) with a few supplementary classes for mobile phones, CLDC/MIDP programming is less extensive than JavaSE programming. Start the Wireless Toolkit by navigating to the installation directory, entering the bin directory and starting the ktoolbar application. If you have a compatible Java installation on your system, the program will start. - Click New Project... - Enter a Project Name, for example project01 - Enter a MIDlet Class Name, for example MyFirstMidlet - An API Selection screen will open. Click OK If you now navigate to the installation directory and enter the apps directory, you will see a new folder named project01 has appeared among a number of folders containing demonstration midlets. - Enter the project01 folder - the Wireless Toolkit has created a number of folders ready to hold files generated when building the project - Enter the src folder - Using a text editor, enter the following file: import javax.microedition.midlet.*; import java.lang.System; public class MyFirstMidlet extends MIDlet { public MyFirstMidlet() { System.out.println("hello world!"); destroyApp(false); } public void startApp() {} public void pauseApp() {} public void destroyApp(boolean unconditional) {} }
http://en.wikibooks.org/wiki/Programming/J2ME
crawl-002
en
refinedweb
This patch adds /dev/anon support to the host kernel for UML:
- a munmap file_operation which is called from do_munmap
- map counts in a structure that's parallel to the swap entries in tmpfs
- a new misc device at minor 10

Jeff

Rationale: UML uses a mmapped tmp file for its physical memory. When it reads from disk, it allocates a page of this memory for the page cache, then calls the ubd block driver to fill it, which does a read() on the host from the file backing the device. This results in two copies of the data in the host's memory, one in the host page cache, and one in this tmp file for the UML page cache. These duplicate copies can be eliminated by mmapping pages from the device's file into UML physical memory. However, in order to get any memory savings on the host, the tmp file page which was mapped over needs to be freed, which no filesystem will do. Hence the need for sys_punch or /dev/anon. In the testing I've done, booting my Debian image takes about 25% less host memory with ubd-mmap and /dev/anon than without (~28M vs ~21M), with a corresponding 25% increase in the number of UMLs I can boot before the host starts swapping (20 vs 16).

The patch:

diff -Naur host-2.4.23-skas3/drivers/char/mem.c host-2.4.23-skas3-devanon/drivers/char/mem.c
--- host-2.4.23-skas3/drivers/char/mem.c	2003-12-16 22:16:27.000000000 -0500
+++ host-2.4.23-skas3-devanon/drivers/char/mem.c	2004-01-09 04:09:26.000000000 -0500
@@ -664,6 +664,8 @@
 	write: write_full,
 };

+extern struct file_operations anon_file_operations;
+
 static int memory_open(struct inode * inode, struct file * filp)
 {
 	switch (MINOR(inode->i_rdev)) {
@@ -693,6 +695,9 @@
 		case 9:
 			filp->f_op = &urandom_fops;
 			break;
+		case 10:
+			filp->f_op = &anon_file_operations;
+			break;
 		default:
 			return -ENXIO;
 	}
@@ -719,7 +724,8 @@
 	{5, "zero", S_IRUGO | S_IWUGO, &zero_fops},
 	{7, "full", S_IRUGO | S_IWUGO, &full_fops},
 	{8, "random", S_IRUGO | S_IWUSR, &random_fops},
-	{9, "urandom", S_IRUGO | S_IWUSR, &urandom_fops}
+	{9, "urandom", S_IRUGO | S_IWUSR, &urandom_fops},
+	{10, "anon", S_IRUGO | S_IWUSR, &anon_file_operations},
 	};
 	int i;
diff -Naur host-2.4.23-skas3/include/linux/fs.h host-2.4.23-skas3-devanon/include/linux/fs.h
--- host-2.4.23-skas3/include/linux/fs.h	2003-12-16 22:16:36.000000000 -0500
+++ host-2.4.23-skas3-devanon/include/linux/fs.h	2004-01-10 00:59:17.000000000 -0500
@@ -864,6 +864,8 @@
 	unsigned int (*poll) (struct file *, struct poll_table_struct *);
 	int (*ioctl) (struct inode *, struct file *, unsigned int, unsigned long);
 	int (*mmap) (struct file *, struct vm_area_struct *);
+	void (*munmap) (struct file *, struct vm_area_struct *,
+			unsigned long start, unsigned long len);
 	int (*open) (struct inode *, struct file *);
 	int (*flush) (struct file *);
 	int (*release) (struct inode *, struct file *);
diff -Naur host-2.4.23-skas3/include/linux/shmem_fs.h host-2.4.23-skas3-devanon/include/linux/shmem_fs.h
--- host-2.4.23-skas3/include/linux/shmem_fs.h	2003-09-02 15:44:03.000000000 -0400
+++ host-2.4.23-skas3-devanon/include/linux/shmem_fs.h	2004-01-09 04:09:26.000000000 -0500
@@ -22,6 +22,8 @@
 	unsigned long next_index;
 	swp_entry_t i_direct[SHMEM_NR_DIRECT]; /* for the first blocks */
 	void **i_indirect; /* indirect blocks */
+	unsigned long map_direct[SHMEM_NR_DIRECT];
+	void **map_indirect;
 	unsigned long swapped; /* data pages assigned to swap */
 	unsigned long flags;
 	struct list_head list;
diff -Naur host-2.4.23-skas3/Makefile host-2.4.23-skas3-devanon/Makefile
--- host-2.4.23-skas3/Makefile	2003-12-16 22:16:23.000000000 -0500
+++ host-2.4.23-skas3-devanon/Makefile	2004-01-11 02:39:25.000000000 -0500
@@ -1,7 +1,7 @@
 VERSION = 2
 PATCHLEVEL = 4
 SUBLEVEL = 23
-EXTRAVERSION =
+EXTRAVERSION = -devanon

 KERNELRELEASE=$(VERSION).$(PATCHLEVEL).$(SUBLEVEL)$(EXTRAVERSION)
diff -Naur host-2.4.23-skas3/mm/mmap.c host-2.4.23-skas3-devanon/mm/mmap.c
--- host-2.4.23-skas3/mm/mmap.c	2004-01-09 03:50:00.000000000 -0500
+++ host-2.4.23-skas3-devanon/mm/mmap.c	2004-01-09 04:09:26.000000000 -0500
@@ -995,6 +995,11 @@
 	remove_shared_vm_struct(mpnt);
 	mm->map_count--;

+	if((mpnt->vm_file != NULL) && (mpnt->vm_file->f_op != NULL) &&
+	   (mpnt->vm_file->f_op->munmap != NULL))
+		mpnt->vm_file->f_op->munmap(mpnt->vm_file, mpnt, st,
+					    size);
+
 	zap_page_range(mm, st, size);

 	/*
diff -Naur host-2.4.23-skas3/mm/shmem.c host-2.4.23-skas3-devanon/mm/shmem.c
--- host-2.4.23-skas3/mm/shmem.c	2003-12-16 22:16:36.000000000 -0500
+++ host-2.4.23-skas3-devanon/mm/shmem.c	2004-01-10 01:10:59.000000000 -0500
@@ -128,16 +128,17 @@
  *	  +-> 48-51
  *	  +-> 52-55
  */
-static swp_entry_t *shmem_swp_entry(struct shmem_inode_info *info, unsigned long index, unsigned long *page)
+static void *shmem_block(unsigned long index, unsigned long *page,
+			 unsigned long *direct, void ***indirect)
 {
 	unsigned long offset;
 	void **dir;

 	if (index < SHMEM_NR_DIRECT)
-		return info->i_direct+index;
-	if (!info->i_indirect) {
+		return direct+index;
+	if (!*indirect) {
 		if (page) {
-			info->i_indirect = (void **) *page;
+			*indirect = (void **) *page;
 			*page = 0;
 		}
 		return NULL;	/* need another page */
@@ -146,7 +147,7 @@
 	index -= SHMEM_NR_DIRECT;
 	offset = index % ENTRIES_PER_PAGE;
 	index /= ENTRIES_PER_PAGE;
-	dir = info->i_indirect;
+	dir = *indirect;

 	if (index >= ENTRIES_PER_PAGE/2) {
 		index -= ENTRIES_PER_PAGE/2;
@@ -169,7 +170,21 @@
 		*dir = (void *) *page;
 		*page = 0;
 	}
-	return (swp_entry_t *) *dir + offset;
+	return (unsigned long **) *dir + offset;
+}
+
+static swp_entry_t *shmem_swp_entry(struct shmem_inode_info *info, unsigned long index, unsigned long *page)
+{
+	return((swp_entry_t *) shmem_block(index, page,
+					   (unsigned long *) info->i_direct,
+					   &info->i_indirect));
+}
+
+static unsigned long *shmem_map_count(struct shmem_inode_info *info,
+				      unsigned long index, unsigned long *page)
+{
+	return((unsigned long *) shmem_block(index, page, info->map_direct,
+					     &info->map_indirect));
 }

 /*
@@ -838,6 +853,7 @@
 	ops = &shmem_vm_ops;
 	if (!S_ISREG(inode->i_mode))
 		return -EACCES;
+	UPDATE_ATIME(inode);
 	vma->vm_ops = ops;
 	return 0;

@@ -1723,4 +1739,131 @@
 	return 0;
 }

+static int adjust_map_counts(struct shmem_inode_info *info,
+			     unsigned long offset, unsigned long len,
+			     int adjust)
+{
+	unsigned long idx, i, *count, page = 0;
+
+	spin_lock(&info->lock);
+	len >>= PAGE_SHIFT;
+	for(i = 0; i < len; i++){
+		idx = (i + offset) >> (PAGE_CACHE_SHIFT - PAGE_SHIFT);
+
+		while((count = shmem_map_count(info, idx, &page)) == NULL){
+			spin_unlock(&info->lock);
+			page = get_zeroed_page(GFP_KERNEL);
+			if(page == 0)
+				return(-ENOMEM);
+			spin_lock(&info->lock);
+		}
+
+		if(page != 0)
+			free_page(page);
+
+		*count += adjust;
+	}
+	spin_unlock(&info->lock);
+	return(0);
+}
+
 EXPORT_SYMBOL(shmem_file_setup);

+struct file_operations anon_file_operations;
+
+static int anon_mmap(struct file *file, struct vm_area_struct *vma)
+{
+	struct file *new;
+	struct inode *inode;
+	loff_t size = vma->vm_end - vma->vm_start;
+	int err;
+
+	if(file->private_data == NULL){
+		new = shmem_file_setup("dev/anon", size);
+		if(IS_ERR(new))
+			return(PTR_ERR(new));
+
+		new->f_op = &anon_file_operations;
+		file->private_data = new;
+	}
+
+	if (vma->vm_file)
+		fput(vma->vm_file);
+	vma->vm_file = file->private_data;
+	get_file(vma->vm_file);
+
+	inode = vma->vm_file->f_dentry->d_inode;
+	err = adjust_map_counts(SHMEM_I(inode), vma->vm_pgoff, size, 1);
+	if(err)
+		return(err);
+
+	vma->vm_ops = &shmem_vm_ops;
+	return 0;
+}
+
+static void anon_munmap(struct file *file, struct vm_area_struct *vma,
+			unsigned long start, unsigned long len)
+{
+	struct inode *inode = file->f_dentry->d_inode;
+	struct shmem_inode_info *info = SHMEM_I(inode);
+	pgd_t *pgd;
+	pmd_t *pmd;
+	pte_t *pte;
+	swp_entry_t *entry;
+	struct page *page;
+	unsigned long addr, idx, *count;
+
+	for(addr = start; addr < start + len; addr += PAGE_SIZE){
+		idx = ((addr - vma->vm_start) >> PAGE_SHIFT) + vma->vm_pgoff;
+
+		count = shmem_map_count(info, idx, NULL);
+		if(count == NULL)
+			continue;
+
+		(*count)--;
+		if(*count > 0)
+			continue;
+
+		pgd = pgd_offset(vma->vm_mm, addr);
+		if(pgd_none(*pgd))
+			continue;
+
+		pmd = pmd_offset(pgd, addr);
+		if(pmd_none(*pmd))
+			continue;
+
+		pte = pte_offset(pmd, addr);
+		if(!pte_present(*pte))
+			continue;
+
+		*pte = pte_mkclean(*pte);
+
+		page = pte_page(*pte);
+
+		LockPage(page);
+		lru_cache_del(page);
+		ClearPageDirty(page);
+		ClearPageUptodate(page);
+		remove_inode_page(page);
+		UnlockPage(page);
+
+		entry = shmem_swp_entry(info, idx, 0);
+		if(entry != NULL)
+			shmem_free_swp(entry, 1);
+
+		page_cache_release(page);
+	}
+}
+
+int anon_release(struct inode *inode, struct file *file)
+{
+	if(file->private_data != NULL)
+		fput(file->private_data);
+	return(0);
+}
+
+struct file_operations anon_file_operations = {
+	.mmap = anon_mmap,
+	.munmap = anon_munmap,
+	.release = anon_release,
+};
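To make the intended use concrete, here is a hypothetical userspace sketch (mine, not part of the patch) of how a host process would map the new device. Each mapping is backed by a private tmpfs file set up on first mmap (anon_mmap above), and a backing page is dropped once its per-page map count reaches zero (anon_munmap):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/anon", O_RDWR);   /* minor 10 in drivers/char/mem.c */
    if (fd < 0) { perror("open /dev/anon"); return 1; }

    char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    strcpy(p, "shmem-backed, freed when the map count hits zero");
    printf("%s\n", p);

    munmap(p, 4096);   /* triggers the new munmap file_operation */
    close(fd);
    return 0;
}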
http://lkml.org/lkml/2004/1/13/143
crawl-002
en
refinedweb
The framework's API exists in just one package: application. Although a few subpackages hold text and icon resources, all the API is in the application package itself. Two classes help you manage your application: ApplicationContext and Application. The Application and ApplicationContext objects have a one-to-one relationship. The most commonly used Application subclass is SingleFrameApplication, and an application's life cycle runs from main through launch, initialize, startup, and ready, and ends with exit and shutdown. First, compare a plain Swing application with its framework equivalent: public class BasicApp implements Runnable { JFrame mainFrame; JLabel label; public void run() { mainFrame = new JFrame("BasicApp"); label = new JLabel("Hello, world!"); label.setFont(new Font("SansSerif", Font.PLAIN, 22)); mainFrame.setDefaultCloseOperation(JFrame.DO_NOTHING_ON_CLOSE); mainFrame.addWindowListener(new WindowAdapter() { public void windowClosing(WindowEvent e) { mainFrame.setVisible(false); // Perform any other operations you might need // before exit. System.exit(0); } }); mainFrame.add(label); mainFrame.pack(); mainFrame.setVisible(true); } public static void main(String[] args) { Runnable app = new BasicApp(); try { SwingUtilities.invokeAndWait(app); } catch (InvocationTargetException ex) { ex.printStackTrace(); } catch (InterruptedException ex) { ex.printStackTrace(); } } } public class BasicFrameworkApp extends Application { private JFrame mainFrame; private JLabel label; @Override protected void startup() { mainFrame = new JFrame("BasicFrameworkApp"); mainFrame.setDefaultCloseOperation(JFrame.DO_NOTHING_ON_CLOSE); mainFrame.addWindowListener(new WindowAdapter() { public void windowClosing(WindowEvent e) { mainFrame.setVisible(false); exit(); } }); label = new JLabel("Hello, world!"); mainFrame.add(label); mainFrame.pack(); mainFrame.setVisible(true); } public static void main(String[] args) { Application.launch(BasicFrameworkApp.class, args); } } The framework version starts via Application.launch, but it still builds its own JFrame and WindowAdapter. Code Example 3 goes a step further and extends SingleFrameApplication: public class BasicSingleFrameApp extends SingleFrameApplication { JLabel label; @Override protected void startup() { getMainFrame().setTitle("BasicSingleFrameApp"); label = new JLabel("Hello, world!"); label.setFont(new Font("SansSerif", Font.PLAIN, 22)); show(label); } public static void main(String[] args) { Application.launch(BasicSingleFrameApp.class, args); } } Here the show and getMainFrame methods come from SingleFrameApplication; the framework supplies the main frame and handles window closing, so no WindowListener or explicit call to System.exit is needed. The SingleFrameApplication superclass implements a simple shutdown method. It saves its window-frame session state and includes all secondary frame state as well. For this reason, you should remember to call super.shutdown() if you override this method. Code Example 4 shows you what to do. Code Example 4 @Override protected void shutdown() { // The default shutdown saves session window state. super.shutdown(); // Now perform any other shutdown tasks you need. // ... } You can also veto an exit request by registering an Application.ExitListener.
The listener's canExit method returns true to allow the application to exit and false to veto it; willExit is called once the exit is under way: public class ConfirmExit extends SingleFrameApplication { private JButton exitButton; @Override protected void startup() { getMainFrame().setTitle("ConfirmExit"); exitButton = new JButton("Exit Application"); exitButton.addActionListener(new ActionListener() { public void actionPerformed(ActionEvent e) { exit(e); } }); addExitListener(new ExitListener() { public boolean canExit(EventObject e) { boolean bOkToExit = false; Component source = (Component) e.getSource(); bOkToExit = JOptionPane.showConfirmDialog(source, "Do you really want to exit?") == JOptionPane.YES_OPTION; return bOkToExit; } public void willExit(EventObject event) { } }); show(exitButton); } @Override protected void shutdown() { // The default shutdown saves session window state. super.shutdown(); // Now perform any other shutdown tasks you need. // ... } /** * @param args the command-line arguments */ public static void main(String[] args) { Application.launch(ConfirmExit.class, args); } } Application resources are kept in standard Java ResourceBundle properties files (or ListResourceBundle classes) stored in a resources subpackage. Table 1 shows the relationship among an application class or form, its ResourceBundle name, and the ResourceBundle file name: Class demo.MyApp / bundle demo.resources.MyApp / file demo/resources/MyApp.properties; class demo.hello.HelloPanel / bundle demo.hello.resources.HelloPanel / file demo/hello/resources/HelloPanel.properties; class demo.hello.ExitPanel / bundle demo.hello.resources.ExitPanel / file demo/hello/resources/ExitPanel.properties. The ResourceManager is responsible for creating ResourceMap objects and their parent chains when you request resources. You will use the ApplicationContext to retrieve ResourceManager and ResourceMap objects. You have three options for working with resources: retrieve them through the ResourceManager, through a ResourceMap, or let the framework inject them for you. Code Example 6 uses getContext to obtain the context and look up a resource map: public class HelloWorld extends SingleFrameApplication { JLabel label; ResourceMap resource; @Override protected void initialize(String[] args) { ApplicationContext ctxt = getContext(); ResourceManager mgr = ctxt.getResourceManager(); resource = mgr.getResourceMap(HelloWorld.class); } @Override protected void startup() { label = new JLabel(); String helloText = (String) resource.getObject("helloLabel", String.class); // Or you can use the convenience methods that cast resources // to the type indicated by the method names: // resource.getString("helloLabel.text"); // resource.getColor("backgroundcolor"); // and so on. Color backgroundColor = resource.getColor("color"); String title = resource.getString("title"); label.setBackground(backgroundColor); label.setOpaque(true); getMainFrame().setTitle(title); label.setText(helloText); show(label); } // ... } You can also retrieve a resource map using the convenience method in the ApplicationContext instance: resource = ctxt.getResourceMap(HelloWorld.class); In Code Example 6, the HelloWorld class uses three resources: a label's text, a color for the label background, and text for the frame's window title. It gets those resources from a resource map for the HelloWorld class, which draws its values from the resources/HelloWorld.properties bundle shown in Code Example 7. Code Example 7 helloLabel = Hello, world! color = #AABBCC title = HelloWorld with Resources The framework can also inject resources directly into UI components via injectComponents. Give each component a name with setName; resource keys then take the form <componentName>.<propertyName>, such as btnShowTime.text. Code Example 8 shows several resource definitions in a resources/ShowTimeApp.properties file. Use UI component names and their property names to define injectable resources. Code Example 8 btnShowTime.text = Show current time!
btnShowTime.icon = refresh.png txtShowTime.text = Press the button to retrieve time. txtShowTime.editable = false The txtShowTime entries set properties on a text field, and btnShowTime.icon attaches an icon to the button. public class ShowTimeApp extends SingleFrameApplication { JPanel timePanel; JButton btnShowTime; JTextField txtShowTime; @Override protected void startup() { timePanel = new JPanel(); btnShowTime = new JButton(); txtShowTime = new JTextField(); // Set UI component names so that the // framework can inject resources automatically into // these components. Resources come from similarly // named keys in resources/ShowTimeApp.properties. btnShowTime.setName("btnShowTime"); txtShowTime.setName("txtShowTime"); btnShowTime.addActionListener(new ActionListener() { public void actionPerformed(ActionEvent e) { Date now = new Date(); txtShowTime.setText(now.toString()); } }); timePanel.add(btnShowTime); timePanel.add(txtShowTime); show(timePanel); } // ... } Resource injection is not limited to components. Fields of type String, Color, Font, Icon and so on can be injected too: mark a field with the @Resource annotation and call injectFields on the map returned by getResourceMap. A field named myIcon in class MyApp matches the resource key MyApp.myIcon. Code Example 10 shows how you can inject field resources into your code. Not only does this example inject the field, but it also uses class-specific resources defined in a resources/NameEntryPanel resource bundle. Mark fields with the @Resource annotation to use field injection. Code Example 10 public class NameEntryPanel extends javax.swing.JPanel { @Resource String greetingMsg; ApplicationContext ctx; ResourceMap resource; /** Creates new form NameEntryPanel. */ public NameEntryPanel(ApplicationContext ctxt) { initComponents(); resource = ctxt.getResourceMap(NameEntryPanel.class); resource.injectFields(this); } private void initComponents() { lblNamePrompt = new javax.swing.JLabel(); txtName = new javax.swing.JTextField(); btnGreet = new javax.swing.JButton(); lblNamePrompt.setName("lblNamePrompt"); btnGreet.setName("btnGreet"); // ... } private void btnGreetActionPerformed(java.awt.event.ActionEvent evt) { String personalMsg = String.format(greetingMsg, txtName.getText()); JOptionPane.showMessageDialog(this, personalMsg); } private javax.swing.JButton btnGreet; private javax.swing.JLabel lblNamePrompt; private javax.swing.JTextField txtName; } The greetingMsg field of NameEntryPanel is filled from the NameEntryPanel.greetingMsg resource; field resource keys follow the <classname>.<fieldname> pattern. Code Example 11 # resources/NameEntryPanel.properties NameEntryPanel.greetingMsg = Hello, %s, this string was injected! All values in a .properties file are strings, so the framework uses ResourceConverter objects to turn strings into typed resources such as Icon, Color, and Font instances. Code Example 12 shows the resource file and the ConverterApp source code that demonstrates how to create and use resources with ResourceConverter objects. Code Example 12 # resources/ConverterApp.properties Application.id = ConverterApp Application.title = ResourceConverter Demo msg = This app demos ResourceConverter. font = Arial-BOLD-22 color = #BB0000 icon = next.png public class ConverterApp extends SingleFrameApplication { protected void startup() { ApplicationContext ctx = getContext(); ResourceMap resource = ctx.getResourceMap(); String msg; Color color; Font font; Icon icon; JLabel label; // Use resource converters to convert text representations // of resources into Color and Font objects. msg = resource.getString("msg"); color = resource.getColor("color"); font = resource.getFont("font"); icon = resource.getIcon("icon"); label = new JLabel(msg); label.setOpaque(false); label.setForeground(color); label.setFont(font); label.setIcon(icon); show(label); } // ... } Color resources may be written as #RRGGBB or #AARRGGBB hex strings, or as comma-separated R, G, B or R, G, B, A component values.
Font resources use the form <name>-<style>-<size>, where the style is PLAIN, BOLD, or ITALIC, as in Arial-PLAIN-12. String resources may contain format specifiers: NameEntryPanel.greetingMsg = Hello, %s, this string was injected! You can supply the format arguments in the call to getString, as Code Example 13 shows. Code Example 13 private void btnGreetActionPerformed(java.awt.event.ActionEvent evt) { String personalMsg = resource.getString("NameEntryPanel.greetingMsg", txtName.getText()); JOptionPane.showMessageDialog(this, personalMsg); } Swing applications normally define actions with ActionListener implementations or AbstractAction subclasses. The framework builds javax.swing.Action objects for you from annotated methods and manages them with an ActionManager and a javax.swing.ActionMap. The following discussion about Action objects uses a demo application called ActionApp and a panel named ResizeFontPanel. Figure 7 shows the demo UI, which provides the context for the Action class and annotation descriptions. public class ResizeFontPanel extends javax.swing.JPanel { ... @Action public void makeLarger() { Font f = txtArea.getFont(); int size = f.getSize(); if (size < MAX_FONT_SIZE) { size++; f = new Font(f.getFontName(), f.getStyle(), size); txtArea.setFont(f); } } @Action public void makeSmaller() { Font f = txtArea.getFont(); int size = f.getSize(); if (size > MIN_FONT_SIZE) { size--; f = new Font(f.getFontName(), f.getStyle(), size); txtArea.setFont(f); } } ... } Code Example 14 defines two Action methods with default names: makeLarger and makeSmaller. When invoked, the actions reset the font for a text area, which is named txtArea. Retrieve the framework's ActionMap and attach the actions to components with setAction, as Code Example 15 shows. Code Example 15 // ctx is the ApplicationContext instance. ResourceMap resource = ctx.getResourceMap(ResizeFontPanel.class); resource.injectComponents(this); ActionMap map = ctx.getActionMap(this); btnMakeLarger.setAction(map.get("makeLarger")); btnMakeSmaller.setAction(map.get("makeSmaller")); Individual actions are looked up by name with get on the map returned by getActionMap: ActionMap map = ctx.getActionMap(this); The actions' text and icons come from the resources/ResizeFontPanel.properties bundle shown in Code Example 16. Code Example 16 # resources/ResizeFontPanel.properties makeLarger.Action.text = Increase font size makeLarger.Action.icon = increase.png makeSmaller.Action.text = Decrease font size makeSmaller.Action.icon = decrease.png For long-running work, the framework provides the Task class. The Task class extends a SwingWorker implementation, which is similar to the SwingWorker available in Java SE 6. A TaskService class helps you execute tasks, and a TaskMonitor helps you monitor their progress. Code Example 17 defines a NetworkTimeRetriever task whose doInBackground method fetches the current time from the network and whose succeeded method writes it into the NetworkTimeApp application's JTextField. Code Example 17 public class NetworkTimeApp extends SingleFrameApplication { JPanel timePanel; JButton btnShowTime; JTextField txtShowTime; @Override protected void startup() { // Create components and so on. // ... // Retrieve and set Actions. ActionMap map = getContext().getActionMap(this); javax.swing.Action action = map.get("retrieveTime"); btnShowTime.setAction(action); timePanel.add(btnShowTime); timePanel.add(txtShowTime); show(timePanel); } @Action public Task retrieveTime() { Task task = new NetworkTimeRetriever(this); return task; } // ...
class NetworkTimeRetriever extends Task<Date, Void> { public NetworkTimeRetriever(Application app) { super(app); } @Override protected Date doInBackground() throws Exception { URL nistServer = new URL(""); InputStream is = nistServer.openStream(); int ch = is.read(); StringBuffer dateInput = new StringBuffer(); while(ch != -1) { dateInput.append((char)ch); ch = is.read(); } String strDate = dateInput.substring(7, 24); DateFormat dateFormat = DateFormat.getDateTimeInstance(); SimpleDateFormat sdf = (SimpleDateFormat)dateFormat; sdf.applyPattern("yy-MM-dd HH:mm:ss"); sdf.setTimeZone(TimeZone.getTimeZone("GMT-00:00")); Date now = dateFormat.parse(strDate); return now; } @Override protected void succeeded(Date time) { txtShowTime.setText(time.toString()); } } } The @Action method retrieveTime returns the task, and the resulting Action is registered under the name retrieveTime. The framework also saves and restores GUI session state through the SessionStorage class, as Code Example 18 shows. Code Example 18 // ... String sessionFile = "sessionState.xml"; ApplicationContext ctx = getContext(); JFrame mainFrame = getMainFrame(); @Override protected void startup() { //... /* Restore the session state for the main frame's component tree. */ try { ctx.getSessionStorage().restore(mainFrame, sessionFile); } catch (IOException e) { logger.log(Level.WARNING, "couldn't restore session", e); } // ... } @Override protected void shutdown() { /* Save the session state for the main frame's component tree. */ try { ctx.getSessionStorage().save(mainFrame, sessionFile); } catch (IOException e) { logger.log(Level.WARNING, "couldn't save session", e); } // ... } Note that applications will not usually need to create their own SessionStorage instance. Instead, you should use the ApplicationContext object's shared storage: SessionStorage ss = ctx.getSessionStorage(); The LocalStorage class persists application data on the local file system, encoding and decoding objects with XMLEncoder and XMLDecoder. Again, the ApplicationContext provides access to a shared LocalStorage instance. You should retrieve the LocalStorage instance like this: LocalStorage ls = ctx.getLocalStorage(); Now you can use the object's save and load methods to encode and decode objects to your local storage. The LocalStorage class uses your home directory as a base subdirectory for determining the default location of storage files. Code Example 19 saves and loads a phone-list map, which backs a JList, to a local file such as phonelist.xml. Code Example 19 @Action public void loadMap() throws IOException { Object map = ctx.getLocalStorage().load(file); listModel.setMap((LinkedHashMap<String, String>)map); showFileMessage("loadedFile", file); } @Action public void saveMap() throws IOException { LinkedHashMap<String, String> map = listModel.getMap(); ctx.getLocalStorage().save(map, file); showFileMessage("savedFile", file); }
http://java.sun.com/developer/technicalArticles/javase/swingappfr/
crawl-002
en
refinedweb
The Accelerate framework is a natural fit if your application uses AltiVec or other vector-based code. The framework provides a layer of abstraction that lets you perform vector-based operations without needing to use low-level vector instructions yourself. With the Accelerate framework, you don't need to be concerned with the architecture of the user's machine because the routines in this framework abstract the low-level details. Your application will run on either PowerPC-based or Intel-based Macintoshes without processor-specific customization. The framework is written to automatically invoke the appropriate instruction set for the architecture that your code runs on. The Accelerate framework gives you reliable, predictable results, and portable, highly optimized code in one package. This article looks at each library in the Accelerate framework, shows you how easy it is to import the framework into your existing projects, and gives you pointers to more detailed documentation on the ADC website. At the 2005 Worldwide Developers Conference, Apple announced a move to Intel-based Macintoshes. At the same time, Apple also provided a new version of the Xcode suite of tools to build universal binaries. A universal binary is a compiled application that is capable of running on both the PowerPC and Intel-based Macintosh computers. The amount of work needed to create a universal binary depends greatly on the level of your source code. High-level code typically contains no processor dependencies, and therefore will require few if any changes to create a universal binary. However, developers who have low-level code containing hardware dependencies, such as AltiVec or SSE instructions, will face the most challenges in creating a universal binary. This is where system APIs such as those provided in the Accelerate framework come in. With a new hardware architecture to support, it makes sense to replace your own code with system APIs wherever possible. The Accelerate framework is the solution for computationally intensive universal binaries. The functions it provides eliminate the need for you to include hardware-dependent AltiVec or SSE code in your application. First introduced in Mac OS X v10.3 Panther and expanded in Mac OS X v10.4 Tiger, the Accelerate framework includes the vImage image processing framework, the vDSP digital signal processing framework, and the LAPACK and BLAS libraries for linear algebra, among others. The vImage framework, also introduced in Panther, contains functions to perform image processing operations such as resizing, distortions, and rotations. The vDSP framework provides support for a wide range of applications, including signal processing (audio, digital image, and speech), physics, statistics, and cryptography, just to name a few. The vImage library contains routines to perform operations on raster graphics, or images. vImage routines transparently make the best use of the hardware available. For example, vImage uses AltiVec if it is available, but your code will also run on a PowerPC G3 processor. On the Intel platform, vImage will use SSE. Next you'll see the broad functional capabilities provided by vImage. The vImage library has primary support for four pixel formats and contains functions to convert between them. There are additional conversion functions that convert between over a dozen major pixel formats. These other formats are currently not supported by vImage for operations other than format conversions.
vImage functions such as convolutions and geometry functions operate on the four pixel formats listed above. Convolution is a term used to describe an operation in which a result pixel is the weighted sum of the source pixels around it. Convolutions are used to produce blur, sharpening, and embossing effects. Other uses for convolutions include shifting the image horizontally and vertically with subpixel precision, or even swapping the order of pixels. vImage also supports deconvolution, which is a process that approximately reverses a previous convolution. vImage provides an implementation of the Richardson-Lucy deconvolution algorithm, which can be used to remove lens distortion.

vImage supports two morphological operations: dilation and erosion. Two special cases, Max (for dilation) and Min (for erosion), are also provided.

Histogram operations serve two purposes. The first is to give you a histogram for an image. The histogram contains information about the range of intensities of the pixels in the image. The second purpose is to transform an image so that it has approximately the same distribution of pixel intensities as a particular histogram. For example, if an image contains a large proportion of dark pixels, you can improve the contrast by transforming the image to a histogram with a more even distribution of pixel intensities.

vImage provides functions to geometrically alter images, such as the resizing and rotation operations mentioned earlier. Each pixel in an image contains an associated alpha value that determines the opaqueness of the pixel. Alpha compositing is the process of taking two images, each with its own alpha information, and combining them to create the image that would appear if one image were placed over the other. vImage also includes other pixel transformation functions that do not depend on the values of other pixels.

The vDSP library is focused primarily on Fourier transforms, vector-to-scalar, and vector-to-vector operations. The vDSP library has a wide range of applications, including signal processing (audio, digital image, and speech), physics, statistics, and cryptography. The vDSP library can perform both one- and two-dimensional Fourier transforms. vDSP functions operate on both real and complex data types. vDSP uses vectorized code to implement functions that operate on single-precision data. This code uses AltiVec extensions when a PowerPC G4 or G5 is present, or the SSE extensions when an Intel microprocessor is present. On the PowerPC G3 processor, vDSP uses scalar code. vDSP functions operate on the basic C data types: float, double, integer, short integer, and character. There are two additional data types defined by the library to handle complex numbers and complex vectors.

The Basic Linear Algebra Subprograms (BLAS) and Linear Algebra Package (LAPACK) libraries contain, as you would expect, functions to perform linear algebra computations such as solving simultaneous linear equations, least squares solutions of linear equations, and eigenvalue problems. The BLAS library serves as a building block for the LAPACK library. The BLAS and LAPACK libraries are widely distributed, industry-standard computational libraries. They are available on a number of different platforms and architectures. So, if you are already using these libraries you should feel right at home, as the APIs are exactly the same on Mac OS X. These two libraries are somewhat unique in that they have Fortran roots. The C version of LAPACK was actually built using a Fortran to C converter.
Therefore, there are some caveats to calling LAPACK routines from C, C++, and Objective-C. See the Vector Libraries page on the ADC website for more information.

vMathLib and vForce are vector-accelerated versions of the standard libm math library. The difference between them is that vMathLib uses short, 128-bit hardware vectors to perform operations. It allows you to stay completely in the hardware vector domain. The vForce library uses long vectors to perform libm functions. You can pass it very long arrays, which allows the library to keep the processor's pipelines saturated. This gives much better performance than libm, without using underlying hardware vectors in your code. vForce automatically selects between scalar and vector code to give the best performance. vBasicOps performs operations such as integer addition, subtraction, and multiplication using 64- and 128-bit operands. vBigNum is essentially the same as vBasicOps, except it uses up to 1024-bit integer operands.

If you are used to using static libraries or individual shared libraries, you might not be familiar with the Mac OS X concept of a framework. The topic of Mac OS X frameworks is discussed in detail in the Framework Programming Guide. Very briefly, a framework on Mac OS X encapsulates both code and resources. A framework might contain only compiled code, but typically a framework contains both header files and compiled code. Any resource needed at runtime can be packaged with the framework, however, and this includes images, resource strings, nib files, and documentation. Another big advantage of frameworks is that they allow multiple versions to be deployed side-by-side in the same resource bundle. This makes backwards compatibility with older versions of the framework possible. Some frameworks act as containers for a set of smaller, related frameworks. These are called umbrella frameworks, and such is the case with the Accelerate framework. Knowledge of the framework's internal packaging is not required to use it in your project, however. It is merely an interesting implementation detail.

It is simple to incorporate the Accelerate framework into your Xcode project. There is only one header file you need to include: #include <Accelerate/Accelerate.h> Next, add the framework to your project. It resides in the system frameworks folder: /System/Library/Frameworks/Accelerate.framework If you are using make or GCC directly, include the same header file and add the -framework switch to the command line: -framework Accelerate

Updated: 2006-10-16
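To make the build steps concrete, here is a minimal, hypothetical sketch (file name and values are invented for illustration) that adds two float vectors with the vDSP_vadd routine from the vDSP portion of Accelerate; it should build with something like "gcc demo.c -framework Accelerate":

#include <Accelerate/Accelerate.h>
#include <stdio.h>

int main(void)
{
    float a[4] = { 1.0f, 2.0f, 3.0f, 4.0f };
    float b[4] = { 10.0f, 20.0f, 30.0f, 40.0f };
    float c[4];
    int i;

    /* c[i] = a[i] + b[i]; the framework picks the best code path
       (AltiVec, SSE, or scalar) for the host processor. */
    vDSP_vadd(a, 1, b, 1, c, 1, 4);

    for (i = 0; i < 4; i++)
        printf("%f\n", c[i]);
    return 0;
}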
http://developer.apple.com/performance/accelerateframework.html
crawl-002
en
refinedweb
java.lang.Object
  java.util.Dictionary<K,V>
    java.util.Hashtable<Object,Object>
      java.util.Properties

public class Properties extends Hashtable<Object,Object>

public void loadFromXML(InputStream in) throws IOException, InvalidPropertiesFormatException

The XML document must have the following DOCTYPE declaration:
<!DOCTYPE properties SYSTEM "">
Furthermore, the document must satisfy the properties DTD described above. The specified stream is closed after this method returns.

public void storeToXML(OutputStream os, String comment) throws IOException

An invocation of this method of the form props.storeToXML(os, comment) behaves in exactly the same way as the invocation props.storeToXML(os, comment, "UTF-8"). The specified stream remains open after this method returns.

Throws: NullPointerException - if os is null, or if encoding is null. ClassCastException - if this Properties object contains any keys or values that are not Strings.

See also: loadFromXML(InputStream)

Copyright 2008 Sun Microsystems, Inc. All rights reserved. Use is subject to license terms. Also see the documentation redistribution policy.
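As a usage sketch (the file name and property keys are invented for illustration), the two methods round-trip a property set through the XML format; note the asymmetry documented above: storeToXML() leaves the stream open, while loadFromXML() closes it:

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.util.Properties;

public class XmlPropertiesDemo {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("host", "example.com");
        props.setProperty("port", "8080");

        // Write the properties as an XML document (UTF-8 by default).
        FileOutputStream out = new FileOutputStream("config.xml");
        props.storeToXML(out, "Server settings");
        out.close(); // storeToXML() leaves the stream open

        // Read them back; loadFromXML() closes the stream itself.
        Properties loaded = new Properties();
        loaded.loadFromXML(new FileInputStream("config.xml"));
        System.out.println(loaded.getProperty("host"));
    }
}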
http://java.sun.com/javase/6/docs/api/java/util/Properties.html
crawl-002
en
refinedweb
JNI FAQ: UnsatisfiedLinkError

I was asked questions similar to the following one quite a few times recently:

[weiqi@gao] $ java Main
Exception in thread "main" java.lang.UnsatisfiedLinkError: no Foo in java.library.path
        at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1682)
        at java.lang.Runtime.loadLibrary0(Runtime.java:823)
        at java.lang.System.loadLibrary(System.java:1030)
        at Foo.<clinit>(Foo.java:3)
        at Main.main(Main.java:3)

This question is most prone to be asked by C++ programmers working in a hybrid C++/JNI/Java environment, usually prefixed with "I'm trying to run our test suite, but am getting the following error. Any ideas?" Here's my response in an email yesterday, posted here through the "Pull email response up to blog entry" refactoring pattern (the answer is Linux specific, but can be translated to other OSes with minimal effort):

This is the part that trips up people all the time. Look at it this way:
- Your Main class uses the Foo class
- The Foo class uses the libFoo.so JNI glue library. Foo probably looks like this:

public class Foo {
    static {
        System.loadLibrary("Foo");
    }
    // ...
}

- The libFoo.so JNI glue library uses underlying C++ *.so libraries

The manner in which Java searches for the needed files is different in each scenario:
- Java searches for *.class files through the "-cp"/"-classpath" command line switch. If neither is specified on the command line, $CLASSPATH is searched
- Java searches for JNI glue *.so files through the "-Djava.library.path" command line switch. If "-Djava.library.path" is not specified, $LD_LIBRARY_PATH is searched, but the /etc/ld.so.conf mechanism is not used
- The way a JNI glue library searches for the underlying C++ *.so libraries is not controlled by the Java process at all. This is just a case where one *.so library is linked to other *.so libraries. The normal operating system rules apply ($LD_LIBRARY_PATH, /etc/ld.so.conf, etc.)

So you need "-Djava.library.path=...:/path/to/dir-of-libFoo:..." on your java command line. Or add /path/to/dir-of-libFoo to your $LD_LIBRARY_PATH.

As you can see, it's quite a long winded explanation (like a lot of things Java). And even this is not the most thorough. The key insight is the jump from the highlighted part of the stack trace to the corresponding conclusion #2.

Re: JNI FAQ: UnsatisfiedLinkError

Sometimes the library called out in the error line is itself attempting to satisfy a dependency that is not shown anywhere in the error message. In these cases, I have found the "ldd" command to be useful in the *nix environment for these errors. Once you have determined that your paths are correct and you just know that's not the error, run "ldd" on the .so in the error line and ensure all the dependencies for this shared library can be found.
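To make the three search mechanisms (and the ldd tip from the comment above) concrete, here's a hypothetical shell session; the paths are invented, assuming Main.class and Foo.class live in ./classes and libFoo.so in ./native:

# point the JVM at both the classes and the JNI glue library
java -cp ./classes -Djava.library.path=./native Main

# equivalently, via the environment instead of the switch
export LD_LIBRARY_PATH=./native:$LD_LIBRARY_PATH
java -cp ./classes Main

# inspect which underlying *.so libraries the glue library itself needs
ldd ./native/libFoo.so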
http://www.weiqigao.com/blog/2008/06/05/jni_faq_unsatisfiedlinkerror.html
crawl-002
en
refinedweb
The ScreenClick class enables an audible clicking sound whenever the stylus is used. More... #include <ScreenClick> Inherits QObject and QtopiaServerApplication::QWSEventFilter.

The ScreenClick class enables an audible clicking sound whenever the stylus is used. The ScreenClick class is not a true task. Instead, a real task should derive from the ScreenClick base class and implement the ScreenClick::screenClick() virtual method. For example, an implementation may look like this:

class MyScreenClick : public ScreenClick {
protected:
    virtual void screenClick(bool pressed) {
        // Make click noise
    }
};
QTOPIA_TASK(MyScreenClick, MyScreenClick);

Screen clicking will only occur when the Trolltech/Sound/System/Touch configuration variable is true; otherwise ScreenClick::screenClick() will not be called. Creating this class automatically enables the "AudibleScreenClick" QtopiaFeature. This class is part of the Qtopia server and cannot be used by other Qtopia applications. See also KeyClick.

Construct a new ScreenClick instance. Destroys the ScreenClick instance. Called whenever the user taps or releases the screen. pressed will be true when the user taps the screen, and false when they release.
http://doc.trolltech.com/qtopia4.3/screenclick.html
crawl-002
en
refinedweb
Reiser4 file system: Transparent compression support. Further development and compatibility.

A. Reiser4 cryptcompress file plugin(*) and its conversion(**)

This is the second file plugin that realizes regular files in reiser4. Unlike the previous one (the unix-file plugin), the cryptcompress plugin manages files with encrypted and(or) compressed bodies packed into metadata pages, so plain text is cached in data pages (pinned to the inode's mapping), which don't participate in IO: at background commit their data get compressed, followed by an update of the old compressed bodies. This update happens in the so-called "squalloc" phase of the flush algorithm, so eventually everything will be tightly packed. And yes, metadata pages are supposed to be writebacked. Roughly speaking, a cryptcompress file occupies more memory and less disk space than an ordinary file (managed by the unix-file plugin). In contrast with the unix-file plugin, the smallest addressable unit is the page cluster (in memory) and the item cluster (on disk). Also, the cryptcompress plugin implements another, more economical approach to representing holes. However, it calls the same low-level (node, etc.) plugins, so you can have a "mixed" fileset on your reiser4 partition. See below about backward compatibility.

To reduce cpu and memory usage when handling incompressible data one should assign a proper compression mode plugin. The default one activates a special hook in the ->write() method of the cryptcompress file plugin (only once per file's life, when starting to write from a special offset in some iteration) which tries to estimate whether a file is compressible by testing its first logical cluster (64K by default). If the evaluation result is negative, then fragments will be converted to extents, and management will be passed to the unix-file plugin. Back conversion does not take place. If the evaluation result is positive, then the file stays under cryptcompress plugin control, but compression will be dynamically switched by the flush manager in accordance with the policy implemented by the compression mode plugin. This heuristic looks mostly like improvisation and might be improved via modifying the compression mode plugin (***) (some statistical analysis is needed here to make sure we don't worsen the situation). So let's summarize what we have in the cases where the primary evaluation does not succeed:

1. The file is incompressible, but its first logical cluster is compressible. In this case compression will be "turned off" at flush time, so we save only cpu, whereas memory consumption is wasteful, as the file stays under cryptcompress plugin control. Also, deleting a huge file built of fragments is not the fastest operation.

2. The file is compressible, but its first logical cluster is incompressible. In this case management will be passed to the unix-file plugin forever (not the worst situation).

---
(*) "plugins" means "internal reiser4 modules". Perhaps "plugin" is a bad name, but let us use it in the context of reiser4 (at least for now). Each plugin is labeled by a unique pair (type, id), so a plugin's name is composed of the id name (first) and the type name. For example, "extent item plugin" means a plugin of item type that manages extent pointers in reiser4. Plugins of file type are to service VFS entry points.

(**) Plugin conversion means passing management to another plugin of the same plugin type: (type, id1) -> (type, id2), with a following (or preceding) conversion of the controlled objects (tail conversion is a classic example of such an operation).
(***) When modifying an existing plugin we should be careful (see below about backward compatibility).

B. Getting started with cryptcompress plugin

****************** Warning! Warning! Warning! ************************
This stuff is experimental. Do not store important data in the files managed by the cryptcompress plugin. It can be lost with no chance to recover it. Also, creating even one such file on your production Reiser4 partition can cause its unrecoverable crash. It is not a joke!
**********************************************************************

NOTE: We don't consider using the pseudo interface (metas), as it is still deprecated.

1. Build and boot the latest kernel of the -mm series.
2. Build and install the latest version of reiser4progs (1.0.6 for now).
3. Have a free partition (not for production use).
4. Format it with mkfs.reiser4. Use the option -o to override "create" and maybe other related plugins that mkfs installs to the root directory by default. A list of default settings is available via the option -p. A list of all possible settings is available via the option -l. For example:
"mkfs.reiser4 -o create=ccreg40 /dev/xxx" specifies the cryptcompress file plugin with (default) lzo1 compression;
"mkfs.reiser4 -o create=ccreg40,compress=gzip1 /dev/xxx" specifies the cryptcompress file plugin with gzip1 compression.
A description of all cryptcompress-related settings can be found here:
5. Mount the reiser4 file system (better with the noatime option).
6. Have fun.

NOTE: If you use the cryptcompress plugin, then the only way to monitor real disk space usage is looking at the counter of free blocks in the superblock (for example, with df (1)), but first of all make sure (for example, by sync (1)) that there are no dirty pages in memory, otherwise df will show incorrect information (will be fixed). du (1) statistics do not reflect (online) real space usage, as i_bytes and i_blocks are unsupported by the cryptcompress plugin (supporting those fields "on-line" leads to a performance drop). However, their proper values can be set "offline" by fsck.reiser4.

NOTE:
1. Currently ciphering is unsupported (for this to work, some human key manager is needed).
2. Don't create loopback devices over files managed by the cryptcompress plugin, as it doesn't work properly for now.
3. Make sure your boot partition does not contain files managed by the cryptcompress plugin, as grub does not support this.

C. Compatibility

WARNING: Don't try to check/repair a partition that contains cryptcompress files with reiser4progs of version less than 1.0.6. Also don't try to mount such a partition in old kernels (< 2.6.18-mm3). We hope to completely avoid such compatibility problems (and therefore to get rid of the "don't mount to kernelXXX" and "don't check by reiser4progsYYY" stuff) in the future via a simple technique based on the plugin architecture, as described in the document appended below.

All comments, suggestions and, of course, bug reports are welcome: <reiserfs-dev at namesys dot com>, <reiserfs-list at namesys dot com>.

Hope you'll find this stuff useful.
Reiserfs team.

Appendix D. Devoted to resolving backward compatibility problems in Reiser4. Directed to file system developers and anyone with an interest in Reiser4 and plugin architecture.

Reiser4 file system: development, versions and compatibility.
Edward Shishkin

1. Backward compatibility problems

Such problems arise when a user downgrades the kernel or the fsprogs package: old ones can be unaware of new objects on his partition.
We have tried to resolve backward compatibility problems using the plugin architecture that reiser4 is based on. The main idea is very simple: to reduce them to a problem of "unknown plugins". However, this puts some restrictions on the development model. Such an approach (introduced in 2.6.18-mm3 and reiser4progs-1.0.6) is considered below in detail. Along the way we try to clarify reiser4 possibilities in the development aspect.

2. Core and plugins. SPL and FPL

The reiser4 kernel module consists of core code and plugins. The core includes the balancing, transaction manager, flush manager, etc. Core code manipulates virtual objects like formatted nodes, items, etc. Such virtualization technology is not new and is used everywhere (manipulations with the VFS are a good example). Now it should be easy to understand the concept of a reiser4 plugin, the basic concept of the reiser4 file system. A reiser4 plugin is a kernel data structure filled by pointers to some functions (its "methods"). Each reiser4 plugin is labeled by a unique pair (type, id), a globally persistent plugin identifier, so plugins with the same first components are of the same type of data (struct typename_plugin). A plugin's name is composed of the id name (first) and the type name. For example: "extent item plugin" means a plugin of item type which manages extent pointers in reiser4. All plugins of any type are initialized by the array typename_plugins. Every reiser4 plugin has its counterpart in reiser4progs (1**).

Every reiser4 plugin belongs to one of the following two libraries (the same for reiser4progs):

The first library, SPL (per-Superblock Plugins Library), aggregates plugins that work with low-level disk layouts (superblock, formatted nodes, bitmap nodes, journal, etc). A (disk) format in reiser4 is a disk format plugin (i.e. a plugin labeled by the pair (disk_format, id)). Disk formats are assigned per superblock. The disk format plugin installs the node plugin and some other SPL members into the reiser4 superblock at mount time. SPL has a version number defined as the greatest supported disk format plugin id.

The second library, FPL (per-File Plugins Library), aggregates so-called file managers which are to work with disk layouts (like the item plugin), represent some formatting policy (like the formatting plugin), etc. The "uppermost" plugins of file type are to service VFS entry points. File managers are pointed to by the inode's plugin table (pset), described by the data structure plugin_set filled by pointers to plugins. Attributes (type, id) of non-default file managers pointed to in an object's pset are packed/extracted like other attributes to/from disk stat-data by a special stat-data item plugin. We associate FPL with a set of pairs {(type, id) | type = file, directory, item, hash, ...}. The FPL version number is defined in another, more economical way (2**).

Every plugin has a version number defined as the minimal version of the library which contains that plugin. The general version of the reiser4 kernel module is defined as 4.X.Y, so that X is the version of SPL, and Y is the version of FPL. We will say that X is the SPL-subversion, and Y is the FPL-subversion of the reiser4 kernel module. The same for reiser4progs.

3. General disk version

Every reiser4 partition has a general disk version 4.X.Y. The number X is assigned by mkfs as some disk format plugin id (format 4.X) supported by the reiser4progs package, and cannot be changed in the future.
Y is assigned as the FPL version of mkfs, with a subsequent upgrade at mount time in accordance with the kernel's FPL version: if a user mounts 4.A.B on a kernel with a reiser4 module of general version 4.C.D, so that B < D, and the mount is okay, then the general disk version will be updated to 4.A.D. We will say that X is the format subversion, and Y is the FPL-subversion of the reiser4 partition.

4. Definition of the development model

Here goes a set of rules which are not to be encoded. But first some helper definitions. Upgrading SPL means contributing a set of new SPL members, which must include a new disk format plugin. Upgrading FPL means contributing a set of new FPL members and incrementing the FPL version number. Upgrading the reiser4 kernel module (reiser4progs) means upgrading SPL and(or) upgrading FPL.

. A developer is allowed to upgrade the reiser4 kernel module and reiser4progs (3**).
. Kernel and reiser4progs should be upgraded simultaneously.
. Every such upgrade should be performed via applying a single incremental patch.
. No "development branches", i.e. don't modify existing plugins (4**). Issue only proven incremental patches.

As we will see below, such restrictions will help to provide compatibility.

5. Supporting disk versions

Here we describe the encoded support of the development model above. This is what we aimed to minimize. Suppose we want to mount a filesystem of version 4.A.B on a kernel with a reiser4 module of version 4.C.D (or want to check it with reiser4progs of such a version). First, the kernel/reiser4progs will check format subversion A. If A > C, then, obviously, format id A is unsupported by the kernel/reiser4progs, and the mount/check will be refused. Suppose format subversion A is ok (supported). If B <= D, then in accordance with (4) the pset members packed in the disk stat-data of every object are supported by the kernel/reiser4progs and there are no problems. The most interesting case is when B > D. It means that the disk can contain plugins that the kernel/reiser4progs is not aware of. The kernel and reiser4progs will support such a file system in different ways.

5.1. Kernel: fine-grained access to disk objects

First, some definitions. If some plugin (file manager) is not listed in the FPL of some kernel, then we say that this plugin is unknown to this kernel, and a file managed by this plugin is not available in this kernel. As was mentioned above, all file managers are pointed to by the inode's pset, which is extracted from disk stat-data at ->lookup() time, and in our approach this is the time when the kernel recognizes unavailable objects: if the plugin stat-data extension contains an unknown plugin type or id, then read_inode() will return -EINVAL (or another value to indicate that the object is not available) and a bad inode will be created. Plugins missing from the stat-data extension are assigned by the kernel from file system defaults, and, hence, are "known".

5.2. Reiser4progs: access "all or nothing"

Reiser4progs should be more suspicious about "unknown" plugins, as they can be a result of metadata corruption. So if B > D, then reiser4progs will refuse to work with such a file system and the user will be suggested to upgrade reiser4progs. If the reiser4progs package is up to date (B <= D), then an unknown plugin type or id means metadata corruption that will be handled in the proper way.

6. Definition of compatibility

So, if B > D, then in spite of a successful mount, in accordance with (5.1) some disk objects can be inaccessible, and an interesting question arises here: "what objects must be accessible in the case of such partial access?". To answer this, we need some definitions.
. If some object of a semantic tree was created by a kernel/reiser4progs with FPL of version V, then we say that this object has version V. Let's consider for every object Z of the semantic tree its version v(Z).
. If every object Z of the semantic tree is available in every kernel with FPL of version >= v(Z), then we say that the development model is (weakly) compatible.

In contrast with (weak) compatibility, strong compatibility requires each object to be accessible regardless of downgrade depth. However, such a concept is not of practical interest, and we won't consider it. So we have a short answer to the question above: "the development model must be (weakly) compatible". Note that we define v(Z) to not depend on the SPL version.

7. Plugin conversion

Plugins are allowed to modify an object's plugin table (pset). This is a case of so-called plugin conversion, when management is passed to another plugin of the same type: (type, id1) -> (type, id2). So, plugin conversion is a type safe operation. Such dynamic (on-the-fly) conversion can be useful (5**). Examples:

. tail conversion (passing management from the tail to the extent item plugin and back) performed for files managed by the unix-file plugin. It came from reiserfs (v3), although nobody suspected that it was a kind of plugin operation;
. file conversion (passing management from the cryptcompress to the unix file plugin) performed for incompressible files.

Definition. If a plugin conversion doesn't increase the plugin version, then we say it is squeezing.

8. How to provide compatibility?

Now it should be obvious how to provide it: just upgrade the reiser4 kernel module and reiser4progs in accordance with the instructions (4).

Statement. The "2-dimensional" development model defined by the instructions (4) is (weakly) compatible.

Proof. In accordance with the definition of compatibility, it is enough to consider only upgrading FPL. In accordance with the instructions (4), every implemented plugin conversion is squeezing. Let's consider the root node R of the semantic tree, so R consists of the root directory and all its entries. Since the version of plugins pointed to by the root pset cannot be increased (6**), we have that every object Z of R is available in every kernel with FPL of version >= v(Z). Do the same for every node of the semantic tree in descending order.

9. On-disk evolution of format 4.X in the compatible development model

How this theory looks in real life. Suppose a user has created (by mkfs) a file system of version 4.X.i, and wants to mount the empty file system on a kernel with version 4.Y.j (Y >= X). If i < j, then the kernel will upgrade the disk version to 4.X.j and the user will be suggested to run fsck to also update the backup blocks (7**). If i > j, then the kernel will complain that its FPL-subversion is too small, so some files can be inaccessible. Actually, even an empty root directory can be inaccessible in ancient kernels (if so, then the mount will be refused). Suppose the mount was okay, and the user was working for a long time with this partition, upgrading the kernel from time to time, so the FPL-subversion numbers got upgraded to j_1, j_2, ...j_k (j < j_1 < ... < j_k). This scenario defines on the latest FPL = FPL(j_k) a structure of nested subsets (a filtration):

FPL(j) < FPL(j_1) < FPL(j_2) < ... < FPL(j_k),

which induces a filtration on the latest snapshot of the user's semantic tree:

T(j) <= T(j_1) <= T(j_2) <= ... <= T(j_k).

Here "<=" means "subtree" (i.e. T(j_1) is a subset of T(j_2); moreover, T(j_1) is a tree with the same root).
T(j_s) is a snapshot of the semantic tree that was upgraded to j_(s+1), or a part of this snapshot (if something was removed after later upgrades). T(j_s)\T(j_(s-1)) ("\" is "without") contains objects of version j_s. In accordance with the development model above, all elements of T(j_s) are managed by plugins of versions <= j_s, and, hence, all of them will be accessible in kernels with FPL version >= j_s.

10. Subversion numbers: why do we need this?

One can ask: "Why do we need to keep/manage FPL-subversion numbers? Just add new per-file plugins properly, and everything will be recognized/handled automatically". For sure, indeed. However, keeping track of them is useful for some reasons:

. For fsck to catch metadata corruption, as described in (5.2).
. For various accessibility issues. For example, you have an offline reiser4 partition and want to know what kernel will provide full (not restricted) access to all objects (solution: run debugfs.reiser4, look at the disk subversion number, then have a kernel with an appropriate FPL version).

11. Examples

11.1. Transparent compression support

The following is a list of FPL members that have been added for such support:
. cryptcompress file plugin
. ctail item plugin
. compression transform plugins (new type)
. compression mode plugins (new type)

11.2. Transparent encryption support

Not yet implemented. It is supposed to be implemented for the existing cryptcompress file plugin: one just needs to add cipher transform plugins (they should be wrappers for the linux crypto-api) and provide a human key manager.

11.3. Supporting ECC-signatures for metadata protection

Not yet implemented. In order to support such signatures we need a new node format and, hence, a new disk format plugin id. The ECC signature should be checked by zload() for data integrity and possible error correction, and updated at commit time right before writing to disk.

11.4. Supporting xattrs

Not yet implemented. Here we need a new stat-data extension plugin (for packing/extracting namespaces, etc.) and a new file plugin as the most graceful way to not allow old objects to acquire the new stat-data extension (i.e. to not break compatibility).

------
(1**) A counterpart with the same (type, id) in reiser4progs can do different work. For example, fsck doesn't perform data decompression, but uses the compression plugin (namely, its ->checksum() method) to check data integrity.
(2**) Currently it is hardcoded; see the definition of PLUGIN_LIBRARY_VERSION in the kernel (reiser4/plugin/plugin.h) and in reiser4progs (include/reiser4/plugin.h).
(3**) We don't consider modifications of core code, as they don't break compatibility; at least we put effort into that.
(4**) However, it is impossible to avoid various bugfixes and micro-optimizations. Instructions about modifications of existing plugins are out of the scope of this paper. For now just make sure that they don't break compatibility. Changing disk layouts and using non-squeezing plugin conversion is unacceptable.
(5**) Actually, it is not easy to implement plugin conversion: most likely it won't be a simple update of the file's plugin table (pset), but will also require conversion of the controlled objects and serialization of critical code sections that work with the shared pset.
(6**) Actually, the plugin set of the root directory cannot be changed at all, as it defines the "default settings" for the partition.
(7**) Backup blocks are copies of the main superblock spread through the whole partition. Before updating backup blocks, fsck will check the whole partition for consistency.
It shouldn't cause discomfort, as upgrading reiser4 is not a frequent event. Anyway, the specification of backup blocks is quite complex and it is better for the kernel to not be aware of them (they just look like busy blocks to the kernel). The updated status of backup blocks is reflected by a special flag in the superblock. Mounting with non-updated backup blocks is possible without any additional options: the kernel will warn on every such mount. When rebuilding a crashed filesystem with non-updated backup blocks, the user will be suggested to confirm that the new disk version in the main superblock is correct (not a result of metadata corruption).
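To make the checks from section 5 concrete, here is a hypothetical C sketch; this is not actual reiser4 code, and all names are invented for illustration:

/* disk: A = format subversion, B = FPL subversion;
 * kernel: C = SPL version, D = FPL version */
enum mount_result { MOUNT_OK, MOUNT_RESTRICTED, MOUNT_REFUSED };

enum mount_result check_disk_version(int A, int B, int C, int D)
{
    if (A > C)
        return MOUNT_REFUSED;    /* unknown disk format plugin id */
    if (B > D)
        return MOUNT_RESTRICTED; /* mount is ok, but objects managed by
                                    unknown file plugins are inaccessible */
    /* B <= D: all pset members are known; if B < D, the disk's
       FPL subversion gets upgraded to D at mount time */
    return MOUNT_OK;
}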
http://lkml.org/lkml/2007/3/14/446
crawl-002
en
refinedweb
The QMutexLocker class simplifies locking and unlocking QMutexes. More... All the functions in this class are thread-safe when Qt is built with thread support. #include <qmutex.h> List of all member functions.

See also QMutex, QWaitCondition, Environment Classes, and Threading.

Destroys the QMutexLocker and unlocks the mutex which was locked in the constructor. See also QMutexLocker::QMutexLocker() and QMutex::unlock().

Returns a pointer to the mutex which was locked in the constructor. See also QMutexLocker::QMutexLocker().

This file is part of the Qt toolkit. Copyright © 1995-2003 Trolltech. All Rights Reserved.
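A minimal Qt 3 usage sketch (the function and variable names are invented): the mutex is locked when the QMutexLocker is created and unlocked when it goes out of scope, on every return path:

#include <qmutex.h>

QMutex mutex;

int complexFunction( int flag )
{
    QMutexLocker locker( &mutex );  // locks mutex here

    if ( flag == 0 )
        return -1;                  // unlocked automatically on early return

    // ... more work; an exception would also unlock via the destructor
    return flag * 2;
}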
http://doc.trolltech.com/3.2/qmutexlocker.html
crawl-002
en
refinedweb
An early release of MSDN2 went live two weeks ago today. There have been quite a few posts about it, most of them very positive. The site, which contains only the Whidbey beta 1 documentation, is the first widely available use of our new online delivery and rendering infrastructure, called MTPS (MSDN/TechNet Publishing System). Unlike the main MSDN Library, the content in MTPS is delivered directly from the UE team that wrote it as individual XML topics, NOT decompiled from the offline help package (a .HxS file). This marks a big step forward for MSDN because it is the first step away from a very expensive process that involves taking ~1500 .HxS files containing hundreds of thousands of HTML files and modifying them to build the MSDN Web site. It is changes in the structure of these files that have changed MSDN URLs in the past, leading to tons of frustration for everyone.

The MSDN2 feature that's getting the most attention is the ability to use a managed API name as a URL. While that is a cool feature, it's really only one aspect of a new URL model that's designed to keep links from breaking. Specifically, each topic is now identified using an id that is held constant across minor revisions and major versions of a topic, and also across locales. The id has three forms: a GUID for decentralized authoring (sometimes cooked up by MSDN based on a locally unique identifier in a topic), a short id and, optionally, an alias. Aliases are used when a piece of content has a well-known (e.g., a TechNet KB#) or inferrable name (a managed - and someday a native - API name). These values can be used interchangeably. For instance, these URLs reference the same topic:

The last token in the URL path identifies a topic; the left-hand part of the URL identifies the context in which it's seen. The context piece MAY be used by a site to decide what to display in its chrome. Because these two pieces are separate, you can write URLs that make arbitrary combinations of the two, like this (note that the CSS won't work right, but the topic is displayed):

It is an essential part of this model, then, that all topic ids are usable everywhere. That is, all content exists in all locations in a context, or in multiple contexts. That enables us to surface the same information in multiple sites, with multiple organizations, allowing us to tailor information delivery to specific customer needs, as with the Library and the various Dev Centers (these share some content today, but it's a hack). This also means that, even if a DNS name or vroot goes away, you should ultimately be able to take the last token of a URL you know, append it to a well-known base URL (ultimately, it should be msdn2.microsoft.com) and get the right content. Further, our 404 handling code should be smart enough to do that for you. We aren't there yet, but that's where we're headed.

Now, more about aliases. Here are some comments I posted on Junfeng's blog when he broke the news that we were live (about 2/3 of the way down you'll see my favorite aliasing feature, which is the ability to access the list of a type's members by appending _members to the type name):

The URL alias mechanism is a new feature we added for this early release of our new online infrastructure. It's designed to work for every managed API page, but it isn't fully implemented yet. Here are some observations: It will work for namespaces as long as they have a '.' in them. So it won't work for the System, Microsoft, etc. namespaces. This is a bug that will be fixed.
It should work for all class names. It will work for *SOME* property and method names. We tried to get all of them working, but ran into some issues, so we had to drop some for this early release. The plan is to make it work for all fields/properties/methods, including overloads. Note that you have to reference the member on its defining type. Inherited members are not exposed with an alias on the derived type unless they are overridden. This is simply the nature of the doc structure. You can also get to the list of methods, properties, fields or all members, by appending the appropriate value, like this:

As Chris noted, the .aspx is optional. Finally, the database behind this is not yet optimized for this sort of access. That will happen in a future release. So be patient if alias resolution takes a little longer than short-id based lookups.

Finally, note one other feature. The URLs you navigate to use short ids instead of file paths. This is the first concrete step toward keeping API URLs stable over time.

Finally, I have to mention the new model we used for tree navigation of the site. This is a big change for a site whose customers routinely tell us that they use this resource every day and they don't want us to change it. We went this way because it let us avoid frames, reflect the current page's URL in the address bar, and it's pretty fast; all of which people have asked for for a long time. The other interesting issue is scale. The current Library has many hundreds of thousands of nodes in it. Mix in Whidbey and Yukon, you get over a million. Mix in Indigo, Avalon, etc., you get even more. With that much info, is a single tree the right way to go? Even if it is, can we build a control that will do it efficiently without consuming oodles of memory in your browser? Personally, I like the new tree because it's lighter weight and feels more Web-like to me, but that's just me. Feel free to post feedback to MSDN on this point using the Product Feedback Center (pick MSDN Online as the product); we're very interested in what you think.
http://www.pluralsight.com/community/blogs/tewald/archive/2004/09/23/2368.aspx
crawl-002
en
refinedweb
The Social Life of XML

I recently found a picture of the panelists at the XML DevCon 2001 session entitled "The Importance of XML." My body language told the story: I wasn't a happy camper. Of course I agreed with all the reasons the panel thought XML was important: for web services, for interprocess communication, and for business process automation. But I also thought XML was important for a whole different set of reasons that weren't on the conference's agenda. I thought XML was important for end-user applications, for human communication, and for personal productivity. I believed then, and I believe more strongly today, that it's a bad idea to separate those two ways of using XML.

When you get right down to it, what's really so special about web services? Is it distributed computing? Is it serialization and transfer of complex data? We've been there and done those things, though it's true that we didn't use to do them using cheap and ubiquitous XML technologies. So, is service-oriented architecture the real game-changer? Clearly a lot of us think so, and maybe we're right. But I want to focus on something much more basic. The really important thing, it seems to me, is the way the XML document can become a shared construct, a tangible thing that processes and people can pass around and interact with. On the one hand, an XML document is the payload of a SOAP message that gets routed around on the Web services network -- a payload that represents, for example, a purchase order. On the other hand, an XML document is the form that somebody uses to submit, or approve, or audit that purchase order. Now, all of a sudden, these two documents are not only made of the same XML stuff, they can literally be the same XML document.

When Tim Bray talks about the tribal history of XML, he says the current focus on XML data wasn't foreseen by what he calls "publishing-technology geeks" who thought they were building what he calls the "smart-document format of the future." Maybe not, but I've never been able to make much of a useful distinction between documents and databases. For me every document is a database, and every database is an assembly of documents. The "publishing-technology geeks" and the "Database Divas" that Tim writes about may cling to their tribal allegiances for a while longer, but the interbreeding experiment is already a success. I can query any XML document, including the slideshow that accompanies this talk, as if it were a database. And I can absorb XML documents into relational databases in increasingly granular and flexible ways. We're heading toward an extraordinary convergence of documents and databases. But I'm not sure we're always as clear as we could be about why this convergence is happening, or what opportunities it presents. I don't think the fact that XML has its roots in publishing is an accident -- or if it was, then it was a happy accident.

Let's imagine a purchase order flowing through a web services pipeline, sometime in the near future. It's an XML document, perhaps created with a tool such as InfoPath. The document carries core data elements -- an item number, a department code. But it also carries contextual metadata -- for example, a threaded discussion involving the requester, the reviewer, and the approver. This context is the key to understanding how the data got there and what it means. When I read the specs that define how these systems will work, I'm struck most of all by their treatment of exceptions.
Here's how the BPEL 1.1 spec puts it: "The ability to specify exceptional conditions and their consequences," it says, "is at least as important for business protocols as the ability to define the behavior in the 'all goes well' case." I agree. But when I read these computer-sciency descriptions of compensation scopes and upward-chaining exception handlers, I worry that we've left something important out of the picture. In our example, the exception was thrown by Frank, who asserted a veto for budgetary reasons. And it was handled by Paul, who agreed to a negotiated compromise that enabled the transaction to go forward. This kind of scenario isn't an exception, if you'll pardon the pun. It's the rule. Everyone has an agenda; every transaction is a negotiation; and every outcome is a compromise. But the documents that help us to articulate agendas, conduct negotiations, and assess compromises don't exploit the contextual power of XML, and they aren't being woven into the web services fabric. I think that's a problem. I also think we can solve it without inventing huge amounts of new technology. Common sense, basic tools, and some elbow grease can take us a long way.

Of the various Microsoft slogans that have come and gone over the years, two in particular have stuck with me. The first, from 1990, was "information at your fingertips." In his Comdex speech that year, Bill Gates laid out a vision that's still, frankly, pretty elusive. It wasn't just about finding the information we're looking for, though that did require a leap of imagination back before Internet search came along and made it look easy. The premise of "information at your fingertips" was also that we would empower knowledge workers to interact with that information. These folks -- who we're now supposed to call information workers, by the way, because knowledge evidently sounds too elitist -- and these folks aren't just passive consumers of information, they're active creators of it. They need tools to produce, combine, transform, analyze, and exchange lots of different kinds of data, without tripping over differences in the underlying formats or editing tools. The solution proposed at the time was compound documents with embedded active parts. Microsoft called this OLE; Apple, IBM, Novell, and Sun called it OpenDoc. You don't hear much about OLE and OpenDoc any more, and that's a shame because the problems they were meant to solve are still very much with us. I'm glad to see that WSRP (Web Services for Remote Portlets) is now tackling the problem from a web services perspective. It's a really good idea to work out how markup fragments -- and the machinery for interacting with those fragments -- can be packaged up for use on the web services network.

Back in the last century, of course, the assumption was that applications like Word and Excel were still going to control the data, and retain their own proprietary ways of representing it. The OLE interfaces would wake up chunks of that proprietary data for editing, and then tuck them to bed again in a binary file-system-within-a-filesystem. This wasn't exactly a recipe for free-flowing data integration, but it sold some big fat programming books.

A decade after the 1990 Comdex speech, the .NET platform was rolled out with much celebration of XML as a universal data store, and with a new slogan -- "the universal canvas" -- that I absolutely love. It's an idea that makes intuitive sense to everyone. Science fiction writers have always imagined what this would be like.
The best demonstration of the concept I've seen is a 1987 concept video produced by Apple, called Knowledge Navigator. When I mentioned it on my weblog last month and posted a link to the video, it attracted a huge amount of interest. We all have a deep conviction that networked computers are supposed to help us create and inhabit shared collaborative spaces where we can fluidly manage relationships, create and reuse information, and conduct business transactions. Those transactions are governed by business protocols that we're working hard to formalize and automate. I don't want to trivialize the effort that's going to require. It's a deep problem and there's a lot we still don't know. Take, for example, the question of schemas. Some really smart people, including Jon Bosak, think we'll need a Universal Business Language to connect business protocols across different vertical-industry domains. Some other really smart people, including Jean Paoli, are tackling the problem from the bottom up, on the assumption that schemas need to emerge from specific practices before they can be codified in the large. I'm sure there's no simple answer, and I expect that both approaches will usefully coexist. But no matter how this plays out, the schemas and protocols are just the skeletal outlines of business processes. The flesh on the bones is the context that we create as we participate in these processes.

Weblogs are arguably the best examples we have of XML connecting people to other people in information-rich contexts. But while the glue that holds the weblog universe together is an XML application called RSS, it's really only a thin wrapper of metadata around otherwise opaque content. The RSS payload typically isn't XML, it's escaped HTML -- a practice that Norm Walsh calls an abomination. I think Norm is right to say that. So my own RSS payload, like a few others out there, includes namespaced XHTML. But the gymnastics that I have to perform, in order to create that payload, are another kind of abomination. We've waited a long time for XML-aware authoring tools that fit easily and naturally into the flow of the Web. Although this was the year in which Microsoft shipped an XML-aware version of Office, the sad truth is that it was still easier for me to create my presentation in Emacs, rather than in PowerPoint, or Word, or InfoPath.

Having said that, InfoPath, in particular, does get a number of things very right. It enables a relatively non-technical person to invent a schema, create a form that captures information that's valid with respect to that schema, and distribute the form to completely non-technical people who can fill it with data. What's more, the form, or document -- it's hard to know just what to call it -- has exactly the dual nature I've been talking about. Its information payload can be detached from a web services pipeline, edited offline by Kathy, emailed to Frank, edited offline by Frank, and injected back into the web services pipeline using email, or an HTTP postback, or a WSDL call. Since InfoPath only runs on Windows, and isn't part of the basic Office 2003 kit, it's not on a path to ubiquity. But it's based on the same standard technologies that I can use in Mozilla Firebird on Mac OS X: XML, XPath, XSLT, CSS, JavaScript. I'm not suggesting that the browser is the right hammer for every nail. Rather, it's one way to package a set of standard XML-aware components. I'd love to see, among other things, an InfoPath-like application built on the Mozilla platform.
The email client is another way of packaging those components. And unless spam completely kills it, email is going to keep on being a primary lubricant of our business processes. Email is where most of our contextual information is created and exchanged, but where none of XML's contextual power is brought to bear. Here, by the way, Microsoft completely dropped the ball. The only Office 2003 application in which users can't create and use XML content is Outlook. But that's precisely where the need is greatest. Every day we ask questions about who said what, to whom, in reference to what, in response to whom. Because none of our routine written communication is well-formed, we fall back on decades-old search and navigation strategies in order to find things. And what we find is typically a mess. It's amazing to watch a highly-paid professional spending billable time trying to untangle what we like to call an "email thread," but what's really just a patchwork quilt of mangled fragments with no discernible order, structure, or attribution.

The problem with routine and casual use of well-formed content, of course, is that the XML parser is designed to keep the barbarians at bay. If the parser smells even a whiff of trouble, it slams the gate shut. As well it should. We wouldn't be having a web services revolution, right now, if we encouraged the kind of sloppiness that's rampant on the Web. But we do need to find ways to make it easier for the barbarians to become respectable citizens. We have these liberal parsers that browser developers have spent untold effort creating, parsers that can slurp up the worst kind of tag soup that comes pouring out of HTML composers, or is written by hand. Maybe we can get more mileage out of them. It's easy to just dismiss the barbarians, but there are an awful lot of them. They're creating and sharing tons of content that isn't well-formed, but in many cases we could squint and pretend that it is, just as browsers do. If we did that, we might be able to make the information they create and exchange more useful to them, as they perform the business scenarios we script for them. And we might also be able to make the information more useful to us, as we try to manage and debug those scenarios.

I think that the combination of XHTML, CSS, and XPath adds up to a fruitful opportunity, even at this late date. Back in 2001, at that other convention I mentioned, somebody asked Tim Bray when XML would replace HTML on the Web. Here was his answer:

Nobody thought for a microsecond that HTML would be replaced, and I don't think HTML will be replaced in my lifetime. It is the single most successful document format ever devised for delivering information electronically to humans. The population of computer users has voted for it overwhelmingly. I like it, I use it, I can't see why you'd want to stop using it.

I completely agree. And since we are going to keep on using HTML, it behooves us to find smarter and better ways to use it. XHTML is one of those smarter and better ways. CSS is another. It strikes me as a really interesting opportunity to smuggle metadata into documents. People who don't know or care about metadata will nevertheless spend a lot of time fiddling with styles because they care a lot about how their documents look. A friend of mine, who's a teacher, told me that it takes her much longer to make presentations now, in PowerPoint, than it used to when she wrote them by hand on overhead-projector transparencies. There's a powerful human urge to achieve the right style.
So let's exploit that. Let's promote packages of style tags that people will use just because they want to look cool. That's the immediate payoff. They don't need to know that those style tags are also hooks that make it easier to search for and manipulate content. Then, let's give them XPath-enhanced document viewers that do useful things with those hooks -- that cut down on the hassle and frustration of finding and reusing stuff. There's nothing earth-shattering here. It's just a modest proposal that aims to make better use of the tools and technologies already in place. Given the amount of hassle and frustration that's experienced by everyone on a daily basis, though, it's the kind of thing that could add up to a big payoff.

It's also time to get serious about using XML to capture and represent real-world context. The XML and web services communities are doing a good job of reducing friction at the interface between processes and data. I'm pretty sure we can solve that eventually because it's the kind of problem that we, as technologists, are good at solving. We like to think about protocols and formats. I'm not sure we'll do such a good job of reducing friction at the interface between people and data-driven processes. Success there will require serious attention to how people connect with one another, and with data, in information-dense, event-driven, networked environments. That means thinking about "human factors" and the "user experience" -- a couple of awkward phrases for things that we, as technologists, are not very good at dealing with. We don't like to think about habits, or agendas, or ways of thinking, or modes of communicating. Fortunately, there's all that publishing DNA floating around in the XML community's gene pool. We've only got a few decades of experience with networked information systems. But we've got a few millennia of experience with documents. Let's use that to our advantage as we build out service-oriented architectures in which documents are both payloads and user interfaces. From a publishing perspective, we know a lot about how to build documents that capture and hold attention, establish historical and current contexts, and tell stories that help people understand themselves in relation to those contexts. We need to draw on all that publishing knowledge as we work out how to connect people to data-driven processes.

Here's another idea. For a long time I've thought that if we could bring these two worlds closer together, we could achieve powerful synergies. The idea got a boost recently when Microsoft revealed its plans for Indigo, the communication subsystem in Longhorn. Indigo aims, among other things, to make XML web services efficient for use across -- or maybe even within -- local applications. I invite you to think about what that could mean, not only for Longhorn but for all platforms, and not only in three years but also right now.

XML.com Copyright © 1998-2006 O'Reilly Media, Inc.
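As a purely hypothetical illustration of the style-tags-as-hooks idea (the class name and the XPath query are invented, not from the original talk): a fragment of XHTML styled for looks also becomes precisely addressable:

<!-- An author tags a paragraph so it renders as an eye-catching callout: -->
<p class="action-item">Frank to revisit the Q3 budget cap by Friday.</p>

<!-- The same attribute doubles as a query hook. An XPath-aware viewer
     could collect every action item in a mailbox or weblog archive with: -->
<!-- //p[@class='action-item'] -->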
http://www.xml.com/lpt/a/2003/12/23/udell.html
The QPtrQueue class is a template class that provides a queue. More... #include <qptrqueue.h> List of all member functions.

QValueVector can be used as an STL-compatible alternative to this class.

A template instance QPtrQueue<X> is a queue that operates on pointers to X (X*). For compatibility with the QPtrCollection classes, current() and remove() are provided; both operate on the head().

See also QPtrList, QPtrStack, Collection Classes, and Non-GUI Classes.

operator=()

Assigns queue to this queue and returns a reference to this queue. This queue is first cleared and then each item in queue is enqueued to this queue. Only the pointers are copied.

Warning: The autoDelete() flag is not modified. If it is TRUE for both queue and this queue, deleting the two queues will cause double-deletion of the items.

write() [virtual protected]

Writes a queue item, item, to the stream s and returns a reference to the stream. The default implementation does nothing. See also read().

This file is part of the Qt toolkit. Copyright © 1995-2003 Trolltech. All Rights Reserved.
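The member examples of this page did not survive extraction. As a hedged sketch of the Qt 3 API the page describes (enqueue(), dequeue() and the autoDelete() flag are the standard QPtrQueue members; this snippet is illustrative, not from the original page):

#include <qptrqueue.h>
#include <qstring.h>

int main()
{
    QPtrQueue<QString> queue;
    queue.setAutoDelete( TRUE );             // queue owns items still in it

    queue.enqueue( new QString( "first" ) ); // items are added at the tail
    queue.enqueue( new QString( "second" ) );

    QString *head = queue.dequeue();         // removed from the head: "first"
    delete head;                             // dequeued items leave the queue's ownership

    return 0;                                // "second" is deleted with the queue
}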
http://doc.trolltech.com/3.2/qptrqueue.html
The QToolTip class provides tool tips (sometimes called balloon help) for any widget or rectangular part of a widget. More... #include <qtooltip.h> Inherits Qt. List of all member functions.

The tip is a short, one-line text reminding the user of the widget's or rectangle's function. It is drawn immediately below the region, in a distinctive black on yellow combination. In Motif style, Qt's tool tips look much like Motif's but feel more like Windows 95 tool tips.

QToolTip switches to active mode when the user lets the mouse rest on a tip-equipped region for a second or so, and remains in active mode until the user either clicks a mouse button, presses a key, lets the mouse rest for five seconds, or moves the mouse outside all tip-equipped regions.

In the following example, g is a QToolTipGroup * and is already connected to the appropriate status bar:

QToolTip::add( quitButton, "Leave the application", g,
               "Leave the application, without asking for confirmation" );
QToolTip::add( closeButton, "Close this window", g,
               "Close this window, without asking for confirmation" );

The above are one-liners and cover the vast majority of cases. The third and most general way to use QToolTip uses a pure virtual function to decide whether to pop up a tool tip. The tooltip/tooltip.cpp example demonstrates this too. This mode can be used to implement e.g. tips for text that can move as the user scrolls.

To use QToolTip like this, you need to subclass QToolTip and reimplement maybeTip(). maybeTip() will be called when there's a chance that a tip should pop up. It must decide whether to show a tip, and possibly call add() with the rectangle the tip applies to, the tip's text and optionally the QToolTipGroup details. The tip will disappear once the mouse moves outside the rectangle you supply, and not reappear - maybeTip() will be called again if the user lets the mouse rest within the same rectangle again. You can forcibly remove the tip by calling remove() with no arguments. This is handy if the widget scrolls.

Tooltips can be globally disabled using QToolTip::setEnabled(), or disabled in groups with QToolTipGroup::setEnabled().

See also QStatusBar, QWhatsThis, QToolTipGroup and GUI Design Handbook: Tool Tip

Constructs a tool tip object. This is necessary only if you need tool tips on regions that can move within the widget (most often because the widget's contents can scroll). parent is the widget you want to add dynamic tool tips to and group (optional) is the tool tip group they should belong to. See also maybeTip().

[static] Adds a tool tip to a fixed rectangle within widget. text is the text shown in the tool tip. QToolTip makes a deep copy of this string.

[static] Adds a tool tip to an entire widget, and to tool tip group group. text is the text shown in the tool tip and longText is the text emitted from group. QToolTip makes deep copies of both strings. Normally, longText is shown in a status bar or similar.

[static] Adds a tool tip to widget. text is the text to be shown in the tool tip. QToolTip makes a deep copy of this string. This is the most common entry point to the QToolTip class; it is suitable for adding tool tips to buttons, check boxes, combo boxes and so on.

[static] Adds a tool tip to widget, and to tool tip group group. text is the text shown in the tool tip and longText is the text emitted from group. QToolTip makes deep copies of both strings. Normally, longText is shown in a status bar or similar.

[protected] Removes all tool tips for this tooltip's parent widget immediately.

[static] Returns whether tooltips are enabled globally. See also setEnabled().
[static] Returns the font common to all tool tips. See also setFont().

Returns the tool tip group this QToolTip is a member of, or 0 if it isn't a member of any group. The tool tip group is the object responsible for relaying contact between tool tips and a status bar or something else which can show a longer help text. See also parentWidget() and QToolTipGroup.

[static] Hides any tip that is currently being shown. Normally, there is no need to call this function; QToolTip takes care of showing and hiding the tips as the user moves the mouse.

[virtual protected] This pure virtual function is half of the most versatile interface QToolTip offers. It is called when there is a chance that a tool tip should be shown, and must decide whether there is a tool tip for the point p in the widget this QToolTip object relates to. p is given in that widget's local coordinates.

Most maybeTip() implementations will be of the form:

if ( <something> ) {
    tip( <something>, <something> );
}

The first argument to tip() (a rectangle) should include the point p, or QToolTip, the user, or both can be confused. See also tip().

[static] Returns the palette common to all tool tips. See also setPalette().

Returns the widget this QToolTip applies to. The tool tip is destroyed automatically when the parent widget is destroyed. See also group().

[static] Remove the tool tip from widget. If there is more than one tool tip on widget, only the one covering the entire widget is removed.

[static] Remove the tool tip for rect from widget. If there is more than one tool tip on widget, only the one covering rectangle rect is removed.

[static] Sets all tool tips to be enabled (shown when needed) or disabled (never shown). By default, tool tips are enabled. Note that this function affects all tooltips in the entire application. See also QToolTipGroup::setEnabled().

[static] Sets the font for all tool tips to font. See also font().

[static] Sets the palette for all tool tips to palette. See also palette().

[protected] Pops up a tip saying text right now, and removes that tip once the cursor moves out of rectangle rect (which is given in the coordinate system of the widget this QToolTip relates to). The tip will not come back if the cursor moves back; your maybeTip() has to reinstate it each time.

[protected] Pops up a tip saying text right now, and removes that tip once the cursor moves out of rectangle rect. The tip will not come back if the cursor moves back; your maybeTip() has to reinstate it each time.

This file is part of the Qt toolkit, copyright © 1995-2005 Trolltech, all rights reserved.
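As an illustration of the maybeTip()/tip() mechanism documented above, here is a hedged minimal sketch of a dynamic tool tip subclass; the 20x20 cell geometry is invented for the example and is not from the original documentation:

// Hypothetical dynamic tool tips over a 20x20 cell grid, using the
// subclassing interface described above.
class CellTip : public QToolTip
{
public:
    CellTip( QWidget *parent ) : QToolTip( parent ) {}

protected:
    void maybeTip( const QPoint &p )
    {
        // Compute the rectangle the tip applies to; it must include p.
        QRect cell( p.x() / 20 * 20, p.y() / 20 * 20, 20, 20 );
        QString text;
        text.sprintf( "Cell %d,%d", cell.x() / 20, cell.y() / 20 );
        tip( cell, text );
    }
};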
http://doc.trolltech.com/2.3/qtooltip.html
Author: neal.norwitz
Date: Mon Jul 28 07:22:45 2008
New Revision: 65262

Log:
Backport r65182. This change modified from using the unsigned max value to the signed max value similar to 2.5 and trunk..

Added checks for integer overflows, contributed by Google. Some are only available if asserts are left in the code, in cases where they can't be triggered from Python code.

Modified:
   python/branches/release24-maint/Include/pymem.h
   python/branches/release24-maint/Misc/NEWS
   python/branches/release24-maint/Modules/almodule.c
   python/branches/release24-maint/Modules/arraymodule.c
   python/branches/release24-maint/Modules/selectmodule.c
   python/branches/release24-maint/Objects/obmalloc.c

Modified: python/branches/release24-maint/Include/pymem.h
==============================================================================
--- python/branches/release24-maint/Include/pymem.h	(original)
+++ python/branches/release24-maint/Include/pymem.h	Mon Jul 28 07:22:45 2008
@@ -66,8 +66,12 @@
    for malloc(0), which would be treated as an error. Some platforms would
    return a pointer with no memory behind it, which would break pymalloc.
    To solve these problems, allocate an extra byte. */
-#define PyMem_MALLOC(n) malloc((n) ? (n) : 1)
-#define PyMem_REALLOC(p, n) realloc((p), (n) ? (n) : 1)
+/* Returns NULL to indicate error if a negative size or size larger than
+   Py_ssize_t can represent is supplied. Helps prevents security holes. */
+#define PyMem_MALLOC(n) (((n) < 0 || (n) > INT_MAX) ? NULL \
+                                : malloc((n) ? (n) : 1))
+#define PyMem_REALLOC(p, n) (((n) < 0 || (n) > INT_MAX) ? NULL \
+                                : realloc((p), (n) ? (n) : 1))

 #endif /* PYMALLOC_DEBUG */

@@ -80,24 +84,31 @@
  * Type-oriented memory interface
  * ==============================
  *
- * These are carried along for historical reasons. There's rarely a good
- * reason to use them anymore (you can just as easily do the multiply and
- * cast yourself).
+ * Allocate memory for n objects of the given type. Returns a new pointer
+ * or NULL if the request was too large or memory allocation failed. Use
+ * these macros rather than doing the multiplication yourself so that proper
+ * overflow checking is always done.
  */

 #define PyMem_New(type, n) \
-	( assert((n) <= PY_SIZE_MAX / sizeof(type)) , \
+	( ((n) > INT_MAX / sizeof(type)) ? NULL : \
 	( (type *) PyMem_Malloc((n) * sizeof(type)) ) )
 #define PyMem_NEW(type, n) \
-	( assert((n) <= PY_SIZE_MAX / sizeof(type)) , \
+	( ((n) > INT_MAX / sizeof(type)) ? NULL : \
 	( (type *) PyMem_MALLOC((n) * sizeof(type)) ) )

+/*
+ * The value of (p) is always clobbered by this macro regardless of success.
+ * The caller MUST check if (p) is NULL afterwards and deal with the memory
+ * error if so. This means the original value of (p) MUST be saved for the
+ * caller's memory error handler to not lose track of it.
+ */
 #define PyMem_Resize(p, type, n) \
-	( assert((n) <= PY_SIZE_MAX / sizeof(type)) , \
-	( (p) = (type *) PyMem_Realloc((p), (n) * sizeof(type)) ) )
+	( (p) = ((n) > INT_MAX / sizeof(type)) ? NULL : \
+	(type *) PyMem_Realloc((p), (n) * sizeof(type)) )
 #define PyMem_RESIZE(p, type, n) \
-	( assert((n) <= PY_SIZE_MAX / sizeof(type)) , \
-	( (p) = (type *) PyMem_REALLOC((p), (n) * sizeof(type)) ) )
+	( (p) = ((n) > INT_MAX / sizeof(type)) ? NULL : \
+	(type *) PyMem_REALLOC((p), (n) * sizeof(type)) )

 /* In order to avoid breaking old code mixing PyObject_{New, NEW} with
    PyMem_{Del, DEL} and PyMem_{Free, FREE}, the PyMem "release memory"

Modified: python/branches/release24-maint/Misc/NEWS
==============================================================================
--- python/branches/release24-maint/Misc/NEWS	(original)
+++ python/branches/release24-maint/Misc/NEWS	Mon Jul 28 07:22:45 2008
@@ -18,6 +18,13 @@
 Core and builtins
 -----------------

+-.
+- Added checks for integer overflows, contributed by Google. Some are
+  only available if asserts are left in the code, in cases where they
+  can't be triggered from Python code.

Modified: python/branches/release24-maint/Modules/almodule.c
==============================================================================
--- python/branches/release24-maint/Modules/almodule.c	(original)
+++ python/branches/release24-maint/Modules/almodule.c	Mon Jul 28 07:22:45 2008
@@ -1633,9 +1633,11 @@
 	if (nvals < 0)
 		goto cleanup;
 	if (nvals > setsize) {
+		ALvalue *old_return_set = return_set;
 		setsize = nvals;
 		PyMem_RESIZE(return_set, ALvalue, setsize);
 		if (return_set == NULL) {
+			return_set = old_return_set;
 			PyErr_NoMemory();
 			goto cleanup;
 		}

Modified: python/branches/release24-maint/Modules/arraymodule.c
==============================================================================
--- python/branches/release24-maint/Modules/arraymodule.c	(original)
+++ python/branches/release24-maint/Modules/arraymodule.c	Mon Jul 28 07:22:45 2008
@@ -814,6 +814,7 @@
 array_do_extend(arrayobject *self, PyObject *bb)
 {
 	int size;
+	char *old_item;

 	if (!array_Check(bb))
 		return array_iter_extend(self, bb);
@@ -829,10 +830,11 @@
 		return -1;
 	}
 	size = self->ob_size + b->ob_size;
+	old_item = self->ob_item;
 	PyMem_RESIZE(self->ob_item, char, size*self->ob_descr->itemsize);
 	if (self->ob_item == NULL) {
-		PyObject_Del(self);
-		PyErr_NoMemory();
+		self->ob_item = old_item;
+		PyErr_NoMemory();
 		return -1;
 	}
 	memcpy(self->ob_item + self->ob_size*self->ob_descr->itemsize,
@@ -884,7 +886,7 @@
 	if (size > INT_MAX / n) {
 		return PyErr_NoMemory();
 	}
-	PyMem_Resize(items, char, n * size);
+	PyMem_RESIZE(items, char, n * size);
 	if (items == NULL)
 		return PyErr_NoMemory();
 	p = items;

Modified: python/branches/release24-maint/Modules/selectmodule.c
==============================================================================
--- python/branches/release24-maint/Modules/selectmodule.c	(original)
+++ python/branches/release24-maint/Modules/selectmodule.c	Mon Jul 28 07:22:45 2008
@@ -342,10 +342,12 @@
 {
 	int i, pos;
 	PyObject *key, *value;
+	struct pollfd *old_ufds = self->ufds;

 	self->ufd_len = PyDict_Size(self->dict);
-	PyMem_Resize(self->ufds, struct pollfd, self->ufd_len);
+	PyMem_RESIZE(self->ufds, struct pollfd, self->ufd_len);
 	if (self->ufds == NULL) {
+		self->ufds = old_ufds;
 		PyErr_NoMemory();
 		return 0;
 	}

Modified: python/branches/release24-maint/Objects/obmalloc.c
==============================================================================
--- python/branches/release24-maint/Objects/obmalloc.c	(original)
+++ python/branches/release24-maint/Objects/obmalloc.c	Mon Jul 28 07:22:45 2008
@@ -585,6 +585,15 @@
 	uint size;

 	/*
+	 *;
+
 	/*
 	 * This implicitly redirects malloc(0).
 	 */
 	if ((nbytes - 1) < SMALL_REQUEST_THRESHOLD) {
@@ -814,6 +823,15 @@
 	if (p == NULL)
 		return PyObject_Malloc(nbytes);

+	/*
+	 *;
+
 	pool = POOL_ADDR(p);
 	if (Py_ADDRESS_IN_RANGE(p, pool)) {
 		/* We're in charge of this block */
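The guard being backported, rejecting a request before the size computation can wrap, can be shown standalone. This is an illustrative sketch of the pattern, not code from CPython; the function name is hypothetical:

/* Standalone sketch of the overflow-checked allocation pattern the patch
 * introduces; refuse any request where n * elem_size would overflow,
 * instead of letting the multiplication wrap and allocating a short buffer. */
#include <limits.h>
#include <stdlib.h>

void *checked_array_alloc(size_t n, size_t elem_size)
{
    if (elem_size != 0 && n > INT_MAX / elem_size)   /* would overflow */
        return NULL;
    /* malloc(0) may return NULL on some platforms; allocate 1 byte instead,
       mirroring the PyMem_MALLOC comment in the diff above. */
    return malloc(n ? n * elem_size : 1);
}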
http://mail.python.org/pipermail/python-checkins/2008-July/072174.html
VML and PGML: A Comparison
by Lisa Rein
June 22, 1998

There are many ways in which PGML and VML are similar. Both are text-based metamarkup languages, and both are singing the same song about the advantages of text-based markup over a binary format. Both strive to be CSS2 compliant. Both PGML and VML were inspired by their respective companies' software products' image and document formats. Both will enable applications like Word (for VML) and Illustrator (for PGML) to pump out XML versions of their files. This is good news, because it means that non-XML developers can start marking up their Web content despite themselves, which is something we have all been waiting on for quite some time. The integration of XML into the core level of every end user product is a smart move for everyone concerned, but at present, there are a lot of legacy documents (.ps, .pdf, .doc, .rtf) that are not just going to disappear anytime soon. But remember, both the VML and PGML camps insist that their XML-based formats are much more than XML wrappers for their existing formats.

PGML and VML use different syntaxes for graphic objects, but the feature sets are basically the same. Both specifications allow for:

- Paths
- Images
- Text (there might be some differences here)
- Predefined shape objects (e.g., rectangles, rounded rectangles, ellipses, circles)
- Prototype shape objects

The VML specification included a partial DTD, while the PGML specification included a complete and verified DTD. When a standard is adopted, surely a complete DTD will be provided.

Both specifications include features allowing applications to store higher-level application-specific private data within the graphics file. These features promote a higher-level interchange of graphics between applications. VML goes beyond PGML in this regard by providing additional features which are particularly valuable in exchanging higher-level graphics data, particularly between template-oriented graphic diagramming applications.

PGML allows the creation and editing of shapes via a transformation matrix in a coordinate system. VML accomplishes this differently, through the use of shape types that can then combine semantically to form other shapes that in turn are scalable, etc. VML's coordinate space's shape and group define a CSS2 block level box.

The following two snippets illustrate how PGML and VML are equivalent in how they save/restore a graphics state. Both snippets draw a rectangle rotated at 180 degrees and an oval/ellipse rotated at 90 degrees.

VML logic:

<v:group ...>
  <v:group ...>
    <v:rect ... /> <!-- rectangle rotated 180 degrees -->
  </v:group>
  <v:oval ... /> <!-- oval rotated 90 degrees -->
</v:group>

PGML logic:

<p:group ...>
  <p:group ...>
    <p:rectangle .../> <!-- rectangle rotated 180 degrees -->
  </p:group>
  <p:ellipse ... /> <!-- ellipse rotated 90 degrees -->
</p:group>

(Note: the concat="0 -1 -1 0 0 0" tells PGML to apply a 2x3 transformation matrix to the graphics, which in this case is a 90-degree rotation transformation.)

Applications needing either VML or PGML must identify links to external objects even though they may occur in extensions. The specifications adopted for identifying such links within XML will be used by VML. In addition, both will depend on a namespace mechanism to identify unknown XML tags and to reliably and safely add new XML tags.
PGML/VML Comparison Chart

PGML and VML and DOM

One of the advantages of vector graphics formats such as PGML and VML is that they provide new ways of combining scripting with the Document Object Model (DOM) API or the SAX API to control a Web page's graphical elements. For a small PGML or VML graphic embedded in a larger document, the DOM is ideal. It provides an easy, standard way for a script writer to manipulate the graphic (by rotating it, say, 90 degrees), and it allows a presentation to be tweaked without requiring the use of special software tools.

When dealing with larger files, however, the DOM might not always be the best choice for processing PGML or VML, or for accessing their potentially resource-intensive media types. "Since building a DOM could have potentially terrifying resource requirements, a computer would need gigabytes of RAM just to represent all of the XML markup," explains David Megginson, creator of SAX, member of the XML working group, and author of the recently-published Structuring XML Documents. "For these reasons it is far more likely that XML applications such as PGML will be developing their own specialized APIs and interfaces for processing the potentially demanding needs of their media-based formats," Megginson says proudly. "Just another great advantage of using XML."

Indeed, if PGML created and maintained DOM data structures for every point on every curve, it would use lots of memory. For this reason, PGML and VML processors will need to be implemented with DOM-related memory saving techniques. Sources at Adobe insist that this kind of memory savings (i.e., points on paths) will come very easily.

What conclusions can be drawn at this point? It is highly unlikely that an XML-based vector graphics language is about to replace GIFs on the Web in the near future. Even after a standard is developed, a hybrid approach may still prove useful. Whether PGML and/or VML rendering will utilize a plug-in or an embedded, "built-in" browser component is too early to predict. Nevertheless, one can be sure that one or the other will be required in order for the XML parser to process the markup correctly.

Adobe wants PGML to be viewed as more of a future-oriented submission, attempting to define a great standard for both rendering and printing in next year's browsers. VML is more of a present-oriented submission, attempting to define a great standard built around the graphics capabilities built into this year's releases of Office and IE for the vast market of users who wish to publish Web pages directly from Microsoft Office. Both camps praise the other's format's presentation capabilities, but insist that their own syntax is superior for authoring and editing requirements.

The important thing to remember for both PGML and VML is that they are experiments that companies have contributed as starting points for discussion. By the end of the W3C process, not only will there be a single XML-based vector graphics format, but all the companies will be moving toward it.
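As a purely hypothetical sketch of the DOM scripting idea discussed above (the element and attribute names follow the PGML snippet earlier in this article; no such script appeared in either submission):

// Rotate an embedded PGML group by rewriting its 2x3 transformation
// matrix through the DOM; a 90-degree rotation is (0 1 -1 0 0 0).
var groups = document.getElementsByTagName("p:group");
var group = groups.item(0);
group.setAttribute("concat", "0 1 -1 0 0 0");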
http://www.xml.com/pub/a/98/06/vector/vmlpgml.html
Submitter: Rajan Bhakta
Submission Date: 2014/03/05

Summary

With reference to ISO/IEC WG14 N1569, subclause 7.20.4.1: The macro UINTN_C(value) shall expand to an integer constant expression corresponding to the type uint_leastN_t. 7.20.4 p1 imposes a stricter requirement on the form of the expansion; it must be an integer constant (for which paragraph 2 points to 6.4.4.1).

The type described in 7.20.4 p3 for the result of the expansion has an interesting property; we observe this for uint_least16_t without reference to the UINT16_C macro by using u'\0' in a context where it will be first promoted as part of the usual arithmetic conversions:

#include <assert.h>

#if u'\0' - 1 < 0
// Types: #if (uint_least16_t) - (signed int) < (signed int)
// Due to 6.10.1 p4, near the reference to footnote 167,
// after applying the integer promotions as part of 6.3.1.8 p1
// to the operands of the subtraction, the expression becomes:
// Types: #if (unsigned int) - (signed int) < (signed int)
// Following 6.3.1.8 p1 through to the last point gives:
// Types: #if (unsigned int) - (unsigned int) < (signed int)
// Result: false
# error Expected large unsigned value.
#endif

int main(void) {
    // Types: assert((uint_least16_t) - (signed int) < (signed int))
    // Assuming that signed int can represent all values of uint_least16_t,
    // after applying the integer promotions as part of 6.3.1.8 p1
    // to the operands of the subtraction, the expression becomes:
    // Types: assert((signed int) - (signed int) < (signed int))
    // Result: true
    assert(u'\0' - 1 < 0);
    return 0;
}

The code presented should neither fail to compile nor abort when executed (for example) on a system using two's complement and 8, 16 and 32 bits (respectively) for char, short and int with no padding bits.

Consider the case for N = 8 or 16 on systems with INT_MAX as +2147483647, UCHAR_MAX as 255 and USHRT_MAX as 65535: it is unclear how a macro can be formed such that it expands to an integer constant that has the promoted signed int type in phase 7 of translation and also the promoted unsigned int type in phase 4 of translation without special (non-standard) support from the compiler.

Even if the requirement for an integer constant is relaxed to only require an integer constant expression, the case for N = 8 on systems with INT_MAX as +32767 and UCHAR_MAX as 255 remains a problem without the use of casts (since uint_least16_t, for which we can form a literal, has different promotion behaviour from uint_least8_t).

Implementations seen:

#define UINT8_C(c) c ## U
#define UINT8_C(c) c

DR 209 seemed to try to address the issue of needing special compiler support in order to define the macros for integer constants; however, the problem seems to remain.

Suggested Technical Corrigendum

Remove the UINT{8,16}_C macros from the standard.
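The divergence between the two definitions listed under "Implementations seen" can be demonstrated directly; this runnable sketch is illustrative and not part of the defect report:

#include <stdio.h>

#define UINT8_C_V1(c) c ## U   /* first implementation seen above */
#define UINT8_C_V2(c) c        /* second implementation seen above */

int main(void)
{
    /* The two expansions promote differently, so the same expression
       flips sign depending on which definition is in effect. */
    printf("%d\n", UINT8_C_V1(0) - 1 < 0);  /* 0: unsigned wrap-around */
    printf("%d\n", UINT8_C_V2(0) - 1 < 0);  /* 1: ordinary signed arithmetic */
    return 0;
}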
http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1798.htm
As part of the forthcoming Qt 5.7, we are happy to be releasing a tech preview of the new Qt Wayland Compositor API. In this post, I'll give you an overview of the functionality along with a few examples on how to create your own compositors with it.

Wayland is a light-weight display server protocol, designed to replace the X Window System. It is particularly relevant for embedded and mobile systems. Wayland support in Qt makes it possible to split your UI into different processes, increasing robustness and reliability. The compositor API allows you to create a truly custom UI for the display server. You can precisely control how to display information from the other processes, and also add your own GUI elements.

Qt Wayland has included a compositor API since the beginning, but this API has never been officially released. Now we have rewritten the API, making it more powerful and much easier to use. Here's a snapshot of a demo that we showed at Embedded World: it is a compositor containing a launcher and a tiling window manager, written purely in QML.

We will keep source and binary compatibility for all the 5.7.x patch releases, but since this is a tech preview, we will be adding non-compatible improvements to the API before the final release. The Qt Wayland Compositor API is actively developed in the dev branch of the Qt git repository.

The Qt Wayland Compositor tech preview will be included in the Qt for Device Creation packages. It is not part of the Qt for Application Development binary packages, but when compiling Qt from source, it is built by default, as long as Wayland 1.6 is installed.

What is new?

- It is now possible to write an entire compositor in pure QML.
- Improved API: easier to understand, less code to write, for both the QML and C++ APIs.
- Completely reworked extension support: extensions can be added with just a few lines of QML, and there's a powerful, easy-to-use C++ API for writing your own extensions.
- Multi-screen support.
- XDG-Shell support: accept connections from non-Qt clients.
- And finally, a change that is not visible in the API, but should make our lives easier as developers: we have streamlined the implementation and Qt Wayland now follows the standard Qt PIMPL (Q_DECLARE_PRIVATE) pattern.

Take a look at the API documentation for more details.

Examples

Here is a complete, fully functional (but minimalistic) compositor, written purely in QML:

import QtQuick 2.6
import QtQuick.Window 2.2
import QtWayland.Compositor 1.0

WaylandCompositor {
    id: wlcompositor

    // The output defines the screen.
    WaylandOutput {
        compositor: wlcompositor
        window: Window {
            visible: true
            WaylandMouseTracker {
                anchors.fill: parent
                enableWSCursor: true
                Rectangle {
                    id: surfaceArea
                    color: "#1337af"
                    anchors.fill: parent
                }
            }
        }
    }

    // The chrome defines the window look and behavior.
    // Here we use the built-in ShellSurfaceItem.
    Component {
        id: chromeComponent
        ShellSurfaceItem {
            onSurfaceDestroyed: destroy()
        }
    }

    // Extensions are additions to the core Wayland
    // protocol. We choose to support two different
    // shells (window management protocols). When the
    // client creates a new window, we instantiate a
    // chromeComponent on the output.
    extensions: [
        WlShell {
            onShellSurfaceCreated:
                chromeComponent.createObject(surfaceArea, { "shellSurface": shellSurface } );
        },
        XdgShell {
            onXdgSurfaceCreated:
                chromeComponent.createObject(surfaceArea, { "shellSurface": xdgSurface } );
        }
    ]
}

This is a stripped down version of the pure-qml example from the tech preview.
And it really is a complete compositor: if you have built the tech preview, you can copy the text above, save it to a file, and run it through qmlscene. These are the commands I used to create the scene above:

./bin/qmlscene foo.qml &
./examples/widgets/widgets/wiggly/wiggly -platform wayland &
weston-terminal &
./examples/opengl/qopenglwindow/qopenglwindow -platform wayland &

The Qt Wayland Compositor API can of course also be used for the desktop. The Grefsen compositor started out as a hackathon project here at the Qt Company, and Shawn has continued developing it afterwards:

C++ API

The C++ API is a little bit more verbose. The minimal-cpp example included in the tech preview clocks in at 195 lines, excluding comments and whitespace. That does not get you mouse or keyboard input. The qwindow-compositor example is currently 743 lines, implementing window move/resize, drag and drop, popup support, and mouse cursors. This complexity gives you the opportunity to define completely new interaction models.

We found the time to port everyone's favourite compositor to the new API. This is perhaps not the best introduction to writing a compositor with Qt, but the code is available:

git clone

What remains to be done?

The main parts of the API are finished, but we expect some adjustments based on feedback from the tech preview. There are still some known issues, detailed in QTBUG-48646 and on our Trello board. The main unresolved API question is input handling.

How you can help

Try it out! Read the documentation, run the examples, play around with it, try it in your own projects, and give us feedback on anything that can be improved. You can find us on #qt-lighthouse on Freenode.
http://www.shellsec.com/news/27736.html
Java Source Code: Sort Numbers in Selection Sort

Below is a sample Java source code for sorting numbers using selection sort. Selection sort in Java is much preferred by programmers when it comes to sorting. Here, I will present to you how this selection sort works.

How does this Selection Sort work in Java?

Assume the user entered 6 numbers: 4 5 1 3 2 6. The program will sort them in ascending order. On each pass, the code below compares the element at the current position with every later element and swaps whenever the later element is smaller, so after pass i the smallest remaining value sits at position i. Tracing the code on the input above:

From the entered numbers 4 5 1 3 2 6

Sorted To:

1 5 4 3 2 6
1 2 5 4 3 6
1 2 3 5 4 6
1 2 3 4 5 6 //sorting ends here and this will be output to the screen.

Sorting with this selection sort usually finishes in fewer passes than bubble sort. If you are interested, you can also compare it with other sorting methods like bubble sort.

Below is the Java source code for selection sort.
Java Source Code: How to Sort Numbers using Selection Sort

//java class
public class SelectionSort {

    public void SelectionSort(int[] arr) {
        for (int i = 0; i < arr.length; i++) {
            for (int j = i + 1; j < arr.length; j++) {
                if (arr[i] > arr[j]) {
                    int temp = arr[j];
                    arr[j] = arr[i];
                    arr[i] = temp;
                }
            }
        }
        for (int i = 0; i < arr.length; i++) {
            System.out.print(arr[i] + " ");
        }
    }
}

//main class
import java.util.Scanner;

public class Main {
    public static void main(String[] args) {
        Scanner input = new Scanner(System.in);
        System.out.print("Enter the size of the array: ");
        int n = input.nextInt();
        int[] x = new int[n];
        System.out.print("Enter " + n + " numbers: ");
        for (int i = 0; i < n; i++) {
            x[i] = input.nextInt();
        }
        SelectionSort access = new SelectionSort();
        System.out.print("The Sorted numbers: ");
        access.SelectionSort(x);
    }
}

Sample Output:

Enter the size of the array: 10
Enter 10 numbers: 500 600 250 1000 35 50 10 15 20 1
The Sorted numbers: 1 10 15 20 35 50 250 500 600 1000
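For comparison, the textbook formulation of selection sort remembers the index of the minimum and performs a single swap per pass instead of swapping repeatedly. This variant is not from the original article; it is a standard alternative sketch:

public class SelectionSortMinIndex {

    public static void sort(int[] arr) {
        for (int i = 0; i < arr.length - 1; i++) {
            int min = i;
            for (int j = i + 1; j < arr.length; j++) {
                if (arr[j] < arr[min]) {
                    min = j; // remember the smallest element seen so far
                }
            }
            int temp = arr[min]; // one swap per pass
            arr[min] = arr[i];
            arr[i] = temp;
        }
    }

    public static void main(String[] args) {
        int[] data = {4, 5, 1, 3, 2, 6};
        sort(data);
        for (int v : data) {
            System.out.print(v + " "); // prints: 1 2 3 4 5 6
        }
    }
}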
https://hubpages.com/technology/Java-Source-Code-How-to-Sort-Numbers-using-Selection-Sort-in-Recursion
diff -auNrp tmp-from/include/mtd/ubi-user.h tmp-to/include/mtd/ubi-user.h
--- tmp-from/include/mtd/ubi-user.h	1970-01-01 02:00:00.000000000 +0200
+++ tmp-to/include/mtd/ubi-user.h	2007-03-23 18:20:01.000000000 +0200
@@ -0,0 +1,161 @@
+/*
+ * Author: Artem Bityutskiy (Битюцкий Артём)
+ */
+
+#ifndef __UBI_USER_H__
+#define __UBI_USER_H__
+
+/*
+ * UBI volume creation
+ * ~~~~~~~~~~~~~~~~~~~
+ *
+ * UBI volumes are created via the %UBI_IOCMKVOL IOCTL command of UBI character
+ * device. A &struct ubi_mkvol_req object has to be properly filled and a
+ * pointer to it has to be passed to the IOCTL.
+ *
+ * UBI volume deletion
+ * ~~~~~~~~~~~~~~~~~~~
+ *
+ * To delete a volume, the %UBI_IOCRMVOL IOCTL command of the UBI character
+ * device should be used. A pointer to the 32-bit volume ID hast to be passed
+ * to the IOCTL.
+ *
+ * UBI volume re-size
+ * ~~~~~~~~~~~~~~~~~~
+ *
+ * To re-size a volume, the %UBI_IOCRSVOL IOCTL command of the UBI character
+ * device should be used. A &struct ubi_rsvol_req object has to be properly
+ * filled and a pointer to it has to be passed to the IOCTL.
+ *
+ * UBI volume update
+ * ~~~~~~~~~~~~~~~~~
+ *
+ * Volume update should be done via the %UBI_IOCVOLUP IOCTL command of the
+ * corresponding UBI volume character device. A pointer to a 64-bit update
+ * size should be passed to the IOCTL. After then, UBI expects user to write
+ * this number of bytes to the volume character device. The update is finished
+ * when the claimed number of bytes is passed. So, the volume update sequence
+ * is something like:
+ *
+ * fd = open("/dev/my_volume");
+ * ioctl(fd, UBI_IOCVOLUP, &image_size);
+ * write(fd, buf, image_size);
+ * close(fd);
+ */
+
+/*
+ * When a new volume is created, users may either specify the volume number they
+ * want to create or to let UBI automatically assign a volume number using this
+ * constant.
+ */
+#define UBI_VOL_NUM_AUTO (-1)
+
+/* Maximum volume name length */
+#define UBI_MAX_VOLUME_NAME 127
+
+/* IOCTL commands of UBI character devices */
+
+#define UBI_IOC_MAGIC 'o'
+
+/* Create an UBI volume */
+#define UBI_IOCMKVOL _IOW(UBI_IOC_MAGIC, 0, struct ubi_mkvol_req)
+/* Remove an UBI volume */
+#define UBI_IOCRMVOL _IOW(UBI_IOC_MAGIC, 1, int32_t)
+/* Re-size an UBI volume */
+#define UBI_IOCRSVOL _IOW(UBI_IOC_MAGIC, 2, struct ubi_rsvol_req)
+
+/* IOCTL commands of UBI volume character devices */
+
+#define UBI_VOL_IOC_MAGIC 'O'
+
+/* Start UBI volume update */
+#define UBI_IOCVOLUP _IOW(UBI_VOL_IOC_MAGIC, 0, int64_t)
+/* An eraseblock erasure command, used for debugging, disabled by default */
+#define UBI_IOCEBER _IOW(UBI_VOL_IOC_MAGIC, 1, int32_t)
+
+/*
+ * UBI volume type constants.
+ *
+ * @UBI_DYNAMIC_VOLUME: dynamic volume
+ * @UBI_STATIC_VOLUME: static volume
+ */
+enum {
+	UBI_DYNAMIC_VOLUME = 3,
+	UBI_STATIC_VOLUME = 4
+};
+
+/**
+ * struct ubi_mkvol_req - volume description data structure used in
+ * volume creation requests.
+ * @vol_id: volume number
+ * @alignment: volume alignment
+ * @bytes: volume size in bytes
+ * @vol_type: volume type (%UBI_DYNAMIC_VOLUME or %UBI_STATIC_VOLUME)
+ * @padding1: reserved for future, not used
+ * @name_len: volume name length
+ * @padding2: reserved for future, not used
+ * @name: volume name
+ *
+ * This structure is used by userspace programs when creating new volumes. The
+ * @used_bytes field is only necessary when creating static volumes.
+ *
+ * The @alignment field specifies the required alignment of the volume logical
+ * eraseblock. This means, that the size of logical eraseblocks will be aligned
+ * to this number, i.e.,
+ *	(UBI device logical eraseblock size) mod (@alignment) = 0.
+ *
+ * To put it differently, the logical eraseblock of this volume may be slightly
+ * shortened in order to make it properly aligned. The alignment has to be
+ * multiple of the flash minimal input/output unit, or %1 to utilize the entire
+ * available space of logical eraseblocks.
+ *
+ * The @alignment field may be useful, for example, when one wants to maintain
+ * a block device on top of an UBI volume. In this case, it is desirable to fit
+ * an integer number of blocks in logical eraseblocks of this UBI volume. With
+ * alignment it is possible to update this volume using plane UBI volume image
+ * BLOBs, without caring about how to properly align them.
+ */
+struct ubi_mkvol_req {
+	int32_t vol_id;
+	int32_t alignment;
+	int64_t bytes;
+	int8_t vol_type;
+	int8_t padding1;
+	int16_t name_len;
+	int8_t padding2[4];
+	char name[UBI_MAX_VOLUME_NAME+1];
+} __attribute__ ((packed));
+
+/**
+ * struct ubi_rsvol_req - a data structure used in volume re-size requests.
+ * @vol_id: ID of the volume to re-size
+ * @bytes: new size of the volume in bytes
+ *
+ * Re-sizing is possible for both dynamic and static volumes. But while dynamic
+ * volumes may be re-sized arbitrarily, static volumes cannot be made to be
+ * smaller then the number of bytes they bear. To arbitrarily shrink a static
+ * volume, it must be wiped out first (by means of volume update operation with
+ * zero number of bytes).
+ */
+struct ubi_rsvol_req {
+	int64_t bytes;
+	int32_t vol_id;
+} __attribute__ ((packed));
+
+#endif /* __UBI_USER_H__ */
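As a hedged illustration of how userspace would drive the interface declared in this patch, here is a sketch of creating a dynamic volume; the /dev/ubi0 device node and the omission of error handling are assumptions for the example, not part of the patch:

/* Sketch: create a 1 MiB dynamic UBI volume via UBI_IOCMKVOL. */
#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <mtd/ubi-user.h>

int main(void)
{
	struct ubi_mkvol_req req;
	int fd = open("/dev/ubi0", O_RDWR);   /* UBI character device */

	memset(&req, 0, sizeof(req));
	req.vol_id = UBI_VOL_NUM_AUTO;        /* let UBI pick the volume number */
	req.alignment = 1;                    /* use the whole logical eraseblock */
	req.bytes = 1024 * 1024;              /* volume size in bytes */
	req.vol_type = UBI_DYNAMIC_VOLUME;
	strcpy(req.name, "my_volume");
	req.name_len = strlen(req.name);

	ioctl(fd, UBI_IOCMKVOL, &req);        /* create the volume */
	close(fd);
	return 0;
}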
http://lkml.org/lkml/2007/3/23/198
ICountDownLatch is a backed-up distributed alternative to java.util.concurrent.CountDownLatch. More... #include <ICountDownLatch.h>

ICountDownLatch is a cluster-wide synchronization aid that allows one or more threads to wait until a set of operations being performed in other threads completes. There are a few differences compared to the java.util.concurrent.CountDownLatch.

await()

Causes the current thread to wait until the latch has counted down to zero, an exception is thrown, or the specified waiting time elapses.

If the current count is zero then this method returns immediately with the value true. If the current count is greater than zero then the current thread becomes disabled for thread scheduling purposes and lies dormant until one of five things happens:

If the count reaches zero then the method returns with the value true.

If the countdown owner becomes disconnected while waiting then MemberLeftException will be thrown.

If the current thread has its interrupted status set on entry to this method, or is interrupted while waiting, then InterruptedException is thrown and the current thread's interrupted status is cleared.

If the specified waiting time elapses then the value false is returned. If the time is less than or equal to zero, the method will not wait at all.

countDown()

Decrements the count of the latch, releasing all waiting threads if the count reaches zero.

If the current count is greater than zero then it is decremented. If the new count is zero, all waiting threads are re-enabled for thread scheduling purposes. If the current count equals zero then nothing happens.

getCount()

Returns the current count.

trySetCount()

Sets the count to the given value if the current count is zero. The calling cluster member becomes the owner of the countdown and is responsible for staying connected to the cluster until the count reaches zero. If the owner becomes disconnected before the count reaches zero:

If count is not zero then this method does nothing and returns false.
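A hedged sketch of typical usage with the C++ client, assuming the 3.6-era API surface (HazelcastClient::getICountDownLatch() and the members documented above); exact signatures may differ:

#include <hazelcast/client/HazelcastClient.h>
#include <hazelcast/client/ICountDownLatch.h>

int main() {
    hazelcast::client::ClientConfig config;
    hazelcast::client::HazelcastClient client(config);

    hazelcast::client::ICountDownLatch latch =
            client.getICountDownLatch("startSignal");

    latch.trySetCount(3);       // succeeds only while the current count is zero

    // Elsewhere in the cluster, members call latch.countDown() as they finish.

    bool reachedZero = latch.await(10 * 1000);  // wait up to ten seconds
    return reachedZero ? 0 : 1;
}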
https://docs.hazelcast.org/docs/clients/cpp/3.6.2/classhazelcast_1_1client_1_1_i_count_down_latch.html
Bubblewrap

Bubblewrap is a lightweight setuid sandbox application developed from Flatpak with a small installation footprint and minimal resource requirements. While the package is named bubblewrap, the actual command-line interface is bwrap(1). Bubblewrap is expected to anchor the sandbox mechanism of the Tor Browser (Linux) in the future. Notable features include support for cgroup/IPC/mount/network/PID/user/UTS namespaces and seccomp filtering. Note that bubblewrap drops all capabilities within a sandbox and that child tasks cannot gain greater privileges than their parent. Notable feature exclusions include the lack of explicit support for blacklisting/whitelisting file paths.

Installation

Install bubblewrap or bubblewrap-git (AUR).

Configuration

Bubblewrap can be called directly from the command-line and/or within shell scripts as part of a complex wrapper. Unlike applications such as Firejail, which automatically set /var and /etc to read-only within the sandbox, Bubblewrap makes no such operating assumptions. It is up to the user to determine which configuration options to pass in accordance with the application being sandboxed. Bubblewrap does not automatically create user namespaces when running with setuid privileges and can accommodate typical environment variables including $HOME and $USER.

Usage examples

No-op

A no-op bubblewrap invocation is as follows:

$ bwrap --dev-bind / / bash

This will spawn a bash process which should behave exactly as outside a sandbox. If a sandboxed program misbehaves, you may want to start from the above no-op invocation, and work your way towards a more secure configuration step-by-step.

Bash

Create a simple Bash sandbox:

- Determine available kernel namespaces

$ ls /proc/self/ns
cgroup ipc mnt net pid user uts

Here, user indicates that the kernel has exposed support for user namespaces with CONFIG_USER_NS=y.

- Bind as read-only the entire host / directory to / in the sandbox
- Create a new user namespace and set the user ID to 256 and the group ID to 512

$ bwrap --ro-bind / / --unshare-user --uid 256 --gid 512 bash
bash-4.4$ id
uid=256 gid=512 groups=512,65534(nobody)
bash-4.4$ ls -l /usr/bin/bash
-rwxr-xr-x 1 nobody nobody 811752 2017-01-01 04:20 /usr/bin/bash

dhcpcd

Create a simple dhcpcd sandbox:

- Determine available kernel namespaces

$ ls /proc/self/ns
cgroup ipc mnt net pid uts

The absence of user indicates that the kernel has been built with CONFIG_USER_NS=n or that user namespaces are restricted.
- Bind as read-write the entire host / directory to / in the sandbox
- Mount a new devtmpfs filesystem to /dev in the sandbox
- Create new IPC and control group namespaces
- Create a new UTS namespace and set dhcpcd as the hostname

# /usr/bin/bwrap --bind / / --dev /dev --unshare-ipc --unshare-cgroup --unshare-uts --hostname dhcpcd /usr/bin/dhcpcd -q -b

Unbound

Create a more granular and complex Unbound sandbox:

- Bind as read-only the system /usr directory to /usr in the sandbox
- Create a symbolic link from the system /usr/lib directory to /lib64 in the sandbox
- Bind as read-only the system /etc directory to /etc in the sandbox
- Create empty /var and /run directories within the sandbox
- Mount a new devtmpfs filesystem to /dev in the sandbox
- Create new IPC and PID and control group namespaces
- Create a new UTS namespace and set unbound as the hostname

# /usr/bin/bwrap --ro-bind /usr /usr --symlink usr/lib /lib64 --ro-bind /etc /etc --dir /var --dir /run --dev /dev --unshare-ipc --unshare-pid --unshare-cgroup --unshare-uts --hostname unbound /usr/bin/unbound -d

unbound.service

Desktop

Leverage Bubblewrap within desktop entries:

- Bind as read-write the entire host / directory to / in the sandbox
- Re-bind as read-only the /var and /etc directories in the sandbox
- Mount a new devtmpfs filesystem to /dev in the sandbox
- Create a tmpfs filesystem over the sandboxed /run directory
- Disable network access by creating a new network namespace

[Desktop Entry]
Name=nano Editor
Exec=bwrap --bind / / --dev /dev --tmpfs /run --unshare-net st -e nano -o . %f
Type=Application
MimeType=text/plain;

--dev /dev is required to write to /dev/pty.

- Example MuPDF desktop entry incorporating a mupdf.sh shell wrapper:

[Desktop Entry]
Name=MuPDF
Exec=mupdf.sh %f
Icon=application-pdf.svg
Type=Application
MimeType=application/pdf;application/x-pdf;

mupdf.sh is located within your executable PATH, e.g.
PATH=$PATH:$HOME/bwrap

MuPDF

The power and flexibility of bwrap is best revealed when used to create an environment within a shell wrapper:

- Bind as read-only the host /usr/bin directory to /usr/bin in the sandbox
- Bind as read-only the host /usr/lib directory to /usr/lib in the sandbox
- Create a symbolic link from the system /usr/lib directory to /lib64 in the sandbox
- Create a tmpfs filesystem overlaying /usr/lib/gcc in the sandbox
- This effectively blacklists the contents of /usr/lib/gcc from appearing in the sandbox
- Create a new tmpfs filesystem as the $HOME directory in the sandbox
- Bind as read-only an .Xauthority file and Documents directory into the sandbox
- This effectively whitelists the .Xauthority file and Documents directory with recursion
- Create a new tmpfs filesystem as the /tmp directory in the sandbox
- Whitelist the X11 socket by binding it into the sandbox as read-only
- Clone and create private containers for all namespaces supported by the running kernel
- If the kernel does not support non-privileged user namespaces, skip its creation and continue
- Do not place network components into a private namespace
- This allows for network access to follow URI hyperlinks

#!/bin/sh
#~/bwrap/mupdf.sh
(exec bwrap \
 --ro-bind /usr/bin /usr/bin \
 --ro-bind /usr/lib /usr/lib \
 --symlink usr/lib /lib64 \
 --tmpfs /usr/lib/gcc \
 --tmpfs $HOME \
 --ro-bind $HOME/.Xauthority $HOME/.Xauthority \
 --ro-bind $HOME/Documents $HOME/Documents \
 --tmpfs /tmp \
 --ro-bind /tmp/.X11-unix/X0 /tmp/.X11-unix/X0 \
 --unshare-all \
 --share-net \
 /usr/bin/mupdf "$@")

$ bwrap \
 --ro-bind /usr/bin /usr/bin \
 --ro-bind /usr/lib /usr/lib \
 --symlink usr/lib /lib64 \
 --tmpfs /usr/lib/gcc \
 --tmpfs $HOME \
 --ro-bind $HOME/.Xauthority $HOME/.Xauthority \
 --ro-bind $HOME/Desktop $HOME/Desktop \
 --tmpfs /tmp \
 --ro-bind /tmp/.X11-unix/X0 /tmp/.X11-unix/X0 \
 --unshare-all \
 --share-net \
 /usr/bin/sh
bash-4.4$ ls -AF
.Xauthority Documents/

Perhaps the most important rule to consider when building a bubblewrapped filesystem is that commands are executed in the order they appear. From the MuPDF example above:

- A tmpfs filesystem is created followed by the bind mounting of an .Xauthority file and a Documents directory:

--tmpfs $HOME \
--ro-bind $HOME/.Xauthority $HOME/.Xauthority \
--ro-bind $HOME/Documents $HOME/Documents \

bash-4.4$ ls -a
. .. .Xauthority Desktop

- A tmpfs filesystem is created after the bind mounting of .Xauthority and overlays it so that only the Documents directory is visible within the sandbox:

--ro-bind $HOME/.Xauthority $HOME/.Xauthority \
--tmpfs $HOME \
--ro-bind $HOME/Desktop $HOME/Desktop \

bash-4.4$ ls -a
. ..
Desktop

p7zip

Applications which have not yet been patched against known vulnerabilities constitute prime candidates for bubblewrapping:

- Bind as read-only the host /usr/bin/7za executable path to the sandbox
- Create a symbolic link from the system /usr/lib directory to /lib64 in the sandbox
- Blacklist the sandboxed contents of /usr/lib/modules and /usr/lib/systemd with tmpfs overlays
- Mount a new devtmpfs filesystem to /dev in the sandbox
- Bind as read-write the host /sandbox directory to the /sandbox directory in the sandbox
- 7za will only run in the host /sandbox directory and/or its subdirectories when called from the shell wrapper
- Create new cgroup/IPC/network/PID/UTS namespaces for the application and its processes
- If the kernel does not support non-privileged user namespaces, skip its creation and continue
- Creation of a new network namespace prevents the sandbox from obtaining network access
- Add a custom or an arbitrary hostname to the sandbox such as p7zip
- Unset the XAUTHORITY environment variable to hide the location of the X11 connection cookie
- 7za does not need to connect to an X11 display server to function properly
- Start a new terminal session to prevent keyboard input from escaping the sandbox

#!/bin/sh
#~/bwrap/pz7ip.sh
(exec bwrap \
 --ro-bind /usr/bin/7za /usr/bin/7za \
 --symlink usr/lib /lib64 \
 --tmpfs /usr/lib/modules \
 --tmpfs /usr/lib/systemd \
 --dev /dev \
 --bind /sandbox /sandbox \
 --unshare-all \
 --hostname p7zip \
 --unsetenv XAUTHORITY \
 --new-session \
 /usr/bin/7za "$@")

bwrap \
 --ro-bind /usr/bin/7za /usr/bin/7za \
 --ro-bind /usr/bin/ls /usr/bin/ls \
 --ro-bind /usr/bin/sh /usr/bin/sh \
 --symlink usr/lib /lib64 \
 --tmpfs /usr/lib/modules \
 --tmpfs /usr/lib/systemd \
 --dev /dev \
 --bind /sandbox /sandbox \
 --unshare-all \
 --hostname p7zip \
 --unsetenv XAUTHORITY \
 --new-session \
 /usr/bin/sh
bash: no job control in this shell
bash-4.4$ ls -AF
dev/ lib64@ usr/
bash-4.4$ ls -l /usr/lib/modules
total 0
bash-4.4$ ls -l /usr/lib/systemd
total 0
bash-4.4$ ls -AF /dev
console full null ptmx@ pts/ random shm/ stderr@ stdin@ stdout@ tty urandom zero
bash-4.4$ ls -A /usr/bin
7za ls sh

Filesystem isolation

To further hide the contents of the file system (such as those in /var, /usr/bin and /usr/lib) and to sandbox even the installation of software, pacman can be made to install Arch packages into isolated filesystem trees. In order to use pacman for installing software into the filesystem trees, you will need to install fakeroot and fakechroot.

Suppose you want to install the xterm package with pacman into an isolated filesystem tree. You should prepare your tree like this:

$ MYPACKAGE=xterm
$ mkdir -p ~/sandboxes/${MYPACKAGE}/files/var/lib/pacman
$ mkdir -p ~/sandboxes/${MYPACKAGE}/files/etc
$ cp /etc/pacman.conf ~/sandboxes/${MYPACKAGE}/files/etc/pacman.conf

You may want to edit ~/sandboxes/${MYPACKAGE}/files/etc/pacman.conf and adjust the pacman configuration used:

- Remove any undesired custom repositories and IgnorePkg, IgnoreGroup, NoUpgrade and NoExtract settings that are needed only for the host system.
- You may need to remove the CheckSpace option so pacman will not complain about errors finding the root filesystem for checking disk space.
Then install the base group along with the needed fakeroot into the isolated filesystem tree:

$ fakechroot fakeroot pacman -Syu \
 --root ~/sandboxes/${MYPACKAGE}/files \
 --dbpath ~/sandboxes/${MYPACKAGE}/files/var/lib/pacman \
 --config ~/sandboxes/${MYPACKAGE}/files/etc/pacman.conf \
 base fakeroot

Since you will be repeatedly calling bubblewrap with the same options, make an alias:

$ alias bw-install='bwrap \
 --bind ~/sandboxes/${MYPACKAGE}/files/ / \
 --ro-bind /etc/resolv.conf /etc/resolv.conf \
 --tmpfs /tmp \
 --proc /proc \
 --dev /dev \
 --chdir / '

You will need to set up the locales:

$ nano -w ~/sandboxes/${MYPACKAGE}/files/etc/locale.gen
$ bw-install locale-gen

Then set up pacman's keyring:

$ bw-install fakeroot pacman-key --init
$ bw-install fakeroot pacman-key --populate archlinux

Now you can install the desired xterm package:

$ bw-install fakeroot pacman -S ${MYPACKAGE}

If the pacman command fails here, try running the command for populating the keyring again.

Congratulations. You now have an isolated filesystem tree containing xterm. You can use bw-install again to upgrade your filesystem tree.

You can now run your software with bubblewrap; command should be xterm in this case:

$ bwrap \
 --ro-bind ~/sandboxes/${MYPACKAGE}/files/ / \
 --ro-bind /etc/resolv.conf /etc/resolv.conf \
 --tmpfs /tmp \
 --proc /proc \
 --dev /dev \
 --chdir / \
 command

Note that some files can be shared between packages. You can hardlink to all files of an existing parent filesystem tree to reuse them in a new tree:

$ cp -al ~/sandboxes/${MYPARENTPACKAGE} ~/sandboxes/${MYPACKAGE}

Then proceed with the installation as usual by calling pacman from bw-install fakechroot fakeroot pacman ….

Troubleshooting

Using X11

Bind mounting the host X11 socket to an alternative X11 socket may not work:

--bind /tmp/.X11-unix/X0 /tmp/.X11-unix/X8 --setenv DISPLAY :8

A workaround is to bind mount the host X11 socket to the same socket within the sandbox:

--bind /tmp/.X11-unix/X0 /tmp/.X11-unix/X0 --setenv DISPLAY :0

Sandboxing X11

While bwrap provides some very nice isolation for sandboxed applications, there is an easy escape as long as access to the X11 socket is available. X11 does not include isolation between applications and is completely insecure. The only solution to this is to switch to a Wayland compositor with no access to the X server from the sandbox. There are however some workarounds that use xpra or Xephyr to run in a new X11 environment. This would work with bwrap as well.

To test X11 isolation, run 'xinput test <id>' where <id> is your keyboard id, which you can find with 'xinput list'. When run without additional X11 isolation, this will show that any application with X11 access can capture keyboard input of any other application, which is basically what a keylogger would do.

Opening URLs from wrapped applications

When a wrapped IRC or email client attempts to open a URL, it will usually attempt to launch a browser process, which will run within the same sandbox as the wrapped application. With a well-wrapped application, this will likely not work. The approach used by Firejail is to give wrapped applications all the privileges of the browser as well; however, this implies a good amount of permission creep. A better solution to this problem is to communicate opened URLs to outside the sandbox. This can be done using snapd-xdg-open as follows:

- Install snapd-xdg-open-git (AUR)
- On your bwrap command line, add:

$ bwrap ... \
 --ro-bind /run/user/$UID/bus /run/user/$UID/bus \
 --ro-bind /usr/lib/snapd-xdg-open/xdg-open /usr/bin/xdg-open \
 --ro-bind /usr/lib/snapd-xdg-open/xdg-open /usr/bin/chromium \
 ...

The /usr/bin/chromium bind is only necessary for programs not using XDG conventions, such as Mozilla Thunderbird.

New Session

There is a security issue with TIOCSTI (CVE-2017-5226), which allows sandbox escape. To prevent this, bubblewrap has introduced the new option '--new-session', which calls setsid(). However, this causes some behavioural issues that are hard to work with in some cases. For instance, it makes shell job control not work for the bwrap command. It is recommended to use this if possible, but if not, the developers recommend that the issue is neutralized in some other way, for instance using seccomp, which is what flatpak does.
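As a sketch of the seccomp-based neutralization mentioned above (this mirrors the approach flatpak is described as taking, but it is an illustration, not flatpak's actual code; build with -lseccomp):

/* Deny the TIOCSTI ioctl with a libseccomp filter, blocking the
 * terminal-input-injection escape while allowing everything else. */
#include <errno.h>
#include <seccomp.h>
#include <sys/ioctl.h>

int install_tiocsti_filter(void)
{
    scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_ALLOW); /* default: allow */
    if (!ctx)
        return -1;
    /* Return EPERM for any ioctl whose request argument equals TIOCSTI. */
    seccomp_rule_add(ctx, SCMP_ACT_ERRNO(EPERM), SCMP_SYS(ioctl), 1,
                     SCMP_A1(SCMP_CMP_EQ, (scmp_datum_t)TIOCSTI));
    int rc = seccomp_load(ctx);
    seccomp_release(ctx);
    return rc;
}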
https://wiki.archlinux.org/index.php/Bubblewrap
Our Kanban application is almost usable now. It looks alright and there's basic functionality in place. In this chapter, we will integrate drag and drop functionality into it as we set up React DnD. After this chapter, you should be able to sort notes within a lane and drag them from one lane to another. Although this sounds simple, there is quite a bit of work to do, as we need to annotate our components the right way and develop the logic needed.

As the first step, we need to connect React DnD with our project. We are going to use the HTML5 Drag and Drop based back-end. There are specific back-ends for testing and touch. In order to set it up, we need to use the DragDropContext decorator and provide the HTML5 back-end to it. To avoid unnecessary wrapping, I'll use Redux compose to keep the code neater and more readable:

app/components/App.jsx

import React from 'react';
import uuid from 'uuid';
import {compose} from 'redux';
import {DragDropContext} from 'react-dnd';
import HTML5Backend from 'react-dnd-html5-backend';
import connect from '../libs/connect';
import Lanes from './Lanes';
import LaneActions from '../actions/LaneActions';

const App = ({LaneActions, lanes}) => {
  const addLane = () => {
    LaneActions.create({
      id: uuid.v4(),
      name: 'New lane'
    });
  };

  return (
    <div>
      <button className="add-lane" onClick={addLane}>+</button>
      <Lanes lanes={lanes} />
    </div>
  );
};

export default compose(
  DragDropContext(HTML5Backend),
  connect(
    ({lanes}) => ({lanes}),
    {LaneActions}
  )
)(App)

After this change, the application should look exactly the same as before. We are ready to add some sweet functionality to it now.

Allowing notes to be dragged is a good first step. Before that, we need to set up a constant so that React DnD can tell different kinds of draggables apart. Set up a file for tracking Note as follows:

app/constants/itemTypes.js

export default {
  NOTE: 'note'
};

This definition can be expanded later as we add new types, such as LANE, to the system.

Next, we need to tell our Note that it's possible to drag it. This can be achieved using the DragSource annotation. Replace Note with the following implementation:

app/components/Note.jsx

import React from 'react';
import {DragSource} from 'react-dnd';
import ItemTypes from '../constants/itemTypes';

const Note = ({
  connectDragSource, children, ...props
}) => {
  return connectDragSource(
    <div {...props}>
      {children}
    </div>
  );
};

const noteSource = {
  beginDrag(props) {
    console.log('begin dragging note', props);

    return {};
  }
};

export default DragSource(ItemTypes.NOTE, noteSource, connect => ({
  connectDragSource: connect.dragSource()
}))(Note)

If you try to drag a Note now, you should see something like this at the browser console:

begin dragging note Object {className: "note", children: Array[2]}

Just being able to drag notes isn't enough. We need to annotate them so that they can accept dropping. Eventually this will allow us to swap them, as we can trigger logic when we are trying to drop a note on top of another.

In case we wanted to implement dragging based on a handle, we could apply connectDragSource only to a specific part of a Note.

Note that React DnD doesn't support hot loading perfectly. You may need to refresh the browser to see the log messages you expect!

Annotating notes so that they can notice that another note is being hovered on top of them is a similar process.
In this case we'll have to use a DropTarget annotation. Update Note as follows:

app/components/Note.jsx

import React from 'react';
import {compose} from 'redux';
import {DragSource, DropTarget} from 'react-dnd';
import ItemTypes from '../constants/itemTypes';

const Note = ({
  connectDragSource, connectDropTarget, children, ...props
}) => {
  return compose(connectDragSource, connectDropTarget)(
    <div {...props}>
      {children}
    </div>
  );
};

const noteSource = {
  beginDrag(props) {
    console.log('begin dragging note', props);
    return {};
  }
};

const noteTarget = {
  hover(targetProps, monitor) {
    const sourceProps = monitor.getItem();
    console.log('dragging note', sourceProps, targetProps);
  }
};

export default compose(
  DragSource(ItemTypes.NOTE, noteSource, connect => ({
    connectDragSource: connect.dragSource()
  })),
  DropTarget(ItemTypes.NOTE, noteTarget, connect => ({
    connectDropTarget: connect.dropTarget()
  }))
)(Note)

If you try hovering a dragged note on top of another now, you should see messages like this at the console:

dragging note Object {} Object {className: "note", children: Array[2]}

Both decorators give us access to the Note props. In this case, we are using monitor.getItem() to access them at noteTarget. This is the key to making this work properly.

Developing an onMove API for Notes

Now that we can move notes around, we can start to define logic. The following steps are needed:

- Capture the Note id on beginDrag.
- Capture the target Note id on hover.
- Trigger an onMove callback on hover so that we can deal with the logic elsewhere. LaneStore would be the ideal place for that.

Based on the idea above we can see we should pass the id to a Note through a prop. We also need to set up an onMove callback, define LaneActions.move, and add a LaneStore.move stub.

Accepting id and onMove at Note

We can accept id and onMove props at Note like below. There is an extra check at noteTarget, as we don't need to trigger hover in case we are hovering on top of the Note itself:

app/components/Note.jsx

...

const Note = ({
  connectDragSource, connectDropTarget, onMove, id, children, ...props
}) => {
  return compose(connectDragSource, connectDropTarget)(
    <div {...props}>
      {children}
    </div>
  );
};

const noteSource = {
  beginDrag(props) {
    return {
      id: props.id
    };
  }
};

const noteTarget = {
  hover(targetProps, monitor) {
    const targetId = targetProps.id;
    const sourceProps = monitor.getItem();
    const sourceId = sourceProps.id;

    if(sourceId !== targetId) {
      targetProps.onMove({sourceId, targetId});
    }
  }
};

...

Having these props isn't useful if we don't pass anything to them at Notes. That's our next step.
Passing id and onMove from Notes

Passing a note id and onMove is simple enough:

app/components/Notes.jsx

import React from 'react';
import Note from './Note';
import Editable from './Editable';

export default ({
  notes,
  onNoteClick=() => {}, onEdit=() => {}, onDelete=() => {}
}) => (
  <ul className="notes">{notes.map(({id, editing, task}) =>
    <li key={id}>
      <Note className="note" id={id}
        onClick={onNoteClick.bind(null, id)}
        onMove={({sourceId, targetId}) =>
          console.log('moving from', sourceId, 'to', targetId)}>
        <Editable
          className="editable"
          editing={editing}
          value={task}
          onEdit={onEdit.bind(null, id)} />
        <button
          className="delete"
          onClick={onDelete.bind(null, id)}>x</button>
      </Note>
    </li>
  )}</ul>
)

If you hover a note on top of another, you should see console messages like this:

moving from 3310916b-5b59-40e6-8a98-370f9c194e16 to 939fb627-1d56-4b57-89ea-04207dbfb405

The logic of drag and drop goes as follows. Suppose we have a lane containing notes A, B, C. In case we move A below C, we should end up with B, C, A. In case we have another list, say D, E, F, and move A to the beginning of it, we should end up with B, C and A, D, E, F.

In our case, we'll get some extra complexity due to lane to lane dragging. When we move a Note, we know its original position and the intended target position. Lane knows which Notes belong to it by id. We are going to need some way to tell LaneStore that it should perform the logic over the given notes. A good starting point is to define LaneActions.move:

app/actions/LaneActions.js

import alt from '../libs/alt';

export default alt.generateActions(
  'create', 'update', 'delete',
  'attachToLane', 'detachFromLane',
  'move'
);

We should connect this action with the onMove hook we just defined. Replace the console.log at Notes with the action (remember to import LaneActions at the top of Notes.jsx):

app/components/Notes.jsx

...
      <Note className="note" id={id}
        onClick={onNoteClick.bind(null, id)}
        onMove={LaneActions.move}>
        <Editable
          className="editable"
          editing={editing}
          value={task}
          onEdit={onEdit.bind(null, id)} />
        <button
          className="delete"
          onClick={onDelete.bind(null, id)}>x</button>
      </Note>
...

It could be a good idea to refactor onMove as a prop to make the system more flexible. In our implementation the Notes component is coupled with LaneActions. This isn't particularly nice if you want to use it in some other context.

We should also define a stub at LaneStore to see that we wired it up correctly:

app/stores/LaneStore.js

import LaneActions from '../actions/LaneActions';

export default class LaneStore {
  ...
  detachFromLane({laneId, noteId}) {
    ...
  }
  move({sourceId, targetId}) {
    console.log(`source: ${sourceId}, target: ${targetId}`);
  }
}

You should see the same log messages as earlier.

Next, we'll need to add some logic to make this work. We can use the logic outlined above here. We have two cases to worry about: moving within a lane itself and moving from one lane to another.

Moving within a lane itself is complicated. When you are operating based on ids and perform operations one at a time, you'll need to take possible index alterations into account. As a result, I'm using the update immutability helper from React, as that solves the problem in one pass.

It is possible to solve the lane to lane case using splice. First, we splice out the source note, and then we splice it to the target lane. Again, update could work here, but I didn't see much point in that given splice is nice and simple.
The code below illustrates a mutation based solution:

app/stores/LaneStore.js

import update from 'react-addons-update';
import LaneActions from '../actions/LaneActions';

export default class LaneStore {
  ...
  move({sourceId, targetId}) {
    const lanes = this.lanes;
    const sourceLane = lanes.filter(lane => lane.notes.includes(sourceId))[0];
    const targetLane = lanes.filter(lane => lane.notes.includes(targetId))[0];
    const sourceNoteIndex = sourceLane.notes.indexOf(sourceId);
    const targetNoteIndex = targetLane.notes.indexOf(targetId);

    if(sourceLane === targetLane) {
      // move at once to avoid complications
      sourceLane.notes = update(sourceLane.notes, {
        $splice: [
          [sourceNoteIndex, 1],
          [targetNoteIndex, 0, sourceId]
        ]
      });
    } else {
      // get rid of the source
      sourceLane.notes.splice(sourceNoteIndex, 1);

      // and move it to target
      targetLane.notes.splice(targetNoteIndex, 0, sourceId);
    }

    this.setState({lanes});
  }
}

If you try out the application now, you can actually drag notes around and it should behave as you expect. Dragging to empty lanes doesn't work, though, and the presentation could be better.

It would be nicer if we indicated the dragged note's location more clearly. We can do this by hiding the dragged note from the list. React DnD provides us the hooks we need for this purpose.

Indicating Where to Move

React DnD provides a feature known as state monitors. Through it we can use monitor.isDragging() and monitor.isOver() to detect which Note we are currently dragging. It can be set up as follows:

app/components/Note.jsx

import React from 'react';
import {compose} from 'redux';
import {DragSource, DropTarget} from 'react-dnd';
import ItemTypes from '../constants/itemTypes';

const Note = ({
  connectDragSource, connectDropTarget, isDragging, isOver,
  onMove, id, children, ...props
}) => {
  return compose(connectDragSource, connectDropTarget)(
    <div style={{
      opacity: isDragging || isOver ? 0 : 1
    }} {...props}>{children}</div>
  );
};

...

export default compose(
  DragSource(ItemTypes.NOTE, noteSource, (connect, monitor) => ({
    connectDragSource: connect.dragSource(),
    isDragging: monitor.isDragging()
  })),
  DropTarget(ItemTypes.NOTE, noteTarget, (connect, monitor) => ({
    connectDropTarget: connect.dropTarget(),
    isOver: monitor.isOver()
  }))
)(Note)

If you drag a note within a lane, the dragged note should be shown as blank.

There is one little problem in our system: we cannot drag notes to an empty lane yet.

Dragging Notes to Empty Lanes

To drag notes to empty lanes, we should allow the lanes to receive notes. Just as above, we can set up DropTarget based logic for this. First, we need to capture the drag on Lane:

app/components/Lane.jsx

import React from 'react';
import {compose} from 'redux';
import {DropTarget} from 'react-dnd';
import ItemTypes from '../constants/itemTypes';
import connect from '../libs/connect';
import NoteActions from '../actions/NoteActions';
import LaneActions from '../actions/LaneActions';
import Notes from './Notes';
import LaneHeader from './LaneHeader';

const Lane = ({
  connectDropTarget, lane, notes, LaneActions, NoteActions, ...props
}) => {
  ...

  return connectDropTarget(
    ...
  );
};

function selectNotesByIds(allNotes, noteIds = []) {
  ...
}

const noteTarget = {
  hover(targetProps, monitor) {
    const sourceProps = monitor.getItem();
    const sourceId = sourceProps.id;

    // If the target lane doesn't have notes,
    // attach the note to it.
    //
    // `attachToLane` performs the necessary
    // cleanup by default and it guarantees
    // a note can belong only to a single lane
    // at a time.
    if(!targetProps.lane.notes.length) {
      LaneActions.attachToLane({
        laneId: targetProps.lane.id,
        noteId: sourceId
      });
    }
  }
};

export default compose(
  DropTarget(ItemTypes.NOTE, noteTarget, connect => ({
    connectDropTarget: connect.dropTarget()
  })),
  connect(({notes}) => ({
    notes
  }), {
    NoteActions,
    LaneActions
  })
)(Lane)

After attaching this logic, you should be able to drag notes to empty lanes.

Our current implementation of attachToLane does a lot of the hard work for us. If it didn't guarantee that a note can belong only to a single lane at a time, we would need to adjust our logic. It's good to have these sorts of invariants within the state management system.

Fixing Dragging While Editing

The current implementation has a small glitch. If you edit a note, you can still drag it around while it's being edited. This isn't ideal as it overrides the default behavior most people are used to. You cannot, for instance, double-click on an input to select all the text.

Fortunately, this is simple to fix. We'll need to use the editing state of each Note to adjust its behavior. First we need to pass the editing state to an individual Note:

app/components/Notes.jsx

...
      <Note className="note" id={id}
        editing={editing}
        onClick={onNoteClick.bind(null, id)}
        onMove={LaneActions.move}>
        <Editable
          className="editable"
          editing={editing}
          value={task}
          onEdit={onEdit.bind(null, id)} />
        <button
          className="delete"
          onClick={onDelete.bind(null, id)}>x</button>
      </Note>
...

Next we need to take this into account while rendering:

app/components/Note.jsx

import React from 'react';
import {compose} from 'redux';
import {DragSource, DropTarget} from 'react-dnd';
import ItemTypes from '../constants/itemTypes';

const Note = ({
  connectDragSource, connectDropTarget, isDragging,
  isOver, onMove, id, editing, children, ...props
}) => {
  // Pass through if we are editing
  const dragSource = editing ? a => a : connectDragSource;

  return compose(dragSource, connectDropTarget)(
    <div style={{
      opacity: isDragging || isOver ? 0 : 1
    }} {...props}>{children}</div>
  );
};

...

This small change gives us the behavior we want. If you try to edit a note now, the input should behave as you would normally expect.

Design-wise it was a good idea to keep the editing state outside of Editable. If we hadn't done that, implementing this change would have been a lot harder, as we would have had to extract the state outside of the component.

Now we have a Kanban table that is actually useful! We can create new lanes and notes, edit them, and remove them. In addition, we can move notes around. Mission accomplished!

Conclusion

In this chapter, you saw how to implement drag and drop for our little application. You can model sorting for lanes using the same technique. First, you mark the lanes to be draggable and droppable, then you sort out their ids, and finally, you add some logic to make it all work together. It should be considerably simpler than what we did with notes.

I encourage you to expand the application. The current implementation should work just as a starting point for something greater.
Besides extending the DnD implementation, you can try adding more data to the system. You could also do something about the visual outlook. One option would be to try out the various styling approaches discussed in the Styling React chapter. To make it harder to break the application during development, you can also implement tests as discussed in Testing React. Typing with React discussed yet more ways to harden your code. Learning these approaches can be worthwhile. Sometimes it may be worth your while to design your applications test-first. It is a valuable approach, as it allows you to document your assumptions as you go.

This book is available through Leanpub. By purchasing the book you support the development of further content.
https://survivejs.com/react/implementing-kanban/drag-and-drop/index.html
CC-MAIN-2019-22
en
refinedweb
About kytoo: Member, Content Count 19, Community Reputation 181 (Neutral)

fatal error C1083 - How do I solve? (kytoo replied to JKGFLOkm's topic in General and Gameplay Programming)

Go to Project -> Properties -> C/C++ -> General -> Additional Include Directories and add the directory where your header files live.

Real-time RPG Framework design (kytoo replied to FrontBack's topic in General and Gameplay Programming)

First of all, writing a generic framework is feasible, but it takes a very long time; I spent a few years completing my own generic framework. Some advice: begin your work on top of another open source framework. It will save you time, and you can refer to it or change it. The most important parts are object (data) management and interface management. Try building something small first, and use it as the base of your next game.

OOP: calling overridden method C# (kytoo replied to BaukjeSpirit's topic in General and Gameplay Programming)

The best way is to write a component manager to manage these components. This will ensure that all components get added.

public class SpartanKing : PlayerShape { }
public class PlayerData : MonoBehaviour { }

public class ComponentManager : MonoBehaviour {
    void Start() {
        AddComponent<SpartanKing>();
        AddComponent<PlayerData>();
        // and so on
    }
}

public class Player : MonoBehaviour {
    void Start() {
        AddComponent<ComponentManager>();
    }
}

Resources on architecting for plugins / updates / dlc (kytoo replied to Questioning's topic in General and Gameplay Programming)

If you decide to go with dynamic linking, then I suggest you use a plugin architecture. There is a rendering framework called Ogre that uses this technology. If you want your architecture to be even better, there is a plugin architecture called NFrame where all code is modular. It requires five functions to run a module, like this:

virtual bool Init();
virtual bool AfterInit();
virtual bool Execute(const float fLastFrametime, const float fStartedTime);
virtual bool BeforeShut();
virtual bool Shut();

You can register an interface of a module with a plugin instead of exporting a C++ class, so all logic depends upon abstractions and does not depend upon concretions.

Game Networking using C++. Best Practice Question - RPG Inventory (kytoo replied to Questioning's topic in Networking and Multiplayer)

Some advice:

1. Learn how to use a network to establish a connection between two applications. (It is recommended to use TCP.)
2. Learn how to transfer a message safely between server and client. (Yes, you can use Google Protocol Buffers; it's a good choice.)
3. Then you can deal with your protocol, such as picking up an item or using a skill.
4. Finally, you can work on saving and loading data when a player comes online.

There is an open source solution written by me; it contains everything I said above.

Gaming server (kytoo replied to Ming Li's topic in Networking and Multiplayer)

Some advice: whatever kind of game server you want to learn, you need to understand a network library. To learn quickly, you don't have to study very deep professional knowledge; start with things like how to accept and send a message over the network, then learn how to manage objects scientifically and efficiently. Once you know these, you can look at this open source project. It contains a solution for making an MMO game written in C++ (client: Unity and C#); the website and the network code are linked. I have used it in my online game and it is easy to learn.
// 1: init
NFINet* m_pNet = new NFCNet(nHeadLength, this, &NFINetModule::OnRecivePack, &NFINetModule::OnSocketEvent);
m_pNet->Initialization(strIP, nPort);                      // as server
// m_pNet->Initialization(nMaxClient, nPort, nCpuCount);   // as client

// 2: call it in every frame
m_pNet->Execute(fLastFrametime, fStartedTime);

There is a demo of how to use the network, both as server and as client. If you have any questions, please contact the author.

There is also a solution for making an MMO game using Unity; it's free and open source. It has a stable network layer (including the client's network code written in C# and the server's network code written in C++). The website is linked. Features:

1. It is easy to use; interface oriented design minimises the effort.
2. An extensible plugin framework makes getting your application running quick and simple.
3. Clean, uncluttered design; a stable engine used in several commercial products.
4. Using the actor model, it has very high performance (via Theron).
5. Based on event-driven and attribute-driven design, it makes business logic clearer and easier to maintain.
6. Based on standard C++ development, with cross-platform support.
7. Comes with existing C++ and C# game clients for rapid development.

If you are interested in using this framework, please contact the author without hesitation.

MORPG Basic logic (kytoo replied to DrimZ's topic in Networking and Multiplayer)

Normally, you can send an event when something happens, but this is very abstract and there is a lot of work to do. For example, when a player's or an NPC's information changes (for instance 'HP'), you should send it (using the network module, organized into a packet) when it changes (HP = 100 -> HP = 50). Here is a code example:
In my project, I designed a virtual logic class module, as follows this website is the cpp code: Then I add any configuration dynamically through this file without the need to change any c + + code, but it can capture increases the configuration of the c + + code any changes. The 'cpp class' implement a set of mechanisms through XML configuration files. you can get calls by registering callback function when these properties changes at run time and not just a static configuration file. This technology has been used in my game, if u r interested u can download and have a look. - hi, if u use unity3d as u client engine u can use my open source as u server frame[write by c++].[] it has stable client code[with net module, write by c#] and it has a lot of existing functions, such as account authorize module, switch scene module, the property manager of role, and so on. u can embedded it into your app easily. Is this a good GameCenter authentication pattern? kytoo replied to Blixt Gordon's topic in Networking and MultiplayerYou could build a game server independently not dependent the game center of apple store. there are many other reasons to require you to use your own server. for example, to support the android game, or adjust different rewards by people'rank when u dont want to recompile the application. TCP clients-servers-servers kytoo replied to alnite's topic in Networking and Multiplayerthere have a game server frame maybe suitable for u, it use libevent as netlibrary, maybe u can use it as a reference. my company use it to develop a server and support tens of thounsands of people online at the same time in a gameserver's process. website: - Advertisement
https://www.gamedev.net/profile/227220-kytoo/
CC-MAIN-2019-22
en
refinedweb
Unreal Engine quickstart

This quickstart will help you set up Unreal Engine, install the PlayFab Marketplace Plugin, and make your first API call in Unreal, using the PlayFab Marketplace Plugin. You can make your first API call using Blueprints, or C++, or both.

Before continuing, make sure you have completed Getting started for developers, which ensures you have a PlayFab account and are familiar with the PlayFab Game Manager.

Table of contents

- Unreal project setup
- Set up your first Blueprint call
- Set up your first C++ call
- Deconstruct the Blueprint example
- Deconstruct the C++ code example
- Upgrading to the Unreal Marketplace plugin

Unreal project setup

OS: This guide is written for Windows 10; however, the steps should be similar on a Macintosh. This guide was created using Visual Studio 2017 and Unreal Engine 4.x (usually the latest).

Install Unreal

Download Unreal Engine:

- Register and log in on the Unreal website.
- Download the Epic Games Launcher.
- Open the Epic Games Launcher.
- Select the "Unreal Engine" tab, and "Library" from the left-hand navigation bar.
- Click "+Add Versions" and select the most recent version of the SDK.

Install the PlayFab Plugin into your engine

Use the following steps to ensure you've properly installed the PlayFab Plugin.

- In the Epic Games launcher, go to the Marketplace and search for the PlayFab SDK.
- Select the PlayFab SDK, then Free, and Install to Engine.
- Confirm your version and select Install.
- Select the Launch button, and run Unreal Engine.
- Select all the options as seen here: New Project tab, C++ sub-tab, No Starter Content.
- Now, select Create Project with these options.
- Enable the PlayFab Plugin.

PlayFab installation complete!

Set up your first Blueprint call

This section provides the minimum steps to make your first PlayFab Blueprint call. Confirmation is done via an on-screen debug print.

- Select Open Level Blueprint.
- Use the existing "Event BeginPlay" node, and build the following structure:

Note: The Title ID shown is only a default and should be unique to your game, which we call a title. You can apply any ID you want, but you must use that ID when you make PlayFab API calls.

- Save the Blueprint, and close the Blueprint Editor window.
- Save the level.

Finish and execute with Blueprint

Push the Play button. When you execute this program, you should get the following output:

Congratulations, you made your first successful API call! Press any key to close.

Set up your first C++ call

This section will provide the minimum steps to make your first PlayFab API call. Confirmation happens through a debug print in the Output Log.

Open your new project:

- Create a new actor called LoginActor, and place it in the scene. Creating the new LoginActor should automatically open Visual Studio, with LoginActor.cpp and LoginActor.h available to edit.
- Under Solution Explorer -> Games/YourProjectName/Source, find and open YourProjectName.Build.cs, and add the following line:

PrivateDependencyModuleNames.AddRange(new string[] { "PlayFab", "PlayFabCpp", "PlayFabCommon" });

- Replace the contents of LoginActor.h with the code shown below.
#pragma once

#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "PlayFab.h"
#include "Core/PlayFabError.h"
#include "Core/PlayFabClientDataModels.h"
#include "LoginActor.generated.h"

UCLASS()
class ALoginActor : public AActor
{
    GENERATED_BODY()

public:
    ALoginActor();
    virtual void BeginPlay() override;

    void OnSuccess(const PlayFab::ClientModels::FLoginResult& Result) const;
    void OnError(const PlayFab::FPlayFabCppError& ErrorResult) const;

    virtual void Tick(float DeltaSeconds) override;

private:
    PlayFabClientPtr clientAPI = nullptr;
};

- Replace the contents of LoginActor.cpp with the following (the BeginPlay body was garbled in the source; it is reconstructed here from the line-by-line deconstruction later in this guide, so treat the CustomId value as an illustrative sample):

#include "LoginActor.h"
#include "Core/PlayFabClientAPI.h"

ALoginActor::ALoginActor()
{
    PrimaryActorTick.bCanEverTick = true;
}

void ALoginActor::BeginPlay()
{
    Super::BeginPlay();

    clientAPI = IPlayFabModuleInterface::Get().GetClientAPI();
    clientAPI->SetTitleId(TEXT("xxxx"));

    PlayFab::ClientModels::FLoginWithCustomIDRequest request;
    request.CustomId = TEXT("GettingStartedGuide");
    request.CreateAccount = true;

    clientAPI->LoginWithCustomID(request,
        PlayFab::UPlayFabClientAPI::FLoginWithCustomIDDelegate::CreateUObject(this, &ALoginActor::OnSuccess),
        PlayFab::FPlayFabErrorDelegate::CreateUObject(this, &ALoginActor::OnError)
    );
}

void ALoginActor::OnSuccess(const PlayFab::ClientModels::FLoginResult& Result) const
{
    UE_LOG(LogTemp, Log, TEXT("Congratulations, you made your first successful API call!"));
}

void ALoginActor::OnError(const PlayFab::FPlayFabCppError& ErrorResult) const
{
    UE_LOG(LogTemp, Error, TEXT("Something went wrong with your first API call.\nHere's some debug information:\n%s"), *ErrorResult.GenerateErrorReport());
}

void ALoginActor::Tick(float DeltaTime)
{
    Super::Tick(DeltaTime);
}

- Run the Unreal Editor (Debug -> Start Debugging).

Finish and execute with C++

Earlier, you created a level with a LoginActor entity already placed in the world. Load this level and press Play. You will immediately see the following in the output log:

LogTemp: Congratulations, you made your first successful API call!

- Press any key to close.

Deconstruct the Blueprint example

This optional last section describes each part of the blueprints above in detail.

Event BeginPlay

This is an Unreal node that exists by default for a level blueprint. It triggers the nodes following it immediately when the level is loaded.

Set PlayFab Settings

Use this to set the titleId. Other keys can be set here too, but for this guide, you only need to set titleId.

Make the LoginWithCustomID request

Most PlayFab API methods require input parameters, and those input parameters are packed into a request object. Every API method requires a unique request object, with a mix of optional and mandatory parameters.

- For the LoginWithCustomIDRequest object, there is a mandatory parameter of CustomId, which uniquely identifies a player, and CreateAccount, which allows the creation of a new account with this call.

Login with Custom ID

This begins the async request to "LoginWithCustomID".

- For login, most developers will want to use a more appropriate login method. See the PlayFab Login documentation for a list of all login methods, their input parameters, and common choices.
The right-side blueprint pins White: the unlabeled first exec pin is executed immediately as the API call is queued (response does not exist yet) - Do not use this pin! White: the second exec pin is labeled "On PlayFab Response," and is executed after the async remote call has returned. Use this to trigger logic that needs to wait or use the Response. Blue: Response - This is a JSON representation of the result. The OnSuccess pin provides a properly typed object with the correct fields pre-built into the blueprint. - This JSON field is an older pin which is only maintained for legacy. Cyan: Custom Data - Same as Custom Data above. Maroon: Successful - Legacy boolean which indicates how to safely unpack the legacy Response pin. - Again, it's better to use the red OnSuccess and OnFailure pins. OnLoginSuccess and OnLoginFail The names of these modules are optional, and should be different for every API call. Described above, they attach to the red pins of PlayFab API calls, and allow you to process success and failure for those calls. The OnSuccess/Result pin The result pin will contain the requested information, according to the API called. Break PlayFab Result (Not displayed, the only valid connection for the OnSuccess/Result pin). If you drag the Result pin from OnSuccess, it'll create a Break-Result blueprint. This blueprint is used to examine the response from any API call. The OnFailure/Error pin - Always connects to a Break PlayFabError blueprint. - Contains some information about why your API call failed. If you are having difficulty debugging an issue, and the information within the error information is not sufficient, please visit us on our forums Prints and Append nodes Just part of the example, giving you some on-screen feedback about what's happening. Most examples will extract and utilize the data, rather than just printing. Deconstruct the C++ code example This optional last section describes the code in this project line by line. GettingStartedUeCpp.Build.cs - To reference code from a plugin in your project, you have to add the plugin to your code dependencies. The Unreal build tools do all the work, if you add the "PlayFab" string to your plugins. LoginActor.H - includes - The LoginActor includes are default includes that exist for the template file before we modified it - The PlayFab includes are necessary to make PlayFab API calls - UCLASS ALoginActor - Most of this file is the default template for a new actor; the only exceptions to this are: - OnSuccess and OnError - These are the asynchronous callbacks that will be invoked after PlayFab LoginWithCustomID completes. - PlayFabClientPtr clientAPI - This is an object that lets you access the PlayFab client API. LoginActor.cpp - Most of this file is the default template for a new actor; the only exceptions to this are: clientAPI = IPlayFabModuleInterface::Get().GetClientAPI(); - This fetches the clientAPI object from the PlayFab plugin, so you can make API calls with it clientAPI->SetTitleId(TEXT("xxxx")); -. PlayFab::ClientModels::FLoginWithCustomIDRequest request; - Most PlayFab API methods require input parameters, and those input parameters are packed into a request object - Every API method requires a unique request object, with a mix of optional and mandatory parameters - For LoginWithCustomIDRequest, there is a mandatory parameter of CustomId, which uniquely identifies a player and CreateAccount, which allows the creation of a new account with this call. 
- clientAPI->LoginWithCustomID(request, {OnSuccess delegate}, {OnFail delegate}); This begins the async request to "LoginWithCustomID", which will invoke one of the delegates below when the API call is complete. For login, most developers will want to use a more appropriate login method; see the PlayFab Login documentation for a list of all login methods, input parameters, and common choices.
- {OnSuccess delegate}: PlayFab::UPlayFabClientAPI::FLoginWithCustomIDDelegate::CreateUObject(this, &ALoginActor::OnSuccess), combined with: void ALoginActor::OnSuccess(const PlayFab::ClientModels::FLoginResult& Result) const. These create a UObject callback/delegate which is called if your API call is successful. An API Result object will contain the requested information, according to the API called. FLoginResult contains some basic information about the player, but for most users, login is simply a mandatory step before calling other APIs.
- {OnFail delegate}: PlayFab::FPlayFabErrorDelegate::CreateUObject(this, &ALoginActor::OnError), combined with: void ALoginActor::OnError(const PlayFab::FPlayFabCppError& ErrorResult) const. API calls can fail for many reasons, and you should always attempt to handle failure.
https://docs.microsoft.com/en-us/gaming/playfab/sdks/unreal/quickstart
CC-MAIN-2019-22
en
refinedweb
In this blog post I want to show you one possible way creating a business logic in ASP.NET MVC. Okay, referring to my last blog postI want to take you one step further and extend the older post a little bit. In the last post we saw how to build up areas and to get them clean, with separated concerns and nice looking, testable etc. But this is worth nothing if the rest you have is not well separated and you have a big mess there. That’s why I want to give you the second part (which is a bit shorter) to present you one way to create a business-tier. Well, the problem we face is that we have to access our data. We have to have any way of communication between our UI and the database. The first blog post was touching the UI (remember? Areas and their friends…). The third one will touch the repositories (generic) and the UnitOfWork-Stuff and so on. Why don’t we access the data from the Controllerservice (through the UnitOfWork) and were done? The answer is: Yeah we could. But sometimes some database queries are a little bit more complex. You have to have this object A with B in it to get C, the user has to be there first and so on. If you would write this now in the Controller service (mentioned in the blog postbefore) this would work, but would generate a lot of code and in the best case you would end up with a lot of functions, which are named after what they are doing but still getting the class very big and difficult to handle. Also testing would be difficult. You would have a lot of private functions to test. If you have only one class this should be a step to think about what you are doing! If you are writing a private function so “mighty” that it should be tested in 95% you are hurting the single-responsibility-principle and the separation of concerns, too. So what you are writing should be an own class, with its own tests and its own public and private functions. With a class name which describes, what its doing and functions which describe exactly, what they do. Another reason is: Sometimes (as mentioned in this post) you have a third entity (EntityC) to connect two other entities in your application (let’s call the EntityA and EntityB). This is an N:M-Relationship. And you should access these entities only through the EntityC one, including those you want to have (EntityA, EntityB or both). These queries could, even with the Entity-Framework, be very cryptic and you better have a class which does the queries for you. This is not like a general rule. This only makes sense, when you have these entities. But to stay clean and testable, you can have every query wrapped in a service…why not? 😉 Further you probably want to give your controller-service functions which have a sorting logic or anything like that, etc. he can call them and he does not care about the implementation. So these are only three reasons why you should work with services behind your controller service. Area Services These services are written in another tier, the “logic-tier” or “business-tier”; call it like you want to. Note: In the Screenshots I have only one project in the solution and I am separating the tiers only in namespaces. You can, of course, introduce different projects in the solution to get the concerns separated for each project. Well you should do this…would be better 😉 But for this post, it’s about the idea behind it. If you got this, I won a lot! Concrete example: You have a service which is giving you Chart-Data to display a chart in your view. 
You should have one service for this which is only build to work with and give you this data. Mostly you want this data to be generated out of anything in the database. This is perfect for a service. And because this service interacts directly with any area (you can inject the interface of the service wherever you want in you controller-services) I call them “AreaServices”. Note. How to get along with DotNet Highcharts I am describing here. Here you see an area service called “ChartService” which is, when you collapse the whole thing, only visible to the outside through his interface (information hiding, I mentioned this in part I of this article here). His Impl-namespace contains the direct implementation. Everything which is connected to this service also takes place in this namespace, as long as it’s only needed there. In this case we have a special factory which creates the chart (interface/impl) and a very “stupid” container class “ChartData” which summarizes the data for a chart. Note: this could be any worker service for you. I choose this one because its doing some work and looking for data in the database. So you have both things covered. Let’s see some code: You see that this service knows the factory and calls it after he collects the data from the database. Attention: You do NOT have to use a using here in your UnitOfWork. The using of the UnitOfWork is ONLY used in a controller service, because this is the main entry point for a lot of database-requests and as I mentioned in part one of this, Ninject is only injecting one instance for you per request. One controller service call represents one request from a client. So put the using there and you are safe to have the same instance over all services the request touches. This is why you can inject it here. The point is: You are having a tier which is calling the database, collecting information and doing something with it. To get to the example I mentioned before you could have a EntityCService, where you can have all nice methods on it which the controller service can call and here you are gathering the information with EntityC having EntitiesB and A on it and so on. All this is hidden here inside this service. Conclusion so far: Sometimes you have a lot of work to do with some database data or your requests are a little bit more complex. So do separate this in services which can be called from your areas/controller-services. This is the first part of the middle-tier. Business services Another type of services? Oh come on! Well, what we touched was a type of service which interacts with the database and is very strongly connected to the application. But what about services which are… - …not that connected to the application - …could possibly stand alone (as a module) - …are doing work which is not interacting with the database or at least not writing into it Lets do another kind of service and call them business services. Examples for these business services are maybe a pdf-generator which generates you a pdf of data which is given to him. Or an email service which is sending emails from your application to the user. Or a calculator who is only feed with data and calculating some values. These “worker services” are doing some work which stands a little bit beside the normal CRUD-operations you normally have in a web application. In this example you see two services which represent classical business services and are only worker-bees producing an outcome of something you give them. 
Attention: You do NOT have to use a using on your UnitOfWork here. The using of the UnitOfWork is ONLY used in a controller service, because that is the main entry point for many database requests, and as I mentioned in part one, Ninject injects only one instance per request. One controller service call represents one request from a client. So put the using there, and you are safe to have the same instance across all services the request touches. This is why you can simply inject it here.

The point is: you have a tier which calls the database, collects information, and does something with it. To return to the example I mentioned before, you could have an EntityCService with all the nice methods on it which the controller service can call; there you gather the information, with EntityC having EntitiesA and B on it, and so on. All of this is hidden inside the service.

Conclusion so far: sometimes you have a lot of work to do with database data, or your requests are a little more complex. Separate this into services which can be called from your areas/controller services. This is the first part of the middle tier.

Business services

Another type of service? Oh come on! Well, what we touched on so far was a type of service which interacts with the database and is very strongly connected to the application. But what about services which are…

- …not that connected to the application,
- …could possibly stand alone (as a module),
- …do work which does not interact with the database, or at least does not write into it?

Let's introduce another kind of service and call them business services. Examples of business services are a PDF generator which generates a PDF out of data given to it, an email service which sends emails from your application to the user, or a calculator which is only fed with data and calculates some values. These "worker services" do work which stands a little bit apart from the normal CRUD operations you usually have in a web application.

In this example you see two services which represent classical business services; they are only worker bees producing an outcome from what you give them. Here you may have a little database contact, but normally you have none, and if you do, it's only reading data, never writing into it. On the screenshot you also see the "Impl" namespaces which hide the implementations, and the interfaces which represent the services.

So we are extending our logic layer with the business services and now have both area services and business services in it. Of course these services can and should be provided in different projects to get several DLLs. With that, every layer should have an API project to represent it, and this API DLL should be referenced from the projects which need it.

Unfortunately, this was it for this time. In the next part I will touch the generic repositories with the UnitOfWork.

Regards
Fabian
https://offering.solutions/blog/articles/2014/06/10/creating-a-business-logic-in-asp.net-mvc/
CC-MAIN-2022-21
en
refinedweb
#220 – Using the Predefined Colors

February 17, 2011

The Colors class, in the System.Windows.Media namespace, includes a large set of static properties representing a set of predefined colors. Each property has a name representing the color, e.g. "Blue", and is stored internally as a standard WPF Color structure.

The example below shows two different ways of using the predefined colors. You can use a color name in XAML wherever either a color or a brush is expected. The name of the color in XAML is converted to a Color value by mapping the name of the color to one of the properties in the Colors class.

<Button Content="Hello" Background="Purple" />
<Button Content="Again">
    <Button.Background>
        <SolidColorBrush Color="Blue"/>
    </Button.Background>
</Button>

Below is an image showing the full list of predefined colors. (Click on the image to see it full size.)
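The same predefined colors can also be used from code-behind. A minimal sketch (the button instance here is illustrative; Colors.Purple is one of the static properties described above):

// Code-behind equivalent of the first XAML button
Button button = new Button();
button.Content = "Hello";
button.Background = new SolidColorBrush(Colors.Purple);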
https://wpf.2000things.com/2011/02/17/220-using-the-predefined-colors/
CC-MAIN-2022-21
en
refinedweb
Creating the TodoList Project

In this section, we'll create the TodoList project from scratch using GWT's webAppCreator, a command-line utility. Before we start, make sure to download the most recent GWT distribution and install Maven.

Using webAppCreator

The webAppCreator is a command-line tool included in the GWT SDK. It generates the project structure necessary to get started. It also creates a starter application, which you can run to ensure that the components are created and linked successfully. As you develop your software, you'll replace the code in the starter application with yours.

For the TodoList project, we'll need to run webAppCreator with the following parameters.

1. Set up a new project. Create the TodoList application; GWT webAppCreator will generate the project structure and the build script (Maven pom.xml).

$ /full_path_to_gwt_sdk/webAppCreator \
    -templates maven,sample \
    -out TodoListApp \
    org.gwtproject.tutorial.TodoList

Tip: If you include the GWT SDK folder in your PATH environment variable, you won't have to specify the full path.

You may have to modify the pom.xml before you can run the application. Add <type>pom</type> to the gwt dependency, otherwise you will encounter an error. See the "Creating a Maven project" section in the webAppCreator documentation for more information.

2. Run the application in SuperDevMode. To check that the project was created correctly, start the new app in SuperDevMode.

$ cd TodoListApp
$ mvn war:exploded
$ mvn gwt:devmode

Tip: Since the created project is built with Maven, you can import it in Eclipse, IDEA, etc.

3. Launch your browser. In the GWT developer window, press "Launch Default Browser" to launch the application. Alternatively, you can click "Copy to Clipboard" and paste the URL into any browser.

If you change something in the code, you can recompile the application by simply reloading the web page. If you change configuration files, e.g. pom.xml or static content in webapp, you might have to restart SuperDevMode: Ctrl+C and mvn gwt:devmode stop and start the execution, respectively.

Customizing your project

With the base project set up, we'll now add the necessary external dependencies. At the same time, we'll also remove some of the files and dependencies that were set up and generated by default when the starter application was built.

1. Add the Vaadin gwt-polymer-elements dependency to your project by editing the pom.xml file.

<dependency>
    <groupId>com.vaadin.polymer</groupId>
    <artifactId>vaadin-gwt-polymer-elements</artifactId>
    <version>${gwtPolymerVersion}</version>
    <scope>provided</scope>
</dependency>

Note: Replace the ${gwtPolymerVersion} placeholder with the current version (as of this writing 1.0.2.0-alpha3), or add the corresponding property in your pom.xml.
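If you go the property route, the corresponding pom.xml entry could look like this; the version shown is simply the one cited in the note above:

<properties>
    <gwtPolymerVersion>1.0.2.0-alpha3</gwtPolymerVersion>
</properties>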
2. Update the gwt-maven-plugin configuration to support the experimental JsInterop feature.

<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>gwt-maven-plugin</artifactId>
    ...
    <configuration>
        <jsInteropMode>JS</jsInteropMode>
        ...
    </configuration>
</plugin>

Note: JsInterop is an experimental flag in GWT 2.7.0 and you need to enable it explicitly. In future versions of GWT it will be enabled by default.

3. Update the TodoList.gwt.xml module file so that we can use the new GWT library.

<module rename-
    ...
    <inherits name="com.vaadin.polymer.Elements"/>
    ...
</module>

4. Update TodoList.html:

- Configure the <meta> viewport to handle mobile layouting.
- Import the polyfill <script> for browsers that are not web-component capable.
- Remove the content inside the <body> tag.

<!doctype html>
<html>
<head>
    <meta name="viewport" content="user-scalable=no, initial-scale=1, maximum-scale=1, minimum-scale=1" />
    <script src="todolist/bower_components/webcomponentsjs/webcomponents.js"></script>
    <script type="text/javascript" src="todolist/todolist.nocache.js"></script>
</head>
<body>
</body>
</html>

5. Remove greetServlet and its mapping in WEB-INF/web.xml:

<web-app>
</web-app>

6. Remove all unnecessary files:

- Remove the server and shared folders located in src/main/java/org/gwtproject/tutorial.
- Remove GreetingService.java and GreetingServiceAsync.java from the client package.
- Remove the example tests in src/main/test.

7. Update the EntryPoint. Replace the content of TodoList.java with:

package org.gwtproject.tutorial.client;

import com.google.gwt.core.client.EntryPoint;
import com.google.gwt.user.client.ui.RootPanel;
import com.vaadin.polymer.paper.widget.PaperButton;

public class TodoList implements EntryPoint {

  public void onModuleLoad() {
    // Use the Widget API to create a <paper-button>
    PaperButton button = new PaperButton("Press me!");
    button.setRaised(true);
    RootPanel.get().add(button);
  }
}

Note: The example above shows how to add a PaperButton element using the Widgets API.

8. Run the application again. You should see a web page containing a Material Design button.

What's next

In this lesson we learned how to:

- Create a new GWT Maven project from scratch.
- Run and debug our application in SuperDevMode.
- Add external dependencies to our project.
- Configure our project to use the experimental JsInterop mode.
- Replace the starter application code with our own.

We're now prepared to design the UI of the TodoList application. There are two ways we can go about it: using GWT widgets (classic) or the Elements API (modern).

Step 2a: Building the User Interface using Widgets
Step 2b: Building the User Interface using Elements
https://www.gwtproject.org/doc/latest/polymer-tutorial/create.html
CC-MAIN-2022-21
en
refinedweb
MDT (and more generally, WoW add-on) serialization/deserialization library

This library aims to fill a gap in interfacing with the WoW add-on ecosystem. A number of add-on developers tend to use a combination of Ace3 Serialize and LibDeflate for serialization/deserialization when conveying information to other users over channels (guild/party/addon-specific), or when storing and retrieving data from Lua. As it stands, in order to understand most of the information encoded in such strings, we need to be able to parse it into a format that makes sense and that we can take further.

Ace3 ser/deser

Ace3 (with Ace serialization revision 1) aims to turn any Lua object into a serialized, but raw, representation. It bears a lot of resemblance to MessagePack, in that it obeys simple rules:

- Every object starts with a field identifying its type, followed by a field with its value
- These fields are always prepended with an opening preamble, but not necessarily explicitly closed
- All Lua types are covered, with map keys being their own sub-type

In terms of implementation, this library takes an Ace serialization string and returns the corresponding object:

- string for strings
- number for Lua integers, floats and all other numeric types
- boolean for booleans
- null for null values
- Record<string, unknown> for maps and arrays (Lua does not have the concept of an array)

This is all done while leveraging the strongly typed nature of TypeScript, and providing all the tools required to ensure testability.

Usage

To serialize, import and call Serialize:

import { Ace } from 'mdt-compression';

const ser = async () => {
  console.log(await Ace.Serialize("This is a test"));
};

Similarly, to deserialize, call Deserialize:

import { Ace } from 'mdt-compression';

const deser = async () => {
  console.log(await Ace.Deserialize("^1^F3^f-2"));
};
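As a small illustration of the type mapping above, here is how a caller might narrow the deserialized value before using it. The payload contents are left abstract, and this narrowing pattern is just one option:

import { Ace } from 'mdt-compression';

const readValue = async (payload: string) => {
  const value = await Ace.Deserialize(payload);

  // Maps and arrays come back as Record<string, unknown>,
  // everything else as string | number | boolean | null.
  if (typeof value === 'object' && value !== null) {
    const table = value as Record<string, unknown>;
    console.log(Object.keys(table));
  } else {
    console.log(value);
  }
};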
https://www.npmjs.com/package/@letstimeit/mdt-compression
CC-MAIN-2022-21
en
refinedweb
Python beginners may sometimes get confused by the match and search functions in the regular expression module, since they accept the same parameters and return the same result in most simple use cases. In this article, let's discuss the difference between these two functions.

match vs search in Python regular expression

Let's start from an example. Say we want to get the words ending with "ese" among the languages; both the match and search calls below return the same match object.

import re

languages = "Japanese,English"
m = re.match("\w+(?=ese)", languages)
# m : <re.Match object; span=(0, 5), match='Japan'>

m = re.search("\w+(?=ese)", languages)
# m : <re.Match object; span=(0, 5), match='Japan'>

But if the order of your languages changes, e.g. languages = "English,Japanese", then you will see different results:

languages = "English,Japanese"
m = re.match("\w+(?=ese)", languages)
# m is None

m = re.search("\w+(?=ese)", languages)
# m : <re.Match object; span=(8, 13), match='Japan'>

The reason is that the match function only starts matching from the beginning of your string, while the search function will start matching from anywhere in your string. Hence, if the pattern you want to match may not start at the beginning, you should always use the search function.

If you do want to restrict the matching to start from the beginning, you can also achieve that with the search function by specifying "^" in your pattern:

languages = "English,Japanese,Chinese"
m = re.search("^\w+(?=ese)", languages)
# m is None

m = re.search("\w+(?=ese)", languages)
# m : <re.Match object; span=(8, 13), match='Japan'>

findall in Python regular expression

You may also notice that when there are multiple occurrences of the pattern, the search function only returns the first match. This may not be desired when you actually want to see the full list of matched patterns. To return all the occurrences, you can use the findall function:

languages = "English,Japanese,Chinese,Burmese"
m = re.findall("\w+(?=ese)", languages)
# m : ['Japan', 'Chin', 'Burm']
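A closely related option, in case you also need to know where each occurrence sits in the string, is finditer, which yields full match objects instead of plain strings:

import re

languages = "English,Japanese,Chinese,Burmese"
for m in re.finditer(r"\w+(?=ese)", languages):
    # each m is a match object carrying the matched text and its position
    print(m.group(), m.span())
# Japan (8, 13)
# Chin (17, 21)
# Burm (25, 29)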
https://www.codeforests.com/category/tutorials/page/5/
CC-MAIN-2022-21
en
refinedweb
I have an add-in tool created that works perfectly. Yeehaw! It takes your input selection, takes the tool's rectangle_geometry, does a load of clever stuff, then clears the selection as its work is complete. Then it's time to make another selection and use the tool to select your region again... But I can't find any way to deactivate the tool or set it back to the standard ArcMap select tool after its job is done. Please help!? It's very user-unfriendly, because if the user clicks again now, it will complain about no selection being present!

I have tried using self.deactivate(), but that's not an attribute of my tool :S

(Reply) Yes, that is what I am using. I found this, so it doesn't look hopeful. It's a bit of a pain; I now have to decide whether to code my tool as a "selector if nothing is selected, otherwise the tool", or just leave it as is! Neither is the ideal solution. As per John Dye's comment, then: misleading at worst, but as described at best.

(Answer) @Luke Webb, in your code add an "if" block to check whether there is any selection. If yes, put your "takes the tool's rectangle_geometry, does a load of clever stuff, then clears the selection as work is complete" code there. In the "else" block you can print a statement, pop up a message window, or do nothing, going ahead to use ArcMap's select tool to make a new selection and use the tool again. Your code may look like:

def onRectangle(self, rectangle_geometry):
    dsc = arcpy.Describe("layer_name")
    selection_set = dsc.FIDSet  # returns the selected features' IDs
    if selection_set:
        # at least one feature is in the selection set
        # do your stuff ...
        # ....
        pass
    else:
        # no feature is selected:
        # pop up a message dialog asking the user to select some features
        pythonaddins.MessageBox("Please select some features to use the tool", 0)
        # or ... just skip this part
        pass

Note: the above code was typed in the browser; there might be syntax errors.

(Follow-up) The correct syntax for MessageBox is, I mistakenly left out 'title' ☹ : MessageBox(message, title, {mb_type})
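Putting that last correction together, the pop-up call from the example would look something like this (the title string here is illustrative):

pythonaddins.MessageBox("Please select some features to use the tool", "No selection", 0)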
https://community.esri.com/t5/python-questions/arcpy-add-in-tool-deselect-tool/td-p/111536
CC-MAIN-2022-21
en
refinedweb
I am working with COVID-19 case data and created a dashboard. I have a Jupyter Notebook inside ArcGIS Pro to process the health dept.'s CSV file every day. That said, I am a complete novice with Python and fumbled my way through, but got something working. I now have a request to show the daily change in cases from the previous day. The source data table just lists the cumulative cases each day over time and lumps the dates together:

ID  Date    FIPS  Cases
1   5/7/20  001   25
2   5/7/20  002   13
3   5/6/20  001   23
4   5/6/20  002   9
5   5/5/20  001   21
6   5/5/20  002   8
7   5/4/20  001   21
8   5/4/20  002   6

I would like to add a field that contains the change in value from the previous day:

ID  Date    FIPS  Cases  Difference
1   5/7/20  001   25     2
2   5/6/20  001   23     2
3   5/5/20  001   21     0
4   5/4/20  001   21     0  (because this is the starting value)
5   5/7/20  002   13     4
6   5/6/20  002   9      1
7   5/5/20  002   8      2
8   5/4/20  002   6      0  (start)

The goal is a time series chart showing the sum of the daily changes for all FIPS by date (but I might need to show them by FIPS as well). I know others are doing it, but maybe their source data is supplied that way. This seems like it should be fairly simple, but I don't know where to start. Right now I am downloading the CSV, truncating the table in my GDB, then appending the CSV data to the table to refresh it every day. I think I need a bit of code to run daily to recalculate the difference after I grab the new day's values. I appreciate any direction. Thanks!

(Reply) Sara, if you have the Data Interoperability extension, then change detection between next/previous values in a series is available in the AttributeManager. I don't want to send you down this path if it's all new to you, though, as you're working on response data. If you need to pursue this, let me know.

(Reply) Thanks for the quick reply, but I'm afraid I do not have that extension. Sounds ideal though.

(Answer) You could do it with dictionaries. Use the FIPS as the key. Each value could be a list of tuples with 2 values (date, case_num). After you build the dictionary, sort each list by the date part of the tuple. Then take consecutive list elements and subtract the case numbers. Output data, done! Maybe something like this...

code:

import csv

with open('case.csv', 'r') as f:
    reader = csv.reader(f)
    data = list(reader)

data.pop(0)                       # drop the header row
data = [x[1:] for x in data]      # drop the ID column -> [Date, FIPS, Cases]

data_dict = {x[1]: [] for x in data}   # key on FIPS

for row in data:
    data_list = data_dict[row[1]]
    data_list.append((row[0], row[2]))  # (date, case_num)
    data_dict[row[1]] = data_list

for key, value in data_dict.items():
    value.sort(reverse=True)
    for i in range(0, len(value) - 1):
        diff = int(value[i][1]) - int(value[i + 1][1])
        print('{},{},{},{}'.format(key, value[i][0], value[i][1], diff))
    print('{},{},{},{}'.format(key, value[-1][0], value[-1][1], 0))

csv:

ID,Date,FIPS,Cases
7,5/4/20,001,21
8,5/4/20,002,6
1,5/7/20,001,25
2,5/7/20,002,13
3,5/6/20,001,23
4,5/6/20,002,9
5,5/5/20,001,21
6,5/5/20,002,8

output:

002,5/7/20,13,4
002,5/6/20,9,1
002,5/5/20,8,2
002,5/4/20,6,0
001,5/7/20,25,2
001,5/6/20,23,2
001,5/5/20,21,0
001,5/4/20,21,0

(Reply) Thanks for providing this sample - I'll see if I can make it work. I had some real hope today that Esri's new Coronavirus Recovery Dashboard was going to take care of this for me, but alas, they require that the data already has the daily increase in order to calculate the trends.

(Reply) You're welcome... I updated the code sample so the output will do the calculation for all dates, not just the top two.
https://community.esri.com/t5/python-questions/calculate-daily-change-between-unique-values/td-p/88676
CC-MAIN-2022-21
en
refinedweb
Simple pytest plugin to look for regex in Exceptions

Project description

I really missed unittest's assertRaisesRegexp in pytest, so I wrote this simple plugin.

Usage

# some_module.py
class ExpectedException(Exception):
    pass

def function_to_test():
    raise ExpectedException('error message: 1560')

# test_some_module.py
from pytest import raises_regexp
from some_module import function_to_test, ExpectedException

def test_something_to_test():
    with raises_regexp(ExpectedException, r".* 1560"):
        function_to_test()

Installation

$ pip install pytest-raisesregexp

It installs as a pytest entry point, so you can:

from pytest import raises_regexp

LICENSE

MIT license
Copyright (c) 2013-2015 Kiss György
https://pypi.org/project/pytest-raisesregexp/
CC-MAIN-2022-21
en
refinedweb
Chart.js charts for Wagtail

Project description

Wagtail Charts

Chart.js charts in Wagtail, edited and customised from the Wagtail admin.

Getting started

Assuming you have a Wagtail project up and running:

pip install wagtailcharts

Add wagtailcharts to your settings.py in the INSTALLED_APPS section, before the core wagtail packages:

INSTALLED_APPS = [
    # ...
    'wagtailcharts',
    # ...
]

Add a wagtailcharts ChartBlock to one of your StreamFields:

from wagtailcharts.blocks import ChartBlock

class ContentBlocks(StreamBlock):
    chart_block = ChartBlock()

Include your streamblock in one of your pages:

class HomePage(Page):
    body = StreamField(ContentBlocks())

    content_panels = Page.content_panels + [
        StreamFieldPanel('body'),
    ]

Add the wagtailcharts_tags templatetag to your template and call the render_charts tag just before your </body> closing tag. Please note that you must render your chart block so that the render_charts tag can detect the charts. Here is a tiny example of a page rendering template:

{% load wagtailcore_tags wagtailcharts_tags %}

{% block content %}
<div class="container-fluid">
    <div class="row">
        <div class="col-6">
            <h1>{{self.title}}</h1>
            <div class="excerpt">{{self.excerpt|richtext}}</div>
        </div>
    </div>
    {% for block in self.body %}
        {% include_block block %}
    {% endfor %}
</div>
{% endblock %}

{% block extra_js %}
    {% render_charts %}
{% endblock %}

Configuration

ChartBlock accepts a few extra arguments in addition to the standard StructBlock arguments.

colors

A tuple of color tuples defining the available colors in the editor.

from wagtailcharts.blocks import ChartBlock

COLORS = (
    ('#ff0000', 'Red'),
    ('#00ff00', 'Green'),
    ('#0000ff', 'Blue'),
)

class ContentBlocks(StreamBlock):
    chart_block = ChartBlock(colors=COLORS)

Dependencies

- This project relies on Jspreadsheet Community Edition for data entry and manipulation.
- Charts are rendered using Chart.js.
- 100% stacked bar charts use a plugin.
https://pypi.org/project/wagtailcharts/
CC-MAIN-2022-21
en
refinedweb
Recently I upgraded the Telerik Testing Framework to the latest freely available version - TestingFrameworkFree.2015.2.723 - but now my solution is full of errors related to the FindElementException class and the ArtOfTest.Common.Exceptions namespace. Visual Studio does not locate the namespace. I even searched the Object Explorer but was not able to locate it there either. Have you removed the FindElementException class or the ArtOfTest.Common.Exceptions namespace from the latest version? If yes, why? If no, how should I locate it? Also, what is the difference between FindElementException and FindException?

Thanks and Regards,
Vinay
https://www.telerik.com/forums/not-able-to-locate-findelementexception-class-or-artoftest-common-exception-namespace-in-telerik-testing-framework-latest-version
CC-MAIN-2022-21
en
refinedweb
SYNOPSIS

#include <openssl/cms.h>

int CMS_verify_receipt(CMS_ContentInfo *rcms, CMS_ContentInfo *ocms,
                       STACK_OF(X509) *certs, X509_STORE *store,
                       unsigned int flags);

DESCRIPTION

CMS_verify_receipt() verifies a CMS signed receipt in rcms against the original SignedData structure in ocms. The certs, store, and flags parameters have the same meaning as in CMS_verify().

NOTES

This function behaves in a similar way to CMS_verify() except that the flag values CMS_DETACHED, CMS_BINARY, CMS_TEXT, and CMS_STREAM are not supported, since they do not make sense in the context of signed receipts.

RETURN VALUES

CMS_verify_receipt() returns 1 for a successful verification and zero if an error occurred. The error can be obtained from ERR_get_error(3).

HISTORY

CMS_verify_receipt() was added to OpenSSL 0.9.8.
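A minimal usage sketch (not part of the manual page); it assumes rcms and ocms were parsed elsewhere, for example with SMIME_read_CMS(3):

#include <openssl/cms.h>
#include <openssl/err.h>

/* Returns 1 if the signed receipt in rcms matches the original message. */
int check_receipt(CMS_ContentInfo *rcms, CMS_ContentInfo *ocms,
                  STACK_OF(X509) *certs, X509_STORE *store)
{
    if (CMS_verify_receipt(rcms, ocms, certs, store, 0) != 1) {
        ERR_print_errors_fp(stderr); /* report why verification failed */
        return 0;
    }
    return 1;
}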
https://manpages.org/cms_verify_receipt/3
CC-MAIN-2022-21
en
refinedweb
This article explains how to calculate basic statistics such as average, standard deviation, and variance of a NumPy array along an axis.

TLDR: To average a NumPy array x along an axis, call np.average() with arguments x and the axis identifier. For example, np.average(x, axis=1) averages along axis 1. The outermost dimension has axis identifier "0", the second-outermost dimension has identifier "1". Python collapses the identified axis and replaces it with the axis average, which reduces the dimensionality of the resulting array by one.

Feel free to watch the video while skimming over the article for maximum learning efficiency.

Graphical Explanation

Here's what you want to achieve: extracting basic statistics such as average, variance, and standard deviation from NumPy arrays and 2D matrices is a critical component for analyzing a wide range of data sets such as financial data, health data, or social media data. With the rise of machine learning and data science, your proficiency with linear algebra operators in NumPy becomes more and more valuable to the marketplace.

Code Solution

Here is how you can accomplish this task in NumPy:

import numpy as np

x = np.array([[1, 3, 5],
              [1, 1, 1],
              [0, 2, 4]])

print(np.average(x, axis=1))
# [3. 1. 2.]
print(np.var(x, axis=1))
# [2.66666667 0.         2.66666667]
print(np.std(x, axis=1))
# [1.63299316 0.         1.63299316]

Slow Explanation

Next, I'll explain it step by step. NumPy internally represents data using NumPy arrays (np.array). These arrays can have an arbitrary number of dimensions. In the figure above, we show a two-dimensional NumPy array, but in practice, the array can have much higher dimensionality. You can quickly identify the dimensionality of a NumPy array by counting the number of opening brackets "[" when creating the array. (The more formal alternative would be to use the ndim property.) Each dimension has its own axis identifier.

Rule of thumb: The outermost dimension has the identifier "0", the second-outermost dimension has the identifier "1", and so on.

By default, the NumPy average, variance, and standard deviation functions aggregate all the values in a NumPy array to a single value.

Do you want to become a NumPy master? Check out our interactive puzzle book Coffee Break NumPy and boost your data science skills!

Simple Average, Variance, Standard Deviation

What happens if you don't specify any additional argument apart from the NumPy array on which you want to perform the operation (average, variance, standard deviation)?

import numpy as np

x = np.array([[1, 3, 5],
              [1, 1, 1],
              [0, 2, 4]])

print(np.average(x))
# 2.0
print(np.var(x))
# 2.4444444444444446
print(np.std(x))
# 1.5634719199411433

For example, the simple average of a NumPy array is calculated as follows:

(1+3+5+1+1+1+0+2+4)/9 = 18/9 = 2.0

Calculating Average, Variance, Standard Deviation Along an Axis

However, sometimes you want to calculate these functions along an axis. For example, you may work at a large financial corporation and want to calculate the average value of a stock price, given a large matrix of stock prices (rows = different stocks, columns = daily stock prices).
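As a quick illustrative aside (not from the original article), here is the contrast between axis=0 and axis=1 on the small array used earlier; axis=0 collapses the rows and yields per-column statistics, while axis=1 collapses the columns and yields per-row statistics:

import numpy as np

x = np.array([[1, 3, 5],
              [1, 1, 1],
              [0, 2, 4]])

print(np.average(x, axis=0))  # per-column averages: [0.66666667 2. 3.33333333]
print(np.average(x, axis=1))  # per-row averages:    [3. 1. 2.]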
Here is how you can do this by specifying the keyword "axis" as an argument to the average, variance, and standard deviation functions:

import numpy as np

## Stock price data: 5 companies
# (row = [price_day_1, price_day_2, ...])
x = np.array([[8, 9, 11, 12],
              [1, 2, 2, 1],
              [2, 8, 9, 9],
              [9, 6, 6, 3],
              [3, 3, 3, 3]])

avg, var, std = np.average(x, axis=1), np.var(x, axis=1), np.std(x, axis=1)

print("Averages: " + str(avg))
print("Variances: " + str(var))
print("Standard Deviations: " + str(std))

"""
Averages: [10.   1.5  7.   6.   3. ]
Variances: [2.5  0.25 8.5  4.5  0.  ]
Standard Deviations: [1.58113883 0.5        2.91547595 2.12132034 0.        ]
"""

Note that you want to perform these three functions along axis=1, i.e., this is the axis that is aggregated to a single value. Hence, the resulting NumPy arrays have a reduced dimensionality.

High-Dimensional Averaging Along an Axis

Of course, you can also perform this averaging along an axis for high-dimensional NumPy arrays. Conceptually, you'll always aggregate the axis you specify as an argument. Here is an example:

import numpy as np

x = np.array([[[1, 2], [1, 1]],
              [[1, 1], [2, 1]],
              [[1, 0], [0, 0]]])

print(np.average(x, axis=2))
print(np.var(x, axis=2))
print(np.std(x, axis=2))

"""
[[1.5 1. ]
 [1.  1.5]
 [0.5 0. ]]
[[0.25 0.  ]
 [0.   0.25]
 [0.25 0.  ]]
[[0.5 0. ]
 [0.  0.5]
 [0.5 0. ]]
"""

Where to Go From Here?

Solid programming skills are the foundation of your thorough education as a data scientist and machine learning expert. Master Python first! Join more than 55,000 email subscribers and download your personal Python cheat sheets as high-resolution PDFs. Print them, study them, and keep consulting them daily until you master every bit of Python syntax by heart.
https://blog.finxter.com/numpy-average-along-axis/
CC-MAIN-2022-21
en
refinedweb
Effector is a brand new reactive state manager. Its ambitious team aims to solve all the problems that existing solutions have. Writing the core of the library from scratch took several attempts across six months, and recently the team released the first stable release. In this article, I will show why I prefer using Effector for my new projects instead of other state managers. Let's get started with the Effector API.

Basics

Effector uses two concepts you might already be familiar with: store and event.

A store is an object that holds some value. We can create stores with the createStore helper:

import {createStore} from 'effector'

const counter = createStore(0) // create store with zero as default value

counter.watch(console.log) // watch store changes

Stores are lightweight, so whenever you need to introduce some state to your app, you simply create a new store.

So how do we update our store? Events! You create events with the createEvent helper and have your store updated by reacting on them:

import {createStore, createEvent} from 'effector'

const increment = createEvent('increment')
const decrement = createEvent('decrement')
const resetCounter = createEvent('reset counter')

const counter = createStore(0)
  .on(increment, state => state + 1) // subscribe to the event and return new store value
  .on(decrement, state => state - 1)
  .reset(resetCounter)

counter.watch(console.log)

An event is like an "action" in Redux terms, and store.on(trigger, handler) is somewhat like createStore(reducer). Events are just functions that can be called from any place in your code. Effector implements the Reactive Programming paradigm: events and stores are considered reactive entities (streams, in other words), and they have a watch method which allows subscribing to events and store changes.

Integration with React

A component can connect to the store by calling the useStore hook from the effector-react package. Effector events can be passed to child React elements as event handlers (onClick, etc.).

import React from 'react'
import ReactDOM from 'react-dom'
import {createStore, createEvent} from 'effector'
import {useStore} from 'effector-react'

const increment = createEvent('increment')
const decrement = createEvent('decrement')
const resetCounter = createEvent('reset counter')

const counter = createStore(0)
  .on(increment, state => state + 1)
  .on(decrement, state => state - 1)
  .reset(resetCounter)

counter.watch(console.log)

const Counter = () => {
  const value = useStore(counter) // subscribe to store changes

  return (
    <>
      <div>Count: {value}</div>
      <button onClick={increment}>+</button>
      <button onClick={decrement}>-</button>
      <button onClick={resetCounter}>reset</button>
    </>
  )
}

const App = () => <Counter />

const div = document.createElement('div')
document.body.appendChild(div)
ReactDOM.render(<App/>, div)

Integration with other frameworks

Vue

There is the effector-vue package.

Svelte

Effector stores are Observable, so you don't need any additional packages to use them in Svelte.
Simply prepend $ to the store's name in your template:

// Counter.svelte
<script context="module">
  import effector from 'effector/effector.umd.js';

  export const increment = effector.createEvent('increment')
  export const decrement = effector.createEvent('decrement')
  export const resetCounter = effector.createEvent('reset counter')

  export const counter = effector.createStore(0)
    .on(increment, (n) => n + 1)
    .on(decrement, state => state - 1)
    .reset(resetCounter)
</script>

// App.svelte
<script>
  import { counter, increment, decrement, resetCounter } from './Counter.svelte'
</script>

<div>Count: {$counter}</div>
<button on:click={increment}>+</button>
<button on:click={decrement}>-</button>
<button on:click={resetCounter}>reset</button>

Side effects

With Effector you don't need thunks or sagas to handle side effects. Effector has a convenient helper called createEffect that wraps an async function and creates three events that your store can subscribe to: an initializer (the effect itself) and two resolvers called done and fail.

const getUser = createEffect('get user');

getUser.use(params => {
  return fetch(`…`)
    .then(res => res.json())
})

// OR

const getUser = createEffect('get user', {
  handler: params => fetch(`…`)
    .then(res => res.json())
})

const users = createStore([]) // <-- default state
  // getUser.done is the event that fires whenever a promise returned by the effect is resolved
  .on(getUser.done, (state, {result, params}) => [...state, result])

Advanced usage: combine, map

One of the awesome features of Effector is computed stores. Computed stores can be created using either the combine helper or the .map method of a store. This allows subscribing only to changes that matter to the particular component. In React apps, performance may be heavily impacted by unnecessary state updates, so Effector helps eliminate them.

combine creates a new store that calculates its state from several existing stores:

const balance = createStore(0)
const username = createStore('zerobias')

const greeting = combine(balance, username, (balance, username) => {
  return `Hello, ${username}. Your balance is ${balance}`
})

greeting.watch(data => console.log(data)) // Hello, zerobias. Your balance is 0

map allows creating derived stores:

const title = createStore("")
const changed = createEvent()
const length = title.map((title) => title.length)

title.on(changed, (oldTitle, newTitle) => newTitle)

length.watch((length) => console.log("new length is ", length)) // new length is 0

changed("hello")       // new length is 5
changed("world")
changed("hello world") // new length is 11

Comparison with other state managers

Redux

- Most projects that use Redux implement the whole application state in a single store. Having multiple stores isn't forbidden, but doing this right is kind of tricky. Effector is built to work with lots of different stores simultaneously.
- Redux is very explicit but very verbose as well. Effector requires less boilerplate code, but all state dependencies are still explicit.
- Redux was originally written in pure JS and without static typing in mind. Effector has much wider typing support out of the box, including type inference for most helpers and methods.
- Redux has great dev tools. Effector somewhat lags right now, but the team already has plans for dev tools that visually represent your application as a graph of connected stores and events.

MobX

- When minified and gzipped, MobX is almost 20 kb (14.9 kb + 4.6 kb for React bindings), while Effector is less than 8 kb (5.8 kb + 1.7 kb for React).
- MobX has a lot of magic inside: implicit subscriptions to observable data changes, "mutable" state objects that use Proxies under the hood to distribute updates, etc. Effector uses immutable state, explicitly combines stores' state, and only allows changing it through events.
- MobX encourages keeping your data model close to the view. With Effector, you can completely isolate the data model and keep your UI components' API clean and simple.
- MobX may be difficult to use with custom data structures.

RxJS

- Strictly speaking, although RxJS solves many tasks, it's a reactive extensions library, not a state management tool. Effector, on the other hand, is designed specifically for managing application state and has a small API that is easy to learn.
- RxJS is not "glitch-free". In particular, synchronous streams for computed data do not produce consistent updates by default: see an example of how different reactive state management tools handle this task.

Why I chose Effector

Here's a list of things I consider to be Effector's advantages over most similar tools:

- Expressive and laconic API.
- Reactive programming paradigm at its core.
- Stable and production-ready.
- Great performance; also, I don't see any memory leaks.
- Motivated team, great community.

Conclusion

Effector is not a silver bullet, but it's certainly a fresh take on state management. Do not be afraid to try something new and diverge from the most popular solutions. Interested? Try Effector now!

Thanks

- Andrey Sitnik @ai - article promotion
- Alexander Kladkov @A1992 - fact checking
- Artyom Arutyunyan @artalar - fact checking
- Alexander Chudesnov - proofreading, editing

Discussion (2)

Reminds me of Undux. undux.org/

This has BuckleScript types. Sold.
https://dev.to/lessmess/why-i-choose-effector-instead-of-redux-or-mobx-3dl7
CC-MAIN-2022-21
en
refinedweb
On Wed, May 16, 2001 at 02:36:44PM -0700, H. Peter Anvin wrote:
> But.
>
> It's still completely braindamaged: (a) these interfaces aren't
> disjoint. They refer to the same device, and will interfere with each
> other; (b) it is highly undesirable to tie the naming to the interfaces
> in this way. It further restricts the namespaces you can export, for one
> thing.

We do this already with ide-scsi. A device is visible as /dev/hda and /dev/sda at the same time. Or think IDE-CDRW: /dev/hda, /dev/sr0 and /dev/sg0. All at the same time.

It is perfectly normal to export different interfaces for the same device. This is basically what subfunctions on PCI do: same device with different interfaces. Just that we do it through a driver with IDE and through the hardware with a multi-function PCI card.

Applications don't care about devices. They care about entities that have capabilities and programming interfaces. What they _really_ are, and whether this is only emulated, is not important.

Sorry, I don't see your point here :-)
https://lkml.org/lkml/2001/5/16/123
CC-MAIN-2022-21
en
refinedweb
#include <LazySilentFileJobInputter.hh>

References protocols::jd2::tr.

This function determines what jobs exist. It neither knows nor cares what jobs are already complete on disk or in memory; it just figures out which ones should exist given the input. NOTE: your JobInputter should order Job objects in the Jobs vector so as to have as few "transitions" between inputs as possible (group all Jobs of the same input next to each other). This improves the efficiency of the "FAIL_BAD_INPUT" functionality. Note I said "should", not "must".
Implements protocols::jd2::JobInputter.
References nstruct, option, runtime_assert, tags, and protocols::jd2::tr.

Returns the type of input source that the JobInputter is currently using.
Implements protocols::jd2::JobInputter.
References protocols::jd2::JobInputterInputSource::SILENT.

Implements protocols::jd2::JobInputter.
References core::pose::Pose::clear(), protocols::jd2::Job::inner_job(), protocols::jd2::Job::input_tag(), tag_into_pose(), and protocols::jd2::tr.

References protocols::jd2::Job::inner_job(), option, and utility_exit_with_message.
Referenced by silent_file_data().
https://www.rosettacommons.org/manuals/archive/rosetta_2014.35.57232_user_guide/protocols/db/dac/classprotocols_1_1jd2_1_1_lazy_silent_file_job_inputter.html
CC-MAIN-2022-21
en
refinedweb
xml_schema_namespace (Transact-SQL)

Updated: June 10, 2016

Applies to: SQL Server (starting with 2008), Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse

Return type: xml

The following example retrieves the XML schema collection ProductDescriptionSchemaCollection from the production relational schema in the AdventureWorks2012 database.

See Also:
View a Stored XML Schema Collection
XML Schema Collections (SQL Server)
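The example query itself did not survive extraction. A plausible reconstruction, based on the function's documented signature xml_schema_namespace(relational_schema, xml_schema_collection_name), would be:

USE AdventureWorks2012;
GO
-- Retrieve the stored XML schema collection as an xml value
SELECT xml_schema_namespace(N'production', N'ProductDescriptionSchemaCollection');
GO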
https://msdn.microsoft.com/en-us/library/ms190316.aspx
CC-MAIN-2017-09
en
refinedweb
I have been trying to resolve this all day, and finally I come to you all. The task is simple: I need to set the language in the URL, so it looks something like this:

domain.com/{langVar}/other/paths

and be able to change it by clicking/selecting a language in my app's header or any other component. Important: the language variable should always remain in the URL. I am using "react-router": "^2.7.0" and "react": "^15.3.1". This is how my router config looks:

export default (
  <Router history={browserHistory}>
    <Route path="/:lang" component={MainApp}>
      <IndexRoute component={Home} />
      <Route component={OtherPage} />
    </Route>
    <Route path='*' component={NotFound} />
  </Router>
);

Extending this Stack Overflow question, I added a function called userRedirect, which is triggered when no matching route is found. Note: the / following the :lang argument in <Route path=":lang/"> is very important; it is what causes our * route to get hit (as explained in the Stack Overflow link shared above).

import React from 'react';
import { Route } from 'react-router';
import { App, About } from './containers';

function userRedirect(nextState, replace) {
  var defaultLanguage = 'en-gb';
  var redirectPath = '/' + defaultLanguage + nextState.location.pathname; // e.g. '/en-gb/about/show'

  replace({
    pathname: redirectPath,
  });
}

<Route path="/" component={App}>
  <Route path=":lang/">
    <Route path="about">
      <Route path="show" component={About}/>
    </Route>
  </Route>
  <Route path="*" onEnter={userRedirect} />
</Route>

If you navigate to the URL <domain>/about/show, it will be redirected to <domain>/en-gb/about/show. Hope this is what you were looking for.
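For the second half of the question (changing the language from a header component), here is a hedged sketch that is not taken from the original answer; it simply rewrites the :lang segment of the current path and pushes the result:

import { browserHistory } from 'react-router';

// e.g. switchLanguage('/en-gb/about/show', 'fr-fr') -> '/fr-fr/about/show'
function switchLanguage(currentPath, newLang) {
  var parts = currentPath.split('/'); // ['', 'en-gb', 'about', 'show']
  parts[1] = newLang;
  browserHistory.push(parts.join('/'));
}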
https://codedump.io/share/w9dH70ehJVUS/1/how-to-set-and-handle-language-in-react-router-from
CC-MAIN-2017-09
en
refinedweb
Vol. 17 No. 1 Spring/Summer 2013 Journey to India Journey Inward FREE SHIPPING Contents Yoga Samachar’s Mission Letter From the President — Janet Lilly . . . . . . . . . . . . . . 2,. org, Yoga Samachar is designed to provide interesting and useful information to IYNAUS members to: News From the Regions . . . . . . . . . . . . . . . . . . . . . 3 Journey to India, Journey Within —Tori Milner . . . . . . . . . 6 Early Days at RIMYI Public Classes in Poona — Fred Smith . . . . . . . . . . . . . 10 Pune Without Pollution (Almost) — Joan White . . . . . . . . . 11 First Impressions — Bobby Clennell . . . . . . . . . . . . . 14 Diary Excerpts Gifts From the Source — Sharon Conroy . . . . . . . . . 17 • Promote the dissemination of the art, science, and philosophy of yoga as taught by B.K.S. Iyengar, Geeta Iyengar, and Prashant Iyengar Finding the Grill — Vicky Grogg . . . . . . . . . . . . . . . 18 • Communicate information regarding the standards and training of certified teachers Traveling to India: Two Trips in One — Siegfried Bleher . 20 • Report on studies regarding the practice of Iyengar Yoga History and Highlights of the Pune Guide — Denise Weeks . . . 22 • Provide information on products that IYNAUS imports from India Guruji’s Birthday Gifts and Maitri in Bellur — Gaye Painten . . . 25 Paksha Pratipaksha on Results-Oriented Versus Indifference — Robin Lowry . . . . . . . . . . . . . . . . . . . . . . . . 29 Samachar Sequence Jet Lag Sequence — Julie Lawrence . . . . . . . . . . . . . . . 32 2012 Iyengar Yoga Assessments . . . . . . . . . . . . . . . . 33 • Review and present recent articles and books written by the Iyengars • Report on recent events regarding Iyengar Yoga in Pune and worldwide • Be a platform for the expression of experiences and thoughts from members, both students and teachers, about how the practice of yoga affects their lives Musings Memory — Carrie Owerko . . . . . . . . . . . . . . . . . . . 34 • Present ideas to stimulate every aspect of the reader’s practice Book Review Yoga Philosophy On and Off the Mat: B.K.S. Iyengar’s Core of the Yoga Sutras — Peggy Hong . . . . . . . . . . . . Yoga Samachar is produced by the IYNAUS Publications Committee 36 Classfieds . . . . . . . . . . . . . . . . . . . . . . . . . . 37 Treasurer’s Report – IYNAUS Finances — David Carpenter . . . . . . . . . . . . . . . . . . . . . . 38 On the Rolling Seas — Mary Ann Travis . . . . . . . . . . . . 41 IYNAUS Board Member Contact List Please contact your board members at iynaus-board-staff. David Carpenter dcarpenter@sidley.com Kathy Simon kathyraesimon@gmail.com Alex Cleveland clevelandalex@yahoo.com Eric Small ericsmall@yogams.com Kevin Hainley khainleyoga@cox.net Nancy Watson nancyatiyanus@aol.com Rebecca Lerner rlerner108@comcast.net Denise Weeks denise.iynaus@gmail.com Janet Lilly lilly.janet@gmail.com Sharon Cowdery (General Manager) generalmanager@iynaus.org Michael Lucey 1michael.lucey@gmail.com Tori Milner torimilner@yahoo.com Mary Reilly maryreilly36@gmail.com Phyllis Rollins phyllis204@bellsouth.net Contact IYNAUS: P.O. Box 538 Seattle WA 98111 206.623.3562 Editor: Michelle D. Williams Copy Editor: Denise Weeks Design: Don Gura Sept. 1. Please send queries to yogasamachar@iynaus.org in advance. Advertising Yoga Samachar is now accepting paid advertising. Fullpage, half-page and quarter-page ads are available for placement throughout the magazine, and a classified advertising section is available for smaller ads. All advertising is subject to IYNAUS board approval. 
For more information, including rates, artwork specifications, and deadlines, please go to. Cover photo by Jake Clennell Spring /Summer 2013 Yoga Samachar 1 IYNAUS Officers and Standing Committees Letter from the President President: Janet Lilly Vice President: Michael Lucey Secretary: Denise Weeks Treasurer: David Carpenter Dear Fellow IYNAUS Members, Archives Committee I would like to begin by thanking our former IYNAUS board president Chris Beach and Eric Small, Chair Kim Kolibri, Director of Archives Lindsey Clennell, Elaine Hall, Linda Nishio, Deborah Wallach ByLaws Committee outgoing board members Patrina Dobish, Leslie Freyberg, Elizabeth Hynes, and Christine Nounou. This dedicated team contributed so much to the association and has left behind large shoes to fill for those of us following in their footsteps! Janet Lilly, Chair David Carpenter, Kevin Hainley, David Larsen Certification Committee Mary Reilly, Chair Marla Apt, Linda DiCarlo, James Murphy, Lois Steinberg Elections Committee Alex Cleveland, Chair Chris Beach, Patrina Dobish Ethics Committee Rebecca Lerner, Chair One of the initiatives that the previous board oversaw was the 2011 amendment of the IYNAUS Bylaws, which brought us more into alignment with the Pune Constitution. The most obvious benefit of this change was apparent at our November 2012 board meeting. For the first time, the incoming board members sitting around the table— Alex Cleveland, Kevin Hainley, Tori Milner, Kathy Simon, Eric Small, Nancy Watson, and Denise Weeks—had been elected or appointed from their regional associations. Joan White, Sue Salaniuk, Michael Lucey Events Committee Nancy Watson, Chair Patrina Dobish, Gloria Goldberg, Diana Martinez, Phyllis Rollins Finance Committee David Carpenter, Chair Kevin Hainley, Janet Lilly Membership Committee Phyllis Rollins, Chair IMIYA – Leslie Bradley IYAGNY – Elisabeth Pintos IYAMN – Elizabeth Cowan IYAMW – Becky Meline IYANC – Risa Blumlien IYANE – Kathleen Swanson IYANW – Margrit von Braun IYASC-LA – Kat Lee Shull IYASC-SD – Lynn Patton IYASCUS – Michelle Mock IYASE – Diana Martinez IYASW – Lisa Henrich Publications Committee Tori Milner, Chair Carole Del Mul, Don Gura, Richard Jonas, Pat Musburger, Phyllis Rollins, Denise Weeks, Michelle D. Williams Public Relations and Marketing Committee Tori Milner, Chair David Carpenter, Sharon Honeycutt, Michael Lucey Regional Support Committee Alex Cleveland, Chair IMIYA – Melody Madonna IYAGNY – Ann McDermott-Kave IYAMN – Katy Olson IYAMW – Jennie Williford IYANC – Heather Haxo Phillips IYANE – Jarvis Chen IYANW – Anne Geil IYASC-LA – Kat Lee Shull IYASC-SD – Lynn Patton IYASCUS – Anne Marie Schultz IYASE – Alex Cleveland IYASW – Lisa Henrich and Josephine Lazarus Scholarship and Awards Committee Denise Weeks, Chair Chris Beach, Leslie Freyberg, Richard Jonas, Lisa Jo Landsberg, Pat Musburger, John Schumacher Service Mark & Certification Mark Committee Gloria Goldberg, Attorney in Fact for B.K.S. Iyengar Rebecca Lerner, Board Liaison Systems & Technology Committee Kevin Hainley, Chair Ed Horneij, William Mckee, David Weiner Yoga Research Committee Kathy Simon, Chair Jerry Chiprin, Jean Durel, Alicia Rowe, Kimberly Williams This issue of Yoga Samachar, “Journey to India, Journey Inward,” includes stories from members about their experiences studying at the Ramamani Iyengar Memorial Yoga Institute (RIMYI). 
Having just returned from my sixth course of study at RIMYI, I remain amazed by the depth of knowledge and continued innovation that the Iyengars bring to their daily teaching. It is gratifying to witness what an international center RIMYI has become with students from all over the world, including China and the Middle East, coming to study with the Iyengar family. As a professor of dance by profession, while in India, I also travel to other cities to teach dance and choreography. When I mention that I study Iyengar Yoga, I am always struck by the degree of respect people have for the Iyengars and their method. This observation has led me to reflect that there is often not the same familiarity with the value of the Iyengar method in the United States. I hope this is not everyone’s experience, but many IYNAUS-certified teachers have contacted board members requesting that IYNAUS help them find ways to increase public exposure to the teachings of B.K.S. Iyengar. To this end, IYNAUS hosted an in-service panel, “Building Your Student Base,” at the Sarvabhauma San Diego Convention. This event invited convention attendees to join a panel discussion of strategies for creating thriving Iyengar method classes in a crowded marketplace. We are particularly grateful to our panelists—Peggy Hong, Holly Hughins, Randy Just, Pat Musburger, and John Schumacher— for volunteering their time and expertise. The board received many positive responses to IYNAUS Treasurer David Carpenter’s Financial Report featured in the Fall 2012/Winter 2013 issue of Yoga Samachar. We plan to include financial updates in subsequent issues with the goal of keeping members informed about the factors involved in making fiscal decisions that best serve all of our members. To this end, we also are initiating a multistaged strategic planning process that begins with addressing the core values of the organization and imagining our collective future. I know that I speak for the entire IYNAUS board when I say that we look forward to hearing your thoughts and questions. Please don’t hesitate to let us know how we can serve the Iyengar Yoga community better. With many thanks, Janet Lilly, President Iyengar Yoga National Association of the United States IYNAUS Senior Council Chris Saudek, John Schumacher, Patricia Walden 2 Yoga Samachar Spring /Summer 2013 News from the regions IMIYA Yogathon, the centerpiece of the afternoon, features yoga The InterMountain Iyengar Yoga Association (IMIYA) launched demonstrations by teachers, students, and board members who “Studio Walk” as a way for members and other students to hold a pose, repeat a pose, or show creativity, with sponsors connect and to reach out to people new to the Iyengar method. pledging them. This year’s profits will support the association Each month between May and September 2013, a different and help outfit the new Brooklyn Institute. member-owned studio is hosting an IMIYA-sponsored class taught by a certified Iyengar Yoga instructor. Classes are free The Brooklyn Institute furthers IYAGNY’s mission of bringing to IMIYA members and nonmembers alike. For schedule Iyengar Yoga to as many people as possible while increasing information, contact Angie Woyar at manager@ opportunities for teachers. Many current Institute students and iyengaryogacenter.com. Also, IMIYA can provide marketing teachers live in Brooklyn, and the borough has been in for your studio event. IYAGNY’s expansion plans for years. 
IMIYA will hold the Second Annual Iyengar Yoga Day, Saturday, A goal of $300,000 has been set for outfitting the studio, and Oct. 5, 2013. Iyengar Yoga Day will be held at Iyengar Yoga more than $160,000 has been raised so far. Donations are Center Denver. This year, Yoga Day teachers will offer eight 1.5- welcome; please go to for details. hour classes and four alternative classes. All-day pass holders can choose four sessions. Sessions will be taught by a variety of 25 Years Lighting the Way certified Iyengar Yoga teachers. A complete listing of teachers, Auspiciously timed to coincide with Diwali—India’s festival of descriptions of their classes, and times they’ll teach each class lights—IYAGNY celebrated its 25th anniversary with a gala will be available on the IMIYA website no later than July 15, 2013. themed “25 Years Lighting the Way.” On Nov. 14, 2012, nearly 200 IYAGNY association students, teachers, and supporters enjoyed a night of festive delights and appreciation. Via a video recording, Abhijata Iyengar welcomed attendees and expressed her Bridges to Brooklyn gratitude for all that IYAGNY is doing to promote Iyengar Yoga. The Mary Dunn Celebration/Yogathon, the yearly event that brings the Iyengar Yoga Association of Greater New York Sponsored by Dana and Michael Goldstein, the celebration (IYAGNY) together, took place on Sunday, June 2. This year’s honored Martha Stewart, who has long championed Iyengar edition, “Bridges to Brooklyn,” spotlighted IYAGNY’s soon-to- Yoga, and also recognized Judy Brick Freedman and Carol open second studio in Brooklyn. Eugenia Burns, two founding IYAGNY teachers. Proceeds from the event will support the association’s three-fold mission of The Institute has hosted the celebration since 2005. Students, enabling progressive lifelong learning and practice of Iyengar teachers, and association members attend special classes, Yoga, offering teaching of the highest standards, and fostering a including the annual Spirit of Mary Dunn class, in which community of practitioners within New York, New Jersey, association teachers pay tribute to Mary’s teachings by Connecticut, and Pennsylvania. presenting an asana as they remember her teaching it. Architectural rendering of the lobby at the new IYAGNY Brooklyn Institute by Mitchell B. Owen Spring /Summer 2013 Yoga Samachar The Sa Dance Company performs at the IYAGNY 25th anniversary celebration. (Photo by Liam Cunningham) 3 News from the regions Among the specific association initiatives funded in part by gala donations are the opening of the Iyengar Yoga Institute of Brooklyn, as well as a continuation of a student scholarship program, specific needs classes, and free classes for amputees, survivors of breast cancer, students living with HIV, and veterans of the U.S. Armed Forces. IYAGNY would like to thank supporters and volunteers who helped make the 25th anniversary celebration a success, and who continue to help IYAGNY thrive. To view more photos from the 25th anniversary celebration visit iyengarnyc.org and click on the “Photos” tab. IYAMN In November, the Iyengar Yoga Association of Minnesota (IYAMN) was privileged to host Jawahar Bangera from Mumbai, India. In addition to managing two yoga centers in Mumbai, Students participate in a three-day teacher-training course at the Yoga Institute of Champaign Urbana. IYANC Jawahar is also a director of the Iyengar Institute in Pune and a The Iyengar Association of Northern California (IYANC) has Trustee of the Light on Yoga Research Trust. 
He also has seen a renewed interest in supporting regional activities beyond traveled to many conventions with Guruji over the years, so we the Institute in San Francisco. There is interest in establishing a felt very fortunate to have him teach here. This was Jawahar’s regional committee that would elevate awareness and attract first visit to Minnesota. On Nov. 1–2 he taught a series of students to Iyengar Yoga throughout Northern California, intermediate classes at the B.K.S. Iyengar Yoga Center of support certified teachers in our region, and create a more Minneapolis, followed by two days of general classes at the meaningful sangha (community or association) among Iyengar Minneapolis Yoga Workshop, concluding his workshop with a practitioners. Specific goals include increasing the number of pranayama class. His classes gave students a wealth of asana members and certified teachers in our region. Next steps are to instruction and knowledge of the Iyengar method threaded formalize the committee and create an action plan for with philosophical insights into the practice of yoga. His long 2013/2014. For more information or to get involved with the association with the Iyengars provided a sense of the history regional committee, please email s.l.wilner@gmail.com. and depth of study that his teachers engage in. It was an inspiring weekend for everyone. The Iyengar Yoga Institute of San Francisco (IYISF) is excited to announce a new, free restorative class for members. The class On Dec. 14, IYAMN held one of its biannual Yoga Days to celebrate will take place quarterly and be taught by local Iyengar Yoga Guruji’s birthday. The event was held at the Saint Paul Yoga certified teachers. The next class is on June 15, and details are Center, and William Prottengeier donated his teaching. After class available at under Community Events. there was a celebration of Guruji’s birthday with tea and cake. IYAMN Yoga Days provide the Minneapolis and greater region an In April, IYISF held a successful Yogathon, bringing together opportunity for members to connect with each other and build members of the Northern California community to practice all community. These events allow students to celebrate their 108 asanas while raising funds for the Institute. IYISF has dedication and devotion to the Iyengars and the subject of yoga. hosted this fundraiser for seven years, but this was the first IYAMW A member studio of the Iyengar Yoga Association of the Midwest (IYAMW), the Yoga Institute of Champaign Urbana time the event has been open to beginner students. Beginners were invited to participate for the first 54 asanas and stay for the rest of the event, which included a movie, potluck, and prizes. recently held a three-day teacher education course. Dr. Sucheta IYASCUS Paranjape from Pune joyfully lectured on The Bhagavad Gita, The Iyengar Yoga is alive and well in the South Central region. Yoga Sutras of Patanjali, and The Upanishads. Student teachers practiced their syllabi and taught in groups as well as teaching U.S. (IYASCUS) blog at , written and mock assessments and learning adjustments. maintained by Karen Phillips. 4 Yoga Samachar Spring /Summer 2013 Austin Yoga Institute has moved to a new location and Student projects ranged from poems to glass sculptures to a sponsored George Purvis and Gabriella Guibilaro this spring. 10-foot by 3-foot poster created by Inge Mullerup-Brookhuis. 
The Boerne Yoga Center, also in a new location, hosted H.S Arun The colorful poster covered all kinds of mind and brain in May. Arun also will teach workshops in Denton, Dallas, and activities and would have fit in at a science fair. Austin this summer. O’Bannon says she made the assignment “because I feel too Many of our local teachers are quite busy as well: many people are turned away from their creativity as children. • George Purvis (Senior Intermediate III) came to San Marcos Many never know the beauty that lies with them.” Yoga unlocks School of Yoga in the fall and visited Austin and Houston this spring. • Randy Just (Junior III) teaches numerous workshops around this inner beauty—and “connects us to our soul.” IYASW On Nov. 3–4, 2012, Open Spaces Yoga in Pinetop, Ariz., hosted the region and is involved with a teacher-training program the first membership workshop for the newly formed Iyengar at his studio and in the San Angelo area. Yoga Association of the Southwest (IYASW). Taught by certified instructor Josephine Lazarus, the workshop theme was • Peggy Kelley (Junior III) has been traveling to Mexico quite “Opening to Transformation,” based on a sequence developed frequently. She helps with assessments in Mexico and does by B.K.S. Iyengar and Manouso Manos. Thirteen students teacher training for studios in Veracruz and elsewhere. attended the workshop—many of which were new to yoga or the Iyengar method. • Pauline Schloesser (Introductory II) is hosting a series of special Saturday workshops at Alcove Studio in Houston, Senior teacher Caroline Belko taught a weekend workshop at and Devon Dederich is offering a series of Saturday classes Scottsdale Community College Feb. 22–24, 2013. Caroline is a on how to use props at Clear Spring Studio in Austin. regular instructor in the ongoing teacher-training program. • Anne-Marie Schultz (Introductory II) maintains the Iyengar Tucson is not the dry desert after all. Life-giving showers have Yoga in Austin blog as well as a Teaching Philosophy and supported the B.K.S. Iyengar Studio in the form of a Rita Lewis Yoga blog. Both blogs (iyengaryogainaustin.blogspot.com Manos workshop in February. Just back from Pune, Rita shared and teachingphilosophyandyoga.blogspot.com) have more the messages from Guruji’s morning classes. than 1,000 views per month. IYASE Dean Lerner will offer a workshop at B.K.S. Iyengar Yoga Tucson in October, marking more than 20 visits to this studio. Everyone in A member studio of the Iyengar Yoga Association of the Arizona appreciates the willingness of senior teachers to come to Southeast (IYASE), Audubon Yoga Studio in New Orleans hosted a small community over many years to share their knowledge. Karin O’Bannon for a teacher training in January. O’Bannon is an inspirational trainer of teachers and a yoga practitioner who urges us to teach from our intuition. “Give up your analytical mind,” she says. “Be one with your students.” O’Bannon (Intermediate Senior III) has taught yoga in Los Angeles; Rishikesh, India; Kuala Lumpur, Malaysia; Brazil; and most recently, China. Since 2009, she has been traveling from her current home in Shreveport, La., to New Orleans to conduct teacher-training workshops five times a year at Audubon Yoga Studio, which is owned and directed by Becky Lloyd. In January, O’Bannon gave participants in this year’s teachertraining program an open-ended assignment: Look at all parts of citta, and come up with a way to relate them to each other. Be creative, she said. 
Make a chart, a poem, a play, or a picture. Spring /Summer 2013 Yoga Samachar Students participate in the first membership workshop in the Southwest region. 5 Journey to India Journey Within W By Tori Milner hen I reflect on my five trips to India so far, I am most struck by how each trip is different. Each time I go, I am different. India changes and becomes more Westernized. Being in Pune with the Iyengars and the Indian teachers and students gives me a perspective that I just couldn’t find anywhere else in the world. The journey, like yoga itself, is like a mirror, reflecting back exactly where I am and who I am at that point in time. Illustration by Curtis Settino Photos by Tori Milner 6 Yoga Samachar Spring/Summer 2013 When I first began taking yoga classes at 25, I really wasn’t looking for a journey inward, nor did I think I had any interest in visiting India. I had done yoga out of books with my mother as a child but nothing that stuck. Then I saw my friend’s 65-year-old mother do a headstand and variations at a party to entertain her grandchildren. I was fascinated. She looked so graceful and stable. I was completely inspired. I had a motivation to begin: I wanted to be able to do what she did with her body and look as graceful. But there was something else; I wanted to be able to concentrate like that. The closest yoga center happened to offer Iyengar Yoga. I took one class a week for about six months. Then it crept up to two, three, and four classes, and before I knew it, I was completely hooked. My first teacher, Joe, used to tell very funny stories of going to study with the Iyengars. I never imagined I would go at some point. years of seeing him in black and white in Light on Yoga. His skin I wound up moving to New York in 1999 and had the good looked soft like a child’s, his body even more supple than the fortune to begin studying with Mary Dunn. She had the most youngsters in the room, and the energy he radiated seemed as unique, inspiring way of speaking to the wholeness of our bright as the sun, lighting up the hall. I was mesmerized. Many humanity as she taught the mechanics of the postures, not days I would set up near him during practice to catch what was only instructing us how to do but also how to be. She opened a happening and with the exciting and terrifying hope that he window into a view of myself and taught me how to relate the might “notice me.” After two months there, I realized that asanas to life. She taught me how to use my arms and legs to whether or not he noticed me was not the point. I was there to serve the greater whole, and also how to use my senses to take note of him and what he was teaching, how he was discover the core of my being. I realized that asana was not only practicing, and how he transmitted information to the students about doing but also undoing and even not doing. I began he was working with. assisting classes, shifting from doing to observing, and realized the incredible range of ways people can (or can’t) move. In 2001, Someone told me that if you brought a letter to Geeta, she just a few months before 9/11, I decided I wanted to teach, so I would give you a sequence of your very own. Innocently, one quit my job and enrolled in the two-year program in New York. night after class early the first month, I went up to her, got on I arranged dates to go to RIMYI and study with the Iyengars— my knees and slipped her a letter that I had written. When I June and July 2004. 
Mary suggested going at the start of their raised my head up, she was looking right into my eyes. I had new session in June and that two months were better than one. never felt so seen by another human being. It felt as if she could see straight into my soul. Not sure whether to cry, smile, When I arrived at JFK airport to embark on my trip, a miniature or run, I was determined to stay put and look neutral. I India was taking place in the Air India section. What first immediately sensed that she would not be giving me a personal appeared like a line was a chaotic frenzy—a cluster of activity sequence. As the trip went on, I realized she was giving me that gave me a taste of where I was headed—far away from the something far greater—her time, her energy, her love of familiar, straightforward, organized ways of my American city teaching, and her devotion to the subject of yoga. By absorbing into the mysterious ways of the East. those, I would receive my “answers.” I arrived a few days early to acclimate myself and went to the That first trip, Geeta had recently hurt her arm and was not Institute to watch Geeta teach a class. As I sat on the stairs, I teaching all of her usual classes, so I got to experience a range craned my neck to see the entire room, and suddenly, there he of teachers. They made it so simple! There were so many poses! was. He was on a Viparita Karani box in the middle of the room, When I observed classes, there were so many things that they and I couldn’t believe my eyes. I’m sure I held my breath. It was were letting go. Some of the headstands I saw would have sent captivating and thrilling to see that B.K.S. Iyengar was real— an American teacher into a panic, I thought. And Prashant’s live, three-dimensional and in full Technicolor—after all those classes were a lively forum for “doing, knowing, and Spring /Summer 2013 Yoga Samachar 7 I went up to him to pay my respects, and when I lifted my head back up, he looked at me sternly and said, “So, did you catch something?” understanding” the asanas from his and drop back.” He pointed to the mat right in front of Mr. wonderful perspective of marrying Iyengar. I looked around to make sure nobody else named Tori the mind and breath to the body. was standing behind me. I looked at the mat he was pointing to. “Right there?” I asked, incredulous. “Yes, you have to,” he said, On that first trip, I was extremely pleading with his eyes but not his voice. extroverted. I shopped a lot for myself and bought scarves and My generation has not had the experience of studying directly jewelry. I went to the German Bakery under Guruji. But we are getting a taste of it in the ladies’ class on Sundays. I took side trips to over the past few years. He came to New York to see us perform Mahabeleshwar and the caves. I ate on his Light on Life book tour and bless our then-new institute in the spicy food regularly and got sick 2005. Of course, I had seen him on all my trips to Pune, and I quite a few times because it was so had met him in New York, but I wasn’t sure if he had any idea delicious and I just couldn’t restrain who I was. myself. I planned and held parties to meet my fellow Iyengar Yoga So I stepped onto the mat right in front of Guruji, feeling practitioners from around the world. more vulnerable than I have ever felt. I tried to be brave It was, after all, my first trip. and did my best, but I didn’t feel anywhere near ready to go all the way back to touch the floor, and I didn’t. 
In spite of all that, I had a profound experience in the practice hall and felt truly changed by my first experience in India and with the Iyengars. It was early August when I arrived back home to hot summer in New York City. I was shocked by all the cars, stores, and people—and the amount of skin they were showing! Every time I saw an Indian person or family on the subway, I wanted to rush up to them and explain how I was just transformed by their country. I wanted to tell them I understood India! Luckily for them, I restrained myself.

My fifth and most recent trip was this past October. I went for one month. I lived right next door in my favorite apartment where I have become a regular. It was a relatively quiet month in the practice hall. I did not plan any parties or do much shopping. I wanted to immerse myself in the practice more than ever. I enjoyed morning classes with Prashant, ladies’ classes with Guruji and Abhijata, and pranayama with Geeta. I also went to the library and helped out in the medical classes. Because I have been there quite a few times now, some of the teachers have gotten to know my name.

One day, mid-month, after a morning class with Prashant, I went back to my apartment to have a leisurely cup of tea and some banana before returning to practice. When I got to the hall around 10 a.m., it was fairly busy, so I set up in the middle. I then went to use the ropes because I was planning to do backbends. I was doing simple Ropes 1, static and swinging, as well as some work they had shown us in the ladies’ class. One of the Indian teachers was doing graceful drop backs from Tadasana a few mats down. Guruji was practicing in his usual spot. I heard him say something. Suddenly, the Indian teacher leapt over to me, and said, “Tori, you have to come over here.”

I didn’t want to sacrifice good form just to drop. Truthfully, I hadn’t gone from Tadasana to Urdhva Dhanurasana in quite a while. I had had a lower back issue flare up about a year and a half prior, and I was rebuilding my flexibility, strength, and courage. As Guruji revealed to me that day, it probably had most to do with courage.

Guruji had the assistants tie me incredibly close to the rope wall with a short belt and then insisted that I reach back and touch the floor. I am only five feet tall, and while I am certainly flexible, I was probably at least six inches away from touching the ground. He yelled instructions as I went back, “Press your heels and make your middle fingers HEAVY! Go down from the latissimus! Elbow joints back!” I tried, but they still didn’t touch all the way. I quickly came back up. “Ah, see,” he said to Abhijata and the assistants, “that is called escapism.” I tried again, determined to touch the floor. I still couldn’t reach all the way down. This time, he said, “Lift your kidneys to come back up!” It felt more supported.

I made several attempts from this new position at the wall. It all seemed to go in slow motion. It was surreal. At one point, I noticed that a large, blurry crowd had gathered around us, but I was barely aware of them. During one of my attempts, Guruji finally came over. I was reaching for the floor, upside down, and saw only his legs coming toward me over on my right-hand side. I’m sure I tensed up, afraid that he was going to break me in two! He pushed me down and pumped several times on the right side of my diaphragm. Then he walked around and did the same on my left-hand side. As strong as it might have looked and as loud as I yelled out, it didn’t hurt. I think my shouts came more from a place of visceral surprise as he showed me a glimpse of my body’s true potential, and it was fierce! Finally, I touched the ground! He walked away and surveyed his work. “She has improved,” he said. ”See how much is the fear complex.”

They moved me into a different position on the rope wall, hanging over a rope swing with my shins on the wall, and I reached over backwards toward a rope attached to the bottom hooks on the floor. “You have to bring life to the back ribs and pacify the lumbar,” Guruji said. “This is why all of them complain of lower backache!” After some time there, I went to Dwi Pada Viparita Dandasana at the wall, and by now it had been over an hour. I was tired but determined to understand, so I kept working. I saw them finally put him in Savasana. “Oh good, it’s over,” I thought. Just as I was about to start winding down, Abhijata said, “Tori, come over here and do Kapotasana!” My heart began racing again. “Come and do! He remembers you from the New York demonstration!” she said. She adjusted me adeptly with the fat round “ruler” to keep the tailbone lifted so the outer hip sockets and buttocks would not drop. In the center! Tied again to the ropes. More and more! Then, suddenly, it seemed the hall was half-emptied out, everyone left was in Savasana, and somehow I made my way to Ardha Halasana over a bench for some relief. I was exhilarated and quite tired.

I went back to my apartment, and after one of the deepest naps I have ever taken, I wrote Guruji a note thanking him for what he had shown me. I went to the library that afternoon to give him the note. He wasn’t there. As I came up for the medical class, I saw him and handed it to him. He didn’t acknowledge me, but took the letter.

The next morning, after the ladies’ class, I went up to him to pay my respects, and when I lifted my head back up, he looked at me sternly and said, “So, did you catch something?” “Yes Guruji!” I exclaimed. “So, when you go home, teach like that!” “Yes, Guruji,” I said, and as I walked away, I was stunned that he hadn’t said, “When you go home, practice like that.” He said teach like that! So I began to reflect on what that meant and what a big responsibility we have as Iyengar Yoga teachers and students.

What was that teaching like? The approach was clear, direct, and demanding, from a place of understanding what the student was capable of, to help him or her overcome the “fear complex.” The student’s job is to catch, receive, and break through perceived limits. Guruji’s teaching married intensity with intelligence to a level that was transforming. It drove me deeper and deeper inward beyond the dualities—there simply wasn’t room! Under the right conditions, I can experience this quality of transcendence in my practice, and I strive to transmit that to my students. Geeta once said in Savasana to “go to the place where you are no one and you are nothing.” And there are times when I really can feel that humble, quiet place inside, unsoiled by my wants, worries, and the outside world.

I am struck by the difference between this last trip and my first trip almost 10 years ago. I can see that my reflection in the mirror is a little older, but also wiser. My motivations and expectations are more aligned with the present moment and less intent on “getting it” for some external praise or recognition. On any given day, I attempt to explore the vastness within through the incredible practice of Iyengar Yoga, not just through the asanas, but also through studying the philosophy. Just before I left, I asked Prashant to sign a book for me. He wrote, “Wishing you motivation without motive in yog.” Now, I understand that concept and aim to loosen my grip on those motivations as much as possible.

Although my practice is far from perfect, it wouldn’t be anything like it is without the guidance of the Iyengars. Being there and having direct contact with them has shown me what I think it means to experience involution—to take the journey inward. Classes are simple, profound, pure, and transformative. Being a learner in their presence and under their influence uplifts the level of my practice, and when I return, my teaching is uplifted as well. Many of the typical obstacles I face, such as laziness, fear, doubt, and restlessness, go into remission under their guidance. I contact the depth and breadth of my being, transcending my limited perceptions. And that, for me, is the deeply powerful beauty of Iyengar Yoga and the reason I am still hooked on learning and teaching it after all these years.

Tori Milner (Intermediate Junior III) teaches at the Iyengar Yoga Institute of New York.

Early Days At RIMYI
Public Classes in Poona
By Fred Smith

Pune in the late ’70s and early ’80s was a very different place than it is in 2013. For yoga students and everyone else, it was a much quieter and more beautiful city. In this way, it was more amenable to yoga study. My position was quite different from many of the other students visiting the Institute. As a Ph.D. student studying Sanskrit at the University of Pennsylvania, I had a long-term grant for conducting research in India. Thus, I took public classes rather than attend the classes reserved for foreigners. I also did not mix with them socially because I had a full life in Pune and was committed to my work.

I attended four classes per week—on Monday morning, Tuesday evening, Thursday evening, and Saturday morning. The Tuesday evening class was the most advanced class of the week, and Thursday evening was the pranayama class. Saturday morning was a men’s class, and Monday morning was a mixed general class. Most of the classes were taught nominally by Prashant, but Guruji was right there and ended up teaching most of every class. When I arrived, Guruji had not yet begun growing his hair long, but by the mid-1980s, he had. It did not change his demeanor much, but his leonine appearance added to his reputation for ferociousness.

The sequences were varied, with the general pattern of standing poses the first week of every month, forward bends the second week, backbends the third week, and pranayama the last week. But any class could easily move into twists or advanced balancing poses. The Iyengars did not plan out their classes beforehand with written sequences. They ebbed and flowed with their knowledge, as perhaps only they could at that time.

For me it was a joyful time, even if I was completely wasted after a difficult Saturday morning class. I frequently took off afterward to the Vaishali or Roopali snack joints, drinking two cups of their pudding-like milk chai and eating plates of idlis or sabudana wadas, the tapioca dumplings with ground peanut, coriander, and green chile that were characteristic of Pune.
I also enjoyed sitting beneath the big banyan trees that lined Fergusson College Road. Alas, they were chopped down about 15 years ago, sacrificed to the great god of modernity. Most of the beautiful old houses built in pre-Independence days, part of the Indo-Saracenic architecture that made Bombay and Poona so lovely (this was before “Mumbai” and “Pune”), have also been ripped down by the demons who stole away Poona—namely developers, who also destroyed about two-thirds of the big maidan, or cricket field, that gave Deccan Gymkhana its name.

Those were good years to be in Poona; the Iyengars were in their prime, and the city was vibrant, beautiful, uncluttered, and relatively unpolluted.

Fred Smith, professor of Sanskrit and classical Indian literature at the University of Iowa, has been practicing Iyengar Yoga since 1980, six of those years in Pune at RIMYI, studying with the Iyengars. He has frequently lectured at yoga studios and yoga conferences on aspects of The Yoga Sutras of Patanjali and other yoga-related topics.

1977 International General Intensive (Photo by Lindsey Clennell)

Pune Without Pollution (Almost)
By Joan White

I’ll never forget my first trip to Pune in 1976 (nor my husband’s response when I told him I wanted to go in 1975: “Over my dead body!”). Well, fortunately that didn’t have to happen. It was 1976, and “yoga” was a four-letter word. Women’s lib was focused on equality in the workplace, and I was mother to a four-year-old and had a husband who needed me to take care of them both. However, after not being able to go to Pune in 1975 when the Institute opened, I didn’t ask anyone’s permission when I received the invitation in 1976 from Mary Palmer. I immediately sent her whatever money I had stashed away as a deposit. I didn’t dare mention anything to anyone. It was our little secret. Of course, as the time approached, I had to tell my husband, who, caught completely off guard, had no words at all to respond. It was the first time I had ever left him or our son for longer than a week. When the shock wore off, I told him I had some childcare in place and some food in the freezer. He was a professor of classical archaeology and could manage to pick our son up from preschool.

1977: Guruji is on the platform, adjusting someone in Sarvangasana. He’s working to move the student’s tailbone in. (Photo by Lindsey Clennell)

I later learned I wasn’t the only one who had a hard time getting away from home to make the journey. Some had childcare issues, one or more had to leave behind 25 frozen dinners, and others had to take bank loans in order to even begin the journey—let alone face what was awaiting them upon their arrival in Mumbai.
Because it was my first epic journey, I thought it wise to ask my doctor for something that would help me sleep on the long plane ride. I met the Ann Arbor group at JFK, and we flew Swiss Air with a short layover in Switzerland. I dutifully took my pill at takeoff, and when we landed in Switzerland, I couldn’t wake up. I have a vague memory of Mary Palmer shepherding me down an escalator, feeling kind of nauseated, and then boarding another plane. This was to be the cushiest part of my journey, and already I couldn’t have made it alone.

Nothing can adequately prepare you for the airport in Mumbai, which at that time was very rundown and smelled from years of mildew and squat toilets seldom properly cleaned. We arrived in the middle of the night, but the streets were full of people, and lots of small shops were open for business. I had no idea what to expect, but somehow I never expected what I saw. Who shops at 2 a.m.? What was holding the shop owners’ wooden carts together? How did they manage to rig up lighting with only a single light bulb or some sort of flashlight configuration? Everyone seemed so poor. Slums surrounded the airport. They no longer exist, but at the time, they were overwhelming. There were people sleeping on the ground, under blankets, shawls, or any sort of covering they could find. At first, I thought they were dead because you couldn’t see their heads. We had been told before we left that sometimes dead people were left on the sidewalks, so how was I to tell if they were dead or not?

From the airport we made our way to what was then the brand new Oberoi Hotel. People were lying on the sidewalks outside the hotel, too, but when we entered, we were suddenly transported into a world of marble floors, doormen, white uniforms with gold epaulets and turbans, fancy shops (not open in the middle of the night), and beautiful rooms with sparkling bathrooms. Mary Palmer thought it would be a good idea for us to spend a couple of nights in Mumbai to adjust before we headed to the Institute.

After sleeping for only a few hours, I jumped out of bed so I could go outside and see what it was like. There were small shops everywhere selling old silver bracelets and even one weird coral necklace with tigers’ teeth between the corals. (I don’t know where I put that one.) The streets were teeming with vendors. I went to see the Gateway of India only to discover that there were drug dealers everywhere approaching any foreigner who happened to be in their territory. I took a boat to Elephanta, which was fun and full of trash and monkeys. I walked into the Taj Hotel, which at the time was just the old section, and it was charming and beautiful. But like everyone else in our group, I was anxious to get to Pune.

Finally we got on the road in a series of taxis. The road proved to be narrow with a broken-up surface, and it was extremely dusty because this was the dry season. There were no superhighways. It took more than five hours to get there, and our first and only roadside stop was at a small outpost, with the usual unspeakable toilet facilities, where they offered some sort of cooked food that we were all afraid to eat. We were all really glad when we reached the Amir Hotel, which was not near the Institute at all but down in the “camp” area. Why there, you might ask? It was the only hotel in town that had bathtubs, which Mary Palmer felt were more important than proximity. There have been several times since that I wished I’d had the same priorities.

Like so many things in Pune, the Amir Hotel no longer exists. The Pune we saw then is almost completely gone. India’s ’70s and ’80s streets were filled with cows or wandering members of a water buffalo herd that lived near the Institute. Getting to and from the Amir Hotel required a rickshaw ride of 15 to 20 minutes, depending on how many times we had to stop for cows or sheep or goats on the roads. Today it can take 45 minutes to an hour to make the same trip.

In those days, we went to intensives that were taught by Guruji himself. Classes would start at 7 a.m. and usually finish around 10 a.m., when many of us would rush off to Vaishali’s or the Sunrise café to get breakfast. Vaishali’s, which is actually still there, served Indian food in a lovely garden setting that had tables with large umbrellas to shield you from the weather. The Sunrise was the place to go for cereal or omelets. If you wanted something Western, you could find it there.

1976 International General Intensive (Photo by Lindsey Clennell)

We were expected to rest and take it easy during the day, and then we went back to the Institute for our late afternoon pranayama classes. Sometimes we stayed and took Geeta’s class at 6 p.m. if Guruji thought it was a good idea. We were not allowed to write anything down during classes, and there were no tapes available, so most of us spent a good deal of time writing notes in groups. It wasn’t until the Japanese students started to come to the Institute that people were given permission to tape the classes—but still not us Westerners. Guruji was full of high energy and sometimes started our mornings with drop-backs from Sirsasana and then followed them with Mandalasana. This was our introduction to classes at the Institute.

While riding to class, it was not uncommon to see people taking their morning baths and going to the bathroom across the street from the large slum we passed on our way. A huge bellows that was larger than the huts was used to get fires going. The air was filled with heavy smoke so that by the time we got to the Institute, it seemed like we were in the suburbs. It was not uncommon when riding back at night to see large rats. One time when I was coming back from a friend’s apartment, I saw a rat the size of a large rabbit! Which reminds me of another story.

After my experience with the Amir, where I stayed in 1976 and 1978, I switched to the Ajit Hotel, which was across the street from the Deccan Gymkhana Club. Here Patricia Walden, Victor Oppenheimer, and a large contingent of English yoga teachers stayed for many years. We paid $7 a night, and it was a 25-minute walk to the Institute, which we often had to do if we couldn’t find rickshaws at 6:30 a.m. I actually loved those walks because they brought us into contact with the old British bungalows built with stone in the Saracenic architecture style, which still exists in Mumbai. We also encountered vegetable vendors who came by with their bullock carts early in the morning and later in the afternoon when we were on our way back to the Institute.

While staying at the Ajit Hotel in 1981, two rats buried themselves in my pillow, which I didn’t discover until I leaned back. They immediately jumped out squealing and started running around the room. I went down the hall to ask my friend Victor to come in and help, but he started shrieking his head off and jumped on the other bed when he saw the rats. Patricia came out into the hall and started shrieking as well. There I was, totally traumatized, having to calm the two of them down, while the rats, terrified of Victor, were running around trying to get out of the room.
Apparently, the rats had climbed a tree behind my bathroom window, which, of course, was broken, and had come in that way. When I called down to the front desk to get someone to come up and do something, they said, “Yes, madame. Coming, madame. It is only rats, madame. We will be coming soon, madame. Try to remain calm.” They finally showed up with a piece of cardboard to shoo the rats out, and then covered the broken window with the same piece of cardboard and some tape. “There, madame, now you can go back to bed. They are gone now.” The next morning I insisted on moving despite their assurances about the efficacy of the cardboard.

We sometimes went up on the roof of the hotel to take advantage of the sun, and we would find nearly the entire contingent of British teachers up there as well. That lasted until the mid-’80s when Geeta announced that if anyone showed up to class with “a changed color,” she would throw them out. Little did we know that she was saving our skins, literally.

Those Saracenic stone bungalows no longer exist, nor do the fields across the street from the Institute. The beautiful banyan trees that lined Fergusson College Road were torn down 15 years ago. Pune was a beautiful city in the early days. University students could be seen sitting outside on the grounds of their colleges. There were no high-rises or malls, and the air in December or January was at its best quality of the year. Because there were very few cars in Pune, we didn’t suffer from street noise and could easily hear everything that was being said. There were no burning leaves outside the windows of the Institute. Bicycles were the mode of choice for 99 percent of the population. Several of us rented bicycles at least once to get around.

The population of Pune in the 1970s was a mere 250,000 as opposed to the 5 million who live there today. The only five-star hotel was the Blue Diamond, built mostly out of wood that was painted blue. It took about 15 minutes to get there. A group of us would go there on Sunday mornings for brunch, which mostly consisted of baked beans on toast or some sort of egg combination. For those of you who have not been to Pune: Because of the terrible traffic and intense pollution, it now takes 45 minutes to get to the camp area and about the same to get to the Blue Diamond area.

One of the things I miss the most about Guruji’s intensives, apart from his extraordinary teachings, was the makeup of those classes. Forty people from around the globe were gathered to study with him. He corrected each of us individually in Salamba Sarvangasana. He demonstrated our mistakes on his body first, and if we weren’t getting it, he would take one of us up on the stage and demonstrate how and where to change what that student was doing. We would then go back and repeat.

As a result of the diversity of the intensives, many of us formed life-long friendships with yogis all over the world. I am so grateful to have had the opportunity to have met so many wonderful people. Here are just a few of the people I remember meeting during those early years: Lillian Biggs, Lindsey and Bobby Clennell, Mary Dunn, Angela Farmer, John Floris and his beautiful wife Maria, Beverly Graves, Martin Jackson, Judith Lasater, Manouso and Rita Manos, Jean Maslow, Mira Mehta, Shyam Mehta, Silva Mehta, Victor Oppenheimer, Lisa Schwartz, Clay Soren and Nanda, his then partner, Karin Stephen, Peter Thompson, Victor Van Kooten and his wife Annameeka, Patricia Walden, and Judith from Bern, Switzerland.

Joan White has been a student of the Iyengars since 1973 and received her advanced certification from B.K.S. Iyengar in 1993. She gives workshops and classes all over the States and in Europe, and also runs the B.K.S. Iyengar yoga school of Central Philadelphia. She has an active teacher-training program at her school. She served for six and a half years as the national certification chair, served on the IYNAUS board, and has served continuously on the ethics committee since 2000.
She was the first recipient of the Lighting the Way award.

1976: Guruji adjusting a student while he teaches Jalandhara Bandha. You can see the dust on the floor. (Photo by Lindsey Clennell)

Early Days At RIMYI
First Impressions
By Bobby Clennell

I made my first journey to RIMYI in 1976 with my husband, Lindsey, and our two sons, Miles and Jake, who were 10 and 5 at the time. Lindsey and I have been back 20 or so times since then, but the memory of that first trip has remained the strongest in my mind: The colors, sounds, smells, and tastes of India that year made a lasting impression on my senses. Most important and profound was the impact of Mr. Iyengar’s teaching. I had never met a teacher who demanded—and received—such undivided attention. When Guruji teaches, his eyes are everywhere. In his classes, he demands that one remain on the very edge of the moment.

1976: Guruji, Geetaji, and Prashantji always practiced their inversions together in the late afternoon, before the evening class. You can see the white dust on the sole of Guruji’s foot and all over the mats from the polishing and smoothing of the marble floor. (Photo by Lindsey Clennell)

It is interesting to experience his teaching now, in February 2013, and compare it with what I remember of his teaching 40 years ago. Now he teaches through his granddaughter, Abhijata. In the ladies’ class, she hears his voice but you don’t. Strictly speaking, this is his practice time; he begins curved over the Viparita Dandasana bench. If you glance over to his practice area at the end of the class, however, you see that he is now standing and watching the class. What hasn’t changed is his absolute mastery as a teacher.

Guruji’s language, then and now, is pure poetry. Later Geetaji came along and taught in a way that made his teachings more easily and clearly understood, and that was marvelous. We began to absorb the information differently. But Guruji’s instructions somehow bypassed the logical, computing brain, going straight to our innermost being. After each of those early trips, I would return to London with the sensation of floating—and this would last for a good six months.

During that first intensive, the Institute had only just been built, so some things were not quite finished, and the marble floor was being polished. When classes weren’t meeting, huge, circular grinding machines were run over the wet floor again and again to produce the shine that we see to this day. Class was confined to the portion of the floor that was dry. Guruji sternly announced that no one, NO ONE, was to drop one of the new white blankets onto the wet portion of the floor. This was nerve-racking. I was so nervous that I dropped my blanket right into one of the puddles. I froze. The entire class froze. Guruji looked furious. Finally, my dear husband, Lindsey, stepped forward, lifted the blanket out of the water, took me by the hand, and led me over to a dry spot. The class resumed.

It was entirely different at RIMYI in the ’70s. I found Guruji both alarming (make that terrifying) and utterly charismatic. He is still both, but now I understand him better. In those days, Guruji was addressed as “Sir.” In fact, I still find myself calling him “Sir.” Then as now, when he taught, he bypassed gender, age, and class. He demanded that all participate. All were subject to his penetrating attention. Most educational institutions I had attended had been happy to allow me to hide. Now I had to come out from the shadows.

I remember every correction, every admonishment, and every adjustment. We had pushed up into what was, I think, our sixth Urdhva Dhanurasana. I was struggling to hold the pose. A voice from above roared, “Don’t die yet! You have two children. Stay up!!”

Once Guruji corrected my Ardha Chandrasana. In retrospect, I think he was being fairly gentle, as he said, “You are a beginner, aren’t you?” My pose was corrected for the benefit of the group. That day at lunch with some of the students, I cried and cried. It was such a strong experience. I was absolutely overwhelmed. All I can tell you is that I went back the following year. I knew I had to.

I remembered a large group at the first intensive I attended. Now, looking at an old photograph, I realize it was small, certainly compared with the number of students in the asana hall these days. This past February, there were 200 students, and it took 15–20 minutes to seat everyone. Various methods were employed to make room: “Has anyone attended a class already today—even the medical class?” “Does anyone have a bad cold? Is anyone coughing? OUT!!”

Backbends, Balancings, and Props

The teaching was exciting and strong and instilled much confidence into us students. A men’s class was taught by Mr. Shar. It was a tough class, but women who were strong enough were welcome. Shar also taught some of the regular classes. During one restorative class, we held Urdhva Dhanurasana for an incredible seven minutes.

In back-bending classes, Guruji would line everyone up in a row and drop each person back from Tadasana to Urdhva Dhanurasana. I was beginning to come out of my shell. It worked for me—I was only 30 years old. He made you do things you didn’t imagine you were capable of. There were fast-moving and very lively jumping sessions led by Guruji. There was so much laughter and so much happiness in those jumping sessions.

One year we stayed for two months. Between intensives, we were taught in small classes of eight or nine. That’s where I learned the balancing poses. Although this was a profound experience, I didn’t really understand then just how special those tiny classes were.

In the early days, there were fewer props, but over the years, the prop collection expanded and developed. In 1988, a medical symposium was staged. Up on the platform, an assortment of doctors and healthcare practitioners assembled, along with Guruji, Geetaji, and Prashantji, to discuss the interface of medicine and yoga. Prashant organized a photo shoot depicting the use of props for various ailments. The photos were displayed, giving us teachers our first solid guidelines on yoga for medical uses. These pictures were crude by the standard of today’s teachings, but that event was another of those turning points in Iyengar Yoga history.

Colored Paper, Chips, and a Conch Shell

I was working with a team of volunteers to decorate the Institute on the eve of a celebration. It was late. We were sitting on the floor, cutting large mandalas out of colored paper. The floor of the Institute was strewn with paper, glue, scissors, pencils, and the like. A pair of feet that was unmistakably Guruji’s appeared in front of me. Guruji disappeared, then moments later, tea and little bowls of dessert were brought to each worker. Another year, I was making paper cutouts of yoga poses as decoration for another celebration. Guruji appeared again and began correcting my drawings. At the opening of the original London Institute, one of my cutout decorations was of RIMYI. Guruji wanted to make sure that I included the Hanuman statue that rests atop the building.

For our first two or three trips, Lindsey and I stayed at the Ajit Hotel, Deccan Gymkhana. On that first trip, much to my children’s annoyance, I had brought to Pune brown rice, miso paste, umeboshi (salted) plums, and Japanese rice noodles—all the ingredients needed to make macrobiotic meals. I was in my macrobiotic phase (later came vegetarianism, raw food, and sprouted, “living” food). I prepared our macrobiotic meals on a one-ring burner, purchased locally, on the floor of our hotel room. Our children ate very little of this. Because they were still hungry, we would then take them to the Pune Coffee House (no longer in existence) for finger chips (deep-fried potatoes), which they dipped into sugary tomato ketchup.

Among my most treasured memories of those early days was how much access we had to the Iyengar family. Mr. Iyengar would often come and join the group for a meal in a hotel or a restaurant. At the end of each intensive, we would be invited to a meal in the reception area of the Institute served to us by Geeta and some of her sisters. Geeta would urge us to eat more, especially the delicious and syrupy gulab jamun, which she assured us would heal us of practically any ailment.

One evening, a group of students, including our sons, Miles and Jake, were sitting in Guruji’s house. Guruji began talking about the conch shell that lay on a cabinet. Not everyone would be able to get a sound out of it, he said. Guruji explained that the conch, or shankha shell, is used as an important ritual object in the Vedic tradition. It is an auspicious instrument and is often played in pujas in temples or homes. Vishnu, the god of preservation, is said to hold a special conch that represents life because it came out of life-giving waters. The sound of the conch is believed to drive away evil spirits. Blowing the conch requires tremendous respiratory power. Blowing it daily helps keep the lungs healthy. Guruji blew into it, and a long, low, melodious note emerged. He passed it around the room, and no one else could get a sound. Finally, Guruji passed it to 10-year-old Miles. Miles put it to his lips and blew. The sound was beautiful! Guruji laughed and laughed. His eyes twinkled. A young boy was drawn into a group that he had been somewhat on the outside of and made to feel welcome and validated. It was a wonderful moment.

Bobby Clennell (Intermediate Senior II) is the author and illustrator of The Women’s Yoga Book and Watch Me Do Yoga.

1976: Guruji adjusts a student’s head in a supported variation of Viparita Dandasana as Prashant looks on. (Photo by Lindsey Clennell)

1976: Guruji adjusts the head and shoulders of a student in supported Savasana. Perhaps the thick mat that has been rolled up for support pre-dates bolsters? (Photo by Lindsey Clennell)

diary excerpts
Gifts From the Source
By Sharon Conroy

Although I began to practice in the Iyengar tradition in 1986, I studied at RIMYI only twice during the first 17 years. A variety of seemingly sound reasons kept me away—work, family, finances.
Then, two life-changing events—a brother’s death and Hurricane Katrina—inspired me to reevaluate how I spend my time and what’s most important to me. Since 2005, I’ve studied at RIMYI annually. Doing so has transformed my practice as well as my teaching.

The primary thing I cherish about these visits is being taught by a member of the Iyengar family. Their instructions are precise, and their language is both potent and elegant in its simplicity. Their words transform the mind as well as the body.

Sometimes, the gift I take home from RIMYI is from a class. At other times, it’s something that I’ve heard Guruji say when he breaks from his own practice to teach a longtime student who is working nearby.

A few years ago, throughout a backbend class, Geetaji brought our attention to various places in the body and asked us to “sanctify” those places with our presence. With one well-chosen word, she transformed the way our minds received the actions she was giving our body.

Years later, I still treasure Geetaji’s use of the word “sanctify.” While it’s true that my mind spreads and penetrates inward whenever I’m able to maintain multiple actions in the body simultaneously, my practice takes me even deeper when I can, at the same time, see myself as sanctifying the body with my presence. By working in this manner, we transform the body and the mind. And, in our daily lives, we begin to live in the sacred fullness of the present moment.

The Iyengar family’s teachings abound with such treasures. In 1998, the first time I visited RIMYI, I recall Prashantji saying in almost every class, “You people are always doing, doing, doing. Asana is a state of being, not a state of doing.” In a similar, and at the same time different, way, this teaching transformed the mind with which I practiced asana. With just 12 years of experience, I was very focused on maintaining and refining the actions I was given by teachers. I had not been asked, nor had it occurred to me, to simply “be” in a pose. However, Prashantji was inviting me, at some point in the practice of each asana, to make a conscious decision that I had done all that I could do and, maintaining the actions, simply be in the pose, receiving the effects of what I had created. Like Geetaji’s use of the word “sanctify,” from the moment I heard Prashantji’s perspective on practicing asana, it began to inspire my practice and has been a gateway into the spaciousness and silence within.

Over the years, again and again, I’ve heard Guruji lament that even his most senior students work mechanically and practice “yesterday’s pose” today. Instead, he wants each of us to be absolutely present and see the effects of the actions we give our body as well as observe our own habits and tendencies. Only then can we refine our poses and, over time, change the tendencies and habits that work against us.

Working this way takes tremendous curiosity and discipline, both of which appear to abound in our beloved Guruji, even at the age of 94! While there is no question in my mind that I’m a beginner, the reminder I hear year after year at RIMYI—to see the effects of the actions I give my body—has inspired and informed my practice and teaching more than any other treasure I have received there.

Most recently, the gifts I’ve brought home come from classes that Abhijata teaches with Guruji guiding her from the sidelines. In December 2011, we were given simple actions for the feet that I practiced and taught throughout 2012. What amazed and delighted me all year was the way such seemingly basic actions could “intelligize” the entire leg. I call such actions elegant because while they are simple, when used intelligently, their effects are far-reaching, making other leg actions superfluous. Even my tendency to hyperextend the knees is corrected because the actions in the feet have the effect of sucking the back of the calf into the bone. In addition, the defects in my right leg show up clearly as I attempt to find the actions in my right foot. Could these actions be one of the missing puzzle pieces for me? Can I become as proficient with them in my right foot as I am in the left?

As I’ve heard Geetaji say more than once, “I give you the clues; the work is yours!” I don’t know how many years it will take to make my right leg as intelligent as the left, but I intend to persist and am committed to working toward that end, slowly but surely.

I.14 sa tu dirghakala nairantarya satkara asevitah drdhabhumih
Long, uninterrupted, alert practice is the firm foundation for restraining the fluctuations.

When we are fortunate enough to study with the Iyengar family at RIMYI, above all else, they teach us how to practice. May we work with dedication and discipline and put their potent and eloquently spoken words to good use back home.

Sharon Conroy (Intermediate Junior III) founded the Iyengar Yoga community in New Orleans where she once more resides and teaches. Her email address is sharon@greatwhiteheron.net.

Guruji in the library (Photo by Tori Milner)

Finding the Grill
By Vicky Grogg

I froze when I heard the words “Adho Mukha Vrksasana.” Still sitting after the invocation, my deepest fear about classes at the Ramamani Iyengar Memorial Yoga Institute had come true: the call to do full arm balance. The pose simply scares me. My failed attempts to kick into full arm balance have left me injured, and more than once, my frustration has escalated to the point of making me want to quit my yoga practice altogether. As I considered a trip to Pune to study at the Iyengar Institute, one of the first things I noticed was that full arm balance was not on the list of required poses.

The outside gate and Institute building with Vicky in the foreground, standing on the opposite side of the street (Photo by Keith Morese)

Still worried about the dreaded pose as I prepared for my month-long trip, I talked to several people who had studied at RIMYI. I always asked them, “What if I can’t go up into full arm balance?” Everyone told me to simply go to the back of the room near “the grill.” The grill is a grid of metal bars that cover the windows at the Institute. There, I could join a group of Indian ladies who need help kicking up at the wall. Most people reassuringly added, “It’s no big deal.”

With this knowledge firmly planted in my mind, I thought I was prepared for my first class taught by B.K.S. Iyengar and his granddaughter, Abhijata. I was wrong. When full arm balance was called, any glimpse I had at contentment, or santosha, was lost. I simultaneously feared the pose, desperately wanted to find the ladies at the grill, and wished I could run out of the room unnoticed.

Vicky’s paper schedule for her month at the Institute, with class and practice times (Photo by Vicky Grogg)

After taking a couple of deep breaths to try and calm myself, I looked toward the back of the room for the ladies. From my position near the props room, I could only see Guruji, upside-down in a deep supported backbend. He was in front of what looked like a grill on the wall. Now panicking, I decided it probably wasn’t a good idea to move anywhere near Mr. Iyengar’s practice space.

As I froze, the class turned into a chaotic dance of people taking turns hurling themselves at walls while others scrambled to find space or avoid getting kicked. A stray foot that breezed by my head brought me out of my daze. I quickly stepped through the crowd of about 125 students and frantically searched the room for the ladies at the grill. When the teachers shouted instructions for students to switch places at the wall, I kept my arms in my best Gomukhasana and pretended that I had already gone up into the pose.

The room at the Institute is curved. On one side of the back wall, women who are menstruating gather together for class so that teachers can identify them and instruct them in alternate poses. When Guruji is there, he’s on the opposite side of the room. From where I stood, all I could see were menstruating ladies and Guruji.

Continuing the charade of pretending I had already gone into full arm balance, I moved toward the middle of the floor. Here I finally spotted the ladies at the grill, a small group pressed into a corner behind the menstruation section. Relieved, I let go of my Gomukhasana arms and hustled over to them. I saw they were taking turns kicking up from a large wooden horse to a metal grill that covered the windows. A new fear silenced me.
I expected to face the wall and walk up the grill backwards, a much easier move for me than having my back toward the wall while lifting one leg at a time from a free-standing wooden horse.

The ladies ignored me. Not knowing what else to do, I stood quietly until one woman looked at me and hesitated before saying, “Do you want to try?” I took my turn and did my best to imitate what I saw the other ladies doing. I placed my hands on the floor, took both of my feet to the wooden horse that needed to be held in place by two women, and I lifted one leg to the wall. Instead of feeling a metal bar, the back of my ankle caught a curtain rod that protruded about five inches from the wall, just above the grill. Another kink in my plans.

Before I could revert to full panic mode, one of the ladies quietly reassured me it was okay and encouraged me to lift my second leg to the curtain rod. It actually didn’t feel okay; instead, it was quite wobbly, but at this point, I was simply relieved to find the ladies and finally make it into a modified version of full arm balance.

Fortunately, the full arm balance gathering area shifted to a slightly different area during each class, so the curtain rod was not always in my way. Throughout the month, I found myself relaxing and even looking forward to the pose. I knew exactly where to go and what to do. And most of all, I enjoyed being in a group where everyone took turns with the pose, gave encouragement while telling you if you were straight or crooked, and assumed the all-important job of holding the horse in place.

It turns out that walking up the grill with the ladies who regularly take classes at RIMYI was a real privilege. I had a glimpse into an everyday aspect of classes that most students who come from other countries don’t get to see, all while working at my own pace. And that, after all, was a big deal.

Vicky Grogg was hooked on yoga after taking her first class in the Iyengar tradition in 1996. She lives in Portland, Oregon, with her husband, dog, and two cats.

Vicky and Nana, the infamous rickshaw driver who caters to Institute students (Photo by Keith Morese)

diary excerpts
Traveling to India: Two Trips in One
By Siegfried Bleher

Traveling to India is never simple; at least it hasn’t been for me in three trips. In my experience, a single trip has so many dimensions that it can feel like at least two separate trips in one: the physical relocation to a different part of the world and the psychological adjustments that this entails, plus the immersion into the deep ocean that is the Iyengar method at its source. I will share just a few journal entries from a blog I wrote while in Pune during the month of November 2010 (siegfriedbleher.blogspot.com). I was able to go to India through the generosity of a scholarship from the Southeast Region (IYASE) and many kind friends.

Tuesday, Nov. 2, 2010—First Day of Classes
Actually, the first day of classes was Monday, Nov. 1. Mine was Nov. 2 because I was laid out the first day by gastritis. My landlady took me on the back of a scooter to a local hospital Monday morning after it reached a crescendo. But I was well enough by Tuesday morning to attend class. No matter what anyone else tells you, don’t try gastritis—not at all recommended.

First class with Prashant—excellent metaphors to teach us not to get too much into performing poses and actions for their own sake, or automatically and dogmatically. See the poses as ways of culturing the breath and the mind. Be aware of the action you are performing, where it is initiated, what its purpose is, and what its benefits are: Notice which are the benefactors, beneficiaries, and benefits for each action. This makes practice less about the body, more about the mind and cultivating wisdom.

Thursday, Nov. 4—A Day Off From Classes
No class today. The Institute is getting ready for Patanjali Jayanti, a celebration of Patanjali just before Diwali festivities. I took a walk through a park near the Institute and enjoyed the quiet of the park as well as the exotic trees and plants. That was good preparation for what came next—a stroll to Fergusson College Road, one of the busiest in Pune.

In the evening, two of Guruji’s students spoke, the first on yoga sutra I.1: atha yoga anusasanam, the other on sutra I.2: yogas citta vrtti nirodhah. Then Guruji spoke about the aim of yoga, how we may touch each layer of the being through asana, to reach the soul, to recognize the expansive nature of our minds, to come to realize cosmic consciousness. We begin by spreading our minds evenly throughout our bodies. [March 2013: I remember feeling transfixed while Guruji spoke, as though he had created an environment outside of time during which I could absorb his words and his presence. I think this is a glimpse of yoga!]

The front of the RIMYI main hall during Patanjali Jayanti (Photo by Siegfried Bleher)

Monday, Nov. 8—Class with Prashant
Discern between “I” and “mind” when you practice. Use the breath in different modes, for example, as an agent for acting on the body, for acting on the mind, or as the recipient of action performed by the mind and by the body. Prashant calls the breath “participant” when it is an agent or benefactor; when it is a recipient or beneficiary, he calls it “adjusted.” We tend to practice only as participants (doers), which over time wears out our bodies. We need to include an equal amount of “adjustedness” in the pose, which means instead of doing, we are “done.” There is a rhythm in the shift between using the breath in its participatory role, especially on the strong exhalation (he calls it “uddiyanic breath”), and in its role as adjusted/done/beneficiary on the inhalation. The shift is to exhale more forcefully, using the breath to act on the body, then let the breath be done and adjusted while you use the body in its role as agency or participant to maintain the “doneness” of the breath. While going through the rhythm of this cycle, be aware of the difference between your mind, which perceives, organizes, and shifts the focus to deepen the embodiment, and your “self”—the “I”—which is present and unperturbed by the flows within this cycle. Don’t practice “postures,” which is just practicing for the body, but practice “asanas,” which is practicing for your entire embodiment (mind, breath, body, emotions).

Thursday, Nov. 11—A Day of Routine
It seems to take at least a week to sort out all the things that are needed to be able to settle into a routine—paying for classes, moving into an apartment (which is often a few days after arriving), figuring out Internet access, getting money exchanged into rupees, figuring out where to buy groceries, etc. And then there is the need to adjust to the class and practice schedule: If you take a class from 7–9 a.m., and practice time is 9 a.m.–noon, then you’d better figure on having a good breakfast before class or doing lots of restoratives at practice time. Or what I have been doing is going back to my apartment, having a second breakfast, then returning to the Institute for a 2-hour practice. This only works if you are very close to the Institute.

So by now I have the comfort and predictability of routine—or I should say some routine preceding the inevitable unexpected thing. [March 2013: The best change in my thinking came in the third week when I came to accept that I was in Pune not to catch up on unfinished projects from home but to fully experience being in Pune and at the Institute.]

May 17, 2011—Follow-up Six Months Later
It has been six months since I returned from Pune. It didn’t take long to get used to being home, but there was adjustment—mostly getting back up to speed after having a very different pace in India.

Shopping on Laxmi Road (Photo by Siegfried Bleher)

What remains after six months, or at least what is most noticeable to me, is the feeling that I am more deeply integrating what I learned there into my practice and teaching. For example, what does Prashant mean by “uddiyana kriya”? As I understand Prashant’s instructions, uddiyana kriya is the practice of exhaling deeply and forcefully, as one might during the initial stage of uddiyana bandha. But instead of completing the bandha by holding the exhaled breath out—bahya kumbhaka—we perform only the action (kriya) of exhaling sharply, without holding the breath. This serves the purpose of deepening the links between the actions in the legs and hips and those in the arms and trunk. Such links then become evident in both pranayama and asana.

I also realize what a tremendous gift it is to be able to travel to Pune and learn from the Iyengars.

Siegfried Bleher (Intermediate Junior III) runs Inner Life Yoga Studio with his wife Kimberly in Morgantown, West Virginia. He is also a physicist who lectures at West Virginia University and is interested in the physics of nonduality. He is currently writing a book on the “Science of Breath” and another on “Yoga as Transformation.”

History and Highlights of the Pune Guide
By Denise Weeks

Bobby Clennell went to Pune for the first time 40 years ago. It was “a great leap in the dark,” Bobby recalled. “You were going as far away from Western civilization as you could go.” Twenty trips later, she can still feel some of that early terror—like when you arrive at the airport and wonder what you’ll do if your cab driver doesn’t show up. To help ease the fears and make the trip more accessible for the roughly 2,000 students who make their way to the Institute every year, Bobby put together an invaluable resource: the Pune Guide.

Available online, the guide began as a short document, just a few pages long, nearly 15 years ago. As it grew in length and scope, the guide continued to reflect Bobby’s interest in having something very practical, organized, and up-to-date. Now 73 pages long, the guide provides details on everything from visa requirements and lodging to where you can have a bolster cover made. Each entry provides as much contact information as possible, including, in some cases, walking directions that use familiar landmarks such as “facing the Commonwealth Building, down a small alley, next to the night dresses. It’s the second tailor upstairs on the right.”

Bobby updates the guide every year when she goes to Pune and asks for input from fellow travelers as well as local Indian service providers. The information in the guide is democratic, Bobby said. It belongs to the community; she is not judgmental about what people submit for inclusion. She hasn’t had time to develop anything like a rating system, but she would be happy to delegate some of the work of maintaining and developing the guide to those who’d take responsibility for some specific part of the task (editing, for example, or checking phone numbers, which seem to change “every five minutes”).

“So much has changed in Pune in the past 40 years,” Bobby reflected, “changed beyond belief.” Now you can find organic food and toilet paper, for instance. The modern world has made its way in. “The only thing that hasn’t changed is the Institute.”

Though the guide is full of tips for shopping and travel and how to get connected via email and the Internet, the experience of going to the Institute is still about the yoga. Bobby said, “It’s not a spa; it’s authentic.” And when you go there, they expect you to give (if you are certified at a level that qualifies you to help in the medical classes, for example) “in the same way they give.”

“You need to go,” Bobby said. “You need to see it in context.” Bobby’s words and the encyclopedic guide are certainly encouraging. She said, “Everyone comes back transformed.”

Enticing Tips From the 2013 Pune Guide

Preparing to go: In your visa application or interview, do not mention that you’re going to study yoga or take classes; always state that you are a tourist. If consulate officials learn you’re studying yoga, they will assign you an (X) visa for yoga or Vedic studies, which requires you to register, within two weeks of your arrival in India, with the Foreign Registry Office (FRO) at the Pune Police Commissioner, where you’ll receive a Residential Permit. If you don’t register there, you may have trouble later leaving the country. [Page 6]

Simple things that bear repeating: When calling RIMYI, Pandu, tel (91-20) 2565 6134, may be reached during the following hours: from 9 a.m. to 11 a.m., Monday, Tuesday, Thursday, and Friday; and from 4 p.m. to 6 p.m., Monday, Wednesday, Friday, and Saturday. [Page 40]

The “What to Bring” list, with these helpful details: Look for RIMYI on Google Earth. Print out a map of the immediate area, especially the triangle between Fergusson College Road and University Ave. The neighborhood is not laid out on a grid and can be disorienting. [Page 8]

Glue stick. Envelopes do not come with glue on the flap; likewise, stamps are not provided with glue—and post office glue is not reliable. [Page 8]

Helpful “Just in Case” options: If your ride to Pune fails to show, there is a reliable car service that operates out of Mumbai Airport. As you leave customs, the “Authorized” (not yellow) cab office is on the right-hand side. [Page 14]

Pune Central, the eight-story shopping center just around the corner from the Institute (Photo by Vicky Grogg)

Protocols at the Institute:
A request to “alternate yourself” means that the person in the center vertical line in front of the platform should lie with his or her head toward the platform, and the persons to the left and right of him or her should lie with their heads facing in the opposite direction of the center person. [Page 29]

When observing Guruji in the practice sessions, please do not take notes! [Page 32]

Tell an assistant that you are menstruating as soon as you arrive at class. Do not join the class if you are having your period and then drop out during inversions, i.e., don’t wait until Sirsasana to tell someone you have your period. It is very distressing to Geetaji when someone decides they don’t need to be “on the side.” This is considered very rude. [Page 31]
Do not leave the hall until all the props are put back in the closet and the windows closed. Endeavor to put away more props than you used. This will ensure that cleanup is quick and easy. [Page 30]

Sticky mats are very valuable in Pune, and the Institute takes great care to preserve them. Do not place wooden props or chairs on them. And do not fold the thick mats, even when carrying them or putting them on the floor. Many students bring their own sticky mats and donate them to the Institute at the end of their stay, and this is much appreciated. [Page 30]

Everything you need to get online and stay in touch: The RIMYI will ask you for a passport-sized photo of yourself. Bring a few if you are thinking of purchasing a cell phone card or dongle device for the Internet in Pune. [Page 8]

You can use your GSM cell phone internationally, but it is cheaper to buy a local SIM card (this is the chip that gives you phone service). A store assistant at a cell phone store can unlock your cell phone to enable you to use an Indian SIM card, but it is more reliable to do so at home before you go. You pay about 20 rupees for the SIM and then the same number of rupees per minute, so if you pay 375 rupees, you get 375 local minutes. Be sure to ask for “full talk time” when buying minutes. When you call outside the country, the rupees-to-minutes ratio increases. In 2012, it cost about 12 cents a minute to call the U.S. from a cell phone. [Page 40]

Editor’s note: When booking your apartment, ask your landlord or landlady if the apartment has Wi-Fi. If it doesn’t, move on! It’s common for apartments to offer Wi-Fi these days. [Page 42]

The Ambassador Hotel provides Internet access at 200 rupees per use, and although it’s expensive, it saves you a trip to the Reliance on Fergusson College Road, where you pay a fixed rate of 300 rupees each time you log on. [Page 42]

Getting around Pune in an auto rickshaw: Auto rickshaw driver Nana is recommended by Iyengar students. He speaks English and is reliable and punctual if a booking time is confirmed. If the time is not confirmed and you are told to call him “when you are ready,” be aware that he may not be available when you call. He also can arrange for airport transfers and take you to unfamiliar locations in Pune, and he’ll wait while you sightsee, shop, etc. He charges metered rates. Nana has made many “foreign friends.” On one occasion, he took some teachers on a Sunday morning, out-of-town, bird-watching expedition. [Page 33]

Simple pointers about food, as well as a long list of restaurants: Vegetarian food in India includes milk and milk products but not eggs, which along with seafood are considered to be nonvegetarian. Prepackaged foods are marked with either a red or green dot in a square frame, denoting non-vegetarian and vegetarian food, respectively. [Page 52]

There is a large vegetable market, which is fairly amazing, called Mandai market. It is located next to Tulsi Baugh. The architecture of the building the market is in is also interesting. The vendors inside are more expensive, while the quality inside and outside seems to be the same. The best days to go are Saturday and early Sunday, as early as 8 a.m. In Tulsi Baugh, one can find almost anything. It gets extremely busy on the weekends. Most stores open at 10 a.m., which is the best time to go. Most shops will close from noon to 4 p.m. generally. [Page 59]

Maharashtra, “The Parade,” just around the (Toyota dealership) corner from Hari Krishna Mandir, has fresh milk, yogurt, ghee, spices, rice, mung dal, etc. They are very helpful. Don’t be thrown by the line cutting. [Page 58]

Places to visit in Pune:
A must-see for those interested in the cultural history of Pune and beyond: Raja Dinkar Kelkar Museum, one collector’s extensive and eclectic collection of folkloric and spiritual artifacts from all parts of India. Website: rajakelkarmuseum.com/index.asp. [Page 34]

Raja Dinkar Kelkar Museum (Photo by Bobby Clennell)

Parvati Hill. A collection of about five temples high up on Parvati Hill. The best time to go is 5 p.m., when the sun is down. At the top, you can see the whole of Pune. It’s a 15-minute rickshaw ride from the Institute. [Page 34]

Shinde’s Temple, located at Shinde Chhatri, Wanowrie: As one student put it, “one of the most peaceful and beautiful temples I visited in Pune.” [Page 34]

Confident tips to the aspiring adventurer:
Agra. Fly to Delhi, and then drive to the Taj Mahal. [Page 36]

Darjeeling. Drink first flush tea in the Himalayan Alps. [Page 36]

Recommendations clearly based on personal experience:
Navin Pandey. Highly recommended travel agent, based in Delhi. “... Arranged a few days of travel in Gujarat and Rajasthan. He solved some nasty, last-minute problems for us very well. If I ever need a travel agent in India, I will call upon him again.” [Page 37]

More shopping—everything you could want, plus tips on getting it all home:
Bagwan Aum Market. Laxmi Road (next to the Commonwealth building). A great collection of dupattas and scarves—a veritable feast for the eyes. Bring anything you want to color match. Second from the last shop on the left and across on the right. Ready-made dresses and western clothes (currently very popular in India). [Page 61]

Karachiwala. 4 Moldina Road, near Coffee House, Camp. Indian handicrafts; wholesalers, retailers, and exporters of fine jewelry, arts and crafts, etc. Ganesh, Patanjali, Krishna, Vishnu, Brahma, and Shiva statues. Bronze, brass, sandalwood carvings, also scarves. [Page 63]

Arnav. Geeta Bhojwani has been shopping for yogis for a long time, and she knows what we want. Her home-based, one-stop store is an Aladdin’s cave of hand-selected works of art, jewelry, and handicrafts from all over India. You will find lots of interesting gifts, some made by award-winning artists, including beautiful screen-printed paper, gift cards, good-quality woolen and silk stoles and scarves, Patanjali statues, and embroideries. I advise that two or three of you go together. Slow down and enjoy a cup of chai as you browse. If you call before you go, you can be picked up and dropped back home afterward. [Page 61]

Geeta Bhojwani, owner of Arnav (Photo by Bobby Clennell)

Gatik Ventures. Mr. Sanjay Lopes, at Smita Paranjape’s apartment, opposite the Model Colony Post Office (look for “Ravi Paranjape” on the outside wall of the building). Enter through the gates of the driveway where a car is parked. Mr. Lopes provides excellent packaging services, particularly catering to yoga students’ shipments abroad. Open 9 a.m. to 6 p.m.; closed Sunday. [Page 72]

Vama and Kajree. Kute Chowk, Laxmi Road. Silk saris, wedding saris, salwar-kurtas. The salesmen will parade hundreds of items for you if you don’t stop them. [Page 64]

Satish Pise: Krishna Ladies Tailors… He will happily come to your apartment in Pune, but if he does this, pay him a little extra. Speaks good English, and his work is excellent. Sticks to deadlines. The student who recommends him has been going to him for 25 years. [Page 66]

Editor’s note: I myself have never experienced any problems at Mumbai airport with customs, but I have heard of students being asked to provide receipts from goods purchased in India. So, a word of warning: Save your receipts! [Page 60]

A large statue of Ganesh lovingly touched up with fresh paint, ready for the Ganesh festival (Photo by Bobby Clennell)

Jake Clennell relaxes in the foyer of the Chetak Hotel. (Photo by Bobby Clennell)

Denise Weeks (Introductory II) teaches at Yoga Northwest in Bellingham, Wash., and is currently serving as secretary on the IYNAUS board. She is also the copy editor of Yoga Samachar.
Guruji's Birthday Gifts and Maitri in Bellur
By Gaye Painten

Guruji's childhood home in Bellur, India (Photo by Gaye Painten)

Who says you can't go home again? Last winter, Guruji traveled back to his birthplace, the small village of Bellur in the South Indian state of Karnataka, to celebrate his 94th birthday. The celebration, held Nov. 26–28, 2012, in accordance with the Hindu calendar, coincided with the consecration of a newly restored Rama Temple in Bellur and the dedication of a newly erected junior college, Bellur College, Guruji's most recent gift to his childhood village.

Judging from what I saw during my trip to the village for the festivities—a village primary school, the Ramamani Sundararaja Iyengar Memorial High School, the Ramamani Sundararaja Iyengar Memorial Hospital, clean drinking water, healthy sanitation facilities, and the impressive Sage Patanjali temple—it is Guruji's intent through the Bellur Krishnamachar and Seshamma Smaraka Nidhi Trust (Bellur Trust) to restore this humble village to its former glory.

It was an auspicious sign when I discovered in early November that a trip to India I had already planned would coincide with Guruji's birthday fete. If I could get to Bangalore, I could be part of the celebration. I hastily altered my plans. Arrangements were made in a modest hotel in Bangalore—about 150 miles from Bellur—for the small group of foreigners from all parts of the world who had traveled to India for the celebration.

"Bellur means 'silver' in English," Guruji said at the college's dedication ceremony, where he spoke on the importance of education. School children honored him and entertained hundreds of guests under a huge tent, with yoga demonstrations and colorful, lively song and dance. This agrarian village once shone like silver in the 12th-century Hoysala Dynasty and is said to have held an important place in Indian mythology. During the time of the Mahabharata, the village was known as Ekachakrapura.

Each day, armed with cameras, iPhones, iPads, sunglasses, bottled water, mosquito repellant, and lots and lots of humor, we "pilgrims" traveled by mini-bus along the bustling, dusty road to Bellur and adjacent Ramamani Nagar, the 15-acre campus for religious ceremonies. On our first day, we toured the small village. We were greeted with heart-warming smiles from villagers and lots of requests to "take my picture, take my picture." I paused reverently in front of Guruji's childhood home, then proceeded down a narrow lane past impassive chickens, apathetic goats, lazy dogs, and dispassionate cows to the Sage Patanjali Temple in the back of the village with its exquisitely carved, black stone statue of Patanjali in the inner sanctum. Along the way, I paused to befriend a young lady squatting in front of her house, doing her Monday morning wash under the warm Indian sun.

Back at the Ramamani Nagar, I was walking along the path to the dining hall when an Iyengar Yoga student from the U.S. asked if I had ever met Guruji. "No," I replied, thinking that it had always seemed an impossible dream. "Well, if you want to meet him, he is right up there on the veranda," she said, pointing to a residence at the top of an incline. Suddenly, my two feet took on a life of their own. They turned and started up the slope while my head and body followed until I was standing at the edge of the porch, face to face with the venerable Guruji. He was relaxing on a long sofa, one leg crossed on top of the other, a few devoted yoga students sitting on the floor at his feet. For one split second our eyes locked. I caught a twinkle in the soft brown eyes peeping out from under his bushy white eyebrows before instinctively prostrating myself at his feet. I muttered something about being from Philadelphia and that it was a true honor to meet him. He allowed me to take a picture of him, and minutes later, I floated back down the jagged path and headed to the dining hall where hearty South Indian fare was being served on banana leaves. During lunch, Guruji appeared in the dining hall with several members of his family. For the next three days, he often graced us with his presence at meals, and whenever he did, mealtime took on an air of sacredness.

Guruji and family watch as the temple priest makes offerings to sage Patanjali. (Photo by Gaye Painten)

How humbling it was to be part of Guruji's religious life. Each day, temple priests, musicians, friends, and family bearing gifts for offerings arrived at the dining hall or Patanjali Temple to honor Guruji by observing the ritual of puja. Puja is the devotional act of showing reverence to a god or gods using music, water, incense, and offers of flowers, food, or clothing.

I was taking photographs during one of the puja rituals when I noticed a little boy, about 10 or 11 years old, trailing me around the hall like a shadow. I turned around and smiled, but the boy didn't smile back. He was serious about something, and his dark penetrating eyes were pleading. What could he possibly want, I thought. I had seen him sitting with another photographer earlier that day. From afar, it looked as though the photographer was giving the boy a lesson on how to use his camera. Finally, I got it. I slowly took the camera strap from around my neck, leaned down, and placed it around the child's. Then I watched with motherly pride as he meandered around the grand hall taking snapshots of the puja with the intensity of a seasoned professional.

The next morning, the boy brought his mother to meet me. Words to communicate failed again, so we smiled awkwardly and nodded at each other (maitri, friendliness, without words). She was beautiful, and I wanted to take a portrait of her, but when I held the camera aloft she quickly put one arm in front of her mouth and demurred. I think that she was ashamed of her teeth. More smiles, then the two of them disappeared into the crowd.

Village boy before the start of puja—a budding photographer, perhaps? (Photo by Gaye Painten)

On the second afternoon of the fete, while the sprawling grounds were being decorated in anticipation of the grand celebration, another Iyengar Yoga student invited me to take a rickshaw back to the village with her to teach an English class at the primary school near Patanjali Temple. When the class wasn't ready for us, we sat on a rocky ledge in the schoolyard and waited. Soon we were surrounded by a mob of little children. "What's your name? What's your name?" they all chirped over and over and over. "Let's sing the ABC song," my friend whispered to me. So we laughed and sang and watched as the crowd of children grew larger and larger. Finally, it was time to go into our classroom for the lesson. Before leaving, I taught the kids how to give a "high five," smacking each little palm. I looked back toward the school yard before entering the classroom and caught sight of little arms still stretched high, the "high five" mantra filling the air.

We returned to a magically transformed campus. Sweet anticipation filled the air. Metal security detectors had been erected at the entrance to the tent. We took our seats among hundreds in the audience and listened that night as the governor of Karnataka and a host of dignitaries took turns praising our beloved Guruji for his work in education. Bellur's favorite son had come home again and brought many of his friends with him. And so for three days, Guruji presided over his 94th birthday bash where villagers mingled with foreigners, and maitri, the spirit of friendship, ruled. What a birthday gift for us all!

When I reflect on B.K.S. Iyengar, I am reminded of all that he has done for the advancement of yoga in the world and the many people who have benefited from his teachings. I hear the name B.K.S. Iyengar, and I think revered teacher, wise scholar, erudite author, world-traveler, celebrity, philanthropist, and strong-willed taskmaster. But in Bellur, I saw a different Guruji. I saw a humble and gracious man, a loving and kind father and grandfather figure to us all—an ageless man, pure like silver, spiritual, and devoted to his God. I saw a benevolent and generous man who has high hopes and dreams for the children of Bellur and neighboring villages. "That the poorest of the poor, the lowest of the low to be educated so that they can come to the level of the enlightened people of the cities. ... that by perspiration and inspiration [they will] become crystals in the field of education." These were Guruji's expressed wishes at the dedication of the Bellur College. He has given so much to the world. Now he wants to continue to give to children, such as the budding photographer and "high-fivers" I met, opportunities that he never had as a child.

Gaye Painten has been an Iyengar Yoga student since 2007, studying primarily with Joan White in Philadelphia.

Students from the Ramamani Sundararaja Iyengar High School greet visitors with Namaste. (Photo by Gaye Painten)

yoga chair prop is a proud sponsor of the 2013 IYNAUS Teacher Training, and is honored to donate 325 yoga chairs for the teaching staff's instruction. yogachairprop.com 415-686-4547

Paksha Pratipaksha on Results-Oriented Versus Indifference
Interview by Robin Lowry

As part of my dissertation research on yoga curricula for young people, I interviewed Dr. Geeta S. Iyengar several times. This segment is the last in the series.
Robin Lowry (RL): Some consider that a goal or result of yoga is social harmony, that yoga can help humans get along.

Geeta Iyengar (GI): Results-oriented teaching! You know children will always want to know the effect-wise result: "What happens if I do this?" If it is cold outside and you tell them to put on woolen clothes, they will ask, "Why?" They want the answer given in such a manner that they are convinced. If they do not wear woolen clothes, what happens? Again, they are looking for results-oriented answers. So this inquisitiveness is always present in children—or anyone: "Why should I do this?" But it should not be used to tempt with reward or to punish. Inquisitiveness should be replied to so that [the inquirer] develops the right and correct attitude. For instance, to ask students to do Sirsasana and tempt them with some reward, that is not right. You should certainly inform them that Sirsasana is going to help them in the future to retain a balanced nature, calmness, quietness, sharpness, etc., but not give them an expectation of definite reward. But to your question, is social harmony merely to be nonviolent, noncovetous, and so on? To impart moral training is one thing, but how are you going to make students realize that problems are rooted in us? The deep-rooted sorrows, pains, and fears. Human nature is of that type; therefore, we need to create awareness first.

RL: In the field of physical education we teach sports and games. Would you consider sports and games practical ways to teach the Yamas and Niyamas? That through these sports and games you can learn about yourself?

GI: Oh yes. Yamas and Niyamas are universal disciplines, adoptable by one and all. We find that even on the national and international levels, cutthroat competitions go on, fights and murders at stadiums—of course, this is not good. You have to definitely introduce Yamas and Niyamas. Healthy competition is good. International competitions have to take place, but not with the killing instinct or like today where sports are played for the sake of entertainment. Players don't enjoy the games when unhealthy attitudes develop. One is always thinking of doing more and more, not for improving skills but for winning. The craze of winning overshadows the joy of the game. That affects the players and brings not just physical breakdowns but mental breakdowns as well. The balanced state of mind is important. Whereas with games for children, which should be fun, we can teach how to share, how to play hard but not aggressively.

GI: Yes. I think if children are taught in that manner, they are playing the game for the game's sake.

RL: But can one also use games to act on or cultivate the principles of Yama and Niyama?

GI: Yes. You have to certainly guide them on the track of Yama and Niyama, but when the emphasis is on achievement, it goes back to your question about results. These nerve-wracking kinds of achievements are not good. Today I have to do something for the great achievement, and then afterwards I am nowhere. What is the point in having such attitudes? So these demonic ambitions should not be there. One should have healthy ambitions, of course, but one should be fit enough to stand up to whatever you really need to achieve. This is when the contradiction comes. All games cannot be of the sober nature. It is not the fault of the game but the human beings.

RL: I've come upon a book called the Kama Shastra, which lists skills like stitching, bridge-building, and word games that are said to be necessary to learn before studying the Kama Sutra, and I see that your yoga curriculum for school children is also called a shastra, the Yogshastra. What is the relationship between a shastra and a sutra?

GI: Shastra is science, methodology, doctrine, and sutra means "aphorism." A sutra will have minimum words, making the statement clear. A shastra contains the science with details. Sutra is a concise form of literature. So it is a way of writing an explanation. One could write the Yogasutras in the form of a novel, too. One can put the science of yoga in the story form, too. I once arranged a demonstration in Pune based on the idea of purusha and prakriti that was story-like, novel-like. A shastra puts everything in the form of science, which is based on a foundation of principles. Yoga is a spiritual science, as physics is a physical science. In a shastra, you put every topic systematically with definition as well as details. You explain every aspect and the subject matter clearly. You bring the proper connection in the topic. You explain the methodology, the purpose, and the aim of the science. You explain the utility of the science. You deal with the opposing views or objections taken regarding the science. You give proof for its rationality and practicality. You explain the journey of the science from start to end.

RL: So like a study book?

GI: Yes, a study book. And that is why Guruji has always said when you are studying yoga, you have to start with the Sadhana Pada (on practice, the second chapter of the Yoga Sutras) and the 13 sutras of the Vibhuti Pada (on properties and powers, the third chapter), then go to the Samadhi Pada (on contemplation, the first chapter). Why doesn't he first teach Samadhi Pada? Because first you should know what sadhana is. You do sadhana (practice) for what purpose? And knowing the purpose or touching the goal, some questions arise. There is a purusha (the seer, the soul); there is a prakriti (nature). Why are we attaching ourselves to drishya (the visible, perceptible) and forgetting our drishta (the knower, the seer)? So in this manner, each aspect, when dealt in details, it is shastra. To know about the soul, the elements, the evolutes of prakriti, the consciousness, the intelligence, mind, I-consciousness, the organs of actions, the organs of perception—all such things come under shastra, including the bones, muscles, and anatomy and physiology of the body as well as psychology.

Patanjali statue at RIMYI (Photo by Tori Milner)

RL: So you get the context from Sadhana Pada from which to study the other padas (chapters).

GI: Yes. So in that context, when you read it, it becomes a shastra. You understand the spiritual approach scientifically. In this manner, you can go right up to the end of the book.

RL: Guruji's sequential syllabi that we study to become certified in his teaching method are so brilliant. I find so often in my studies, for example, that my problems with Malasana II have a root back in previous syllabi poses that I honestly never grasped fully.

GI: Yes, exactly. When the body performs, we do, and if it performs well and it presents itself, then we know. But when it doesn't do, we have to penetrate and find out why this is not happening. That means actually your penetration increases. If you would have just dropped into your Malasana straight away, your penetration would not have been there.

RL: And I wouldn't know anything.

GI: Yes! But now when you have to go back, you know exactly where you are stuck. So in that small area, whether it is the groin or the root of the thigh or your knee or your back or lower spine, then you work on that region specifically for that region.

RL: Guruji gives the order for exploration in these syllabi.

GI: Yes. But no one understands why it is given in this manner. The syllabi are scientifically based.

RL: I have created a self-inventory for teachers, not just yoga teachers, a list of statements that get to some behaviors and attitudes that may relate to specific kleshas (afflictions), or perhaps a conjunction of two or more kleshas.

GI: "Cooperation is a human necessity." It is good. You have a range for them to answer, and this range is showing that you have a level on which you can work. Sometimes they are feeling cooperation or tolerance is necessary, but if they are saying "sometimes" yes or they are saying "always" when you don't really need always, or "rarely," then that means you have a discrimination, that means paksha (to espouse) pratipaksha (opposite thought/action) comes there. Again, you can have tolerance when a student is doing wrongly and not trying to pick up what has to be done. You may have tolerance the first time and the second time, but then you realize he is doing this on purpose, and then obviously you have to lose your temper over there. And that is really what you can study, so this is good.

RL: This brings up the idea of indifference. Guruji has written that the qualities of a teacher include compassion, patience, and tolerance, but also indifference. Please explain how this indifference works.

GI: I will give you an example of indifference. In my ladies' class, there is an old lady who comes. She had been doing very well with the group, doing everything. Even with her arthritic knees, she was following everything. She is about 70 plus. Last year she had an eye operation for her cataracts. Cataract surgery is not a big operation, but after that operation, when she came to class, she was totally lost. She could not remember anything. So the first two days, I gave her the sequence—what supine poses, which forward bends she should do, and what she should avoid. I introduced her to Setubandha Sarvangasana and Halasana. But now we notice that she is becoming thin, yet no one has come from her family, so we don't know what is going on. She is forgetting everything. She has a sort of dislinking, a withdrawing, and so what can we do? She comes regularly, and all by herself. And she is doing mechanically. The brain is certainly affected. So as a teacher, I think, I should allow her to come as long as she can manage. I may not continue to adjust her. She is coming, and that is enough. I show indifference yet with compassion.

RL: So you see the student's capacity not just on the physical level. You cannot push; even though we may think we know what she should do, we have to be indifferent so that we don't get in the way of her or get caught up.

GI: Yes. So as a teacher, you have to know such things, that they are not going to improve or adapt. So by indifference there is compassion; there is patience; there is also understanding of what is possible for such individuals. All students are not of the same level.

RL: Well, Geetaji, this leaves me with a lot to contemplate. Thank you so very much for your time today as well as all your hard work every day for your students.

GI: Thank you!

Robin Lowry has been studying in the Iyengar method since 1987 and is certified at the Intermediate Junior I level. She teaches at her home yoga studio in the historic Germantown section of Philadelphia. She has been a public school health and physical education teacher for 18 years and currently teaches at the K–6 level. Her dissertation, "A Survey of Youth Yoga Curriculums," was completed in August 2011 at Temple University in the Kinesiology Department.

Women's Iyengar Retreat in Northern California, with Octavia Morgan & Athena Pappas, August 1st–4th. Join us for our 5th annual retreat—a long weekend of practice, nature and relaxation at the Ratna Ling Buddhist retreat center on the stunning California Coast. Ratna Ling offers beautiful facilities, a heated yoga floor, hiking trails, redwood groves, and close access to coastal preserves. $825 includes: 3 nights private room in a 2-person cottage, all meals (vegetarian), 9 yoga sessions (3 pranayama & 6 asana). For complete details visit

Samachar Sequence: Jet Lag Sequence

After a typically convoluted journey to Pune to study at RIMYI, I arrived at the Surya hotel and prepared to settle in. Back in the day, the Surya was a favorite temporary home for the Institute's world travelers. Once inside my room, I opened the large vertical cabinet that provided a place to hang my clothes with an additional high shelf for my whatevers. As I pulled down the extra blanket and pillow, a single sheet of paper floated down as though from above. Its title was "Jet Lag Sequence." I immediately recognized the familiar intelligence of the design of the poses. Thank you, Guruji. —Julie Lawrence

Supta Virasana – recline onto bolster. Roll blanket and place under feet and ankles in various ways to see effects.

Adho Mukha Virasana – Place bolster or blankets under abdomen. Rest head, arms, etc.

Supta Padmasana – strap thighs, OR Ardha (half) Padmasana – strap thigh to shin. Do for three minutes on each side.

Supta Baddhakonasana – recline over bolster, support head. Notice how this relaxes the groins.

Uttanasana – Do for three minutes.

Adho Mukha Svanasana – rest head on bolster. Do for three minutes.

Sirsasana – Do for seven minutes, then do the following variations: Parsva Sirsasana, Parivrttaikapada, Eka Pada, Parsvaikapada, Baddhakonasana, Upavistha Konasana.

Ropes: Uttanasana/Urdhva Mukha Svanasana/Paschimottanasana – Do eight times slowly to open shoulders and release neck.

Viparita Dandasana – on chair. Place crown of head on bolster, legs parallel to floor, feet on wall; arms rest overhead or hold back chair legs. Hold for five minutes.

Malasana – on chair. 1. Straddle and sit on the chair, facing the back of the chair. Press buttocks back; press chest to chair back. 2. Sit on chair, facing forward. Bend forward. Rest crown of head on blanket or bolster. Hold back chair legs. Rest frontal ribcage on seat of chair. Notice how the groins soften and the ribcage relaxes.

Salamba Sarvangasana – strap arms. Hold for 10 minutes.

Halasana – strap arms. Hold for five minutes.

Sarvangasana – arms strapped: Do for five minutes or more. Parsvaikapada.

Halasana – remove strap and do these variations: Eka Pada, Parsva Halasana, Parsva Karnapidasana, Supta Konasana, Halasana.

Sarvangasana – arms strapped: Virasana; Setu Bandha Sarvangasana – one leg to touch floor, then back up; repeat with other leg; then drop both legs to floor and hold about three minutes; Eka Pada Setu Bandha Sarvangasana.

Forward bends — wrap eyes: Adho Mukha Virasana; Janu Sirsasana – do for three minutes on each side; Paschimottanasana – do for five minutes.

Savasana — keep eyes wrapped. Place hands on abdomen if nauseous or flushed.

Julie Lawrence (Intermediate Junior III) is the director of the Julie Lawrence Yoga Center in Portland, Oregon.
2012 Iyengar Yoga Assessments

Here are the names of those who went up for, and passed, an assessment in 2012. Our method provides ongoing education for teachers at every level. Congratulations on your hard work and dedication! (The original multi-column layout interleaved the level headings with the names; the levels and names are reproduced here in the order given.)

Intermediate Senior I. Rose Goldblatt, Elizabeth Hargrove, Dahlia Domian, Linda DiCarlo, Heide Grace, Rachel Hazuga, Nathalie Fairbanks, Ray Madigan, Lisa Hajek, Michelle Hill, Daryl Fowkes, Garth Mclean, Robyn Harrison, Rebecca Hooper, Susan Friedman, Kathleen Pringle, Karan Hase, Terese Ireland, Jane Froman, Sue Salaniuk, Susan Huard, Jenelle Lee, Marleen Hunt, Keri Lee, Cynthia Licht, Susan Johnson.

Intermediate Junior III. Carolyn Matsuda, Leslie Lowder, Mary Ellen Jurchak, Christopher Beach, Becky Meline, Kimberly Z. Mackesy, Nadzeya Krol, Kquvien DeWeese, Tal Mesika, Victoria McGuffin, Linda Kundla, Matthew Dreyfus, Michael Moore, Melinda Morey, Deb Lau, Brian Hogencamp, Lori Lipton Ritland, Tzahi Moskovitz, Kristin McGee, Anara Lomme, Pamela Seitz, Linda Murphy, Olya Mokina, Michael Lucey, Tori Milner, Diana Shannon, Chris O'Brien, Willamarie Moore, Christina Sible, Katrina Pelekanakis, Kathy Morris, Athena Pappas, Anastasia Sofos, Martha Pyron, Beth Nelson, Faith Russell, Tamarie Spielman, Stephanie Rago, Darcy Paley, Nancy Sandercock, Carmella Stone-Klein, Michelle Ringgold, Scott Radin, Susan Turis, Lisa Rotell, Laurel Rayburn.

Intermediate Junior II. Manju Vachher, Mari Beth Sartain, Tara Rice, Gary Jaeger, David Yearwood, Paige Seals, Mary Rotscher, Robin Simmonds, Alice Rusevic, Jill Johnson, Kiha Lee.

Introductory II. Lori Theis, Mary Bruce Serene, Aretha McKinney Blevins, Suzana Alilovic-Schuster, Chere Thomas, Yvonne Shanks, Heather Haxo Phillips, Autumn Alvarez, Javier Wilensky, Mary Shelley, Anna Rain, Cynthia Bernheim, Ibi Winterman, Leslie Silver, Todd Semo, Olga Boggio, Angie Woyar, Coreene Smith, Lucienne Vidah van der Honing, Judy Brown, Natasha Caldwell.

Introductory I. Kelly Sobanski, Waraporn N. Cayeiro, Kevin Allen, Dan Truini.

Intermediate Junior I. Lynn Celek, Nadya Bair, Anne Underwood, Lynda Alfred, Karen Chandler, Mary J. Bridle, Tiff Van Huysen, Nichole Baker, Tehseen Chettri, Kirsten Brooks, Amy Van Mui, Sharon Carter, Thea Daley, Karen Bysiewicz, Levy Vered, Nikki Costello, Charlotte Sather Davis, Brendan Clarke, Tatyana Wagner, Mary DeVore, Patrice Daws, Elizabeth Cowan, Da Gang Wang, Aaron Fleming, Jonathan Dickstein, Deanna Cramer, Sachiko Willis, Laurie Medeiros Freed, Amy Duncan, Laila Deardorff, Sarah Wilner, Judith Friedman, Diana Erney, Kathleen Digby, Joanna Zweig, Jill Ganassi, Robert Gadon, Linda Dobbyn, Heidi Smith.

Musings: Memory
By Carrie Owerko

Sri Patanjali defines memory as "the unmodified recollection of words and experiences," or he writes that "memory retains living experience" (Sutra 1.11). Patanjali also says that memory, like all forms of thought or mental activity, can be afflicting or nonafflicting. It depends on use. It depends on us.

Geeta Iyengar once said that we always remember "peak" experiences. But why? Is it because perhaps, in those moments, we were more wholly present? Present with the totality of ourselves? Is it because, at those times, we were truly awake? Were we more open and receptive? Vivid memories or recollections often include our sense perceptions and emotions, and some proprioceptive and interoceptive sense of how we felt at the time. Perhaps our ability to remember is affected by how integrated we were at the time of the experience we are remembering—or by how integrated we are now, as we remember.
When we are in a truly integrated state, our minds and hearts tend to be open and receptive, or inclusive. Fragmented states of being tend to exclude large chunks of experience, which are then less easily committed to memory. Sometimes this is born out of necessity, as a survival mechanism. Sometimes it is just how we tend to live and get by.

Memories will often include how our senses, our bodies, and our emotions were at the time. They can be rich, multifaceted, and complex. They also can be difficult or, in some cases, terrifying. But, whenever possible, to incline our hearts toward presence and invite the whole of ourselves into experience is to cultivate an integrated state of being. And when memories are integrated into our present experience, they can affect how we are now. Sometimes when we attend fully to some present experience, we are visited by the past. This may be an invitation toward integration, an opportunity to integrate our memories of past experiences into the present. Because we are never really without our past or without our future, even when we embrace the present moment. It is this knowing, this felt sense of how fragile and fleeting life is that wakes us up to the now.

A memory

Ever since I was small, I have had a fascination with and love for aspen trees. They stand in clusters or groves with their white bark, delicately mottled with black, like a small family of friendly skeletons wearing coats of gold sequins. Their fragile beauty is especially brilliant during the few short weeks in fall when their bright, round leaves quiver and quake in the thin mountain air. Their leaves seem so delicate, almost fragile. The small teardrops are coated with a light waxy substance that makes them shimmer in the sunlight, shimmer like sequins attached by thread to cloth or bone. They appear to shiver. And then there is the sound. That shimmering sound of time, of heartache, and of love.

My most recent memory of aspen trees (which we do not see in NYC) was evoked by seeing the birch trees last fall in Riverside Park. They share a similar white, silvery bark as their aspen brethren. That silvery white is so evocative of snow or bone. And then there is the gold of their leaves, the short but beautiful life of those golden leaves.

When I close my eyes to remember the aspen trees, my heart literally aches. I can feel the cool, dry Rocky Mountain air on my nostrils, hear those leaves quaking, as if they were speaking some primeval secret of life. It is as if their leaves are softly whispering of what has been and what is to come. Listening to their quiet song was both sad and beautiful. It was like many a memory.

My aspen memories include my family. They include our sweet adopted border collie Allie, a young stray that we had taken into our home and hearts. When we took a Sunday drive one fall afternoon to see the aspen trees, she became terrified. When we stopped the car, she began to shiver and shake just like the trees, perhaps remembering being abandoned on a road somewhere. Her fur was a beautiful pale gold and white. She reminded me of the quaking aspen trees that surrounded us. That day she would not leave our station wagon to play or go for a hike among the trees. She was terrified. So we stayed. We stayed with her and her fear. We surrounded her with love as she shook in anticipation—or in memory.

I remember how the light that day reflected off the shimmering sequins of golden leaves. And off the fur of my scared dog Allie. It was clear and brilliant and all encompassing like the air. The clean and crisp air that held light and sound, that held the smell that dogs emit when they are afraid. It held the sweet and yet frustrated voices of my parents and my brother. It held us all in the very breath that knows past and future are here, now, in this moment. And the breath that is inclusive of the complexity that is experience. And the breath that is inclined toward acceptance and love.

This is the use of memory:
For liberation—not less of love but expanding
Of love beyond desire, and so liberation
From the future as well as the past.
—T.S. Eliot

(Photo by Curtis Settino)

Carrie Owerko (Senior Intermediate I) is a core faculty member of the Iyengar Yoga Institute of NY, and she travels regularly to India to study with the Iyengar family.

Book Review
Yoga Philosophy On and Off the Mat: B.K.S. Iyengar's Core of the Yoga Sutras
By Peggy Hong

As Iyengar Yoga students and teachers, we know that the practice is far more than physical. We have witnessed, in ourselves, our colleagues, and our students, the profundity of the practice. We know how it shapes our emotions, clarifies our intellects, and calms or stimulates our minds. Yet, in a yoga methodology known for its rigor, precision, and attention to physical alignment, how do we discuss or present these finer, more subtle aspects?

Once again, Guruji B.K.S. Iyengar, now age 94, has come to our aid with a wonderful resource. His latest book, Core of the Yoga Sutras, penetrates the classic yoga scriptures (especially Patanjali's Yoga Sutras), grouping them thematically for understanding and application.

We all know that understanding the yoga sutras brings a depth and richness to our practice, yet how do we share this? Core of the Yoga Sutras is the kind of book, after you read it cover to cover, that you can refer to daily to enrich your own understanding of yoga or to prepare to teach a class in which you share a seed of philosophy.

The structure of the book is what makes it groundbreaking and ever so applicable to yoga practitioners.
For instance, Chapter X, Klesa, Vrtti, and Antaraya—Afflictions, Fluctuations, and Impediments, integrates these important concepts, tying them together with sutra references from all four padas (chapters) of Patanjali's Yoga Sutras, as well as The Bhagavad Gita. I imagine if we each had an opportunity to sit down with Guruji to have a philosophical conversation, he would share insights and explanations, sprinkled with sutra references, to connect and ground the discussion to the scriptures. Only a teacher with a sweeping knowledge of the sutras, who has studied them for decades, applied them to daily life, and keenly observed his or her own consciousness, could present such a book.

For instance, in Chapter X, Guruji cites sutras I.33, III.24, and III.25 as "sutras [to] help sadhakas directly build up the qualities needed to stop unfavorable thoughts and help in removing wants, desires, and impressions" (88). He lists these sutras with brief commentaries, then explains how they connect to astanga yoga (the eight limbs of yoga, or Guruji's preferred translation, the eight petals). Maitri (friendliness) and karuna (compassion) stand for Yama and Niyama, while mudita (joy) and upeksa (indifference) correspond to Asana and Pranayama. They combine to eliminate the nine antarayas (impediments). He continues:

"Sutra I.33 stipulates that these antarayas must be eradicated with the means of asana-abhyasa (I.32) [postures-practice] and, once cured, fixed, stabilised or under control, one must treat them with mudita [joy] and upeksa [indifference]. The latter means, in this sense, vairagya [renunciation]. Then from the next sutra (I.34) Patanjali introduces gradually and systematically the different aspects of astanga yoga [eight limbs of yoga] from pranayama [breath control] onward until dhyana (I.39) [meditation]." (89–90)

In this way, Guruji thoroughly demonstrates how to approach the classic texts with an integrative mindset that reveals the relationships among the guiding principles of yoga.

The back of Core of the Yoga Sutras is nearly as valuable as the main body of the book. It contains an extremely useful Sanskrit glossary, which goes into more detail than Guruji's earlier classic, Light on the Yoga Sutras of Patanjali. It also contains a sequential layout of Patanjali's sutras and its transliteration, so they can be easily referenced by chapter and order. This is a particularly useful format for chanting. The next appendix arranges the sutras in alphabetical order, so if you remember how a sutra starts off, instead of thumbing through an entire book, you can easily find its sutra number, as well as the page reference in the book.

All serious practitioners who seek a deeper understanding of yoga's underlying principles will find this book useful because it shows us how we can create more harmonious and more conscious lives through the study of yoga. Once again, we thank Guruji with our hearts and minds for continuing to shine the light on yoga.

Peggy Kwisuk Hong (Intermediate Junior II) directed a nonprofit Iyengar Yoga center, Riverwest Yogashala, in Milwaukee for nearly 10 years. She recently moved to Detroit and is now helping spread the healing art of Iyengar Yoga through community classes at Yoga Suite Center for Yoga Studies as well as in homes, public schools, and neighborhood centers.

Classifieds

Yoga Sanctuary for Sale in New Zealand (Bay of Islands): 6-bedroom house. Separate yoga centre with 12-year clientele; 2.2-hectare property with river boundary, established organic veggie gardens and fruit orchards. P.O.A. Mobile N.Z. 0274981018. For photos of the property, email Louisa at kerikeriyogacentre@xtra.co.nz.

Congratulations to Abhijata Iyengar on the birth of her daughter in April 2013. Happy wishes from the IYNAUS community.

A Call for Musings: Yoga Samachar seeks submissions for our "Musings" column, which features a range of short thought pieces from members. These can be philosophical in nature or might focus on more practical topics—for example, a great idea for managing your studio or for creating community in your home town. For this issue, Carrie Owerko (Senior Intermediate I) contributed "Memory" (see page 34). Please send your own Musings to yogasamachar@iynaus.org by Aug. 1.

Ask the Yogi: Beginning with the Fall 2013/Winter 2014 issue, Yoga Samachar will feature a new column, "Ask the Yogi." Rotating senior teachers will provide answers to a range of questions submitted by IYNAUS members. We welcome your questions related to how or when to use props, how best to deal with specific health conditions, philosophical help with the sutras, tips on teaching or doing certain poses, and more. Please send questions to yogasamachar@iynaus.org by Aug. 1.

Volunteer Transcriptionist Wanted: Yoga Samachar is looking for volunteers to help transcribe interviews with senior teachers and other people in the Iyengar community. If you are interested, please contact Michelle D. Williams at michelledelaine@yahoo.com.

2013 Convention Photos Wanted: Did you get some great shots at the conference or convention this year? Yoga Samachar is looking for photos of students and teachers in the IYNAUS community as well as shots of the various activities in San Diego. Space will be limited in the magazine for publication, but we will consider all that are submitted. Please contact Michelle D. Williams at michelledelaine@yahoo.com for details on how to submit.

YOUR AD HERE: Yoga Samachar accepts short, text-only classified ads to announce workshops, offer props for sale, list teacher openings at your studio, or provide other yoga-related information. Ads cost $50 for up to 50 words, plus $1 per word over 50 words, including phone numbers, USPS addresses, and websites. Please contact Michelle D. Williams at michelledelaine@yahoo.com for more information or to submit an ad.

Treasurer's Report—IYNAUS Finances
By David Carpenter

In the last issue of Yoga Samachar, I provided an overview of IYNAUS' finances and the challenges that the association faces. I will devote this report to updating the data from the last issue and providing a little more information on steps that can be taken to increase the association's revenues and to improve its financial condition.

As of March 1, 2013, we had approximately $90,000 in unrestricted cash on hand, and there is also roughly $70,000 of restricted moneys in the separate certification mark account that is jointly controlled by IYNAUS and Guruji (through Gloria Goldberg, who is Guruji's attorney in fact in the U.S.). We also have inventory for the IYNAUS store and other "illiquid" assets that we carry on our books at $142,465. In our day-to-day operation, we can only use our unrestricted cash, and $90,000 is only sufficient to cover about four months of the association's expenses. So we continue to operate with a very small financial cushion.

Year-to-year comparisons are often instructive. Since last fall, we have prepared a reasonably accurate profit and loss statement for 2010, and we now obviously have figures for all of 2012. The following chart shows IYNAUS' revenues and expenses for 2010, 2011, and 2012. To simplify the presentation, we have allocated all revenues and expenses for the 2010 Portland Convention to that calendar year. We also allocated all of IYNAUS' expenses for IYAMW's 2011 From the Heartland Conference and for IYASE's 2012 Maitri Conference—as well as IYNAUS' 50 percent share of the profits or losses from these conferences—to those specific years. We also have shown the results when these event revenues are excluded.
IYNAUS Profit and Loss Statements

REVENUES                                        2010      2011      2012
Unrestricted Revenue
  Dues (less regions' shares)                  85,825    72,650    84,920
  Event revenues (including receivables)       84,513    35,366   -24,000
  Store revenues (less cost of goods)         112,055    69,522    58,443
  Charitable contributions to IYNAUS            7,485     4,750     1,720
Restricted Revenue
  Certification mark (less payments to India)  22,600    16,580    16,785
Earmarked Revenue
  Assessment fees and manual                   48,895    47,985    46,850
  Bellur donations                             23,726     7,658     4,290
TOTAL REVENUES                                385,099   254,511   189,008

EXPENSES
  Bellur donations                             23,726     7,658     4,290
  Salaries and employment taxes                79,864    76,807    64,531
  Production expenses for Yoga Samachar        24,044    22,012    25,516
  Assessment expenses                          48,108    52,470    54,559
  Legal fees                                   12,358    13,919    17,631
  Website design and maintenance               46,659    29,002    25,929
  IYNAUS board meeting travel expenses         10,304    12,035    10,532
  Bookkeeping                                  12,750     5,475     4,853
  Office supplies and expenses                  7,487     6,004     5,981
  Merchant and bank fees (for store)           27,212    22,565    15,429
  Non-employee insurance and taxes              7,054     5,612     2,434
TOTAL EXPENSES                                299,566   253,559   231,685

NET REVENUE                                    85,533       952   -42,677
NET REVENUE—EXCLUDING CONVENTION/
REGIONAL CONFERENCES                            1,020   -34,414   -18,677
For this reason, it is from Guruji’s early years, but it will require significant fortunate that the upcoming San Diego conference and further investments to ensure that these archival materials convention promise to be exceptional events, and our board are adequately preserved. These are just three examples of has high hopes that they will be financially successful. But initiatives that IYNAUS might undertake that would require the experience with the Maitri Conference has taught us that increased funding. events with well-conceived programs and stellar teachers will not always generate positive financial results. In the The board is engaged in a serious strategic planning exercise event that the San Diego conference does not generate to identify options and set priorities, and the outcome of this substantial profits, the board will have to explore ways to exercise may include efforts to increase IYNAUS’ revenues. enhance the association’s revenues or reduce its expenses. One possibility might be for IYNAUS to begin making concerted efforts to attract charitable contributions, which Some such efforts are already underway. Because revenues would include the kinds of end-of-the-year appeals that are from assessment fees and manuals have not covered the annual events for most other not-for-profit corporations. costs of assessments during each of the past two years, the Another option might be a modest increase in dues. Still assessment committee has increased assessment fees other options will be explored. Members should be assured slightly this year (but these fees will continue to be held that the board will not undertake these measures unless we down by the fact that assessors all donate their time and are convinced that they will enable us to better achieve the that studios host assessments rent free). Also, in the past two association’s mission of promoting Iyengar Yoga in the U.S. years, our efforts to obtain federal tax IDs for Iyengar family and that they will benefit Iyengar method teachers and members caused us to incur significant legal fees, and we are IYNAUS members. Stay tuned. David Carpenter IYNAUS Treasurer Photo : Lois Steinberg Spring /Summer 2013 Yoga Samachar 39 Photo by Tori Mllner 40 Yoga Samachar Spring /Summer 2013 December, 2012, in the city of Varanasi, on the banks of the Ganges River (Photo by James Burton) On the Rolling Seas By Mary Ann Travis Mary Ann Travis is a yoga teacher at Audubon Yoga Studio in New Orleans. She has passed the Intro I level of teacher assessment. B.K.S. Iyengar Yoga National Association of the United States P.O. Box 538 Seattle, WA 98111 A traditional Indian market (Photo by Vicky Grogg)
https://issuu.com/dongura/docs/ys_1013_spring_summer_magazine
CC-MAIN-2017-09
en
refinedweb
:hm. this doesn't really add value to your testing case for passing signed
:chars. now they all appear non-printable - and not like the integer
:equivalent, as the naive programmer would expect. I would prefer the
:openbsd way:
:
:	if (c == EOF)
:		return 0;
:	else
:		return ((table+1)[(unsigned char)c]);
:
:this gives expected results for (common?) misuse and even saves us one
:conditional.
:
:cheers
: simon

Well, from all the standards that were quoted at me I don't think we can safely return anything but 0 for negative numbers, simply because the data space for one of those numbers, -1, is overloaded. Since there is no way to get around the -1 vs 0xFF problem, it's probably best not to try to retain compatibility for other negative numbers either.

Another possibility would be to use more GCC constructs to check whether the passed type is a char and emit an #error if it is (versus just emitting a warning, which is useless relative to all the warnings we get compiling third-party software anyway). But I dislike such a solution because a program could use chars to store restricted ASCII characters and still be entirely correct.

So I think it's best to stick with just returning 0 for out-of-bounds values.

-Matt
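For illustration only, here is a minimal, self-contained C sketch of the bounds-checked lookup being discussed. The table contents and the function name `myisprint` are invented for the example; this is not DragonFly's actual ctype implementation.

#include <stdio.h>

/*
 * Illustrative 257-entry classification table: slot 0 is reserved for
 * EOF (-1) in the OpenBSD-style scheme, and slots 1..256 describe the
 * characters 0..255.  Here only 'A' is marked, purely for the demo.
 */
static const unsigned char table[257] = { ['A' + 1] = 1 };

/*
 * Return 0 for EOF and for any other out-of-range argument instead of
 * ever indexing the table with a negative subscript.
 */
static int
myisprint(int c)
{
	if (c < 0 || c > 255)		/* covers EOF (-1) and misuse */
		return 0;
	return ((table + 1)[(unsigned char)c]);
}

int
main(void)
{
	/* prints "1 0 0": 'A' is marked, -1 and -42 fall out of range */
	printf("%d %d %d\n", myisprint('A'), myisprint(-1), myisprint(-42));
	return 0;
}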
https://www.dragonflybsd.org/mailarchive/commits/2005-07/msg00184.html
CC-MAIN-2017-09
en
refinedweb
Introduction

I am working on a project and I confronted a weird use case that maybe ActiveJDBC is not meant for. I plead for patience, because many things in this project are not in my control: I have 10 to 15 small/medium databases (~30 tables each, 40,000 records max), and most of them share a "core" schema of 15 tables, but at the same time they have some specific tables unique to each database. They are all maintained by legacy systems which I don't have access to.

Goal

We (me and some comrades) will need to centralize the data in a kind of "convoluted data warehouse". Unfortunately, for higher reasons, I cannot use any technologies other than ActiveJDBC, and everything other than that needs to be written by us (I know that this could be handled better with MongoDB and/or Liquibase). We already handled the connections between the databases, and the project itself is going well for the most part. The part of the program that handles the core schema that all the databases share is already "working", but we are having trouble with their unique tables. I get all table names from the databases from a query that is made at runtime (not my choice either). We need to keep the number of classes at a minimum, preferably.

Finally, my question

Can I create a generic/dynamic model, or something similar, that can hold data from a query at runtime? Something like:

    Model a = Base.findAll("select * from ?", tableName);
    Model a = Model.fromTable(tableName);

---

You are in for a treat. I'm sensing from your reference to ActiveJDBC that this is not your choice, but you will be surprised at its flexibility. Let me dispel a couple of things first: MongoDB is not a relational database, and Liquibase is a DB migration system. JavaLite provides a much simpler DB-Migrator.

Now, to the meat of the answer. As you might already know, an ActiveJDBC model is really a Map on steroids. This means you can do this:

    Person p = new Person();
    p.fromMap(aMap);

See, the method Model#fromMap(Map) reads attributes from a map as long as they correspond to names of this model's attributes, and overwrites its values with values from the map.

Let's write some code. For instance, there is a table called PEOPLE in the first database, and USERS in the "other" database, such that:

    create table USERS (first_name VARCHAR(56));
    create table PEOPLE (firstname VARCHAR(56));

As you can see, the "first name" columns exist in both databases/tables, but have different names. So, let's write code to read from USERS and save to PEOPLE:

    // define model
    public class Person extends Model {}

    ...

    Base.open(...);                    // open default database
    DB otherDB = new DB("other_database");
    otherDB.open(...);                 // open other database

    // read users from "other" database, aliasing the column to match Person
    List<Map> users = otherDB.findAll("select first_name \"firstname\" from users");

    // save people into default database
    for (Map user : users) {
        Person p = new Person();
        p.fromMap(user);
        p.saveIt();
    }

    Base.close();
    otherDB.close();

I hope this solves your problem!
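As a footnote to the answer above, here is a minimal Java sketch of the class-free, dynamic-table direction the question asks about. It relies on Base.findAll returning plain maps; the class name is invented, and it assumes the table name comes from a trusted, whitelisted list, since SQL identifiers cannot be passed as JDBC bind parameters:

    import org.javalite.activejdbc.Base;
    import java.util.List;
    import java.util.Map;

    public class DynamicReader {

        // Reads any table into plain maps without declaring a Model subclass.
        // The caller must validate/whitelist tableName: it is concatenated
        // into the SQL because "?" placeholders cannot bind identifiers.
        public static List<Map> readTable(String tableName) {
            return Base.findAll("select * from " + tableName);
        }
    }

Each returned Map can then be fed into Model#fromMap(Map) on whichever model matches the destination table, as in the answer above.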
https://codedump.io/share/lk2N0ziEgspJ/1/how-can-i-get-data-from-a-table-that-does-not-have-a-model
CC-MAIN-2017-09
en
refinedweb
Yesterday I posted a Channel 9 interview with Avner Aharoni, a Program Manager on the Visual Basic Team. In this interview Avner shows us how to enable XML IntelliSense in Visual Basic using the XML to Schema Wizard. He also shows the differences between how IntelliSense works with axis properties on XDocument and XElement objects, and speaks to how the wizard can infer multiple schemas from multiple sources as well as the effect XML namespaces have on IntelliSense.

Get started with LINQ to XML in Visual Basic with these How-to Videos. Enjoy!

Comment: Hi Beth Massi, had a blast on the Geek Speak today. If you missed it, they will have it available on demand from their
https://blogs.msdn.microsoft.com/bethmassi/2008/01/18/channel-9-interview-xml-properties-and-enabling-intellisense/
CC-MAIN-2017-09
en
refinedweb
CatalystX::LeakChecker - Debug memory leaks in Catalyst applications

version 0.06

    package MyApp;
    use namespace::autoclean;

    extends 'Catalyst';
    with 'CatalystX::LeakChecker';

    __PACKAGE__->setup;

...
http://search.cpan.org/dist/CatalystX-LeakChecker/lib/CatalystX/LeakChecker.pm
CC-MAIN-2017-09
en
refinedweb
Supermarket Sales System Services Computer Science Essay

This essay has been submitted by a student. This is not an example of the work written by our professional essay writers.

Design and Develop Supermarket Sales System Services (SMSSS) is my graduation project. A point of sale is the physical location of a transaction, but the term usually refers to any prototype or system used to record transactions for a retailer: the physical location at which goods are sold to customers. The most basic SMSSS setup consists of a computer, a cash drawer, a receipt printer, a monitor (a flat screen is recommended), and an input device such as a keyboard or scanner. The SMSSS can create detailed reports that help you make more informed business decisions. It is worthwhile because it saves money and provides productivity gains. Point-of-sale systems are divided according to two different needs: retail operations, and hospitality businesses such as restaurants, hotels, and pharmacies. SMSSS is a useful system that helps you modify product prices easily, and promotions can be tracked more successfully. It contains details of customers, suppliers, and products. The system has two types of users: the administrator and the end user. The administrator has the authorization to create new users, edit an existing user, and delete or disable a user. The system generates multiple reports for customers, suppliers, products, and sales. This will cut down the amount of time you spend away from the primary focus of your business and give you more control over it.

CHAPTER ONE (Introduction)

Introduction

SMSSS solves a problem that any business could face: inventory that disappears from stores, pharmacies, restaurants, and hospitals due to theft, wastage, and employee misuse. With the system in place, employees know that inventory is being carefully tracked. SMSSS can instantly tell you how many units of a particular product have been sold today, or in any period you want. My graduation project is applied to a supermarket; it will help the business track its remaining inventory, spot sales trends, and use historical data to better forecast its needs, and its detailed sales reports make it much easier to keep the right stock on hand. SMSSS includes two types of users: an administrator and a normal user. The first has the authorization to add users, check the reports, and monitor the suppliers. The normal user prepares the customers' orders and prints the bills. Every point-of-sale system needs a printer to create credit card slips for customers, but the users of this system will work without that printer, so payment will be handled as a separate operation with a normal printer. Touch screens are more intuitive to use than keyboards for many users, but in this case there will be no touch screens; the users will depend on a keyboard and a computer only. It is a simple system that does not demand much flexibility from its users.

Background

As mentioned in the introduction, this project is applied to a supermarket with a number of employees who receive the customers' orders. The supermarket is new, so it is looking for a suitable way to run its business. This work usually requires a fast user who understands the customer's order in a short time and saves money by computing the order's price without mistakes. Especially when the business is running well and there are many customer orders, you should stay focused so the business is not affected; that is why the SMSSS helps to save money and provides productivity gains.

Objective

Our project helps to record any and all sales. It automates overall inventory control, helping to keep stock in proper balance depending on demand and other factors, so management is much easier. You are also able to track promotions more successfully, whether through coupons or special discounts. In SMSSS you get many tools in a single package. SMSSS makes better use of your personnel—little is more maddening to a business owner than watching his or her staff bogged down. SMSSS reduces paperwork, increases productivity, reduces the time you have to spend doing inventory, provides more precise information on the rate at which each product in your inventory moves (so you know when and how much of each item to order), and shows you what is selling and what is not, which vendor products are profitable, and which vendors are making you the most money overall. As a result, your customers get faster and more accurate service.

CHAPTER TWO (Project Deliverables)

Project Deliverables

The deliverables for this project consist of five main sections:

Research
Analysis and Design
Implementation and Testing
Testing and Evaluation
Project Management

2.1 Research

In this section, information is gathered about HCI, about software similar to the one that will be built, and about the language chosen for the implementation.

2.2 Analysis and Design

Analysis is the process of breaking a case or topic into smaller parts to gain a better understanding of it. This helps to find the functionality the software should have and the programming language that should be used to implement it, to solve problems found during the research section, and to decide what methodology will be used for both designing and implementing the system. A design method consists of a modeling language and a design process. A modeling language is a convention detailing how the design will be written on paper; the Unified Modeling Language (UML) is an example. The design process gives details on how the software will be developed.

2.3 Implementation and Testing

The design section will contain screen designs as well as the core design of the software and the way it is implemented, and it will include the testing and evaluation.

2.4 Testing and Evaluation

Testing is exercising a software system to find errors: testing a set of programs against a requirements specification expressed in different diagrams, such as data flow and entity relationship diagrams. In the process of software development, testing has historically been left until the code has been written. [1] Code is tested by different people at different times. Who the testers are depends on which kind of testing is being done and the resources allocated to testing a particular software product. The testers could be the programmers, a team of testers, people representing the market for the software, the client, or the maintainer. [1] Code is tested during and after implementation of the software. Pre-implementation testing is done by a testing team of reviewers, the project manager, clients, or system developers. After implementation, during code testing, software developers prefer to test the system from the bottom up, in order to check whether they have coded correctly. The testers apply two main testing techniques: black box testing and white box testing. There are many different kinds of testing, such as unit testing, test phases, system testing, integration testing, regression testing, acceptance testing, release testing, and beta testing. [1]

2.5 Project Management

Each project can run smoothly and efficiently according to some sort of management. This way of monitoring the project is important for controlling the costs and benefits of any project. If there are reasons that make the work run late, or that make the costs begin to escalate, it is necessary and essential to discover this as soon as possible and to be able to recover from these problems. If a problem is uncovered, then corrective action can be taken to reduce its possible effects. The project manager is responsible for organizing, planning, monitoring, and controlling the project, provided the management side has been separated from the technical development. [2] The tasks of project management—planning, estimating time and effort, identifying tasks for the team, using prior experience, scheduling work, using resources, monitoring and controlling the progress of the project, evaluating priorities, and quickly identifying the causes of problems—are all expensive, in terms of both time and money. [2] It is important for the project manager to be able to identify areas of risk, such as lack of knowledge, new technologies, or problems relating to requirements, and plans should be prepared to obviate these. [2] Most project managers use charts to schedule their work. There are many different types of chart; they mainly fall into one of two categories: bar charts (often referred to as Gantt charts) and network charts (often referred to as CPM, PERT, or CPA). In this project we will use the Gantt chart in order to organize our timing.

Chapter Three: Research

Similar Products

There are a variety of software solutions available for a cashier assistant (point of sale). Several different approaches are examined here to compare different features and gain an understanding of the best approach to take when designing a cashier assistant (POS). In the following pages, good and bad points about each surveyed application are examined to decide what to add or improve in the software to be built. The layout of this examination is as follows:

The name of the existing application
A list of good points about the application
A list of bad points about the application

3.1 Harold's Fine Home Lighting

A company based on tradition, with 17 independent workstations.

3.1.1 Good Points

Fast.
Eliminates the need for a separate point-of-sale terminal.
Provides quick verification and processing of credit cards at the POS.
Simple, efficient bar coding creates a reliable way to control inventory.

3.1.2 Bad Points

Huge data; there is no automatic backup.
Difficult to update.

3.2 Rod Works

Five stores that depend on sale items.

3.2.1 Good Points

Automatically generates purchase orders when items fall below reorder levels.
Reduces inventory carrying costs by tracking inventory turns.
Generates discrepancy reports to resolve errors in physical inventory.

3.2.2 Bad Points

Huge data; there is no automatic backup.
Difficult to update.

Research into HCI

Human-computer interaction (HCI) is created by computer software and hardware. HCI makes the interface between the computer and the human more interesting and satisfies users' needs. HCI design methodologies are based on user-centered design; examples are focus groups, affordance analysis, participatory design, rapid prototyping, user scenarios, value-sensitive design, and contextual design. In HCI design, seven principles are considered: tolerance, simplicity, visibility, affordance, feedback, structure, and consistency. Web interfaces and normal GUIs are used according to the purpose; HCI is applied in settings from conferences to space shuttles and aircraft. [3]

The human-computer interface is the point of communication between the user and the computer. Several factors shape the end user's interaction:

Environment: where the user uses the machine, and in which setting (a college, a park, or elsewhere).
Input information: the tasks given by the user to the computer.
Output data: whatever the computer generates and presents to the user.
Feedback: whatever the user passes to the computer and receives back.

HCI is affected by several forces shaping the future of computing, for example: [5]

Reduction in hardware prices, leading to larger memories and faster systems.
Miniaturization of hardware, leading to portability.
Decrease in power requirements, leading to portability.
New display technologies, leading to the packaging of computational machines in new shapes.
Specialized hardware, leading to new roles.
Improved progress of network communication and distributed computing.
Increasingly extensive use of computers, especially by users who are outside of the computing profession.
Growing innovation in input techniques (e.g., voice, gesture, pen), combined with lowering prices, leading to rapid computerization by people who were left out of the "computer revolution."
Widespread common concerns leading to improved access to computers by disadvantaged groups.

Inputs: kinds of input principles (e.g., choice, discrete parameter specification, continuous control); input techniques: keyboard, menus, mouse, pen-based, voice.
Outputs: kinds of output principles (express precise information, abstract information, show processes, create illustrations of information); output techniques: scrolling displays, windows, animation, sprites, fish-eye displays.
Interaction techniques: screen design issues (e.g., focus, clutter, visual logic); dialogue types and techniques (e.g., form filling, menu selection, icons and direct manipulation, generic functions, searching, error management); multimedia and non-graphical dialogues: microphone, speaker, voice mail, video mail, active documents, CD-ROM.
Issues: real-time response issues; manual control theory; supervisory control, automatic systems, embedded systems; standards; protection.

3.3 Research on Software Tools

3.3.1 Research on Java

Java is an object-oriented programming language; it is often used in devices such as VCRs and toasters. It was originally called Oak. It is a high-level language developed by Sun Microsystems, and its syntax is based on that of C++. Java includes a set of class libraries that provide basic data types, system input and output capabilities, and other utility functions in the standard Java environment. Java is platform-independent, which means it is easy to move a program from one computer to another; this is an advantage of Java over other programming languages. To execute a Java program, a bytecode interpreter must be run. A bytecode interpreter is built into every Java-enabled browser; it reads the bytecodes and executes the Java program, and it is often called the Java virtual machine or the Java runtime.
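As a rough illustration of the kind of bill-total computation described in Chapter 1, here is a minimal sketch in Java (the language surveyed in Chapter 3). All class, field, and method names here are invented for illustration; this is not the project's actual design:

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical line item on a customer order.
    class OrderLine {
        String product;
        int quantity;
        double unitPrice;

        OrderLine(String product, int quantity, double unitPrice) {
            this.product = product;
            this.quantity = quantity;
            this.unitPrice = unitPrice;
        }
    }

    public class BillCalculator {

        // Totals the order so the cashier never adds prices by hand.
        static double total(List<OrderLine> lines) {
            double sum = 0.0;
            for (OrderLine line : lines) {
                sum += line.quantity * line.unitPrice;
            }
            return sum;
        }

        public static void main(String[] args) {
            List<OrderLine> order = new ArrayList<OrderLine>();
            order.add(new OrderLine("Milk", 2, 1.50));
            order.add(new OrderLine("Bread", 1, 0.80));
            System.out.println("Total: " + total(order)); // prints Total: 3.8
        }
    }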
Objective Our project helps to record any and all sales, it automate overall inventory control, helping to keep stocks in proper balance depend on demand and other factors, so management is much easier. Also you are able to track promotions more successfully, whether through capons and special discount. In SMSSS you get many tools in a single package. SMSSS can make better use of your personal by little is more maddening to a business owner than watching his/her staff bogged down. SMSSS reduce paperwork, increase productivity, reduce time you have to spend doing inventory, provide more precise information on the rate at which each product in your inventory moves so you know when and how much of each item to order, showing you what selling and what not, which vendor product are profitable, and which vendors are making you the most money overall. In this case, your customer gets faster and more accurate service. CHAPTER TWO (Project Deliverables) Project Deliverables The deliverables for this project is consisting into four main sections: Research Analysis and Design Implementation and Testing Testing and Evaluation Project management 2.1 Research In this section, it will gather information about HCI, and similar software to the one that will be built and the language that decide to use. 2.2 Analysis and Design Analysis is the process of breaking a case or topic into many smaller parts to increase a better understanding of it. This way will helps to find the functionality of the software should have, and what programming language should used to implement it, and solve problems found during the research section, and discuss what methodology will use for both designing and implementing the system. Design method consists of the modeling language and the design process. Modeling language is a convention detailing how the design will be written on the paper. Unified Language is an example of a modeling language (UML). It makes details on how the software will be developed. 2.3 Implementation and Testing The design section will contain screen designs as well as the core design of the software and the way it is implemented and it will include the testing and evaluation. 2.4 Testing and Evaluation Testing is establishing a software system to find out the errors. Testing a set of programs against requirements specification expressed in different diagram such as data flow and entity relationship diagrams. In the process of software development, testing has historically been left until the code has been written. [1] Testing code done by different people at different times. The testers are depending on which testing is being done and the resources allocated in order to tasting a particular software product. The testers could be: the programmers, a team of testers, people represent the market for the software, the client and maintainer [1]. During and after implementation the software, code is being tested .Pre-implementation testing is done by a testing team of reviewer's project manager or clients or system developers. After implementation or code testing, software developers prefer to taste the system from the bottom up, in order to check if they launch to have coded correctly. The testers applying two main testing techniques: black box testing and white box testing. 
There are many different kinds of testing, such as unit testing, test phases, system testing, integration testing, regression testing, acceptance testing, release testing, and beta testing. [1]

2.5 Project Management
A project can run smoothly and efficiently only under some sort of management. Monitoring the project is important in order to control its costs and benefits. If the work is running late, or costs are beginning to escalate, it is necessary and essential to discover this as soon as possible and to be able to recover from it. Once a problem is uncovered, corrective action can be taken to reduce its possible effects. The project manager is responsible for organizing, planning, monitoring, and controlling the project, provided the management side has been separated from the technical development. [2] The tasks of project management (planning, estimating time and effort, identifying tasks for the team, using prior experience, scheduling work, using resources, monitoring and controlling the progress of the project, evaluating priorities, and quickly identifying the causes of problems) are all expensive, in terms of both time and money. [2] It is important for the project manager to be able to identify areas of risk, such as lack of knowledge, new technologies, or problems relating to requirements, and plans should be prepared to obviate these. [2] Most project managers use charts to schedule their work. There are many different types of chart, but they mainly fall into one of two categories: bar charts (often referred to as Gantt charts) and network charts (often referred to as CPM, PERT, or CPA). In this project we will use the Gantt chart to organize our timing.

Chapter Three
Research

Similar Products
There are a variety of software solutions available for a cashier assistant (point of sale). Several different approaches are examined here to compare features and gain an understanding of the best approach to take when designing a cashier assistant (POS). In the following pages, good and bad points about each surveyed application are examined to decide what to add or improve in the software to be built. The layout of this examination is as follows:
The name of the existing application
A list of good points about the application
A list of bad points about the application

3.1 Harold's Fine Home Lighting
A company based on tradition, with 17 independent workstations.
3.1.1 Good Points
Fast.
Eliminates the need for a separate point-of-sale terminal.
Provides quick verification and processing of credit cards at the POS.
Simple, efficient bar coding creates a reliable way to control inventory.
3.1.2 Bad Points
Huge data; there is no automatic backup.
Difficult to update.

3.2 Rod Works
Five stores that depend on sale items.
3.2.1 Good Points
Automatically generates purchase orders when items fall below reorder levels.
Reduces inventory carrying costs by tracking inventory turns.
Generates discrepancy reports to resolve errors in physical inventory.
3.2.2 Bad Points
Huge data; there is no automatic backup.
Difficult to update.

Research into HCI
Human computer interaction (HCI) concerns the interface created by computer software and hardware. It makes the interface between computer and human more interesting and satisfies users' needs. HCI design methodologies are based on User Centered Design.
These methodologies include focus groups, affordance analysis, participatory design, rapid prototyping, user scenarios, value-sensitive design, contextual design, and so on. In HCI design, seven principles are considered: tolerance, simplicity, visibility, affordance, feedback, structure, and consistency. Web interfaces and normal GUIs are used according to the purpose; HCI is applied to conferences, space shuttles, aircraft, etc. [3]

The human-computer interface is the point of communication between the user and the computer. Several aspects of the interaction depend on the end user:
Environment: where the user uses his own machine and in which environment (college, park, or wherever).
Input information: the tasks required by the user from the computer.
Output data: whatever is generated by the computer and presented to the user.
Feedback: whatever the user passes to the computer and receives back.

HCI is affected by several forces of future computing, for example: [5]
Reduction in hardware prices, leading to larger memories and faster systems.
Miniaturization of hardware, leading to portability.
Decrease in power needs, leading to portability.
New display technologies, leading to the packaging of computational machines in new shapes.
Specialized hardware, leading to new roles.
Improved progress in network communication and distributed computing.
Increasingly extensive use of computers, especially by users who are outside of the computing profession.
Growing novelty in input techniques (e.g., voice, gesture, pen), combined with lowering prices, leading to rapid computerization by people left out of the "computer insurgency."
Widespread social concerns, leading to better access to computers by disadvantaged groups.

HCI research and design cover the following topic areas:
Inputs: kinds of input principles (e.g., choice, discrete parameter specification, continuous control); input techniques such as keyboard, menus, mouse-based, pen-based, and voice.
Outputs: kinds of output principles (expressing precise information, abstract information, showing processes, creating illustrations of information); output techniques such as scrolling displays, windows, animation, sprites, and fish-eye displays; screen design issues (e.g., focus, clutter, visual logic).
Interaction Techniques: dialogue styles and techniques (for example, form filling, menu selection, icons and direct manipulation, generic functions), searching, and error management; multimedia and non-graphical dialogues such as microphone, speaker, voice mail, video mail, active documents, and CD-ROM.
Issues: real-time response issues; manual control theory; supervisory control, automatic systems, and embedded systems; standards and protection.

3.2 Research on Software Tools

3.2.1 Research on Java
Java is an object-oriented programming language; it is even used in devices such as VCRs and toasters. Originally called Oak, it is a high-level language developed by Sun Microsystems, and its design was influenced by the first versions of C++. Java includes a set of class libraries that provide basic data types, system input and output capabilities, and other utility functions in its standard environment. Java is platform independent, which means it is easy to move a program from one computer to another; this is an advantage of Java over other programming languages. To execute a Java program, a bytecode interpreter must run. A bytecode interpreter is built into every Java-enabled browser; it reads the bytecodes and executes the Java program. It is often called the Java virtual machine or the Java runtime.
Java code is compiled into a special machine code (bytecode) that is then interpreted. The role of the interpreter is to protect the machine from the kinds of errors that can break down operating systems in C++. Compiled bytecode can run on different systems, so a Java program can be sent across a network to a machine with a different operating system and a different GUI. Since Java is a high-level language, it is used to write both applets and applications. An applet is a small program sent across the Internet and interpreted on the client machine by a Java-aware browser. A Java application, in contrast, is compiled and then run on the same machine; that is why you cannot run an applet unless you have an application or a WWW page that refers to it. Before running an applet you may have to change the properties of your browser. This is a security issue: the browser must trust the code in order to run it, which is why an applet built with an unrecognized compiler version may be rejected.

3.2.2 Visual Basic
Visual Basic is a programming language and environment developed for Windows applications, web applications, reports, and many other development purposes. [10] Since its launch in 1990, Visual Basic has set the custom for visual programming languages. Now there are visual environments for many programming languages, including C and C++, plus Pascal and Java. VB is sometimes called a RAD system because it enables programmers to rapidly build prototype applications. VB was one of the first products to supply a visual interface environment for the user; VB programmers can insert a substantial quantity of code just by dragging and dropping controls, such as buttons, text boxes, and radio buttons, and then defining them using their properties and functions. Although not a true OOP language, it is sometimes known as an event-driven language because each object can act in response to several events, for example a mouse click.

3.2.3 C# Language
C# is an object-oriented language that further includes support for component-oriented programming. C# has a unified type system: all C# types, including primitive types such as int and double, inherit from a single object type. C# supports user-defined reference types and value types, allowing dynamic allocation of objects and in-line storage of lightweight structures. C# programs consist of one or more source files. The core concepts in C# are programs, namespaces, types, members, and assemblies. Programs declare types, which contain members and can be organized into namespaces. Classes and interfaces are examples of types; fields, methods, properties, and events are examples of members. When C# programs are compiled, they are physically packaged into assemblies. Assemblies contain both executable code, in the form of Intermediate Language (IL) instructions, and symbolic information, in the form of metadata. Before execution begins, the Intermediate Language code is converted to processor-specific code by the Just-In-Time compiler of the .NET Common Language Runtime (CLR). C# has two kinds of variable types: value types and reference types. Variables of value types contain their data directly, whereas variables of reference types store references to their data. With reference types, it is possible for two variables to reference the same object, so operations on one variable can affect the object referenced by the other variable. With value types, each variable has its own copy of the data, and it is not possible for operations on one to affect the other.
3.3 Research on Database

3.3.1 Research on SQL Server
A client/server system is made of two components: an application that is used to present the application data, and a database system that is used to store it. The application may be built with Visual Studio 2005, Access, or some other graphical user interface, and SQL Server is suitable for storing the database. A database is a collection of objects stored in SQL Server, including tables, views, stored procedures, functions, and the other objects necessary to build the database. Tables are generally the first thing you add to a SQL Server database, and each table contains information about a specific entity. Once the tables are created, the user should define the keys for each table. SQL Server stored procedures are compiled code executed on the server. You can execute them through any client (VB.NET, Microsoft Access, Microsoft Word, ...). If you want to modify a stored procedure, you can modify it on the server, and the change affects all client users that call the stored procedure.

Microsoft recognizes that there is a plethora of database users with disparate needs, which is why they released the following six editions:
SQL Server 2005 Express Edition
SQL Server 2005 Workgroup Edition
SQL Server 2005 Developer Edition
SQL Server 2005 Standard Edition
SQL Server 2005 Enterprise Edition
SQL Server 2005 Mobile Edition

Features provided by the database engine:
Clustering services: allow the system to recover immediately by failing over from the current system to another.
Replication services: keep data in synchronization between SQL Server databases and other databases such as Oracle and Microsoft Access; replication is also used to send data to several systems.
XML: transfers data between mixed programs or data sources.
Policy Based Management: SQL Server 2008 adds a new Policy Based Management system, which allows you to report on and enforce a specific pattern for any database object.
Performance data collection: SQL Server 2008 can track performance and other data in a central location and produce reports.
PowerShell provider: SQL Server includes a provider for PowerShell, which helps the user write script programs for SQL Server as well as for Windows, Microsoft Exchange, and Microsoft Office. The SQL Server provider deals with SQL Server instances, databases, and database objects, letting the user work with them in an intuitive way.
Reporting Services: built into SQL Server 2000, SQL Server 2005, and later versions; this feature lets users do their reporting work against SQL Server directly via the browser.

3.3.2 Research on Oracle
The Oracle Database [12] is referred to as Oracle RDBMS or simply Oracle. As of 2009, Oracle remains a major presence in database computing. Oracle RDBMS stores data logically in tablespaces and physically in data files. The Oracle DBMS tracks the data stored with the help of information that is stored in the SYSTEM tablespace itself. In 1979, Oracle became the first commercial relational database based on the relational language SQL. As Oracle developed, its tools came to support many methodologies. Oracle users would purchase the database, and could purchase some of the tools that make up the Oracle product range but not others. Designer 2000, formerly known as Oracle CASE, is helpful for data and process modeling. Developer 2000 is used to build an application once it has been designed using Oracle Designer 2000.
Discoverer 2000 consists of a suite of user-friendly query tools designed for ad-hoc reporting. The Oracle database management system is central to these tools, though many end users are hardly aware of its presence. The main method of communicating directly with the Oracle database is the SQL language, which enables the experienced user to do more than just run queries. There are extensions to the basic SQL language provided in Oracle, and together they form PL/SQL. The SQL optimizer attempts to make each SQL statement as efficient as possible when executed. SQL procedures can be triggered by certain events, for example after updating or deleting a record. The security features are now very sophisticated: you need system privileges to access the database, and object-level privileges at different levels to query, insert, delete, or update any object stored in the database. Data is validated through data constraints; that is, only permitted values of data are allowed to be entered into the database. Oracle tracks its data storage with the help of information stored in the SYSTEM tablespace. Oracle RDBMS also supports locally managed tablespaces, which store space management information in their headers rather than in the SYSTEM tablespace.

The history of the Oracle database:
(1979) first release, Oracle version 2
(1982) released Oracle version 3
(1984) released Oracle version 4
(1986) released Oracle version 5
(1988) released a special version for the Macintosh OS, Oracle version 1
(1989) released Oracle version 6
(1993) released Oracle version 7
(1997) released Oracle version 8
(1999) released Oracle version 8i
(2001) released Oracle version 9i
(2003) released Oracle version 10g
(2007) released Oracle version 11g

3.4 Research into Methodologies

3.4.1 Research into the Traditional System Development Life Cycle (SDLC)
The rapid increase in the power [13], speed, and capacity of computers, and the demands of clients and the marketplace, have encouraged software developers to attempt to develop ever more ambitious systems. The first attempts, in the 1960s and early 1970s, produced complex systems that were difficult to maintain and did not do what was required. The system life cycle was an attempt to establish a structured approach to analyzing, designing, and building software systems. It divides the development of a system into stages:

Feasibility study: the development team visits the customer and studies their system, investigating what the given system requires. At the end of the feasibility study, the team supplies a document that holds specific recommendations for the candidate system: personnel assignments, costs, project schedule, target dates, and so on. The purpose of this phase is to find out the need and to define the problem that needs to be solved.
Analysis and design: the requirements are analyzed and the structure of the system is designed.
Testing: software testing begins when the code is generated; different testing methodologies are available to find the bugs.
Maintenance: once the software is delivered to the customer, the maintenance phase is important for many reasons. Requirements could change once the system is used by the clients, and bugs could appear because of unexpected input values that directly affect the software's operation. The software should be developed to accommodate changes that could happen during the post-implementation period.
3.4.2 Structured System Analysis and Design Method (SSADM)
SSADM is one of the methodologies used for the analysis and design of information systems. It consists of a sequence of stages:

Feasibility Stage
Analyze the case at a high level, using Data Flow Diagrams to explain how the system works and to think about the problems. The parts included in this stage are:
Build up a Business Activity Model (BAM).
Study the requirements.
Study the processing.
Study the data.
Get a logical view of the present services.

Define Requirements Stage
Get an idea about the old environment: learn the system requirements and understand the business environment by modeling them with a DFD and a Logical Data Structure.

Requirements Specification Stage
This stage assists management: Business System Options (BSOs) set out the range of functionality to provide and present it. Financial and risk evaluations also need to be prepared, supported by outline implementation reports. The parts included in this stage are:
Develop the required data model.
Derive system functions.
Develop user job specifications.
Enhance the required data model.
Develop specification prototypes.
Build up the processing specification.
Confirm system objectives.

Logical System Specification Stage (selection)
Formally, this stage is for choosing among the feasible options. The development and implementation environments are specified based on this choice.

Logical System Specification Stage (specification)
In this stage both the processes and the logical designs are updated, and the dialogues are specified. The parts included in this stage are:
Define user dialogues.
Define the update processes.
Define enquiry processes.

Physical Design Stage
The purpose of this stage is to specify the physical data and process design, using the language and features of the physical environment and incorporating installation standards. The parts included in this stage are:
Prepare for physical design.
Complete the specification of functions.
Incrementally develop both the data and process designs.

3.4.3 Rapid Application Development (RAD) [15]
Rapid application development is one of the agile methods; it shortens the life cycle to produce information systems more quickly, in order to respond to rapidly changing business requirements. It develops projects faster and at higher quality by using groups to gather requirements. Its phases are: requirements planning, user design, construction, and cutover.

In the requirements planning phase, developers seek input from users in order to determine the set of system requirements. The user design phase is based on the prototyping cycle, involving experienced users and developers; it consists of a sequence of workshops, each of which may take about three days, during which CASE tools are used to build the prototype. The construction phase produces, or generates, the code from the CASE prototype, and the new system is validated by users. The final phase is cutover: it covers system testing, user training, and introduction of the system into the client organization.

RAD has a number of obvious advantages:
Resistance to change in the organization is minimized and the new system is welcomed, because of the high degree of user participation in the whole process.
The system is developed and delivered more quickly than with traditional development approaches.
The speed of development and the use of relatively small teams mean that RAD projects tend to be cheaper than their traditional counterparts.
The speed of RAD also means the system stays closely related to the current needs of the business.

3.4.5 Soft Systems Methodology
Soft systems methodology (SSM) was developed by Peter Checkland and his colleagues at Lancaster University in the 1970s. [19] It is designed to shape interventions in the problematic situations encountered in management, organizational, and policy contexts, where there are often no straightforward 'problems' or easy 'solutions.' Though informed by systems engineering approaches, it breaks with them by recognizing the central importance of perspective, or world-view, in social situations. It differs significantly from the 'systems science' approaches developed in the 1960s, and is more reflective of action research in its philosophy and approach. SSM is widely described as a seven-stage process, as follows:
1. Identifying the problematic situation that it is desired to intervene in
2. Researching the situation and building a 'rich picture' (interpretive representation) of it
3. Selecting perspectives and building 'root definitions' (key processes that need to take place within the desired system)
4. Developing a conceptual model of the change system
5. Comparing the model with the real-world situation
6. Defining the changes to be implemented
7. Taking action

In condensed form:
Stage 1, finding out: identifying and providing a brief description of the situation it is desired to intervene in.
Stage 2, modeling: producing definitions of the transformation processes that should achieve the desired intervention(s).
Stage 3, dialogue: examining the change model against the real-world situation, usually as represented by the rich picture and associated analyses, and checking that it makes sense. Often the change model needs adjusting, and sometimes the rich picture needs to be developed further.
Stage 4, defining and taking action: this stage will vary depending on the specific change project, but essentially it involves developing the (revised) change model into a concrete plan, and taking action to implement it. At this point formal project management protocols may be useful, or a less structured approach could be appropriate.

3.4.6 Waterfall Methodology
The waterfall model [18] represents a sequential and linear process of software development. It flows through the phases of conception, initiation, analysis, design, construction, testing, and maintenance. The most important aspect of the waterfall model is that no stage can be started before the preceding stage is complete. The waterfall consists of these phases:
Investigate requirements
Design
Construction
Integration
Testing and rectification
Installation
Maintenance

Requirement Phase: this phase is required for any project, small or huge; it can never be skipped. It looks at the present system, the requirements it was intended to meet, and the problems in meeting those requirements. The expected output, or final product, is studied and marked out. For reasons such as security, the software to be designed may be required not to have certain features.
Specification Phase: a final view of how the product should behave is decided.
Design Phase: in this phase everything is determined: the type of database, the types of data supported, and other important aspects. An important output decided in this phase is the algorithm of the process that the software needs to implement, i.e., the design.
Implementation and Testing Phase: in this phase, based on the design, the software is coded as per the algorithm. The software then needs to go through steady software testing and error-correction processes to find out whether there are any bugs or errors.
Integration and Testing Phase: code modules are integrated together, and the software is tested to check that it works as per the specifications provided. The final software, which needs to be installed on the client's system, is also built and tested; the product is then handed over to the client.
Maintenance Phase: the work does not end with handing the software to the client. The software designers constantly provide support to the client to resolve any issues that may arise. Some bugs only get detected during implementation of the project; during the maintenance phase, support and rectification are provided for all such problems.
New Requirements Phase: the client may be expanding into other fields and may want new features added to the existing software. Hence, it is very important that updated requirements be taken from the client.

Chapter Four
Analysis

4.1 Selection of Programming Language
VB.NET (Visual Basic .NET): Microsoft Visual Studio 2005 is used to design the user interface and to generate the code behind it.

4.2 Selection of Database
Our project depends on Oracle; an Oracle Database 10 instance stores and retrieves data for multiple sources.

4.3 Selection of Methodology
Rapid Application Development (RAD), which consists of four stages:
Requirements planning
User design
Construction
Cutover
The first phase combines two techniques: joint requirements planning and joint application design.

4.4 System Overview: Functional and Non-Functional System Requirements

4.4.1 Functional System Requirements

Customer Form:
1. Add a new customer's details with the following fields: Customer no., Customer name, City, Country, Phone, GSM.
2. Edit an existing customer through the same fields: Customer no., Customer name, City, Country, Phone, GSM.
3. Delete a customer by entering the customer no. to identify it first, then clicking the Delete button; a message will verify that the record is deleted.
4. Find a record by entering the customer no. and clicking Find; the record will be displayed.

Supplier Form:
1. Add a new supplier's details with the following fields: Supplier no., Supplier name, City, Country, Phone, GSM, Contact person.
2. Edit an existing supplier through the same fields: Supplier no., Supplier name, City, Country, Phone, GSM, Contact person.
3. Delete a supplier by entering the supplier no. to identify it first, then clicking the Delete button; a message will verify that the record is deleted.
4. Find a record by entering the supplier no. and clicking Find; the record will be displayed.

Product Form:
1. Add a new product's details by filling in these fields: Barcode, Product no., Product name, Supplier, Buy price, Sale price, Quantity on hand, Discount.
2. Edit an existing product through the same fields: Barcode, Product no., Product name, Supplier, Buy price, Sale price, Quantity on hand, Discount.
3. Delete a product by entering the product no. to identify it first, then clicking the Delete button; a message will verify that the record is deleted.
4. Find a record by entering the product no. and clicking Find; the record will be displayed.
5. Give a discount for a product.

Point of Sale (POS):
The physical area where products are sold. The cashier fills in these fields and clicks Add; once the operation is done, clicking Next Customer prints out the bill:
Barcode
Product no.
Product name
Price
Qty
Total amount

Reports:
Customer details report: enter a customer no. range (from - to), then click Preview Report; these fields are shown: Customer no., Customer name, City, Country, Phone, GSM.
Supplier details report: enter a supplier no. range (from - to), then click Preview Report; these fields are shown: Supplier no., Supplier name, City, Country, Phone, GSM, Contact person.
Product details report: enter a product no. range (from - to), then click Preview Report; these fields are shown: Barcode, Product no., Product name, Supplier, Supplier no., Buy price, Sale price, Quantity on hand.
Minimum on-hand quantity report (product-wise), by entering a quantity range (from - to): Product no., Barcode, Product name, Sale price, Buy price, Quantity on hand.
Product-wise sales report, with these details: Product no., Product name, Sale date, Counter no., Cashier, Price, Qty.
Date-wise sales report, with an additional chart displaying the growth, with these details: Counter no., Cashier, Product no., Product name, Price, Qty, Total amount, Sale date.

4.4.2 Non-Functional System Requirements (Security)
Security has been added to this project in order to have full control over it. Two user roles (administrator and end user) have been created as a way to secure the system. The administrator has the ability to create any number of users; additionally, he has the ability to disable or edit a user for various reasons.

User creation form:
1. Create a new user: when the administrator creates a new user, these fields are required to complete the signup: ID, user name, password, user type, enabled. Once the administrator has finished adding, he should mark the account E to enable the new account or D to disable an existing user.
2. Edit an existing user: the administrator can edit the details of an existing user directly from the grid view (columns: ID, customer ID, sale date, counter no., cashier); once he clicks Edit, a message appears asking him to do the editing through the grid.
3. Delete an existing user: the administrator can delete a user by selecting it in the grid view and clicking Delete.

Chapter Five
System Design

5.1 Data Flow Diagram
5.1.1 Context Diagram
The data flow diagram is a modeling technique. It identifies the system boundary, or automation boundary. A data flow diagram uses four basic elements to model the system: data flows, processes, data stores, and external entities. The context diagram models the whole system as a single process box whose sides represent the boundary of the system.

Planning
The project schedule is organized with a Gantt chart.
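To make the form behaviour above concrete, the following is a minimal sketch of the customer operations (add, edit, delete, find). The real system is built with VB.NET against an Oracle database; Python and an in-memory dictionary are used here purely as an illustration, so every name in the sketch is hypothetical.

# Illustrative sketch only: the project itself uses VB.NET and Oracle.
# An in-memory dict stands in for the CUSTOMER table.
customers = {}

def add_customer(no, name, city, country, phone, gsm):
    customers[no] = {"name": name, "city": city, "country": country,
                     "phone": phone, "gsm": gsm}

def edit_customer(no, **changed_fields):
    customers[no].update(changed_fields)

def delete_customer(no):
    del customers[no]
    print("Record %s deleted" % no)   # the verification message shown to the user

def find_customer(no):
    return customers.get(no)

add_customer("C001", "Example Customer", "Amman", "Jordan", "06-555-1234", "079-555-1234")
edit_customer("C001", city="Irbid")
print(find_customer("C001"))
delete_customer("C001")

The supplier and product forms follow the same pattern with their own field lists.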
https://www.ukessays.com/essays/computer-science/supermarket-sales-system-services-computer-science-essay.php
CC-MAIN-2017-09
en
refinedweb
Red Hat Bugzilla – Bug 103385
Unhandled exception (syntax err) in anaconda after identifying existing versions
Last modified: 2007-04-18 12:57:15 EDT

From Bugzilla Helper:
User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows 98) Opera 7.10 [en]

Description of problem:
Upgrading RedHat Linux 7.2 to RedHat Linux 9 from CD. The CD boots fine, identifies my monitor and video card, and asks me to select the mouse manually. This then functions correctly when the graphical installer launches. I select language (English), keyboard (United Kingdom) and mouse again (serial generic 2-button with 3-button emulation on COM1). Then it tells me it is searching for existing versions of RedHat. The progress bar moves fairly rapidly (5 seconds) and completes. This dialog disappears and then the Unhandled Exception - Syntax Error is reported. The following is copied from the attached anacdump. I believe it is the same as the Unhandled Exception reported in the dialog box at the time.

Traceback (most recent call last):
  File "/usr/lib/anaconda/gui.py", line 764, in nextClicked
    self.setScreen ()
  File "/usr/lib/anaconda/gui.py", line 960, in setScreen
    exec s
  File "<string>", line 1, in ?
  File "/usr/lib/anaconda/iw/examine_gui.py", line 17, in ?
    from package_gui import *
  File "/usr/lib/anaconda/iw/package_gui.py", line 1068
    i column = gtk.TreeViewColumn('Text', renderer, text = 1)
      ^
SyntaxError: invalid syntax

Version-Release number of selected component (if applicable):

How reproducible: Always

Steps to Reproduce:
See description above. I have tried using the computer as-is and also switching the CMOS to fail-safe settings. I recall that this enabled me to install version 7.2 successfully. However, I get the same error with or without fail-safe settings.

Additional info:

Created attachment 94072 [details]
Anaconda core dump, as saved to floppy when the error occurred.

Comment: I would recommend testing your media if you have not already. This appears to be due to an error loading the python sources from the install media.

Comment: Closing due to inactivity. Please reopen if you have any further information to add to this bug report.
https://bugzilla.redhat.com/show_bug.cgi?id=103385
CC-MAIN-2017-09
en
refinedweb
Opened 5 years ago; closed 4 years ago.

#18336 closed Bug (fixed): Static files randomly fail to load in Google Chrome

Description:
I've noticed while using Google Chrome that frequently (once every 2 or 3 refreshes) some of my static files fail to load; it can be anything: an image, a .js file, etc. Opening that file directly always works, and reloading the page usually works as well. I couldn't reproduce this behavior with Firefox. This is happening with the default options for the runserver of django.contrib.staticfiles. I figured this probably had to do with either a timeout or the size of the request queue, possibly both. I tried setting request_queue_size to 10 (default is 5), as demonstrated in the attached file, and it completely solves the issue. I then tried setting it to 1, and that makes the issue systematic. I tried to find out how many concurrent requests Chrome makes, and found the following: unless I'm missing something, Chrome actually uses fewer concurrent requests than Firefox. A value of 10 for request_queue_size does seem to solve my problem completely, but I wouldn't know what the actual best value should be.

Attachments (1)

Change History (17)

Changed 5 years ago by

comment:1 Changed 5 years ago by
comment:2 Changed 5 years ago by
comment:3 Changed 5 years ago by
I made a test project to demonstrate the behavior; here, with a request_queue_size of 1, each refresh seems to load a different set of pictures.

comment:4 Changed 4 years ago by
I'm curious if this has still been a problem for anyone. I've been noticing this lately with recent versions of Chrome, and can confirm that the attached patch solves it (though it feels like a bandaid to me). I can reliably reproduce this, so if there's anything I can do to help in debugging this, please let me know.

comment:5 Changed 4 years ago by
Yes, this is a problem for me as well, with Chrome 23 on OS X Mountain Lion, when there are a lot of concurrent staticfiles requests.

comment:6 Changed 4 years ago by
I had the same problem. It happened at exactly the same specific location and only with Chrome. I managed to get runserver working by adjusting request_queue_size. It's not a problem for other browsers.

comment:7 Changed 4 years ago by
Can the users that are experiencing this issue report their OS? Maybe it's a Mac OS X-specific thing?

comment:8 Changed 4 years ago by
OP here, I'm indeed running OS X.

comment:9 Changed 4 years ago by
Chrome has marked this as a wontfix:

comment:10 Changed 4 years ago by
I'm also on Mac OS X. To do any actual development, I have to patch basehttp.py every time I upgrade Django, in each virtualenv. If there were a way to at least override this in some form globally, I could live with it, but for the moment, it's a complete pain and leads to odd, sometimes very subtle bugs.

comment:11 Changed 4 years ago by

comment:12 Changed 4 years ago by
I've also been experiencing this (Django 1.5, Mac OS 10.8, Chrome 25). In some cases it completely locks up Chrome (the tab dies). Other times random static files fail to load. In addition, I've seen very similar behavior in AppEngine's dev_appserver (also based on Python's SocketServer module), and I'm inclined to think request_queue_size is the culprit.

comment:13 Changed 4 years ago by
BTW I got this fix merged into django-devserver, which is a drop-in replacement for runserver in dev. This bug is probably going to be a WontFix on both Chrome and Django ends.
comment:14 Changed 4 years ago by
For anyone looking for a quick fix that doesn't require patching Django or installing a third-party app: you can monkey-patch WSGIServer from settings.py. A queue size of 10 has worked really well for me since I opened this ticket, but with this in settings.py you can easily increase the value to match your needs.

from django.core.servers.basehttp import WSGIServer
WSGIServer.request_queue_size = 10

comment:15 Changed 4 years ago by
I could eventually reproduce this, on OS X.8 with Chrome, with the test project provided by Loic in comment 3 and WSGIServer.request_queue_size set to 1. I'm not sure why the original patch and devserver resort to monkey-patching instead of simply defining request_queue_size on WSGIServer. Wow, what a weird bug; accepting in general, but I have to look into this further.
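For illustration, both fixes above come down to the same knob: request_queue_size is the backlog that Python's socketserver.TCPServer passes to socket.listen(), and Django's WSGIServer inherits its default of 5. A minimal sketch of the subclass route mentioned in comment 15 follows (the class name here is made up):

from django.core.servers.basehttp import WSGIServer

class PatientWSGIServer(WSGIServer):
    # socketserver.TCPServer calls self.socket.listen(self.request_queue_size),
    # so raising it lets more simultaneous browser connections queue up
    # instead of being refused.
    request_queue_size = 10

# equivalent effect to the monkey-patch in comment 14:
# WSGIServer.request_queue_size = 10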
https://code.djangoproject.com/ticket/18336?cversion=0&cnum_hist=7
CC-MAIN-2017-09
en
refinedweb
only just found out that i could and how to execute python code on my server, and only just found out i can execute python code in the terminal app in os x; all you've got to do is type python, and you're away. anyway, just dipping my toes into python... what is python good and bad for? (i know the syntax is nice and clean). would you use it instead of php for example to do webpages with a bit of functionality? or not? also i have to put the python code file into the cgi-bin folder with a file extension of cgi then call it from browser to run it. if i wanted to use python like php, for webpages etc., how would the python file be called? i don't want urls xyz.com/cgi-bin/whatever.cgi. how's that usually handled? or do you not usually use python as a replacement for php, only use it in particular situations? if so, what kind of situations? thanks.

Well, first of all, you should never have urls with .cgi no matter what your backend language. If you're running Apache you might want to check out mod_python, which is basically a replacement for cgi (which is slow). The reason you would switch from PHP to Python might be if you liked Python more. Python is not a language that tries to be the fastest (pythonistas do not seek excessive optimisation), so you'll want other reasons like coding style, the extensive Python libraries (one of Python's strong points, like Perl's CPAN), or maybe because something else you're using is coded in Python (Gimp, for example, allows users to write their own Gimp scripts in Python). So far as I know, you can choose to run an entire site on Python just as with any other popular backend language, so it can replace PHP if you want.

> Well, first of all, you should never have urls with .cgi no matter what your backend language.

right, i'm just messing round at the moment though. just running a hello world script to see it work.

> If you're running Apache you might want to check out mod_python which is basically a replacement for cgi (which is slow).

would that appear in the phpinfo() output if it were installed/available? i guess so. i'm using a shared hosting set up. i see, right, thanks.

> anyway, just dipping my toes into python... what is python good and bad for?

python is a general purpose programming language, which can be used to do anything. Because of its clean syntax, great libraries, and the numerous python books (many recently published), I highly recommend python. Famously, google hired the top python gurus to work for them because google uses python extensively in house.

> would you use it instead of php for example to do webpages with a bit of functionality?

Yes. In addition, there are also python 'frameworks' available for more complex websites.

> if i wanted to use python like php, for webpages etc., how would the python file be called?

I think it depends on the server. For instance, with Apache set up for local development on my pc (which is something everyone should have set up), the url I use to run a python script that uses cgi (cgi is used to communicate with the server) is:

Here is the script:

#!/usr/bin/env python
import cgitb; cgitb.enable()

print "Content-type: text/html"
print
print "<h1>Hello World</h1>"

1) You need a 'shebang' line at the top of myprog.py.
2) You need to change the permissions for the file myprog.py to give everyone execute privileges, e.g. $ chmod a+x myprog.py
3) Put the file in the cgi-bin directory on the server, e.g. /Library/Apache2/cgi-bin

> i don't want urls xyz.com/cgi-bin/whatever.cgi.
Servers, like Apache, allow you to map fake urls to real urls. In addition, python frameworks provide additional ways to map urls; an example is here:

Setting up mod_python and figuring out how to call scripts is a bit of a pain, but if your host already has it set up, it is well worth learning. If you are going to use mod_python on your host, then you should bite the bullet and set it up for local development on your pc, too.

> Python is not a language that tries to be the fastest (pythonistas do not seek excessive optimisation)

That isn't true. python is a language that has been developed with an eye on the speedometer. All the 'p' languages, along with ruby, compete for users--and speed is a major selling point. Furthermore, python allows you to identify bottlenecks in your code, so that you can opt to rewrite those portions in an even speedier language like C/C++, and then call those functions from your python program. In addition, on discussion forums the efficiency of various solutions is always discussed. ruby is the language that plays down optimizations/speed--presumably because it can't compete with python's speed, so the rubyists' fall-back motto is: if you need speed, you probably don't, but if you really do, then write the code in C--not ruby. There are plenty of speed tests posted around the internet, so you can decide for yourself where python stands. Because sitepoint puts no effort into any language except php, the best python discussion forum resides here:

Maybe my impressions of Python not trying to be the fastest were based on old speed tests between Python and PHP in the cgi area (I read them years ago), and some once-popular retorts from the Python camp ("python should be good enough, otherwise look for bottlenecks in code"). I've heard of the "freezing" you can do with code but I don't know how it differs from something like mod_perl where all your modules get pre-compiled anyway. Ruby is of course known for being slow, even when it isn't. The Fail Whale is burnt into our minds too. So, maybe my impressions of Python are old. Initially the community was not trying to be the fastest. They wanted their code to be the cleanest, the easiest to work with, and focussed on the extensive libraries to make a ready-made Python solution to whatever problem you came across. If I hadn't already had Perl on the table as the planned First Backend Language, I might have chosen Python. It's a nice language. It's unfortunate that PHP dominates everything, but that's my opinion too.

> Maybe my impressions of Python not trying to be the fastest was based on old speed tests between Python and PHP in the cgi area (I read them years ago)

cgi, regardless of which language is being run, is slow though isn't it?
it creates a new process for each run (possibly, could be wrong there). fast cgi, which is what php makes use of i think, avoids creating a new process each time, i think. could easily be talking total rubbish there. anyway, thanks both for all the info there. very useful. i found this which i haven't read yet about using python on a webserver: thanks

The python docs you linked don't recommend mod_python, so I would pay attention to that and disregard what I said above. The docs recommend something called WSGI if you are writing new programs.

> cgi, regardless of which language is being run, is slow though isn't it? it creates a new process for each run (possibly, could be wrong there)

Yes, this is actually where Perl first got its reputation for being slow, since In The Beginning... Perl was CGI in many people's minds. And it had that very problem, everything being started up again every single time. fastCGI was one solution to that, but that got superseded by mod_perl, which is what PHP ended up doing too (mod_php), where your modules and stuff are pre-compiled... mod_perl was made specifically to work excellently with Apache, so you could have just everything get going on startup and that was it... everything just kept running.

> The python docs you linked don't recommend mod_python, so I would pay attention to that and disregard what I said above. The docs recommend something called WSGI if you are writing new programs.

Yeah, the pages I found mentioning mod_python mentioned it being a bit hard to get going with Apache, and also people who use 3rd-party hosters may have problems too. I did see something about "mod_wsgi" but hadn't read further into it. I don't know how it compares to the Python Server Pages that mod_python uses. Obviously if the OP has 3rd-party hosting then the hoster gets to determine a lot. Or if the server isn't Apache.

my server is apache. right, so basically i'd have to ask the people/company who run my server if WSGI is possible. if not i'm stuck with putting things in cgi-bin and possibly url rewriting. or go to another hosting company who do offer WSGI. ok, thanks.

url rewriting is pretty much a separate thing, and it's recommended you do it anyway... supposedly, not letting people see what language you use behind your site is safer, or just a Smart Thing To Do. So even if you ran PHP, Perl, whatever... you're going to want to use mod_rewrite, and there's a module for dealing with rewriting to a script that builds a page (ScriptAlias), and those are just Apache, regardless of chosen back-end language. Anyway, hope you enjoy Python! Hook up with the Python community while you're at it. A language with a good community is nice.

> url rewriting is pretty much a separate thing

right, but it takes on a bit more of a prominent/necessary role if/while having to put pythonsourcecodefile.cgi in the cgi-bin folder. yup, i've always been a big fan of url rewriting to make urls as human as possible. not for any security reason but ease of use for the user. and more aesthetically pleasing (for me anyway)

> Anyway, hope you enjoy Python!

yup, thanks, i quite fancy it. know next to nothing about it at the moment apart from: it has a clean/simple syntax, it's object oriented, and can be used like php given a bit of preliminary set up. am in the process of starting a new project, kind of cms, so am thinking of python for that. looks like my hosting company can and will set up WSGI for me. not done and dusted yet but looks like it. great, thanks.
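For reference, since the thread lands on WSGI: a complete WSGI application can be this small, using only the wsgiref module from the standard library (the greeting is made up; the module and call signatures are the standard API):

from wsgiref.simple_server import make_server

def application(environ, start_response):
    # environ carries the request data; start_response sends the
    # status line and the response headers
    start_response("200 OK", [("Content-Type", "text/html")])
    return [b"<h1>Hello World</h1>"]

if __name__ == "__main__":
    make_server("localhost", 8000, application).serve_forever()

A host that supports WSGI (for example via mod_wsgi) only needs the application callable; the development server above is just for local testing.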
https://www.sitepoint.com/community/t/python-just-getting-toes-wet-pros-cons/96229
CC-MAIN-2017-09
en
refinedweb
Netbeans - Add custom classes to the palette

I have a question: I want to know the right way of creating custom Swing components for the NetBeans palette.

1. Previously, I had one class (one .java file) as follows:

public class CustomClasses {
    public static class LabelC extends JLabel {
        public LabelC() {
            setText("HelloWord");
        }
    }
    public static class TextFieldC extends JTextField {
        public TextFieldC() {
        }
    }
    public static class TextFieldDisableC extends JTextField {
        public TextFieldDisableC() {
            enable(false);
        }
    }
    public static class TableC extends JTable {
        public TableC() {
        }
    }
}

With this one, from the palette I can only see CustomClasses, NOT the individual custom classes.

2. I tried to create an individual .java file for the custom class, and I can see it in the palette:

public class CustomLabel extends JLabel {
    public CustomLabel() {
        setText("HelloWord");
    }
}

My question is: should I create an individual .java file per custom class? If I want to keep them in one .java file, how can I make them visible in the palette?

Regards.
https://www.experts-exchange.com/questions/28670747/Netbeans-Add-custom-classes-to-the-palette.html
CC-MAIN-2018-26
en
refinedweb
Let's take a look at this function:

func nothing() {
    return
}

The following signatures all return an instance of the empty tuple () (typealiased as Void).

func implicitlyReturnEmptyTuple() { }

func explicitlyReturnEmptyTuple() {
    return ()
}

func explicitlyReturnEmptyTupleAlt() {
    return
}

In all of the three above, the return type of the function, in the function signature, has been omitted, in which case it is implicitly set to the empty tuple type, (). I.e., the following are analogous to the three above:

func implicitlyReturnEmptyTuple() -> () { }

func explicitlyReturnEmptyTuple() -> () {
    return ()
}

func explicitlyReturnEmptyTupleAlt() -> () {
    return
}

With regard to your comment below (regarding the body of implicitlyReturnEmptyTuple(), where we don't explicitly return ()); from the Language Guide - Functions:

Functions without return values: Functions are not required to define a return type. Here's a version of the sayHello(_:) function, called sayGoodbye(_:), which prints its own String value rather than returning it:

func sayGoodbye(personName: String) {
    print("Goodbye, \(personName)!")
}

...

Note: Strictly speaking, the sayGoodbye(_:) function does still return a value, even though no return value is defined. Functions without a defined return type return a special value of type Void. This is simply an empty tuple, in effect a tuple with zero elements, which can be written as ().

Hence, we may omit return ... only for ()-return (Void-return) functions, in which case a () instance will be returned implicitly.
https://codedump.io/share/2Y77JF53v7nb/1/return-in-function-without-return-value-in-swift
CC-MAIN-2018-26
en
refinedweb
Key Performance Indicators, most often called KPIs, may also be referred to as Key Success Indicators (KSIs). Regardless of what you call them, they can help your organization define and measure quantitative progress toward organizational goals. Business users often manage organizational performance using KPIs. Many business application vendors now provide performance management tools (namely dashboard applications) that collect KPI data from source systems and present KPI results graphically to end business users. Microsoft Office Business Scorecard Manager 2005 is an example of a KPI application that can leverage the KPI capabilities of Analysis Services 2005.

Analysis Services 2005 provides a framework for categorizing the KPI MDX expressions for use with the business data stored in cubes. Each KPI uses a predefined set of data roles — actual, goal, trend, status, and weight — to which MDX expressions are assigned. Only the metadata for the KPIs is stored by an Analysis Services instance, while a new set of MDX functions is available for applications to easily retrieve KPI values from cubes using this metadata. The Cube Designer provided in Business Intelligence Development Studio (BIDS) also lets cube developers easily create and test KPIs, as you learn in the following section. Figure 9-25 shows the KPIs in the Adventure Works cube using the KPI browser in the Cube Designer. You can get to the KPI browser by clicking on the KPI tab in the Cube Designer and then clicking on the KPI browser icon (the second icon in the toolbar on the KPI tab).

Figure 9-25

Consider the following scenario: the Adventure Works sales management team wants to monitor the sales revenue for the new fiscal year. Sales revenue for prior fiscal years is available in the Adventure Works cube. The management team has identified a goal of 15% growth in sales revenue year over year. If current sales revenue is over 95% of the goal, sales revenue performance is satisfactory. If, however, the sales revenue is within 85% to 95% of the goal, management must be alerted. If the sales revenue drops under 85% of the goal, management must take immediate action to change the trend. These alerts and calls to action are commonly associated with the use of KPIs. The management team is also interested in the trends associated with sales revenue; if the sales revenue is 20% higher than expected, the sales revenue status is great news and should be surfaced as well — it's not all doom and gloom.

Use the following steps to design the KPIs for the sales management team:

1. Open the Adventure Works DW sample project located at C:\Program Files\Microsoft SQL Server\90\Tools\Samples\AdventureWorks Analysis Services Project\Enterprise.

2. Double-click the Adventure Works cube in Solution Explorer to open the Cube Designer.

3. Click the KPIs tab to open the KPI editor.

4. Click the New KPI icon in the KPI toolbar to open a template for a new KPI. As you can see in Figure 9-26, there are several properties to fill in.

Figure 9-26

5. Type Sales Revenue KPI in the Name text box and then choose Sales Summary in the drop-down box for Associated Measure Group. The revenue measure is Sales Amount, which is included in the Sales Summary measure group.

6. Type the following MDX expression in the Value Expression text box:

[Measures].[Sales Amount]

When managers browse the KPI, the Sales Amount value will be retrieved from the cube.

7. Now you need to translate the sales revenue goal, a 15% increase over last year's revenue, into an MDX expression.
Put another way, this year's sales revenue goal is 1.15 times last year's sales revenue. Use the ParallelPeriod function to get the previous year's time members for each current year time member. Type the resulting MDX expression, shown below, in the Goal Expression text box:

1.15 *
(
    [Measures].[Sales Amount],
    ParallelPeriod
    (
        [Date].[Fiscal].[Fiscal Year],
        1,
        [Date].[Fiscal].CurrentMember
    )
)

8. In the Status section of the KPI template, you can choose a graphical indicator for the status of the KPI to display in the KPI browser. You can see several of the available indicators in Figure 9-27. For your own KPI applications, you must programmatically associate the KPI status with your own graphical indicator. For now, select the Traffic Light indicator. The MDX expression that you define for status must return a value between -1 and 1. The KPI browser displays a red traffic light when the status is -1 and a green traffic light when the status is 1. When the status is 0, a yellow traffic light displays.

Figure 9-27

9. Type the following expression in the Status Expression text box:

Case
    When KpiValue("Sales Revenue KPI") / KpiGoal("Sales Revenue KPI") >= .95
        Then 1
    When KpiValue("Sales Revenue KPI") / KpiGoal("Sales Revenue KPI") < .95
         And KpiValue("Sales Revenue KPI") / KpiGoal("Sales Revenue KPI") >= .85
        Then 0
    Else -1
End

The above expression uses the Case MDX statement available in Analysis Services 2005. In addition, you now have a set of MDX functions to use with KPI metric values. In the previous MDX expression, the KpiValue function retrieves the value of Sales Revenue KPI, and the KpiGoal function retrieves the goal value of Sales Revenue KPI. More precisely, the KpiValue function is a member function that returns a calculated measure from the Measures dimension. By using KPI functions, you can avoid a lot of typing if your value or goal expression is complex. This status expression returns one of three discrete values: 1 if revenue exceeds 95% of goal, 0 if revenue is between 85% and 95% of goal, and -1 if revenue is below 85% of goal.

10. Choose the default indicator (Standard Arrow) for the Trend indicator.

11. Type the following MDX expression in the Trend Expression text box. This expression compares current KPI values with last year's values from the same time period to calculate the trend of the KPI. (Only the Case skeleton survived in this copy; the When clauses below are a reconstruction that follows that description.)

Case
    // reconstructed: compare this period's value with the value of the
    // parallel period one fiscal year back
    When KpiValue("Sales Revenue KPI") >
         (KpiValue("Sales Revenue KPI"),
          ParallelPeriod([Date].[Fiscal].[Fiscal Year], 1, [Date].[Fiscal].CurrentMember))
        Then 1
    When KpiValue("Sales Revenue KPI") <
         (KpiValue("Sales Revenue KPI"),
          ParallelPeriod([Date].[Fiscal].[Fiscal Year], 1, [Date].[Fiscal].CurrentMember))
        Then -1
    Else 0
End

12. Expand the Additional Properties section at the bottom of the KPI template to type a name in the Display Folder combo box for a new folder, or to pick an existing display folder. The KPI browser shows all KPIs in a folder separate from other measures and dimensions, but you can further group related KPIs into folders and subfolders. A subfolder is created when the folder names are separated by a backslash, "\". In the Display Folder combo box, type SampleKPI\RevenueFolder as shown in Figure 9-28.

Figure 9-28

You can also choose to set a Parent KPI so that the KPI browser displays KPIs hierarchically. Using the Parent KPI setting is for display purposes only and doesn't actually create a physical relationship between parent and child KPIs. You could, however, design a parent KPI that uses values from child KPIs via KPI functions; there is even a Weight expression to adjust the value of a parent KPI. The display folder setting is ignored if you select a Parent KPI, because the KPI will display inside its parent's folder. To complete your KPI, leave the Parent KPI as (None). Congratulations, you just created your first KPI!
Deploy the project to an instance of Analysis Services 2005 so you can view the KPI values. To deploy, select the Build menu item and then select Deploy Adventure Works DW. Like MDX scripts, KPI definitions are only metadata, so changing and saving the KPI definitions will only update the metadata store. A cube reprocess is not required, allowing you to use a KPI right after deploying it to the Analysis Services instance. To view the KPI, follow these steps:

In the Cube Designer, click the Browser View icon in the KPI toolbar, as shown in Figure 9-29.

Figure 9-29

Your new KPI is at the bottom of the view window and should look like Figure 9-30.

Figure 9-30

The KPI browser supports the standard slicer window at the top of the browser. You can select specific members to narrow down the analysis to areas of interest. For example, suppose you are interested in the sales revenue KPI for August 2003. In the slicer window, select the Date dimension, Fiscal hierarchy, and August 2003 (found in semester H1 FY 2004 and quarter Q1 FY 2004) as shown in Figure 9-31.

Figure 9-31

You will notice the KPI values have changed as shown in Figure 9-32, as have the Goals; August beats the goal!

Figure 9-32

Every Analysis Services cube can have an associated collection of KPIs, and each KPI has five properties as its set of metadata. These properties are MDX expressions that return numeric values from a cube, as described below:

- Value: the actual value of the measure being tracked.
- Goal: the target value for the measure.
- Status: the state of the KPI relative to its goal, normalized between -1 and 1.
- Trend: the direction of the KPI over time, normalized between -1 and 1.
- Weight: the relative importance of the KPI, used when rolling child KPIs up into a parent KPI.

Analysis Services 2005 creates hidden calculated members on the Measures dimension for each KPI metric (value, goal, status, trend, and weight). However, if a KPI expression directly references a measure, Analysis Services optimization uses the measure directly instead of creating a new calculated measure. You can query the calculated measure used for KPIs in an MDX expression, even though it's hidden. To see how this works, open SSMS and connect to Analysis Services. Click the Analysis Services MDX Query icon in the toolbar to open a new MDX query window. Make sure you're connected to the Adventure Works DW database in the Available Databases list box, type the following query in the MDX query window, and then click the Execute button.

    SELECT { Measures.[Sales Revenue KPI Goal] } ON 0,
    [Date].[Fiscal].[Fiscal Quarter].MEMBERS ON 1
    FROM [Adventure Works]

Figure 9-33 shows the results of executing the query.

Figure 9-33

The Analysis Services instance hosting the database cubes also maintains the KPI definition metadata. As you learned in the previous section, you can access KPI values directly by using KPI functions. Client applications can also access this KPI metadata information and retrieve values programmatically through the Analysis Services client-side component, ADOMD.NET. ADOMD.NET provides native support for KPIs. It includes a Kpi class whose Properties collection, accessed as Kpi.Properties("KPI_XXX"), is used to retrieve the properties of each KPI. Each property returns the unique name of a measure for the developer to use in the construction of MDX queries that retrieve the KPI values. The following code example demonstrates how to access a KPI using ADOMD.NET and how to construct a parameterized MDX query. Because KPI metrics are just calculated measures, you execute a KPI query with ADOMD.NET the same way you execute regular MDX queries.
    using System;
    using System.Collections.Generic;
    using System.Text;
    using Microsoft.AnalysisServices.AdomdClient;

    namespace QueryKPIs
    {
        class Program
        {
            static void Main(string[] args)
            {
                string connectionString = "Provider=MSOLAP.3;Data Source=localhost;Initial Catalog=Adventure Works DW";
                AdomdConnection acCon = new AdomdConnection(connectionString);
                try
                {
                    acCon.Open();
                    CubeDef cubeObject = acCon.Cubes["Adventure Works"];

                    // One parameterized query is reused for every KPI on the cube.
                    string commandText = @"SELECT { strtomember(@Value), strtomember(@Goal),
                                                    strtomember(@Status), strtomember(@Trend) } ON COLUMNS
                                           FROM [" + cubeObject.Name + "]";
                    AdomdCommand command = new AdomdCommand(commandText, acCon);

                    foreach (Microsoft.AnalysisServices.AdomdClient.Kpi kpi in cubeObject.Kpis)
                    {
                        // Bind the unique member names of this KPI's metrics to the parameters.
                        command.Parameters.Clear();
                        command.Parameters.Add(new AdomdParameter("Value", kpi.Properties["KPI_VALUE"].Value));
                        command.Parameters.Add(new AdomdParameter("Goal", kpi.Properties["KPI_GOAL"].Value));
                        command.Parameters.Add(new AdomdParameter("Status", kpi.Properties["KPI_STATUS"].Value));
                        command.Parameters.Add(new AdomdParameter("Trend", kpi.Properties["KPI_TREND"].Value));

                        CellSet cellset = command.ExecuteCellSet();
                        Console.WriteLine("KPI Name:" + kpi.Name);
                        Console.WriteLine("Value:" + cellset.Cells[0].FormattedValue);
                        Console.WriteLine("Goal:" + cellset.Cells[1].FormattedValue);
                        Console.WriteLine("Status:" + cellset.Cells[2].FormattedValue);
                        Console.WriteLine("Trend:" + cellset.Cells[3].FormattedValue);
                    }
                }
                finally
                {
                    acCon.Close();
                }
            }
        }
    }

Note that this example uses a parameterized MDX query and the StrToMember function to avoid MDX injection. The developer of a client-side application needs to be cautious with user input; a simple string concatenation would allow a malicious user to inject and run harmful code. You can create a new C# program called QueryKPI, copy the above code, add the Microsoft.AnalysisServices.AdomdClient DLL as a reference, and run the program. We recommend you explore the .NET ADOMD client object model by writing client programs that leverage the object model.
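You can also sanity-check the same four metrics interactively, without ADOMD.NET, by using the KPI functions directly in an MDX query window. This small query is illustrative rather than taken from the text:

    SELECT { KpiValue("Sales Revenue KPI"),
             KpiGoal("Sales Revenue KPI"),
             KpiStatus("Sales Revenue KPI"),
             KpiTrend("Sales Revenue KPI") } ON COLUMNS
    FROM [Adventure Works]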
https://flylib.com/books/en/1.125.1.79/1/
CC-MAIN-2018-26
en
refinedweb
First, this article should not be construed to be encouraging the sharing of objects between asynchronous threads. Indeed, we view the sharing of objects between threads as an often problematic but sometimes unavoidable practice. Generally, when designing concurrent code we recommend favoring the "isolation + asynchronous messages" paradigm when practical. That said, when you do share objects between threads you're going to want to do it as safely as possible.

The scenarios when an object is shared between threads in C++ can be divided into two categories - a "read-only" one where the object is never modified, and a "non-read-only" one. Scenarios in the non-read-only category are going to require an access control mechanism. Note that in C++, the fact that an object is declared const does not guarantee that it won't actually be modified, due to the possibility of the object having "mutable" members. Sometimes those mutable members are "protected" (by a mutex or equivalent), making them "thread safe". But sometimes they are not. (Particularly in older code.) So extra vigilance may be called for when determining whether or not an object could be modified.

So first let's consider the general scenario where the programmer wants to allow for the shared object to be modified, and for the possibility that the shared object has unprotected mutable members. For these scenarios, you can use mse::TAsyncSharedReadWriteAccessRequester<> as demonstrated in the following example:

    #include "mseasyncshared.h"
    #include <cassert>
    #include <future>
    #include <list>
    #include <random>
    #include <iostream>
    #include <ratio>
    #include <chrono>
    #include <string>

    int main() {
        std::default_random_engine rand_generator1;
        std::uniform_int_distribution<int> udist_0_9(0, 9);
        const size_t num_tasks = 10;
        const size_t num_digits_per_task = 10000;
        const size_t num_digits = num_tasks * num_digits_per_task;

        /* This block contains a simple example demonstrating the use of
        mse::TAsyncSharedReadWriteAccessRequester to safely share an object between threads. */
        class CObj1WithUnprotectedMutable {
        public:
            std::string text() const {
                m_last_access_time = std::chrono::system_clock::now();
                return m_text1;
            }
            void set_text(const std::string& text) {
                m_last_access_time = std::chrono::system_clock::now();
                m_text1 = text;
            }
            std::chrono::system_clock::time_point last_access_time() {
                return m_last_access_time;
            }
        private:
            std::string m_text1 = "initial text";
            /* Note that mutable members can affect the safety of object sharing. */
            mutable std::chrono::system_clock::time_point m_last_access_time;
        };

        class B {
        public:
            static size_t num_occurrences(
                    mse::TAsyncSharedReadWriteAccessRequester<CObj1WithUnprotectedMutable> obj1_access_requester,
                    const char ch, size_t start_pos, size_t length) {
                /* Here we're counting the number of occurrences of the given character in the
                specified section of the (shared) object's string of digits. */
                auto obj1_readlock_ptr = obj1_access_requester.readlock_ptr();
                auto end_pos = start_pos + length;
                assert(end_pos <= obj1_readlock_ptr->text().length());
                size_t num_occurrences = 0;
                for (size_t i = start_pos; i < end_pos; i += 1) {
                    if (obj1_readlock_ptr->text().at(i) == ch) {
                        num_occurrences += 1;
                    }
                }
                return num_occurrences;
                /* At the end of the scope, obj1_readlock_ptr will be destroyed and its lock on the
                shared object will be released. */
            }
        };

        /* mse::make_asyncsharedreadwrite<>, like std::make_shared<>, actually allocates the
        target object. */
        auto obj1_access_requester = mse::make_asyncsharedreadwrite<CObj1WithUnprotectedMutable>();

        std::string rand_digits_string;
        for (size_t i = 0; i < num_digits; i += 1) {
            /* Just generating a random string of digits. */
            rand_digits_string += std::to_string(udist_0_9(rand_generator1));
        }
        /* In the next line we temporarily grab a pointer to the object with a "write lock" so we
        can (safely) call a non-const member function. */
        obj1_access_requester.writelock_ptr()->set_text(rand_digits_string);

        std::list<std::future<size_t>> futures;
        for (size_t i = 0; i < num_tasks; i += 1) {
            /* Here we're dividing the (shared) object's string of digits into sections and setting
            up some (potentially) asynchronous tasks to count the number of occurrences of the
            character '5' in each section. */
            futures.emplace_back(std::async(B::num_occurrences, obj1_access_requester, '5',
                                            i * num_digits_per_task, num_digits_per_task));
        }
        size_t total_num_occurrences = 0;
        for (auto it = futures.begin(); futures.end() != it; it++) {
            total_num_occurrences += (*it).get();
        }
    }

mse::TAsyncSharedReadWriteAccessRequester<> automatically protects the shared object from being accessed while it's being modified in another thread. (Although that's not really an issue in this simple example.) But because of the possibility of the shared object having unprotected mutable members, out of prudence mse::TAsyncSharedReadWriteAccessRequester<> does not, by default, allow for simultaneous access, even through readlock_ptrs.

But sometimes you might really want to allow for simultaneous read operations. For those situations you can use mse::TAsyncSharedObjectThatYouAreSureHasNoUnprotectedMutablesReadWriteAccessRequester<> (and mse::make_asyncsharedobjectthatyouaresurehasnounprotectedmutablesreadwrite<>()). It has the same interface as mse::TAsyncSharedReadWriteAccessRequester<>, but an unwieldy name to help remind users of the prerequisite for using it.

And lastly, a common scenario is the simple one where only read access is required and the programmer has reliably determined that the shared object has no unprotected mutable members. In this case you can get away with not having any access control mechanism. For this scenario you can use mse::TReadOnlyStdSharedFixedConstPointer<>, which is basically just a thin wrapper around (and publicly derived from) an std::shared_ptr that tries to ensure that the shared object is const and make clear to others reading the code the intended purpose of the pointer. (To share an object that will not be modified between threads.)

    #include "mseasyncshared.h"
    #include <cassert>
    #include <future>
    #include <list>
    #include <memory>
    #include <random>
    #include <iostream>
    #include <string>

    int main() {
        /* This block contains an example demonstrating the use of
        mse::TReadOnlyStdSharedFixedConstPointer to share an object between threads in simple
        read only situations. */
        /* Same setup values as in the first example (added here so this block compiles on its own). */
        std::default_random_engine rand_generator1;
        std::uniform_int_distribution<int> udist_0_9(0, 9);
        const size_t num_tasks = 10;
        const size_t num_digits_per_task = 10000;
        const size_t num_digits = num_tasks * num_digits_per_task;

        class CObj1WithNoMutables {
        public:
            CObj1WithNoMutables(const std::string& text) : m_text1(text) {}
            std::string text() const { return m_text1; }
            void set_text(const std::string& text) { m_text1 = text; }
        private:
            std::string m_text1 = "initial text";
        };

        class B {
        public:
            static size_t num_occurrences(const std::shared_ptr<const CObj1WithNoMutables> obj1_shptr,
                                          const char ch, size_t start_pos, size_t length) {
                auto end_pos = start_pos + length;
                assert(end_pos <= obj1_shptr->text().length());
                size_t num_occurrences = 0;
                for (size_t i = start_pos; i < end_pos; i += 1) {
                    if (obj1_shptr->text().at(i) == ch) {
                        num_occurrences += 1;
                    }
                }
                return num_occurrences;
            }
        };

        std::string rand_digits_string;
        for (size_t i = 0; i < num_digits; i += 1) {
            rand_digits_string += std::to_string(udist_0_9(rand_generator1));
        }

        /* mse::make_readonlystdshared<> returns an mse::TReadOnlyStdSharedFixedConstPointer which
        is compatible with the corresponding std::shared_ptr. Aside from enforcing constness, the
        main reason for using mse::make_readonlystdshared<> over std::make_shared<> is to make
        clear the intended purpose of the pointer. Namely, to share an object between threads with
        the intent that the object not be modified. */
        auto obj1_roshfcptr = mse::make_readonlystdshared<CObj1WithNoMutables>(rand_digits_string);

        std::list<std::future<size_t>> futures;
        for (size_t i = 0; i < num_tasks; i += 1) {
            futures.emplace_back(std::async(B::num_occurrences, obj1_roshfcptr, '5',
                                            i * num_digits_per_task, num_digits_per_task));
        }
        size_t total_num_occurrences = 0;
        for (auto it = futures.begin(); futures.end() != it; it++) {
            total_num_occurrences += (*it).get();
        }
    }

Roughly speaking, a "race condition" is a situation where a program or code's results can vary as a function of the relative execution timing of concurrent threads. For example, say you have an integer variable, x, with an initial value of, say, 5, and two threads. Let's suppose one of those threads will increment x by 1, and the other one will double x. So the value of x might end up as either 11 or 12, depending on which thread gets to x first. Right?

A "data race" is a situation where one or more asynchronous threads are allowed to access a piece of memory while it's being modified (by another thread). We consider data races to be a specific case of race conditions, but others choose to exclude data races from their definition of race condition. Any communication between asynchronous threads is potentially sufficient for a race condition to occur. Data races, on the other hand, require that a piece of memory be "shared" by multiple asynchronous threads. That is, each thread has direct access to the memory for some common period of time.

Data race bugs officially result in "undefined behavior". "Undefined behavior" here is basically a euphemism for "potential consequences of the most severe kind", including invalid memory access and, in some scenarios, remote code execution. In particular, data race bugs have the ability to cause an object to be accessible in an inconsistent state. For example, if we consider an std::vector, we can imagine that it contains a pointer to some allocated memory, some kind of integer indicating the number of elements contained, and another one indicating the capacity of the allocated memory, in terms of number of elements. (To the user) it should always be the case that the number of elements contained is less than or equal to the capacity. (Relationships that should always be true like this are called "invariants".)
If ever this relationship is observed (by the user) to be false, the std::vector would be considered to be in an "inconsistent" or "corrupt" state. In the case of std::vector, this could result in invalid memory access.

Invalid memory access can be considered one of, if not the, most severe types of bugs in C++. (Although in theory, all "undefined behavior" bugs are equivalent.) They are particularly bad, because in practice they can cause your program to stop executing, expose sensitive data stored in otherwise inaccessible memory, or even allow for arbitrary code execution and the compromise of the host environment. So it is often a priority to reduce or eliminate the possibility of invalid memory access, even if it comes at some cost (in terms of performance, flexibility, "bug hiding", increasing the likelihood of other less severe bugs, etc.).

To that end, while it may not be possible to prevent all race condition bugs, we'd like to be able to reduce or eliminate the ones that unavoidably risk invalid memory access. These include data race bugs, but also a slightly more general set we might call "object race" bugs. Data race bugs involve direct access of shared memory, while "object race" bugs also include indirect access of shared memory via any part of a shared object's interface, including member and friend functions and operators. The idea here is that objects that maintain consistent internal state generally allow their functions and operators to temporarily change the internal state to an inconsistent one, as long as consistency is restored before the function or operator returns. "Object race" bugs include bugs where one thread is allowed to access a shared object while it has been temporarily put in an inconsistent state by a member function or operator executing in another asynchronous thread.

If we consider the main factors that make a bug problematic:

i) frequency/unpredictability of occurrence
ii) severity of consequences
iii) difficulty in reproducing/debugging
iv) ability to evade detection during testing

we note that "object race" bugs, as a category, achieve the superfecta - all four factors apply in spades - making them perhaps the worst class of bugs in all of computer programming. Which is why it's best to avoid sharing objects between asynchronous threads when practical, and spare no safety mechanism when not.

So consider how std::shared_ptr can be used to address "object lifespan" bugs. That is, bugs where you (attempt to) access an object after it's been deallocated. If you were to hypothetically write a program such that all objects are allocated via std::make_shared and accessed through (appropriate) std::shared_ptrs only, then you would be essentially guaranteed to have no "object lifespan" bugs. Right? Because the std::shared_ptrs do two things - they provide access to the object, and they control when the object will be deallocated. So they ensure that the object is not deallocated until after they (all permanently) stop providing access to the object.

Well, a similar technique can be used to ensure that a shared object is not accessed while another thread is modifying it. In order to safely access an object, a thread needs to "own" an appropriate (write or read) "access lock" on the object. Well, imagine smart pointers that control when their target objects are deallocated (i.e.
have "ownership" of the object's lifespan) in the same way that std::shared_ptrs do, but in addition, also control when the access lock is released (i.e. have "ownership" of the access lock). Let's call these smart pointers "writelock_ptr"s and "readlock_ptr"s, as appropriate. They won't release their access lock until after they stop providing access to the shared object (i.e. when they are destroyed). So restricting the access of shared objects to accesses through writelock_ptrs and/or readlock_ptrs only will ensure that the accesses are safe. While similar, there are differences in the way writelock_ptrs and readlock_ptrs are used compared with std::shared_ptrs. In particular, if you expect to need future access to a target object, you might consider storing an std::shared_ptr for later use. In contrast, you would not do that with writelock_ptrs and/or readlock_ptrs because as long as they exist, they hold a lock on the shared object, possibly preventing other threads from accessing the object. In general, you want to minimize the lifespan of writelock_ptrs and readlock_ptrs in order to maximize the availability of the shared object. This calls for a separate helper object we'll call an "access requester". Access requesters do not provide direct access to the shared object or hold any locks on it. They do (try to) provide writelock_ptrs and/or readlock_ptrs upon request. So if you expect to need future access to a shared object, rather than holding on to a writelock_ptr or readlock_ptr you can just use an access requester to reacquire one whenever you need. (See the first example in the "Summary" section and/or download the accompanying source code for a more comprehensive example.) A peculiarity of C++ is that a const object is not necessarily guaranteed to be unmodifiable, because, for example, C++ permits "mutable" members. Now, it's always been understood that while the mutable keyword may be used to subvert the mechanics of const, the semantics of const should be preserved. That is, an object declared const, should, to the user (of its public interface), behave as if it were const even if under the hood private mutable members are actually being modified. But not until the arrival of C++11 did, by convention, the preservation of const semantics include notions of thread safety - the ability to be accessed simultaneously by asynchronous threads without issue. So the problem is that there are a bunch of legacy objects with mutable members out there that you might want to share between threads, but cannot be assumed to be safely shareable when declared const. mutable And it's not clear that this is an issue with just legacy objects. The current convention is that objects with mutable members should preserve thread safety as part of preserving const semantics, but doing so often has a cost in terms of performance and scalability. (Scalability because, for example, thread safety mechanisms often require locking cpu cache lines thus holding up other threads that need to access the same cache lines.) So when using such an object in a context where it is not being shared between threads, often a (small but) unnecessary performance cost is being paid. Now there are those of us that don't mind seeing the establishment of a convention that sacrifices a little performance in the name of safety. But you have to wonder how reliably a performance obsessed programming community is going to adhere to the rule when it's not clear that the trade-off is even necessary. 
A peculiarity of C++ is that a const object is not necessarily guaranteed to be unmodifiable, because, for example, C++ permits "mutable" members. Now, it's always been understood that while the mutable keyword may be used to subvert the mechanics of const, the semantics of const should be preserved. That is, an object declared const should, to the user (of its public interface), behave as if it were const even if under the hood private mutable members are actually being modified. But not until the arrival of C++11 did, by convention, the preservation of const semantics include notions of thread safety - the ability to be accessed simultaneously by asynchronous threads without issue. So the problem is that there are a bunch of legacy objects with mutable members out there that you might want to share between threads, but that cannot be assumed to be safely shareable when declared const.

And it's not clear that this is an issue with just legacy objects. The current convention is that objects with mutable members should preserve thread safety as part of preserving const semantics, but doing so often has a cost in terms of performance and scalability. (Scalability because, for example, thread safety mechanisms often require locking cpu cache lines, thus holding up other threads that need to access the same cache lines.) So when using such an object in a context where it is not being shared between threads, often a (small but) unnecessary performance cost is being paid. Now there are those of us that don't mind seeing the establishment of a convention that sacrifices a little performance in the name of safety. But you have to wonder how reliably a performance obsessed programming community is going to adhere to the rule when it's not clear that the trade-off is even necessary.

Consider an alternative convention where classes with mutable members are compelled to clearly indicate (in their name, for example) whether or not they can be safely shared between threads. And additionally, encourage any class that is not safely shareable (for performance reasons) to be provided as a pair with a compatible safely shareable version of the class. It's probably better to have the author positively indicate that their class is (or is not) safe to share rather than have the user just assume it.

And this being C++, the mutable keyword is not the only hole in the enforcement of const. For example, you may declare a const instance of a class with the intent of sharing it between threads. But if this class happens to be sort of a "compound" class that contains references to "child" objects (or "indirect members", if you will), the constness is not propagated to those child objects, leaving them vulnerable to race condition and data race bugs. Such child objects can also be considered "mutable members" and, with respect to safety, should be treated as such.

    #include <string>

    int main() {
        class CObjWithIndirectMember {
        public:
            CObjWithIndirectMember() : m_string1(*(new std::string("initial text"))) {}
            ~CObjWithIndirectMember() { delete (&m_string1); }
            void set_string2(const std::string& string_cref) const {
                /* We know the "mutable" keyword can be used to subvert "const"ness. */
                m_string2 = string_cref;
            }
            void set_string1(const std::string& string_cref) const {
                /* As with members declared with the "mutable" keyword qualifier, "const"ness does
                not propagate to "indirect" members. */
                m_string1 = string_cref;
            }

            mutable std::string m_string2 = "initial text";
            std::string& m_string1;
        };

        const CObjWithIndirectMember const_obj_with_indirect_member;
        /* "const" objects aren't necessarily unmodifiable if they have members declared "mutable". */
        const_obj_with_indirect_member.m_string2 = "new text";
        /* Or if they have "indirect" members. That is, members that are actually references to
        other objects. */
        const_obj_with_indirect_member.m_string1 = "new text";
        /* So declaring an object "const" doesn't necessarily make it safe to share without access
        controls. */
    }

So, unfortunately, in C++ there is really no general way to ensure that an object will not be modified, and consequently no general way to ensure that simultaneous access to a shared object will be safe. So the prudent policy would be to permit simultaneous access only in cases where the shared object is simple enough that it is universally apparent that there is no issue with potentially mutable members, or in cases where the object provides explicit positive indication that it can be shared safely. And in either case, if you are going to permit simultaneous access, the code should clearly indicate it, preferably by using a type specifically dedicated for the purpose (and that may facilitate extra compile time safety). (For example, prefer a type like mse::TReadOnlyStdSharedFixedConstPointer over using std::shared_ptrs directly.)

So we've introduced data types that, among other things, protect the consistency of a shared object's internal state. But if you're sharing multiple objects, the consistency of any relationship (aka invariant) between those objects is not automatically protected. So while we recommend avoiding the practice of sharing objects between threads when practical, we would even more strongly advise avoiding the practice of sharing multiple interdependent objects between threads.
But when you do have to do it, you should be thinking about operations on those objects as part of transactions that need to be executed atomically.

    #include "mseasyncshared.h"
    #include <future>
    #include <list>
    #include <random>
    #include <ratio>
    #include <chrono>

    int main() {
        /* This is an example of "atomic" transactions when performing operations on multiple
        interdependent shared objects. In this case, funds transfers between accounts. */
        class CAccount {
        public:
            void add_to_balance(double amount) {
                m_balance += amount;
                m_last_transaction_time = std::chrono::system_clock::now();
            }
            double balance() const { return m_balance; }
        private:
            double m_balance = 0.0;
            std::chrono::system_clock::time_point m_last_transaction_time;
        };

        class B {
        public:
            static bool nonatomic_funds_transfer(
                    mse::TAsyncSharedReadWriteAccessRequester<CAccount> source_ar,
                    mse::TAsyncSharedReadWriteAccessRequester<CAccount> destination_ar,
                    const double amount) {
                /* Non-atomic transactions between shared objects like this can be bad. They can
                result in "race condition" bugs. */
                if (source_ar.readlock_ptr()->balance() >= amount) {
                    source_ar.writelock_ptr()->add_to_balance(-amount);
                    destination_ar.writelock_ptr()->add_to_balance(amount);
                    return true;
                } else {
                    return false;
                }
            }
            static bool atomic_funds_transfer(
                    mse::TAsyncSharedReadWriteAccessRequester<CAccount> source_ar,
                    mse::TAsyncSharedReadWriteAccessRequester<CAccount> destination_ar,
                    const double amount) {
                /* You want your transactions between shared objects to be atomic like this one to
                avoid "race condition" bugs. */
                /* To make your transaction atomic, first obtain a lock on all the parties in the
                transaction. (When holding multiple locks like this, acquiring them in a consistent
                order across all transactions helps avoid deadlock.) */
                auto source_writelock_ptr = source_ar.writelock_ptr();
                auto destination_writelock_ptr = destination_ar.writelock_ptr();
                if (source_writelock_ptr->balance() >= amount) {
                    source_writelock_ptr->add_to_balance(-amount);
                    destination_writelock_ptr->add_to_balance(amount);
                    return true;
                } else {
                    return false;
                }
            }
        };

        /* create the accounts */
        auto bobs_account_access_requester = mse::make_asyncsharedreadwrite<CAccount>();
        auto bills_account_access_requester = mse::make_asyncsharedreadwrite<CAccount>();
        auto barrys_account_access_requester = mse::make_asyncsharedreadwrite<CAccount>();

        /* set initial balances */
        bobs_account_access_requester.writelock_ptr()->add_to_balance(100.0);
        bills_account_access_requester.writelock_ptr()->add_to_balance(200.0);
        barrys_account_access_requester.writelock_ptr()->add_to_balance(300.0);

        /* do some concurrent fund transfers */
        std::future<bool> bob_to_bill_res = std::async(B::atomic_funds_transfer,
            bobs_account_access_requester, bills_account_access_requester, 10.0);
        std::future<bool> bill_to_barry_res = std::async(B::atomic_funds_transfer,
            bills_account_access_requester, barrys_account_access_requester, 20.0);
        std::future<bool> barry_to_bob_res = std::async(B::atomic_funds_transfer,
            barrys_account_access_requester, bobs_account_access_requester, 30.0);
        bool all_transfers_were_executed = (bob_to_bill_res.get() && bill_to_barry_res.get()
                                            && barry_to_bob_res.get());
    }

The standard library recognized the need to allow for the locking of a mutex multiple times by a single thread, so they provide std::recursive_mutex. They also recognized the need for a mutex that supports shared locking and locking attempts that expire after a timeout period, so they provide std::shared_timed_mutex. But of course for the general case you'd want a mutex that supports all of these features - an std::recursive_shared_timed_mutex.
But frustratingly, (at the time of this writing) the standard library provides no such mutex. Perhaps because it's not obvious how to implement such a type in a way that is optimized for performance and memory footprint.

Anyway, we can't wait on the standard library, so we provide the mutex - mse::recursive_shared_timed_mutex. It's the mutex that's called for when implementing a general solution for shared objects like ours. And if you find yourself needing such a mutex for other purposes, it's there in the header file.

Speaking of locking attempts that expire after a timeout period, while we don't use them in the examples, the "access requester" types do support the functionality:

    auto access_requester = mse::make_asyncsharedreadwrite<std::string>("some text");
    auto writelock_ptr1 = access_requester.try_writelock_ptr();
    if (writelock_ptr1) {
        // lock request succeeded
    }
    auto readlock_ptr2 = access_requester.try_readlock_ptr_for(std::chrono::seconds(10));
    auto writelock_ptr3 = access_requester.try_writelock_ptr_until(std::chrono::steady_clock::now()
                                                                   + std::chrono::seconds(10));

When programming for today's multiprocessing architectures, it can just seem convenient and natural to share objects between concurrent threads. And it can be easy to overlook the fact that the practice of sharing objects between threads brings with it a fundamentally different and more problematic kind of bug than we've been used to dealing with. A bug in the control of access to a shared object can result in any part of that object (or anything referenced by the object) being modified at any time. It's the "any time" aspect that is different and particularly problematic. This means that if your program crashes, or an assert fails, even if you have a full stack trace and memory dump, it still may not be possible to deduce the sequence that led to the failure state. And as an extra whammy, it is not rare for this type of bug to be impractical to reproduce.

The position of not being able to deduce or reproduce the steps that occurred to arrive at the failure state is a disturbing one. But not one that we, as a species, are unfamiliar with. For example, the ancient Egyptians were unable to deduce the sequence of events that caused occasional catastrophic flooding of the Nile. Nor could they reproduce the phenomenon on demand. They dealt with the situation by making ritual offerings to a ram-headed deity. Maybe that'd help with our data race bugs too. :) And maybe some extra prudence when sharing objects between threads wouldn't hurt either.
https://www.codeproject.com/Articles/1106491/Sharing-Objects-Between-Threads-in-Cplusplus-the-S
CC-MAIN-2018-26
en
refinedweb
Remote Debugging with PyCharm

Introduction

What to do if the interpreter you are going to use is located on another computer? Say, you are going to debug an application on Windows, but your favorite interpreter is located on Mac... With PyCharm, it's not a problem.

Before you start

Make sure that you have SSH access to the Mac computer!

Creating a project

On Windows, create a pure Python project, as described in the section Creating Pure Python Project. In this tutorial, the project name is QuadraticEquation.

Preparing an example

Add a Python file to this project (Alt+Insert - Python File). Then, type the following code:

    import math


    class Solver:
        def demo(self, a, b, c):
            d = b ** 2 - 4 * a * c
            if d > 0:
                disc = math.sqrt(d)
                root1 = (-b + disc) / (2 * a)
                root2 = (-b - disc) / (2 * a)
                return root1, root2
            elif d == 0:
                return -b / (2 * a)
            else:
                return "This equation has no roots"


    if __name__ == '__main__':
        solver = Solver()
        while True:
            a = int(input("a: "))
            b = int(input("b: "))
            c = int(input("c: "))
            result = solver.demo(a, b, c)
            print(result)

Using a remote interpreter

Configuring a remote interpreter on Windows

Note that any remote interpreter will do. Press Ctrl+Alt+S to open the Settings dialog on the Windows machine (the whole process is described in the section Accessing Settings). Next, click the Project Interpreter node. On this page, click the gear button. Then choose the remote interpreter.

Next, in the Configure Remote Python Interpreter dialog box, click the Deployment Configuration radio-button. Then click the browse button next to the Deployment configuration field. The Add Server dialog box opens. There, enter the server name (let it be MySFTPConnection) and choose the server type from the drop-down list. Select SFTP to tell PyCharm to transfer files over the SSH connection.

Next, the deployment settings dialog opens for MySFTPConnection. In the Connection tab, enter the following:

- In the field Name, type the connection name. Here it's MySFTPConnection.
- In the Type field, choose the connection type. Here it's SFTP.
- In the SFTP host field, enter the address of your Mac computer (for example, its IP).
- Type your user name on the Mac computer.
- As the authentication type is set to Password, choose this option and enter your Mac password.

In general, it's recommended to use the default settings. The only setting worth checking is the Save password check box.

Next, click the Mappings tab. The Local path field is set by default. You have to specify the path on the server MySFTPConnection. To do that, click the browse button next to the Deployment path on server 'MySFTPConnection' field. The dialog box that opens shows the contents of your Mac computer. Now your remote interpreter is ready.

Deploying your application to a remote host

Next, your application must be deployed to the remote host. Let's do it. On the main menu, point to Tools | Deployment, and then choose Upload to MySFTPConnection: the File Transfer tool window appears, with the balloon showing the number of transferred files.

Debugging your application

Now you are ready for debugging! Right-click the editor background and choose the Debug command (here, Debug 'quadratic_equation'). Note that debugging actually takes place on the Mac computer!

Using the Python Remote Debug configuration

Let's do the same, not with the remote interpreter as in the first part of this tutorial, but with the dedicated run/debug configuration, namely Run/Debug Configuration: Python Remote Debug.

Creating a run/debug configuration

On the main menu, choose Run | Edit Configurations. The Run/Debug Configurations dialog opens. You have to click the add button on the toolbar, and from the list of available configurations, select Python Remote Debug. Enter the name of this run/debug configuration - let it be MyRemoteMac. Specify the port number (here 12345), and in the field Local host name change localhost to the IP address of the Windows computer. Don't forget to specify the path mappings!
You have to map the local path on Windows to the remote path on Mac. Then look at the dialog box. You see a message suggesting changes to your source code. Let's do what is suggested there and change the source code. First, let's copy the pycharm-debug-py3k.egg file to the project root. Second, let's change the quadratic_equation.py file as follows:

    import math

    #==============this code added=================================================
    import sys
    sys.path.append("pycharm-debug-py3k.egg")

    import pydevd
    pydevd.settrace('<address of the Windows machine>', port=12345,
                    stdoutToServer=True, stderrToServer=True)
    #==============================================================================

    # ... the Solver class and the input loop from the original file stay unchanged

Creating an SFTP connection

First, create the folder on Mac where the file quadratic_equation.py should be uploaded. To do it, in the Terminal create the directory QuadraticEquation:

    mkdir QuadraticEquation

Now, considering that we should upload this code, let's create a connection profile. To do it, on the main menu, choose Tools | Deployment | Configuration, and in the Add Server dialog select the connection type (here SFTP) and enter its name (here MySFTPConnection). In the Connection tab, specify the SFTP host (address of the Mac computer), user name and password (for Mac). Note that the user should have SSH access to the Mac computer! Next, click the Mappings tab, and enter the deployment path on the server. The server is MySFTPConnection, so click the browse button and select the required folder. Note that the browse button shows the contents of Mac! Apply changes and close the dialog.

Deploying files to Mac

Next, you have to deploy the two files (the copy of pycharm-debug-py3k.egg and quadratic_equation.py) from Windows to Mac. On Windows, in the Project tool window, select the two files (pycharm-debug-py3k.egg and quadratic_equation.py), right-click the selection and choose Upload to MySFTPConnection. The files from Windows will be uploaded to Mac, which is reflected in the File Transfer tool window.

Launching the Debug Server

Choose the run/debug configuration created in the section Creating a run/debug configuration, and click the Debug button. The Debug tool window shows the "Waiting for process connection..." message, until you launch your script on Mac and the script connects to the Debug Server.

Launching the file on Mac

Next, you have to launch the quadratic_equation.py file on Mac. To do that, in the Terminal, enter the following command:

    $ python3 quadratic_equation.py

Debugging on Windows

Now lo and behold! Your code on Windows shows the results of the inline debugging and the hit breakpoint; the Debug tool window switches to the Debugger tab and shows the stepping toolbar, frames and variables with their values (entered on Mac). Note that the code is actually executed on Mac, but debugged on Windows!

Summary

In order to debug with a remote interpreter, you have to start your program through PyCharm, which is not always possible. On the other hand, when using the Debug Server, you can connect to a running process. Compare the two approaches.

In the first case, we

- configured a remote interpreter on Windows.
- deployed the script to Mac.
- debugged the script. Note that we've started the debugger session on Windows, but actual debugging takes place on the Mac computer.

In the second case, we

- created a debug configuration (Debug Server) on Windows.
- launched the Debug Server on Windows. The Debug Server waits for connection.
- executed the Python script on Mac. The script connects to the Debug Server.
- debugged the script on Windows. It's up to you to choose the way you debug your scripts.
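If you settle on the Debug Server approach, one small refinement (a sketch, not part of the tutorial) is to guard the settrace call behind an environment variable, so the very same script can run with or without the debugger attached:

    import os

    # REMOTE_DEBUG is a made-up variable name; only attach when it is set.
    if os.environ.get("REMOTE_DEBUG") == "1":
        import sys
        sys.path.append("pycharm-debug-py3k.egg")
        import pydevd
        # Same call as above; 192.0.2.10 stands in for the Windows machine's address.
        pydevd.settrace('192.0.2.10', port=12345,
                        stdoutToServer=True, stderrToServer=True)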
https://www.jetbrains.com/help/pycharm/2017.3/remote-debugging-with-pycharm.html
CC-MAIN-2018-26
en
refinedweb
This post is part of a series. Click for Part 1 or Part 2

In the last two posts I explored how Live.js can help you do client side testing, particularly for responsive layouts. Now we'll be looking at another way that live.js can help out in your client side development. But before we can do that, we have to take a brief foray into the world of javascript based unit testing. I'm not going to try to give a full treatise on the subject, just a brief introduction so that we can see how live.js can help with this part of your development workflow too.

If you aren't familiar with client side unit testing, don't sweat it, it's pretty straight forward. If you want a good overview check out smashing magazine's intro or this great video on the qUnit framework. At a high level though it looks something like this.

1. Just like with your backend code, javascript testing starts with how you structure your code in the first place. Focus on small methods with minimal dependencies that return values that you can validate.

2. There are a lot of javascript unit testing frameworks out there, but they all generally work the same way. Tests are functions passed into a method defined by the framework. To run your tests, you build a simple html page which has script references to the framework library, your test code and your application code. When you load the page, the framework manipulates the html to report your results.

With this high level understanding, it's pretty straight forward to see how live.js can help on this front. If you add live.js to that html page that runs your tests, then that page can refresh automatically and run your tests every time your test code or application code changes.

Note that your automated testing page doesn't have to be static html either. For example, in mvc we can set up a TestsController and Tests view that look a little like this.

Controller

    using System.IO;
    using System.Linq;
    using System.Web.Mvc;

    public class TestsController : Controller
    {
        //
        // GET: /Tests/
        public ActionResult Index()
        {
            var testFiles = Directory.EnumerateFiles(Server.MapPath("~/Scripts/spec"))
                                     .Where(f => f.EndsWith(".js"));
            var sutFiles = testFiles.Select(s => s.Replace("_spec", ""));
            ViewBag.SutFiles = sutFiles;
            ViewBag.TestFiles = testFiles;
            return View();
        }
    }

View

    @using System.IO
    <!DOCTYPE html>
    <html>
    <head>
        <meta name="viewport" content="width=device-width" />
        <title>Tests</title>
        <meta http-
        <script src="/Scripts/spec/lib/<your-testing-framework>.js"></script>
        @foreach (var fullpath in ViewBag.SutFiles)
        {
            var fileName = Path.GetFileName(fullpath);
            <script src="/Scripts/@fileName"></script>
        }
        @foreach (var fullpath in ViewBag.TestFiles)
        {
            var fileName = Path.GetFileName(fullpath);
            <script src="/Scripts/spec/@fileName"></script>
        }
        <script>
            onload = function () {
                var runner = mocha.run();
            };
        </script>
    </head>
    <body>
    </body>
    </html>

The basic idea is that we have a controller that builds up a list of files by looking in a specific folder where we put all of our tests. For all of the files it finds, it passes them along to the view, which then renders a set of script reference tags. The result is that our page dynamically adds all the assets it needs to test our javascript. Then live.js will do its thing and automatically refresh to run the tests any time there is a change.
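One detail worth spelling out: the test page still needs a reference to live.js itself for the auto-refresh to kick in. The script can be hot-linked or served locally; the local path below is just an assumed location:

    <!-- add inside <head>; http://livejs.com/live.js also works for hot-linking -->
    <script src="/Scripts/live.js"></script>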
https://beyondoverload.wordpress.com/2013/01/31/live-js-and-visual-studio-part-3-automated-testing/
CC-MAIN-2018-26
en
refinedweb
See also: IRC log

<JeffSchiller> give me a couple mins here
<shepazu> sure
<trackbot> Date: 04 December 2008
howdy
i'm the 1.149 number (caller id messed up on the voip)
<HelderMagalhaes> (just to ease on the logs and general identification - also, I'll be following from IRC as usual :-) )
<aneumann> I am also just on IRC, not on the phone. I'd like to discuss SVG Open in the first 10-15 minutes because I have to leave then
<shepazu> scribeNick: Rob_Russell
<aneumann> as you may have heard we did not find a local organizer for SVG Open 2009 in California
<aneumann> our next lead is now Pittsburgh, Pennsylvania - David Dailey is looking into it
<JeffSchiller> yes, i've seen some discussions about an alternate US location, PA
<aneumann> a research company () is interested in helping with the organization
doug: penn is okay, CA is better
jeff: CA is more popular tech destination
<aneumann> yes, but without a local organizer its too risky
doug: mozilla has joined the SVG WG, hired Jonathan Watt for fulltime svg dev
<aneumann> There is also Carnegie Mellon in Pittsburgh
<HelderMagalhaes> Rob_Russell: great to know about that! :-)
<aneumann> cool - good to hear about Jonathan
doug: jwatt previously worked for joost, was a member of the svg wg and ig but joost has dropped svg
... asked jwatt to see if moz would be interested in helping organize
<aneumann> I hope that we have a decision about Pittsburgh before Christmas and are able to put out information and Call for papers before end of the year
doug: before going to a different location we should look into the moz option
<aneumann> yes, the moz option would be fine also - but we'd need a decision soon - we can't wait much longer
doug: jwatt may be able to get in on the IG calls within the next week or so (but he is busy)
<aneumann> also Jwatt may be too busy with getting into Moz and not sure he can devote much time to SVG Open
<aneumann> but he could help to get moz as a supporter/sponsor
<shepazu> aneumann, do you think it would be bad to wait until January?
<shepazu> we could CfP before the final location is announced
<aneumann> yes, it would be good to have decided on a location before christmas
doug: could say PA and CA are being considered
<aneumann> hm - I don't know - we can already prepare the Cfp before and then put it immediately live when we have the decision
<shepazu> I will respond more on the SVGOpen list
doug: having moz sponsor the open would be a "feather in the cap", worth waiting a month (personal opinion) and CA would get more people
jeff: agrees, CA is closer to tech stuff
<aneumann> ok - I have to leave now anyway. Lets discuss it further on the SVG Open list
doug: higher profile conf puts more pressure on MS than a smaller one
<aneumann> I wouldn't mind if someone else would take the lead who is in better contact with moz than me
doug: ultimately Andreas' decision
... will pursue Mozilla for a person to organize
<shepazu> I will contact other Moz people than jwatt, too
<aneumann> not only - I follow your advice
<aneumann> ok - thanks and bye for now ...
jeff: will jwatt be doing what tor was doing at moz?
doug: yes, jonathan said mozilla is going for Full SVG 1.1 compliance
jeff: interesting to see if we can get a plugin from Moz (for ie?) that's fully supported
doug: update from the SVG WG. Mozilla has joined. Most people aren't in on the politics and pain of getting the browser vendors to participate, this has been years in coming.
... Dedicating a Mozilla employee to SVG is huge.
Robert O'Callahan has been proposing extensions to SVG for CSS, layout & positioning & ease of authoring.
... Don't think they'll simply do SVG 1.1 and that's all - more of a long term dedication to SVG as part of an open web platform.
... Despite some misgivings at the WG, we're going ahead with hixie's proposal to do inline SVG in html5 (plain text html). The WG would like feedback, especially use cases and tests.
jeff: I followed some of the discussion, is hixie's proposal the less rigid one?
doug: hixie's proposal allows for error correction. In html5 it's not considered an error (badly formed dom) but in XML SVG it would be.
jeff: I don't mind going along with html5ish way as long as there's a way to export the SVG dom as XML
doug: that's something this group could do, come up with test cases & use cases
wade: do we have a pointer to this spec?
doug: will ask hixie to make a version available
... you basically have to understand all of html5 to get it
jeff: there's some contention but michael smith put out a spec for just the parsing
doug: the edge cases might not show up there
jeff: font element & text area clash with elements in html, worried about how that will work
doug: WG is taking a hard line in that case, no whitelist or blacklist of elements or attributes for svg in html
... may lose things like entities
jeff: not a big loss to me
doug: illustrator uses entities heavily
... thinking of making an SVG Tidy that would replace entities and strip out unneeded namespaces
jeff: sam ruby made a post that ranks highly about scrubbing svg - google SVG Tidy (written in ruby language)
<scribe> ACTION: shepazu to publish version of HTML5 SVG syntax [recorded in]
<trackbot> Created ACTION-15 - Publish version of HTML5 SVG syntax [on Doug Schepers - due 2008-12-11].
rob: it'd be nice to get this covered on This Week In HTML5 (mark pilgrim)
doug: discussed SVG in HTML with the HTML WG at the tech plenary
... I think hixie's plan will win out over passing SVG to a strict parser
... this could have implications for standalone content as well
... it's possible that other browsers would have to understand this syntax which could be disruptive to existing SVG toolchains
jeff: last time we talked about generating content & rob would keep up on theming
<JeffSchiller> rob: has put up a version of the theme on planetsvg
<shepazu>
<JeffSchiller> rob: go to planetsvg.com, log in, My Account > Edit, then choose the "Genesis_psc1"
<JeffSchiller> rob: then click save, the site will change theme
<JeffSchiller> rob: this theme is still a draft, lots of visual quirks
<JeffSchiller> rob: so things are moving along, want people to take a look, but I'm aware there are problems
<JeffSchiller> rob: might want to put the background in so it doesn't look so white-on-dark
<JeffSchiller> jeff: are you looking for feedback yet?
<JeffSchiller> rob: might make more sense for me to get it a little closer to 'done'
<JeffSchiller> manuel: can i download the theme so i can work on it and submit it back?
<JeffSchiller> jeff: can you email him a tgz?
<JeffSchiller> rob: will do
<JeffSchiller> doug: is there a feed for news items?
<JeffSchiller> rob: i believe it's possible, but not sure why its' not available yet jeff: maybe we need a global feed, news tutorials, we should discuss what feeds we need exactly <HelderMagalhaes> JeffSchiller: Yes, having the possibility to syndicate to specific feed was rather useful <HelderMagalhaes> For example, a developer might <HelderMagalhaes> just want news, an artist might prefer the uploaded images feed <JeffSchiller> Helder: yes, i'll starta little page for this on the SVG IG wiki rob: we need to decide on blockers from "prime time" theme, feeds, primary navigation jeff: will start a wiki page on rss feeds ... we should discuss content on the site ... I did a tutorial, shiny buttons, I wasn't sure of the format of the content.I hosted images and svg on my site. How do we do this going forward? rob: there's g2 and there are image modules for drupal ... I think we should use a drupal image/gallery module jeff: if images were uploadable then it'd be easier to create content. Maybe authors who need js could ship their files to an admin who could post it rob: good idea jeff: rasters are easy, for svg & js standalone we'd want the author to send that to an admin ... we'd need to decide on paths rob: we could publish guidelines for that doug: I've hacked mediawiki to allow svg, I think it was using IFRAME but OBJECT might be better now ... Wondering if we could combine upload with another problem. Helping people on irc I use pastebin, it's generic but we need a dedicated svg pastebin. ... Ideally it would show the svg source and the image rob: sounds good, like a standalone site though doug: maybe a subdomain on planetsvg ... it'd be nice to be able to highlight code on a pastebin too <HelderMagalhaes> Yeah, having a raster feature for a paste bin would be great! rob: i think there could be xss issues with that <HelderMagalhaes> (even if limited to static SVG only, which would be the natural option for security reasons...) doug: that could be avoided by stripping out all js <HelderMagalhaes> maybe using a rasterizer? rsvg or Batik server-side <HelderMagalhaes> (that could also help new users to try SVG even without local support (in IE, of course) <HelderMagalhaes> ) doug: it'd be nice if, at least, people could paste their svg in an edit box and use that as a way to add images to the gallery <stelt> HelderMagalhaes: i have a GUI for a rasterizer service somewhere <gwadej> I've got to run to a meeting. jeff: to make things easier for authors, maybe we could allow people to upload js & svg then an admin/mod approves it rob: yes, i think that's a great role for a specific type of moderator <HelderMagalhaes> stelt: Yes, I know about it, but the idea was allowing that online (server-side) or client side for click-and-see experience like in w3schools. For example, <DavePorter> Sorry guys, I have to drop off! Thanks for the meeting and hearing about recent progress. jeff: might try to put out a part 2 to the aqua button tutorial a week or so from now doug: i'd like to do a follow up to the button tutorial but different ... try to start new telcon time in the new year This is scribe.perl Revision: 1.133 of Date: 2008/01/18 18:48:51 Check for newer version at Guessing input format: RRSAgent_Text_Format (score 1.00) Found ScribeNick: Rob_Russell Inferring Scribes: Rob_Russell Default Present: Doug_Schepers, JeffSchiller, gwadej, Rob_Russell, Manuel, Dave_Porter Present: Doug_Schepers JeffSchiller gwadej Rob_Russell Manuel Dave_Porter WARNING: No meeting chair found! 
http://www.w3.org/2008/12/04-svgig-minutes.html
CC-MAIN-2018-26
en
refinedweb
I need to tell if my device has an Internet connection or not. I found many answers like:

    private boolean isNetworkAvailable() {
        ConnectivityManager connectivityManager = (ConnectivityManager) getSystemService(Context.CONNECTIVITY_SERVICE);
        NetworkInfo activeNetworkInfo = connectivityManager.getActiveNetworkInfo();
        return activeNetworkInfo != null;
    }

But this only checks whether some network is connected, not whether the Internet is actually reachable.

You are right. The code you've provided only checks if there is a network connection. The best way to check if there is an active Internet connection is to try and connect to a known server via http.

    public static boolean hasActiveInternetConnection(Context context) {
        if (isNetworkAvailable(context)) {
            try {
                HttpURLConnection urlc = (HttpURLConnection) (new URL("http://www.google.com").openConnection());
                urlc.setRequestProperty("User-Agent", "Test");
                urlc.setRequestProperty("Connection", "close");
                urlc.setConnectTimeout(1500);
                urlc.connect();
                return (urlc.getResponseCode() == 200);
            } catch (IOException e) {
                Log.e(LOG_TAG, "Error checking internet connection", e);
            }
        } else {
            Log.d(LOG_TAG, "No network available!");
        }
        return false;
    }

Of course you can substitute the URL for any other server you want to connect to, or a server you know has a good uptime.

As Tony Cho also pointed out in the comment below, make sure you don't run this code on the main thread, otherwise you'll get a NetworkOnMainThreadException (in Android 3.0 or later). Use an AsyncTask or Runnable instead.

If you want to use google.com you should look at Jeshurun's modification. In his answer he modified my code and made it a bit more efficient. If you connect to

    HttpURLConnection urlc = (HttpURLConnection)
            (new URL("http://clients3.google.com/generate_204").openConnection());

and then check the response code for 204

    return (urlc.getResponseCode() == 204 && urlc.getContentLength() == 0);

then you don't have to fetch the entire google home page first.
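For completeness, here is one way to keep the check off the main thread; the activity name (MyActivity) is a placeholder, not part of the answer above:

    // Sketch: run the blocking check on a background thread.
    new Thread(new Runnable() {
        @Override
        public void run() {
            final boolean online = hasActiveInternetConnection(MyActivity.this);
            runOnUiThread(new Runnable() {
                @Override
                public void run() {
                    // react to the result on the UI thread, e.g. show a "no connection" banner
                }
            });
        }
    }).start();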
https://codedump.io/share/VcO4HavHoEg4/1/detect-if-android-device-has-internet-connection
CC-MAIN-2018-26
en
refinedweb
Copying Code from the Articles

This beginners article discusses copying and pasting code from a web page. Many of the articles on Tek Eye will have code listings; they will appear like this piece of code here (handling a button click in an Android App using an anonymous inner class).

    public class main extends Activity {
        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.main);
            findViewById(R.id.button1).setOnClickListener(new OnClickListener(){
                public void onClick(View arg0) {
                    Button btn = (Button)arg0;
                    TextView tv = (TextView) findViewById(R.id.textview1);
                    tv.setText("You pressed " + btn.getText());
                }
            });
        }
    }

When moving the pointer over the code it should change to a text cursor (a.k.a. caret). This allows the code to be selected. Highlight the code to copy it. Use the context menu (normally right-click) on the selected code and click Copy. The code is copied to the clipboard. It can then be pasted into the code editor of your favourite Integrated Development Environment (IDE).

Some programs may not correct the line breaks from the HTML copy. This can be solved by either first using a program that does use the correct line breaks, or by using the view source code option on the web page (again via the context menu), scrolling to the code and highlighting and copying it from the web page source. Once the code is copied it should paste into the program correctly.

Code on a web page is easy to copy into an IDE using standard copy and paste. Some websites will use a more elaborate plug-in to display code which will have a small toolbar with a copy or view source option. This allows for the same copy and paste functionality.

Android Studio Imports

When Java code is pasted into Android Studio it may require extra import statements to be added to the import section of the class file. Unknown objects will be shown in red (default settings). If Studio is able to detect the required import a prompt is displayed. Press Alt and Enter to automatically add the correct import statement. Occasionally more than one object is shown from which the import is chosen. If in doubt use the Android Developer documentation to determine the correct object import.

Author: Daniel S. Fowler
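For the listing above, the imports that Alt+Enter typically resolves to are the following (assuming a standard pre-AndroidX project; package names may differ in newer setups):

    import android.app.Activity;
    import android.os.Bundle;
    import android.view.View;
    import android.view.View.OnClickListener;
    import android.widget.Button;
    import android.widget.TextView;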
https://tekeye.uk/programming/copying-code-articles
CC-MAIN-2018-26
en
refinedweb
A pure-python headless browser

Project description
A simple library for interacting with the web from python.

Description
activesoup combines familiar python web capabilities for convenient headless "browsing" functionality:
- Modern HTTP support with requests - connection pooling, sessions, …
- Convenient access to the web page with an interface inspired by beautifulsoup - convenient HTML navigation.
- Robust HTML parsing with html5lib - parse the web like browsers do.

Use cases
Consider using activesoup when:
- You've already checked out the very talented Kenneth Reitz's requests-html
- You need to actively interact with some web page from Python (e.g. submitting forms, downloading files)
- You don't control the site you need to interact with (if you do, just make an API)
- You don't need JavaScript support (for that you'll need selenium or phantomjs)

Usage examples
Log into a website, and download a CSV file that's access-protected:

from activesoup import driver

d = driver.Driver()
login_page = d.get('')
login_form = login_page.form
member_portal = login_form.submit({'username': secret_store['username'],
                                   'password': secret_store['password']})
if member_portal.response.status_code not in range(200, 300):
    raise RuntimeError("Couldn't log in")

# Logged in now
csv_report = d.get('/members_area/file.csv')
csv_report.save_to('~/interesting_report.csv')
https://pypi.org/project/activesoup/
CC-MAIN-2020-50
en
refinedweb
NoMethodError in UsersController#import: undefined method `import' for #<Class:0x00007f61f4ba6708>

Extracted source (around line #18):

16
17   def import
18     User.import(params[:file])
19     redirect_to user_import_path @user
20   end
21

I am stuck on the above error while implementing a CSV import function.

[What I want to achieve]
・ Fix the above error, which is thrown when navigating to the user list.
・ From "undefined method ... for nil:NilClass" I can read that the file parameter is empty, but I don't know how to resolve this.
・ Complete the CSV import feature.

[What I have tried]
・ Using the articles below, I checked what the reference code and the "undefined method" error are telling us in the first place.
・ In the view, file_field_tag posts the file to import_users_path, which jumps to the import action, and the processing is split between the view and the model. ← Is this area suspicious?
・ I tried fixing the file handling in the controller (User.import(params[:file])) and in the model (self.import(file)), but the situation has not changed.

[Reference articles]
CSV import feature:
・【Ruby on Rails】CSV import - How to implement a CSV/Excel/OpenOffice upload function in Rails
About undefined method:
・ Solution of the Ruby error message "undefined method" [beginners]
・ How to handle "NoMethodError" of nil:NilClass

It seems very rudimentary, and my lack of skill is showing, but I could not solve it on my own, so I would appreciate your help.

[routes.rb]

resources :users do
  collection { post :import }
  get 'import', to: 'users#import'
  get 'index_attendance', to: 'users#index_attendance'
  member do
    get 'edit_basic_info'
    patch 'update_basic_info'
    get 'attendances/edit_one_month'
    patch 'attendances/update_one_month'
    get 'attendances/edit_overtime_app'
    patch 'attendances/update_over_app'
  end
  resources :attendances, only: :update
end

[users.rb]

class User

[users_controller.rb]

class UsersController
  " + @user.errors.full_messages.join("<br>")
    end
    redirect_to users_url
  end

  def admin_or_correct_user
    unless current_user?(@user) || current_user.admin?
      flash[:danger] = "I was sorry!"
      redirect_to(root_url)
    end
  end

  private

  def user_params
    params.require(:user).permit(:name, :email, :affiliation, :employee_number, :password)
  end

  def basic_params
    params.require(:user).permit(:basic_time, :work_time)
  end
end

[import.html.erb]

<% provide(:title, "user list") %>
User list
<% if flash[:notice] %>
  <%= flash[:notice] %>
<% end %>
<%= form_tag import_users_path, multipart: true do %>
  <%= file_field_tag :file %>
  <%= submit_tag "Import" %>
<% end %>
Example
<% if current_user.admin? %>
  <span>|</span><%= link_to "delete", "#", method: :delete, class: "btn btn-lg, btn-primary btn-delete" %>
  <%= link_to "edit", "#", class: "btn btn-lg btn-primary w-10" %>
<% end %>
<script type="text/javascript">
  function file_selected(file_field) {
    var filename = $(file_field)[0].files[0].name;
    $("#filename").val(filename);
  }
</script>

Answer # 1
This is an error when a CSV file is not attached.

Answer # 2
def import
  User.import(params[:file])
  redirect_to user_import_path @user
end

Why do I need to redirect?
https://www.tutorialfor.com/questions-150930.htm
CC-MAIN-2020-50
en
refinedweb
Java while statement

Since there is a problem I cannot solve while studying Java, I would appreciate a solution.

Problem description ↓
Declare a while statement that loops while the value of the variable num is 100 or less. Inside the while statement, add a value to the variable num; the value to be added is the current loop count, so the first pass adds +1, the second pass adds +2, the third pass adds +3, and so on. The variable's value is displayed on the screen after the loop.

### Problems that occur
I'm in trouble because all five loop processing results are not displayed properly.

### Corresponding source code

import java.util.Scanner;

public class Main {
    public static void main(String[] args) {
        Scanner scan = new Scanner(System.in);
        String text = scan.next();
        int num = Integer.parseInt(text);
        int num1 = 0;
        int num2 = 1;
        while (num <= 100) {
            num += num1 + num2;
            num++;
        }
        System.out.println(num);
    }
}

What I tried: I did some research on the internet, but I couldn't find a site with clear tips.

- Answer # 1
What is the logic below?
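For reference, a minimal sketch of the loop the exercise describes - the increment itself grows by one each pass, and only that counter is added to num. This is an illustrative solution, not code from the original thread:

import java.util.Scanner;

public class Main {
    public static void main(String[] args) {
        Scanner scan = new Scanner(System.in);
        int num = Integer.parseInt(scan.next()); // starting value, as in the question

        int step = 1; // amount to add this pass: +1, +2, +3, ...
        while (num <= 100) {
            num += step; // add only the current loop count
            step++;      // the increment grows by one each iteration
        }

        System.out.println(num); // display the value once, after the loop
    }
}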
https://www.tutorialfor.com/questions-151315.htm
CC-MAIN-2020-50
en
refinedweb
People watching this port, also watch: dahdi-kmod26
make generate-plist
cd /usr/ports/misc/dahdi-kmod/ && make install clean
Deleted ports which required this port:
Number of commits found: 32

Sort ARCHS. While here, pet portlint. Approved by: portmgr (tier-2 blanket)
Remove ${PORTSDIR}/ from dependencies, categories m, n, o, and p. With hat: portmgr Sponsored by: Absolight
Unbreak on -CURRENT (taskqueue_enqueue_fast(9) removed).
When building with FAST_DEPEND, don't use -MP as the port has a 'version.h::' dependency which conflicts with the 'version.h:' dependency that -MP
Give kernel module a few seconds to initialize hardware before calling dahdi_cfg. PR: 188780 Submitted by: Dan Lukes
Fix build with clang 3.6.
Unbreak on -CURRENT (ignore unused command line arguments for clang).
Allow staging as a regular user
SYSCTL_ROOT_NODE exists only on FreeBSD >= 1100024
Unbreak on -CURRENT.
Unbreak on FreeBSD - Don't rely on namespace pollution and #include <sys/mbuf.h> explicitly
- Stage support
- Remove obsolete NO_PACKAGE
- Remove IGNORE check for obsolete versions of FreeBSD
- Fix typo introduced in converting this port to USES=kmod
Approved by: portmgr (infrastructure blanket)
Unbreak FreeBSD 10 (no D_PSEUDO) and clang build.
CONFLICTS definition. This port conflicts with dahdi-kmod26-*.
Fix portlint error: use a tab (not space) after a variable name
- update png to 1.5.10
- Fix wcte12xp unloading (implement flush_workqueue)
- Fix nethdlc -> bchan/dchan reconfiguration (unlock chan mtx before destroying iface)
- Bump PORTREVISION
- Fix latency reconfiguration on 5th gen wct4xxp cards: FILTER_SCHEDULE_THREAD does not work on FreeBSD as described -- it is not actually a bit flag, so should not be used with FILTER_HANDLED
- Bump port version to 2.4.0rc5.
- increase the number of buffers for nethdlc because receiver is now run in a taskqueue
- return ENOBUFS when there are not enough output buffers on nethdlc send
Implement nethdlc with Cisco HDLC encapsulation: when a span is configured in nethdlc mode an ngX network interface is created.
Use the canonical way to test for the presence of FreeBSD src files.
Split dahdi port into two parts:
- dahdi - userland libraries and utilities
- dahdi-kmod - kernel modules
The dahdi port can be packaged, and this allows the asterisk package (that depends on dahdi) to be built as well.
https://www.freshports.org/misc/dahdi-kmod/
CC-MAIN-2020-50
en
refinedweb
The web frameworks that powered much of the page-based web aren't necessarily cut out to serve the real-time web. This is where Vert.x excels. Vert.x has been designed from the ground up to enable you to build scalable and real-time web applications. In this chapter, you will take your very first steps in Vert.x development by installing Vert.x and running it from the command line. You will also get familiar with some key Vert.x concepts and put them into use by creating a web server for serving static files. To install Vert.x, you simply download and unpack a ZIP file. A Vert.x installation lives in a self-contained directory structure, which can be located anywhere on your machine. It's also recommended that you include the path to the Vert.x executable in your PATH environment variable. This will make working with the Vert.x command line easier. In the following pages, we'll go through these steps, after which you'll have everything up and running. Vert.x is written in Java, and needs the Java JDK 7 to run. Older versions of Java are not supported. If you're not sure whether you already have Java, or you're not sure about its version, launch a terminal or a command prompt and check the Java version:

java -version

In the output, you should see a version number beginning with 1.7. If not, you'll need to obtain the JDK before you can start working with Vert.x. If you're running Mac OS X or Windows, I recommend you install Oracle's Java SE Version 7 or newer. You will find it on Oracle's website (be sure to select the full JDK or the for Developers option, and not just the JRE or the for Consumers option). Follow the instructions provided by the web page and the installer to complete the installation. If you're running Linux, I recommend you install OpenJDK 7 from your package manager, if available. On Ubuntu and Debian, it will be in the openjdk-7-jdk package:

apt-get install openjdk-7-jdk

Alternatively, Oracle's Java SE is also available for Linux. Because Vert.x is built in Java, the distribution package is the same regardless of your operating system or hardware. Head to the Vert.x website and download the latest version of Vert.x. Note: In the following instructions, the Vert.x version is marked as x.x.x. Substitute it with the version of Vert.x that you have downloaded. The next steps will be different for different operating systems. You'll need to unpack the downloaded ZIP file and add the bin directory of the distribution to your PATH environment variable. This will allow you to easily launch Vert.x from anywhere on your machine. If you are using the Homebrew package manager on OS X, you can just install the Vert.x package and skip these steps. Other package manager integrations may exist, but obtaining the distribution from the Vert.x website is still the most common installation method. Start a command line shell (such as the terminal applications on OS X and Ubuntu), and navigate to a directory into which you want to install Vert.x. For example, if you have one named dev within your Home directory:

$ cd ~/dev

Unzip the Vert.x distribution package you downloaded earlier. This will create a folder named vert.x-x.x.x.final, from where we will run Vert.x.

$ unzip ~/Downloads/vertx-x.x.x.final.zip

Open your .bash_profile file in a text editor.

$ pico ~/.bash_profile

Add the path to the bin subdirectory of the Vert.x distribution to the PATH environment variable, by adding the following line:

export PATH=$PATH:$HOME/dev/vert.x-x.x.x.final/bin

Save and close the editor.
The Vert.x executable will now be in your PATH for all the future terminal windows you open. You can also apply this change immediately to your current terminal window by typing:

$ source ~/.bash_profile

You can now proceed to the Running Vert.x section to test your installation. Tip: Downloading the example code - you can download the example code files for all Packt books you have purchased from your account on the Packt website. If you purchased this book elsewhere, you can register on the Packt support page to have the files e-mailed directly to you. To install Vert.x on Windows 7 or Windows 8, perform the following steps:

1. Open File Explorer (in Windows 8) or Windows Explorer (in Windows 7) and find the downloaded Vert.x ZIP file.
2. Right-click on the file and select Extract All....
3. Select the folder into which you want to install Vert.x (for example, one named dev within your Home directory). After selecting the directory, click on Extract.
4. Open Control Panel. Navigate to System and Security | System | Advanced system settings | Environment Variables.
5. In the System variables listbox, find the variable Path. Select it and click on Edit. At the end of the existing variable value, add a semicolon followed by the path to the bin folder of the extracted Vert.x distribution:

;C:\Users\YourUsername\dev\vert.x-x.x.x.final\bin

6. If you do not already have an environment variable named JAVA_HOME in either the User variables or the System variables listbox, you will need to add it, so that Vert.x will know where to find Java.

You can now proceed to the Running Vert.x section to verify your installation. Let's verify that the Vert.x installation was successful by trying to run it. Go to a terminal (on OS X/Linux) or a Command Prompt (on Windows). Try running the vertx command:

vertx version

This should simply output the version of your Vert.x distribution. If you see the version, you have successfully installed Vert.x! The vertx command has multiple uses, but the main one is to launch a Vert.x instance, which is something you'll be doing a lot. Note: A Vert.x instance is the container in which you run the Vert.x applications. It is a single Java virtual machine, which is hosting the Vert.x runtime and its thread pools, classloaders, and other infrastructure. If you want to see the different things that you can do with the vertx command, just run it without any arguments, and it will print out a description of all the different use cases:

vertx

As an alternative to using the vertx command, it is also possible to launch an embedded Vert.x instance from within an existing Java (or other JVM language) application. This is useful when you have an existing application and want to integrate the Vert.x framework into it. To embed Vert.x, you need to add the Vert.x JARs to your application's classpath. The JARs are available in the Vert.x installation directory, and also as Maven dependencies. After this, you can instantiate a Vert.x instance programmatically. We won't be using embedded Vert.x instances in this book, but you can find more information in the Embedding the Vert.x platform section of the Vert.x documentation.
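To give a rough idea of the embedding approach, a minimal Java sketch might look like the following. The class names (VertxFactory, Handler, and HttpServerRequest from the org.vertx.java.core packages) are my assumption based on the Vert.x 2.x core API, so verify them against the embedding documentation:

import org.vertx.java.core.Handler;
import org.vertx.java.core.Vertx;
import org.vertx.java.core.VertxFactory;
import org.vertx.java.core.http.HttpServerRequest;

public class EmbeddedServer {
    public static void main(String[] args) throws Exception {
        // Create an embedded Vert.x instance instead of launching one
        // through the vertx command.
        Vertx vertx = VertxFactory.newVertx();
        vertx.createHttpServer().requestHandler(new Handler<HttpServerRequest>() {
            public void handle(HttpServerRequest request) {
                request.response().end("Hello from embedded Vert.x");
            }
        }).listen(8080);
        // Block the main thread so the JVM does not exit immediately.
        Thread.sleep(Long.MAX_VALUE);
    }
}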
Now that you have Vert.x installed and are able to run it, you're all set to write your first Vert.x application. This application will consist of a single verticle that prints out the classic "Hello world" message. Note: A verticle is the fundamental building block of Vert.x applications. You can think of a verticle as a component of an application, which typically consists of one or a few code files and is focused on a specific task. Verticles are similar to packages in Java or namespaces in C#; they are used to organize different parts of a system. However, as opposed to packages or namespaces, verticles are also a runtime construct with some interesting properties when it comes to concurrency. We will discuss them in Chapter 2, Developing a Vert.x Web Application. Verticles can be implemented in any of the supported Vert.x languages (JavaScript, CoffeeScript, Java, Ruby, Python, or Groovy). For this one, let's use JavaScript. Create a file named hello.js (it doesn't matter where you put it). Open the file in an editor and add the following contents:

var console = require("vertx/console");
console.log("Hello world");

That's all the code you need for this simple application. Now, let's launch a Vert.x instance and run hello.js in it as a verticle:

vertx run hello.js

This should print out the message. Even though there's nothing else to do, the Vert.x instance will keep running until we explicitly shut it down. Use Ctrl + C to shut down the Vert.x instance when you're done. So, what just happened?

- You wrote some code that loads the Vert.x console library and then uses it to log a message to the screen.
- You fired up a Vert.x instance using the vertx command.
- In that instance, you deployed the code as a verticle.

Note: In JavaScript verticles, we will always use the require function to load code from within the Vert.x framework or from other JavaScript files. The require function is defined by the CommonJS modules/1.1 standard, which Vert.x implements. You can find more information about it in the CommonJS specification. Now, let's turn to something a bit more useful. Our application is going to need a web server, which is used to serve all the HTML, CSS, and JavaScript files to web browsers. Vert.x includes all the building blocks for setting up a web server, including the HTTP networking and filesystem access. However, there is also something more high-level we can use, that is, a publicly available web server module, which does file serving over HTTP for us out of the box. Note: Vert.x modules are a solution for packaging and distributing Vert.x applications or pieces of application functionality for reuse. A module can include one or more verticles, and an application can make use of any number of modules written in different programming languages. Vert.x has a growing public-module registry, from which you can get a variety of open source modules for your applications. The web server module is one of them, and we will install some more later in the book. In addition to public modules, it is also possible and highly encouraged to package your own applications and libraries as modules. You will learn how to do this in Chapter 5, Polyglot Development and Modules. First create a folder for the application we will be building for the duration of this book. You can just call it mindmap, and put it within your Home directory:

cd
mkdir mindmap
cd mindmap

In this folder, create a file named app.js. This will be the deployment verticle of our application. We will use it to deploy all the other verticles and modules that our application needs. As far as Vert.x itself is concerned, there is nothing special about a deployment verticle; it is just a code organization practice.
We are going to deploy the mod-web-server module. The latest version of the module at the time of writing is 2.0.0-final, which we will be using here. Alternatively, you can look for the latest version in the Vert.x module registry. Add the following code to app.js:

var container = require("vertx/container");
container.deployModule("io.vertx~mod-web-server~2.0.0-final", {
  port: 8080,
  host: "localhost"
});

Let's go through the contents of this file:

- On the first line, we have used the require function again; this time it loads the Vert.x container object. It represents the runtime in which the current verticle is running, and can be used to deploy and undeploy other verticles and modules.
- Next, we called the deployModule function of the container to deploy the web server module. We gave two arguments to the function: the fully qualified name and version of the module to deploy, and a module-specific configuration object. Within the configuration object, we passed two entries to the module: the host name and port to which to bind the server.

Now you can run this code as a verticle:

vertx run app.js

The first time a new module is deployed, Vert.x will automatically download and install it from the public module registry. You will see a subfolder named mods appearing in the project folder, and within it the contents of the installed modules (in this case, the web server module). If you now point your web browser at http://localhost:8080, you will see a 404 Resource not found message. This means that the web server is running, but there is nothing to serve. Let's fix that. Create a subfolder named web in the project folder. By default, this is where the web server module looks for static files to serve to browsers (this folder is configurable through the web server's module configuration object).

mkdir web

In this folder, add a simple HTML document in a file named index.html, with the following contents:

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
  </head>
  <body>
    Hello!
  </body>
</html>

Now, as you point your browser at http://localhost:8080, you will see the Hello! message. You are running a web server in Vert.x! In this chapter, you have installed Vert.x and taken the very first steps towards building a real-time web application by configuring and running a web server. You have already covered a lot of ground:

- Obtaining and installing Vert.x
- Launching Vert.x instances and running verticles from the command line
- Embedding a Vert.x instance in an existing application
- Installing and deploying the Vert.x modules
- Some key concepts, such as Vert.x instances, verticles, and modules
- Accessing the Vert.x core API from JavaScript
- Running a web server

In the next chapter, we'll build on our simple web server by adding the very first actual features to our mind map application.
https://www.packtpub.com/product/real-time-web-application-development-using-vert-x-2-0/9781782167952
CC-MAIN-2020-50
en
refinedweb
In this lab I'm writing a simple Portable Executable (PE) file header parser for 32bit binaries, using C++ as the programming language of choice. The lab was inspired by techniques such as reflective DLL injection and process hollowing, which both deal with various parts of PE files. The purpose of this lab is two-fold:

- Get a bit more comfortable with C++
- Get a better understanding of PE file headers

This lab is going to be light on text, as most of the relevant info is shown in the code section, but I will touch on the piece that confused me the most in this endeavour - parsing the DLL imports. Below is a graphic showing the end result - a program that parses a 32bit cmd.exe executable and spits out various pieces of information from various PE headers as well as DLL imports. The code is not able to parse 64bit executables correctly. This will not be fixed. The code was not meant to be clean and well organised - it was not the goal of this lab. The parser is not full-blown - it only goes through the main headers and DLL imports, so no exports, relocations or resources will be touched. For the most part of this lab, header parsing was going smoothly, until it was time to parse the DLL imports. The bit below is the final solution that worked for parsing out the DLL names and their functions. Parsing out imported DLLs and their functions requires a good number of offset calculations that initially may seem confusing, and this is the bit I will try to put down in words in these notes. So how do we go about extracting the DLL names the binary imports and the function names that each DLL exports? First off, we need to define some terms:

- Section - a PE header that defines the various sections contained in the PE. Some sections are .text - this is where the assembly code is stored - and .data, which contains global and static local variables, etc.
- File item - part of a PE file, for example a code section such as .text
- Relative Virtual Address (RVA) - the address of some file item in memory minus the base address of the image.
- Virtual Address (VA) - the virtual memory address of some file item in memory, without the image base address subtracted. For example, if we have a VA 0x01004000 and we know that the image base address is 0x01000000, the RVA is 0x01004000 - 0x01000000 = 0x00004000.
- Data Directories - part of the Optional Header; contains RVAs to various tables - exports, resources and, most importantly for this lab, the DLL imports table. It also contains the size of each table.

If we look at the notepad.exe binary using CFF Explorer (or any other similar program) and inspect the Data Directories under the Optional Header, we can see that the Import Table is located at RVA 0x0000A0A0, which according to CFF Explorer happens to live in the .text section. Indeed, if we look at the Section Headers, note the values Virtual Size and Virtual Address for the .text section, and check whether the Import Directory RVA of 0x0000A0A0 falls into the range of the .text section with this conditional statement in python:

0x000a0a0 > 0x00001000 and 0x000a0a0 < 0x00001000 + 0x0000a6fc

...we can confirm it definitely does fall into the .text section's range. In order to read out the DLL names that this binary imports, we first need to populate a data structure called PIMAGE_IMPORT_DESCRIPTOR with relevant data from the binary, but how do we find it? We need to translate the Import Directory RVA to the file offset - a place in the binary file where the DLL import information is stored.
The way this can be achieved is by using the following formula:

importDescriptorFileOffset = imageBase + text.RawOffset + (importDirectory.RVA - text.VA)

where imageBase is the start address of where the binary image is loaded, text.RawOffset is the Raw Address value from the .text section, text.VA is the Virtual Address value from the .text section, and importDirectory.RVA is the Import Directory RVA value from the Data Directories in the Optional Header. If you think about what was discussed so far and the above formula for a moment, you will realise that:

- imageBase in our case is 0, since the file is not loaded into memory and we are inspecting it on the disk
- the import table is located in the .text section of the binary. Since the binary is not loaded into memory, we need to know the file offset of the .text section in relation to the imageBase
- imageBase + text.RawOffset gives us the file offset to the .text section - we need it because, remember, the import table is inside the .text section
- since importDirectory.RVA, as mentioned earlier, lives in the .text section, importDirectory.RVA - text.VA gives us the offset of the import table relative to the start of the .text section
- we take the value of importDirectory.RVA - text.VA and add it to text.RawOffset, and we get the offset of the import table in the raw .text data

Below is some simple PowerShell to do the calculations for us and get the file offset that we can later use for filling up the PIMAGE_IMPORT_DESCRIPTOR structure:

PS C:\Users\mantvydas> $fileBase = 0x0
PS C:\Users\mantvydas> $textRawOffset = 0x00000400
PS C:\Users\mantvydas> $importDirectoryRVA = 0x0000A0A0
PS C:\Users\mantvydas> $textVA = 0x00001000
PS C:\Users\mantvydas>
PS C:\Users\mantvydas> # this points to the start of the .text section
PS C:\Users\mantvydas> $rawOffsetToTextSection = $fileBase + $textRawOffset
PS C:\Users\mantvydas> $importDescriptor = $rawOffsetToTextSection + ($importDirectoryRVA - $textVA)
PS C:\Users\mantvydas> # this is the file offset we are looking for, for PIMAGE_IMPORT_DESCRIPTOR
PS C:\Users\mantvydas> [System.Convert]::ToString($importDescriptor, 16)
94a0

If we check the file offset 0x95cc, we can see we are getting close to a list of imported DLL names - note that we can see VERSION.dll starting to show, which is a good start. Now, more importantly, note the value highlighted at offset 0x000094ac - 7C A2 00 00 (reads A2 7C due to little-endianness) - this is important. If we consider the layout of the PIMAGE_IMPORT_DESCRIPTOR structure, we can see that the fourth member of the structure (each member is a DWORD, so 4 bytes in size) is DWORD Name, which implies that 0x000094ac contains something that should be useful for us to get our first imported DLL's name. Indeed, if we check the Import Directory of notepad.exe in CFF Explorer, we see that 0xA27C is another RVA, pointing to the DLL name, which happens to be ADVAPI32.dll - and we will manually verify this in a moment. If we look closer at the ADVAPI32.dll import details and compare them with the hex dump of the binary at 0x94A0, we can see that the 0000a27c is surrounded by the same info we saw in CFF Explorer for the ADVAPI32.dll. Let's see if we can translate this Name RVA 0xA27C to the file offset using the technique we used earlier and finally get the first imported DLL name. This time the formula we need to use is:

dllNameFileOffset = imageBase + text.RawOffset + (nameRVA - text.VA)

where nameRVA is the Name RVA value for ADVAPI32.dll from the Import Directory and text.VA is the Virtual Address of the .text section.
Again, some PowerShell to do the RVA to file offset calculation for us:

# first dll name
$nameRVA = 0x0000A27C
$firstDLLname = $rawOffsetToTextSection + ($nameRVA - $textVA)
[System.Convert]::ToString($firstDLLname, 16)
967c

If we check offset 0x967c in our hex editor - success, we found our first DLL name. Now, in order to get a list of imported functions from the given DLL, we need to use a structure called PIMAGE_THUNK_DATA32, which is a union (accessed through its u1 member) of four DWORD fields: ForwarderString, Function, Ordinal, and AddressOfData. In order to utilise this structure, we again need to translate an RVA - that of the OriginalFirstThunk member of the PIMAGE_IMPORT_DESCRIPTOR structure, which in our case was pointing to 0x0000A28C. If we use the same formula for calculating RVAs as previously and use the below PowerShell to calculate the file offset, we get:

# first thunk
$firstThunk = $rawOffsetToTextSection + (0x0000A28C - $textVA)
[System.Convert]::ToString($firstThunk, 16)
968c

At that offset 968c+4 (+4 because, per the PIMAGE_THUNK_DATA32 structure layout, the second member is called Function, and this is the member we are interested in), we see a couple more values that look like RVAs - 0x0000a690 and 0x0000a6a2. If we do a final RVA to file offset conversion for the second RVA, 0x0000a6a2 (we could do the same for 0x0000a690):

$firstFunction = $rawOffsetToTextSection + (0x0000A6A2 - $textVA)
[System.Convert]::ToString($firstFunction, 16)
9aa2

Finally, with the file offset 0x9aa2, we get to see a second (because we chose the offset a6a2 rather than a690) imported function for the DLL ADVAPI32. Note that the function name actually starts 2 bytes further into the file, so the file offset 9aa2 becomes 9aa2 + 2 = 9aa4 - the 2 extra bytes are explained by the imported name living in an IMAGE_IMPORT_BY_NAME structure, whose first member is a 2-byte Hint field that precedes the name string. Cross-checking the above findings with CFF Explorer's Imported DLLs parser, we can see that our calculations were correct - note the OFTs column and the values a6a2 and a690 we referred to earlier. The below code shows how to loop through the file in its entirety to parse all the DLLs and all of their imported functions.
#include "stdafx.h"
#include "Windows.h"
#include <iostream>

int main(int argc, char* argv[]) {
    const int MAX_FILEPATH = 255;
    char fileName[MAX_FILEPATH] = {0};
    memcpy_s(&fileName, MAX_FILEPATH, argv[1], MAX_FILEPATH);

    HANDLE file = NULL;
    DWORD fileSize = NULL;
    DWORD bytesRead = NULL;
    LPVOID fileData = NULL;
    PIMAGE_DOS_HEADER dosHeader = {};
    PIMAGE_NT_HEADERS imageNTHeaders = {};
    PIMAGE_SECTION_HEADER sectionHeader = {};
    PIMAGE_SECTION_HEADER importSection = {};
    IMAGE_IMPORT_DESCRIPTOR* importDescriptor = {};
    PIMAGE_THUNK_DATA thunkData = {};
    DWORD thunk = NULL;
    DWORD rawOffset = NULL;

    // open file
    file = CreateFileA(fileName, GENERIC_ALL, FILE_SHARE_READ, NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (file == INVALID_HANDLE_VALUE) printf("Could not read file");

    // allocate heap
    fileSize = GetFileSize(file, NULL);
    fileData = HeapAlloc(GetProcessHeap(), 0, fileSize);

    // read file bytes to memory
    ReadFile(file, fileData, fileSize, &bytesRead, NULL);

    // IMAGE_DOS_HEADER
    dosHeader = (PIMAGE_DOS_HEADER)fileData;
    printf("******* DOS HEADER *******\n");
    printf("\t0x%x\t\tMagic number\n", dosHeader->e_magic);
    printf("\t0x%x\t\tBytes on last page of file\n", dosHeader->e_cblp);
    printf("\t0x%x\t\tPages in file\n", dosHeader->e_cp);
    printf("\t0x%x\t\tRelocations\n", dosHeader->e_crlc);
    printf("\t0x%x\t\tSize of header in paragraphs\n", dosHeader->e_cparhdr);
    printf("\t0x%x\t\tMinimum extra paragraphs needed\n", dosHeader->e_minalloc);
    printf("\t0x%x\t\tMaximum extra paragraphs needed\n", dosHeader->e_maxalloc);
    printf("\t0x%x\t\tInitial (relative) SS value\n", dosHeader->e_ss);
    printf("\t0x%x\t\tInitial SP value\n", dosHeader->e_sp);
    printf("\t0x%x\t\tChecksum\n", dosHeader->e_csum);
    printf("\t0x%x\t\tInitial IP value\n", dosHeader->e_ip);
    printf("\t0x%x\t\tInitial (relative) CS value\n", dosHeader->e_cs);
    printf("\t0x%x\t\tFile address of relocation table\n", dosHeader->e_lfarlc);
    printf("\t0x%x\t\tOverlay number\n", dosHeader->e_ovno);
    printf("\t0x%x\t\tOEM identifier (for e_oeminfo)\n", dosHeader->e_oemid);
    printf("\t0x%x\t\tOEM information; e_oemid specific\n", dosHeader->e_oeminfo);
    printf("\t0x%x\t\tFile address of new exe header\n", dosHeader->e_lfanew);

    // IMAGE_NT_HEADERS
    imageNTHeaders = (PIMAGE_NT_HEADERS)((DWORD)fileData + dosHeader->e_lfanew);
    printf("\n******* NT HEADERS *******\n");
    printf("\t%x\t\tSignature\n", imageNTHeaders->Signature);

    // FILE_HEADER
    printf("\n******* FILE HEADER *******\n");
    printf("\t0x%x\t\tMachine\n", imageNTHeaders->FileHeader.Machine);
    printf("\t0x%x\t\tNumber of Sections\n", imageNTHeaders->FileHeader.NumberOfSections);
    printf("\t0x%x\tTime Stamp\n", imageNTHeaders->FileHeader.TimeDateStamp);
    printf("\t0x%x\t\tPointer to Symbol Table\n", imageNTHeaders->FileHeader.PointerToSymbolTable);
    printf("\t0x%x\t\tNumber of Symbols\n", imageNTHeaders->FileHeader.NumberOfSymbols);
    printf("\t0x%x\t\tSize of Optional Header\n", imageNTHeaders->FileHeader.SizeOfOptionalHeader);
    printf("\t0x%x\t\tCharacteristics\n", imageNTHeaders->FileHeader.Characteristics);

    // OPTIONAL_HEADER
    printf("\n******* OPTIONAL HEADER *******\n");
    printf("\t0x%x\t\tMagic\n", imageNTHeaders->OptionalHeader.Magic);
    printf("\t0x%x\t\tMajor Linker Version\n", imageNTHeaders->OptionalHeader.MajorLinkerVersion);
    printf("\t0x%x\t\tMinor Linker Version\n", imageNTHeaders->OptionalHeader.MinorLinkerVersion);
    printf("\t0x%x\t\tSize Of Code\n", imageNTHeaders->OptionalHeader.SizeOfCode);
    printf("\t0x%x\t\tSize Of Initialized Data\n", imageNTHeaders->OptionalHeader.SizeOfInitializedData);
    printf("\t0x%x\t\tSize Of UnInitialized Data\n", imageNTHeaders->OptionalHeader.SizeOfUninitializedData);
    printf("\t0x%x\t\tAddress Of Entry Point (.text)\n", imageNTHeaders->OptionalHeader.AddressOfEntryPoint);
    printf("\t0x%x\t\tBase Of Code\n", imageNTHeaders->OptionalHeader.BaseOfCode);
    //printf("\t0x%x\t\tBase Of Data\n", imageNTHeaders->OptionalHeader.BaseOfData);
    printf("\t0x%x\t\tImage Base\n", imageNTHeaders->OptionalHeader.ImageBase);
    printf("\t0x%x\t\tSection Alignment\n", imageNTHeaders->OptionalHeader.SectionAlignment);
    printf("\t0x%x\t\tFile Alignment\n", imageNTHeaders->OptionalHeader.FileAlignment);
    printf("\t0x%x\t\tMajor Operating System Version\n", imageNTHeaders->OptionalHeader.MajorOperatingSystemVersion);
    printf("\t0x%x\t\tMinor Operating System Version\n", imageNTHeaders->OptionalHeader.MinorOperatingSystemVersion);
    printf("\t0x%x\t\tMajor Image Version\n", imageNTHeaders->OptionalHeader.MajorImageVersion);
    printf("\t0x%x\t\tMinor Image Version\n", imageNTHeaders->OptionalHeader.MinorImageVersion);
    printf("\t0x%x\t\tMajor Subsystem Version\n", imageNTHeaders->OptionalHeader.MajorSubsystemVersion);
    printf("\t0x%x\t\tMinor Subsystem Version\n", imageNTHeaders->OptionalHeader.MinorSubsystemVersion);
    printf("\t0x%x\t\tWin32 Version Value\n", imageNTHeaders->OptionalHeader.Win32VersionValue);
    printf("\t0x%x\t\tSize Of Image\n", imageNTHeaders->OptionalHeader.SizeOfImage);
    printf("\t0x%x\t\tSize Of Headers\n", imageNTHeaders->OptionalHeader.SizeOfHeaders);
    printf("\t0x%x\t\tCheckSum\n", imageNTHeaders->OptionalHeader.CheckSum);
    printf("\t0x%x\t\tSubsystem\n", imageNTHeaders->OptionalHeader.Subsystem);
    printf("\t0x%x\t\tDllCharacteristics\n", imageNTHeaders->OptionalHeader.DllCharacteristics);
    printf("\t0x%x\t\tSize Of Stack Reserve\n", imageNTHeaders->OptionalHeader.SizeOfStackReserve);
    printf("\t0x%x\t\tSize Of Stack Commit\n", imageNTHeaders->OptionalHeader.SizeOfStackCommit);
    printf("\t0x%x\t\tSize Of Heap Reserve\n", imageNTHeaders->OptionalHeader.SizeOfHeapReserve);
    printf("\t0x%x\t\tSize Of Heap Commit\n", imageNTHeaders->OptionalHeader.SizeOfHeapCommit);
    printf("\t0x%x\t\tLoader Flags\n", imageNTHeaders->OptionalHeader.LoaderFlags);
    printf("\t0x%x\t\tNumber Of Rva And Sizes\n", imageNTHeaders->OptionalHeader.NumberOfRvaAndSizes);

    // DATA_DIRECTORIES
    printf("\n******* DATA DIRECTORIES *******\n");
    printf("\tExport Directory Address: 0x%x; Size: 0x%x\n", imageNTHeaders->OptionalHeader.DataDirectory[0].VirtualAddress, imageNTHeaders->OptionalHeader.DataDirectory[0].Size);
    printf("\tImport Directory Address: 0x%x; Size: 0x%x\n", imageNTHeaders->OptionalHeader.DataDirectory[1].VirtualAddress, imageNTHeaders->OptionalHeader.DataDirectory[1].Size);

    // SECTION_HEADERS
    printf("\n******* SECTION HEADERS *******\n");
    // get offset to first section header
    DWORD sectionLocation = (DWORD)imageNTHeaders + sizeof(DWORD) + (DWORD)(sizeof(IMAGE_FILE_HEADER)) + (DWORD)imageNTHeaders->FileHeader.SizeOfOptionalHeader;
    DWORD sectionSize = (DWORD)sizeof(IMAGE_SECTION_HEADER);

    // get offset to the import directory RVA
    DWORD importDirectoryRVA = imageNTHeaders->OptionalHeader.DataDirectory[IMAGE_DIRECTORY_ENTRY_IMPORT].VirtualAddress;

    // print section data
    for (int i = 0; i < imageNTHeaders->FileHeader.NumberOfSections; i++) {
        sectionHeader = (PIMAGE_SECTION_HEADER)sectionLocation;
        printf("\t%s\n", sectionHeader->Name);
        printf("\t\t0x%x\t\tVirtual Size\n", sectionHeader->Misc.VirtualSize);
        printf("\t\t0x%x\t\tVirtual Address\n", sectionHeader->VirtualAddress);
        printf("\t\t0x%x\t\tSize Of Raw Data\n", sectionHeader->SizeOfRawData);
        printf("\t\t0x%x\t\tPointer To Raw Data\n", sectionHeader->PointerToRawData);
        printf("\t\t0x%x\t\tPointer To Relocations\n", sectionHeader->PointerToRelocations);
        printf("\t\t0x%x\t\tPointer To Line Numbers\n", sectionHeader->PointerToLinenumbers);
        printf("\t\t0x%x\t\tNumber Of Relocations\n", sectionHeader->NumberOfRelocations);
        printf("\t\t0x%x\t\tNumber Of Line Numbers\n", sectionHeader->NumberOfLinenumbers);
        printf("\t\t0x%x\tCharacteristics\n", sectionHeader->Characteristics);

        // save section that contains import directory table
        if (importDirectoryRVA >= sectionHeader->VirtualAddress && importDirectoryRVA < sectionHeader->VirtualAddress + sectionHeader->Misc.VirtualSize) {
            importSection = sectionHeader;
        }
        sectionLocation += sectionSize;
    }

    // get file offset to import table
    rawOffset = (DWORD)fileData + importSection->PointerToRawData;

    // get pointer to import descriptor's file offset. Note that the formula for calculating file offset is: imageBaseAddress + pointerToRawDataOfTheSectionContainingRVAofInterest + (RVAofInterest - SectionContainingRVAofInterest.VirtualAddress)
    importDescriptor = (PIMAGE_IMPORT_DESCRIPTOR)(rawOffset + (imageNTHeaders->OptionalHeader.DataDirectory[IMAGE_DIRECTORY_ENTRY_IMPORT].VirtualAddress - importSection->VirtualAddress));

    printf("\n******* DLL IMPORTS *******\n");
    for (; importDescriptor->Name != 0; importDescriptor++) {
        // imported dll modules
        printf("\t%s\n", rawOffset + (importDescriptor->Name - importSection->VirtualAddress));
        thunk = importDescriptor->OriginalFirstThunk == 0 ? importDescriptor->FirstThunk : importDescriptor->OriginalFirstThunk;
        thunkData = (PIMAGE_THUNK_DATA)(rawOffset + (thunk - importSection->VirtualAddress));

        // dll exported functions
        for (; thunkData->u1.AddressOfData != 0; thunkData++) {
            // a cheap and probably non-reliable way of checking if the function is imported via its ordinal number ¯\_(ツ)_/¯
            if (thunkData->u1.AddressOfData > 0x80000000) {
                // show lower bits of the value to get the ordinal ¯\_(ツ)_/¯
                printf("\t\tOrdinal: %x\n", (WORD)thunkData->u1.AddressOfData);
            } else {
                printf("\t\t%s\n", (rawOffset + (thunkData->u1.AddressOfData - importSection->VirtualAddress + 2)));
            }
        }
    }
    return 0;
}
https://www.ired.team/miscellaneous-reversing-forensics/pe-file-header-parser-in-c++
CC-MAIN-2020-50
en
refinedweb
A small example about sampling and fitting.

#include <vcg/complex/complex.h>
#include <vcg/complex/algorithms/create/platonic.h>
#include <vcg/complex/algorithms/point_sampling.h>
#include <wrap/io_trimesh/import_off.h>
#include <vcg/space/fitting3.h>

Given a mesh (an icosahedron), for each face we get a few random samples over it, and then we recover the plane that best fits them.

Definition in file trimesh_fitting.cpp.
http://vcglib.net/trimesh__fitting_8cpp.html
CC-MAIN-2020-50
en
refinedweb
The Swift programming language, originally developed by Apple and released in 2014, has just reached version 5.2. Swift 5.2, available in the Xcode 11.4 Beta, is a release designed to include significant improvements in quality and performance. Following the Swift Evolution process, this version brings "callAsFunction", subscripts with default arguments, key path expressions as functions, a new diagnostic architecture, and more. Swift 5.2's callAsFunction(), or "callable values of user-defined nominal types", introduces statically callable values to Swift. Callable values can be called using function call syntax, since they are values that define function-like behavior. This feature supports argument labels and parameter types, throws and rethrows, and is not constrained to primary type declarations. Furthermore, it is possible to define multiple callAsFunction methods on a single type, and Swift will handle which one to call, similar to simple overloading.

struct Adder {
    var base: Int
    func callAsFunction(_ x: Int) -> Int {
        return base + x
    }
}

let add3 = Adder(base: 3)
add3(10) // returns 13, same as add3.callAsFunction(10)

Subscripts are shortcuts for accessing member elements of a collection, sequence or list, and can be defined in classes, structures, and enumerations. Developers can use subscripts to set and retrieve values by index without needing to write one method for setting and another for retrieval. In Swift 5.2, it is now possible to add subscripts with default arguments for any type of parameter. The following snippet of code shows a BucketList structure that returns a default value when someone tries to read an index out of bounds:

struct BucketList {
    var items: [String]

    subscript(index: Int, default: String = "your list is over") -> String {
        if index >= 0 && index < items.count {
            return items[index]
        } else {
            return `default`
        }
    }
}

let bucketList = BucketList(items: ["travel to Italy", "have children", "watch super bowl"])
print(bucketList[0])
print(bucketList[4])

This code will print "travel to Italy" and then "your list is over", because there is no value at index 4. Key path expressions as functions introduces the ability to use the key path expression \Root.value wherever functions of (Root) -> Value are allowed. Consider the following User struct:

struct User {
    let email: String
    let isActive: Bool
}

Suppose we have already created an array of users. Before the key path feature, we could gather an array of emails by applying users.map { $0.email }, or similarly users.filter { $0.isActive } to get an array of active users. With key paths, it is now possible to write users.map(\.email) to gather the array of emails, which is equivalent to users.map { $0[keyPath: \User.email] }. Swift 5.2 also brings a new diagnostic architecture designed to improve the quality and precision of error messages when a developer makes a coding mistake. Let's explore some examples of less accurate diagnostics before the new diagnostic architecture, and how they read in Swift 5.2:

Invalid Optional Unwrap

struct S<T> {
    init(_: [T]) {}
}
var i = 42
_ = S([i!])

Previously, this resulted in the following diagnostic: error: type of expression is ambiguous without more context. Now this is diagnosed as:

error: cannot force unwrap value of non-optional type 'Int'
_ = S<Int>([i!])
           ~^
Argument-to-Parameter Conversion Mismatch

import SwiftUI

struct Foo: View {
    var body: some View {
        ForEach(1...5) {
            Circle().rotation(.degrees($0))
        }
    }
}

Previously, this resulted in the following diagnostic: error: Cannot convert value of type '(Double) -> RotatedShape<Circle>' to expected argument type '() -> _'. Now this is diagnosed as:

error: cannot convert value of type 'Int' to expected argument type 'Double'
Circle().rotation(.degrees($0))
                           ^
                           Double( )

Other features not directly related to Swift 5.2, but which are worth taking note of, include:

- Xcode 11.4 beta supports building and distributing macOS apps as a universal purchase. Universal purchase is enabled by default for new Mac Catalyst apps created in Xcode 11.4
- Build settings have a new evaluation operator, default, allowing developers to specify the default value of a build setting if it evaluates to nil in the context of the evaluation, such as $(SETTING:default=something)
- Remote Swift packages with tools version 5.2 and above no longer resolve package dependencies that are only used in their test targets, improving performance and reducing the risk of dependency version conflicts

The complete list of features, bug fixes, and known issues can be found in the release notes of the Xcode 11.4 beta.
https://www.infoq.com/news/2020/02/swift-5-2/?itm_source=presentations_about_Swift&itm_medium=link&itm_campaign=Swift
CC-MAIN-2020-50
en
refinedweb
react-measurements

Programming language: JavaScript
License: MIT License

react-measurements alternatives and similar libraries (based on the "UI Components" category):

- react-beautiful-dnd - Beautiful and accessible drag and drop for lists with React
- sortablejs - Sortable
- react-select - A Select control built with and for React JS
- react-virtualized - React components for efficiently rendering large lists and tabular data
- draft-js - A React framework for building text editors
- Plyr - A simple HTML5, YouTube and Vimeo player
- recharts - Redefined chart library built with React and D3
- react-dnd - Drag and Drop for React
- react-table
- react-dates - An easily internationalizable, mobile-friendly datepicker library for the web
- sweetalert2
- react-vis
- react-toastify
- material-table
- react-bootstrap-table - It's a react table for bootstrap
- react-joyride - Create walkthroughs and guided tours for your ReactJS apps. Now with standalone tooltips!
- react-konva - React Konva is a JavaScript library for drawing complex canvas graphics with bindings to the Konva Framework
- typography - A powerful toolkit for building websites with beautiful typography
- react-custom-scrollbars - React scrollbars component
- react-infinite - A browser-ready efficient scrolling container based on UITableView

README

react-measurements - A React component for measuring & annotating images.

Usage

import React from "react";
import { MeasurementLayer, calculateDistance, calculateArea } from "react-measurements";

class App extends React.Component {
  state = { measurements: [] };

  render() {
    return (
      <div
        style={{
          position: "absolute",
          width: "300px",
          height: "300px",
          backgroundColor: "#1a1a1a",
          fontFamily: "sans-serif"
        }}
      >
        <MeasurementLayer
          measurements={this.state.measurements}
          widthInPx={300}
          heightInPx={300}
          onChange={this.onChange}
          measureLine={this.measureLine}
          measureCircle={this.measureCircle}
        />
      </div>
    );
  }

  onChange = measurements => this.setState({ ...this.state, measurements });
  measureLine = line => Math.round(calculateDistance(line, 100, 100)) + " mm";
  measureCircle = circle => Math.round(calculateArea(circle, 100, 100)) + " mm²";
}

Scope

The component is currently read-only on mobile. A mouse is required to create and edit measurements.

License

MIT
https://react.libhunt.com/react-measurements-alternatives
CC-MAIN-2020-50
en
refinedweb
In this chapter, we will set up our development environment and discuss how we can leverage SpringSource Tool Suite (STS) to its maximum. Although any popular Java development IDE such as Eclipse, IntelliJ IDEA, NetBeans, and others can be used for developing Spring Integration solutions, Pivotal, the company spearheading Spring Integration, recommends that you use STS, which is an Eclipse-based IDE. STS comes with many off-the-shelf plugins, visual editors, and other features which ease the development of Spring-powered enterprise applications. The look and feel of the IDE is very similar to Eclipse. Install STS by following these steps:

1. JDK 1.6 or above is a prerequisite; download and install it.
2. Set the JAVA_HOME property as explained in the documentation.
3. Download STS. The downloaded file is in ZIP format. Extract it to the preferred folder and it's all set.
4. Go to <installation-directory>\sts-bundle\sts-3.6.1.RELEASE. The STS.exe file is the executable for launching the IDE.
5. This step is optional, but can help the editor run more efficiently: change the memory allocation parameter. Locate STS.ini (in the same folder as STS.exe) and change the value of Xmx. For 2 GB, I've put it as Xmx2048m.

The following steps will help you in creating your first project:

1. Create a Spring Integration project by navigating to File | Spring Project, as shown in the following screenshot.
2. Under the templates section, select Spring Integration Project - Simple.
3. Provide a project name, for example, sisimple, as shown in the following screenshot.
4. Fill in the information required to create a Maven-based project, as shown in this screenshot.
5. Click on Finish; this will create a project with the name that was provided by us (sisimple), as shown in this screenshot.

This project is as simple as it can be. Let's take a quick look at the generated Java classes in the following points (a hedged sketch of the first two items follows this list):

- Main.java: This file is located at the path /sisimple/src/main/java/com/chandan/example/si/. It has the main method and will be used to run this sample. Right-click on this file from the package explorer and click on Run As | Java Application - this will start the program. This class has the code to bootstrap the Spring Integration configuration files and load the components defined in them. Additionally, it converts user input to upper case.
- StringConversionService.java: This file is located at the path /sisimple/src/main/java/com/chandan/example/si/service/. This is the service interface that is used to convert user input to upper case.
- spring-integration-context.xml: This file is located at the path /sisimple/src/main/resources/META-INF/spring/integration/. It is the Spring Integration configuration file. It contains the XML-based declaration of Spring Integration components.
- log4j.xml: This file is located at the path /sisimple/src/main/resources/. It is the Log4j configuration file. It can be edited to control the log level, appenders, and other logging-related aspects.
- StringConversionServiceTest.java: This file is located at the path /sisimple/src/test/java/com/chandan/example/si/. This is the test file for StringConversionService. It will be used to run tests against the service classes.
- pom.xml: This is the file used for Maven dependency management, located in /sisimple/. It has entries for all the dependencies used by the project.
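To make the first two generated files more concrete, here is a minimal sketch of what they might look like - the method name, bean lookup, and configuration path are illustrative assumptions, not the exact contents of the STS template:

// Hypothetical sketch of the generated service interface.
package com.chandan.example.si.service;

public interface StringConversionService {
    String convertToUpperCase(String input);
}

// Hypothetical sketch of the bootstrap class.
package com.chandan.example.si;

import org.springframework.context.support.ClassPathXmlApplicationContext;
import com.chandan.example.si.service.StringConversionService;

public class Main {
    public static void main(String[] args) {
        // Load the Spring Integration configuration and the components defined in it.
        ClassPathXmlApplicationContext context = new ClassPathXmlApplicationContext(
                "META-INF/spring/integration/spring-integration-context.xml");
        StringConversionService service = context.getBean(StringConversionService.class);
        // Convert user input to upper case, as the template's sample does.
        System.out.println(service.convertToUpperCase("hello spring integration"));
        context.close();
    }
}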
It would be a bit heavy and premature to explain each of the components in these classes and configuration files without having built up some theoretical concepts - we will discuss each of the elements in detail as we move ahead in the chapters. STS provides visual ways to add different namespaces. Locate spring-integration-context.xml under /sisimple/src/main/resources/META-INF/spring/integration/ and open it. This is the default Spring configuration file. Click on the Namespaces tab to manage the different namespaces of Spring Integration. The following screenshot shows the imported namespaces for this sample project. In the same editor, clicking on the Integration-graph tab will open a visual editor, which can be used to add, modify, or delete endpoints, channels, and other components of Spring Integration. The following screenshot contains the integration graph for our sample project. Let's have a quick look at the generated Maven POM - overall, there are three dependencies: only one for Spring Integration, and the other ones for JUnit and log4j, as shown in the following screenshot. The Spring Integration Scala DSL is still in its very early stages and is an incubation project. The Scala DSL should not be confused with other EIP implementations offered in Scala - rather, it is built on top of Spring Integration and provides DSL-based configuration and flow management. Note: Check out the official Spring Integration Scala DSL blog and its GitHub page. In this chapter, you learned how to set up your IDE and created a basic project. We also tried our hands at the visual editor of STS and covered a quick introduction to the upcoming Scala DSL for Spring Integration. We will leverage this knowledge to build a compelling Spring Integration application using STS throughout the rest of the chapters. In the next chapter, we will cover how to ingest messages into the application and then how to process them.
https://www.packtpub.com/product/spring-integration-essentials/9781783989164
CC-MAIN-2020-50
en
refinedweb
The section on enumerations has eight rules. Since C++11, we have scoped enumerations, which overcome a lot of the drawbacks of classical enumerations. Enumerations are sets of integer values, which behave like a type. Here is the summary of the rules: Enum.1: Prefer enumerations over macros; Enum.2: Use enumerations to represent sets of related named constants; Enum.3: Prefer class enums over plain enums; Enum.4: Define operations on enumerations for safe and simple use; Enum.5: Don't use ALL_CAPS for enumerators; Enum.6: Avoid unnamed enumerations; Enum.7: Specify the underlying type of an enumeration only when necessary; Enum.8: Specify enumerator values only when necessary. As I mentioned in the opening of this post: classical enumerations have a lot of drawbacks. Let me explicitly compare classical (unscoped) enumerations and scoped enumerations (sometimes called strongly typed enumerations), because this important comparison is not explicitly described in the rules. Here is a classical enumeration: enum Colour{ red, blue, green }; Here are the drawbacks of the classical enumerations: the enumerators implicitly convert to int, the enumerators introduce their names into the enclosing scope, and the underlying type of the enumeration cannot be specified. By using the keyword class or struct, the classical enumeration becomes a scoped enumeration (enum class): enum class ColourScoped{ red, blue, green }; Now, you have to use the scope operator for accessing the enumerators: ColourScoped::red. ColourScoped::red will not implicitly convert to int and will, therefore, not pollute the global namespace. Additionally, the underlying type is int by default. After providing the background information, we can jump directly into the rules. Macros don't respect a scope and have no type. This means a macro can silently override a previously set macro, and a macro cannot create a new kind of type. The enumerators of a scoped enum (enum class) will not automatically convert to int. You have to access them with the scope operator. // scopedEnum.cpp #include <iostream> enum class ColourScoped{ red, blue, green }; void useMe(ColourScoped color){ switch(color){ case ColourScoped::red: std::cout << "ColourScoped::red" << std::endl; break; case ColourScoped::blue: std::cout << "ColourScoped::blue" << std::endl; break; case ColourScoped::green: std::cout << "ColourScoped::green" << std::endl; break; } } int main(){ std::cout << static_cast<int>(ColourScoped::red) << std::endl; // 0 std::cout << static_cast<int>(ColourScoped::red) << std::endl; // 0 std::cout << std::endl; ColourScoped colour{ColourScoped::red}; useMe(colour); // ColourScoped::red } The rules define an enumeration Day which supports the increment operation. enum Day { mon, tue, wed, thu, fri, sat, sun }; Day& operator++(Day& d) { return d = (d == Day::sun) ? Day::mon : static_cast<Day>(static_cast<int>(d)+1); } Day today = Day::sat; Day tomorrow = ++today; The static_cast is necessary in this example because applying the increment operator inside the increment operator would cause an infinite recursion: Day& operator++(Day& d) { return d = (d == Day::sun) ? Day::mon : Day{++d}; // error } If you use ALL_CAPS for enumerators, you may get a conflict with macros because they are typically written in ALL_CAPS. #define RED 0xFF0000 enum class ColourScoped{ RED }; // error If you can't find a name for the enumerations, the enumerations may not be related. In this case, you should use a constexpr value. // bad enum { red = 0xFF0000, scale = 4, is_signed = 1 }; // good constexpr int red = 0xFF0000; constexpr short scale = 4; constexpr bool is_signed = true; Since C++11, you can specify the underlying type of the enumeration and save memory. By default, the type of a scoped enum is int; therefore, you can forward declare an enum. // typeEnum.cpp #include <iostream> enum class Colour1{ red, blue, green }; enum struct Colour2: char { red, blue, green }; int main(){ std::cout << sizeof(Colour1) << std::endl; // 4 std::cout << sizeof(Colour2) << std::endl; // 1 } If you specify enumerator values explicitly, it may happen that you set a value twice. 
The following enumeration Col2 has this issue. enum class Col1 { red, yellow, blue }; enum class Col2 { red = 1, yellow = 2, blue = 2 }; // typo enum class Month { jan = 1, feb, mar, apr, may, jun, jul, august, sep, oct, nov, dec }; // starting with 1 is conventional I made it relatively short in this post. The meta-rule that you should keep in mind is: use scoped enums. The next section of the C++ Core Guidelines deals with roughly 35 rules for resource management. This means that in the next post we dive right into the heart of C++.
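To illustrate the forward-declaration point from the underlying-type rule above, here is a minimal sketch (my example, not one from the guidelines):

enum class Colour;                       // fine: a scoped enum defaults to int, so its size is known
enum OldColour: short;                   // an unscoped enum can only be forward declared with an explicit underlying type
void paint(Colour c);                    // usable before the enumerators are defined
enum class Colour { red, blue, green };  // the definition can follow later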
http://www.modernescpp.com/index.php/c-core-guidelines-rules-for-enumerations
CC-MAIN-2020-50
en
refinedweb
HOME HELP PREFERENCES SearchSubjectsFromDates > I was running version 2.38, so I upgraded to 2.39, but I'm still > getting the same results. Did you remove the archives and re-import with the new version of greenstone? If the doc.xml files are incorrect, then this is the result of the import process, and the build process will then use those archives. The HTMLPlug in gsdl 2.38 and earlier had this problem in some situations, while the version in 2.39 should always get entities right. I tried importing your test document in 2.39 (after manually putting the ä etc umlaut entities back in) and it all imported and built and displayed correctly. John McPherson
http://www.nzdl.org/gsdlmod?e=q-00000-00---off-0gsarch--00-0----0-10-0---0---0direct-10---4-----dfr--0-0l--11-en-50---20-help-John+R.+McPherson--00-0-1-00-0--4----0-0-11-10-0utfZz-8-00&a=d&c=gsarch&srp=0&srn=0&cl=search&d=20030606223114-GA3476-wesson-cs-waikato-ac-nz
CC-MAIN-2020-50
en
refinedweb
In-Depth Avoid common pitfalls with these step-by-step instructions for writing your first device application. Now that you're familiar with what's available and what's required in Microsoft's device development space, it's time to build an application. Your first application will be a device application that, like a Windows application in the desktop framework, will have a UI component. You will use Windows Mobile 6 to create this application. Other types of projects for Smart Device Development include a console application, a control, and a class library. You'll explore these project types in later installments of this column. By design, Pocket PCs and Smartphones are disconnected; that is, they don't always have a direct connection to a network. As you begin to design and write your device application, you must plan for a new paradigm: connectivity. The functionality of your application will include the ability to check if a connection exists. Getting Started Open Visual Studio, and create a new project. Next, add a button control and a label. You can use Figure 1 as a guide. First, you must check if you have a connection to the internet. You can use several methods to perform this check. For this application, use objects from the System.Net namespace. Call the code shown in Listing 1 from the button click event. This method takes no parameters; the required fields are populated in it. As a test site, attempt a connection to the Microsoft Web site where credentials are not required. Notice in Listing 1 that if you get back a valid HttpWebResponse and it has a Status of OK, then True is returned from your method. Any other return from your call to GetResponse will be False. Here is the code that you must add to the event handler for your button click: if (CheckConnectionState()) lblConnectedState.Text = "Yea, we can see out!"; else lblConnectedState.Text = "No Connection found."; Next, run your application using the Windows Mobile 6 Professional Emulator, and tap on your button. Unless you've been playing with the emulator, you should see your label change to "No connection found" (see Figure 2). By default, your emulator is not connected to a network. You need to "cradle" the emulator to get a connection, just as you would your Windows-based phone. To cradle the emulator, select "Device Emulator Manager..." from the Tools menu in Visual Studio. When you open this dialog, your list will show a little green arrow in the circle next to the Professional Emulator (see Figure 3). This icon tells you that the device is running and connected. Right-click the Windows Mobile 6 Professional Emulator, and select Cradle from the Context menu. Your emulator will connect to your computer through ActiveSync. You may encounter two problems when connecting your emulator. First, ActiveSync supports only one connected device at a time. If you have a Smartphone or another Pocket PC device connected to your system, then you will need to remove it before the emulator can connect. Second, you may need to change a setting in ActiveSync on your desktop. Open ActiveSync, and select "Connection Settings..." from the File menu. For the emulator to connect, ActiveSync must connect to your emulator through direct memory access (DMA). Check the box next to "Allow connections to one of the following:" and select DMA in the combo box. Figure 4 shows your Device Emulator Manager once you have cradled the emulator. After you've completed this, run your application and tap the Check Connection button. 
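(For reference, the connection check that the article calls Listing 1 might look roughly like this. This is a sketch reconstructed from the description above: an HttpWebRequest to the Microsoft site, a 20-second timeout, and True only on an OK status. It is not the magazine's exact listing.)

// assumes: using System.Net;
private bool CheckConnectionState()
{
    try
    {
        // Request a page that requires no credentials
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://www.microsoft.com");
        request.Timeout = 20000; // 20-second timeout
        using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
        {
            return response.StatusCode == HttpStatusCode.OK;
        }
    }
    catch (WebException)
    {
        // Any failure to get a response means no connection
        return false;
    }
}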
You should receive a message saying that you have a network connection. But what about that delay while the application is checking the connection? Is it working? Is it broken? You tap the button and nothing happens for up to 20 seconds (that's the timeout that you used in the code). You need a UI clue to tell your users that an action is occurring (users love that kind of thing). In the button event, you can set the current cursor to WaitCursor, and then reset it before you leave the event. But what if the event raises an exception? You may leave the method without resetting the cursor to Normal. To prevent this, create the new class shown in Listing 2. Listing 2 uses the IDisposable interface to allow you to write the following code for the button event: using (CursorWait cw = new CursorWait()) { if (CheckConnectionState()) lblConnectedState.Text = "Yea, we can see out!"; else lblConnectedState.Text = "No Connection found."; } Finally, if your CheckConnectionState method raises an exception, the using statement will make sure that the Dispose method of the CursorWait class is called, thereby resetting the cursor to its default state. You've written your first application; congratulations! In addition, you have learned some tricks for resolving problems that tend to frustrate developers who are moving from desktop to device.
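For completeness, the CursorWait class referenced as Listing 2 might look roughly like this; again a sketch inferred from the description, not the article's exact listing:

// assumes: using System; using System.Windows.Forms;
public sealed class CursorWait : IDisposable
{
    public CursorWait()
    {
        // Show the wait cursor as soon as the object is created
        Cursor.Current = Cursors.WaitCursor;
    }

    public void Dispose()
    {
        // Restore the default cursor, even if an exception was thrown in the using block
        Cursor.Current = Cursors.Default;
    }
}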
https://visualstudiomagazine.com/articles/2007/06/01/build-a-simple-device-application.aspx
CC-MAIN-2020-50
en
refinedweb
Posted 16 Sep 2010 <%@ Register TagPrefix="radE" Namespace="Telerik.SharePoint.FieldEditor" Assembly="RadEditorSharePoint, Version=4.5.6.0, Culture=neutral, PublicKeyToken=1f131a624888eeed" %> <%@ Register tagprefix="SharePoint1" namespace="Telerik.SharePoint" assembly="RadEditorSharePoint, Version=4.5.6.0, Culture=neutral, PublicKeyToken=1f131a624888eeed" %> <SharePoint1:MOSSRadEditor></SharePoint1:MOSSRadEditor> but SharePoint Designer shows the following error: Error Creating Control - rftDefaultValue: Cannot instantiate type 'MOSSRadEditor' because there is no public parameterless constructor. How do I get this MOSSRadEditor control working on a custom aspx page? Please help. Thank you all. Hi stanmir, thanks for the quick reply. I am not using the lite edition; I am using the full-featured trial version of radeditormoss.wsp 4.5.6.0, and I have been looking for RadEditor for MOSS 5.x in the account page but am not able to find it. Is it possible that you could give me a link to the download for it? I appreciate your help. Thanks, Christ
http://www.telerik.com/forums/how-to-use-radeditormoss-in-custom-aspx-page
CC-MAIN-2017-17
en
refinedweb
{ "terms":[ { "term": "aufs", "def": " aufs (advanced multi layered unification filesystem) is a Linux filesystem that\nDocker supports as a storage backend. It implements the\nunion mount for Linux file systems.\n" }, { "term": "base image", "def": " An image that has no parent is a base image.\n" }, { "term": "boot2docker", "def": " boot2docker is a lightweight Linux distribution made\nspecifically to run Docker containers. The boot2docker management tool for Mac and Windows was deprecated and replaced by docker-machine which you can install with the Docker Toolbox. btrfs (B-tree file system) is a Linux filesystem that Docker\nsupports as a storage backend. It is a copy-on-write\nfilesystem.\n" }, { "term": "build", "def": " build is the process of building Docker images using a Dockerfile.\nThe build uses a Dockerfile and a “context”. The context is the set of files in the\ndirectory in which the image is built.\n" }, { "term": "cgroups", "def": " cgroups is a Linux kernel feature that limits, accounts for, and isolates\nthe resource usage (CPU, memory, disk I/O, network, etc.) of a collection\nof processes. Docker relies on cgroups to control and isolate resource limits.\n\n Also known as : control groups\n" }, { "term": "Compose", "def": " Compose is a tool for defining and\nrunning complex applications with Docker. With compose, you define a\nmulti-container application in a single file, then spin your\napplication up in a single command which does everything that needs to\nbe done to get it running.\n\n Also known as : docker-compose, fig\n" }, { "term": "copy-on-write", "def": " Docker uses a\ncopy-on-write\ntechnique and a union file system for both images and\ncontainers to optimize resources and speed performance. Multiple copies of an\nentity share the same instance and each one makes only specific changes to its\nunique layer.\n\n Multiple containers can share access to the same image, and make\ncontainer-specific changes on a writable layer which is deleted when\nthe container is removed. This speeds up container start times and performance.\n\n Images are essentially layers of filesystems typically predicated on a base\nimage under a writable layer, and built up with layers of differences from the\nbase image. This minimizes the footprint of the image and enables shared\ndevelopment.\n\n For more about copy-on-write in the context of Docker, see Understand images,\ncontainers, and storage\ndrivers.\n" }, { "term": "container", "def": " A container is a runtime instance of a docker image.\n\n A Docker container consists of\n\n The concept is borrowed from Shipping Containers, which define a standard to ship\ngoods globally. Docker defines a standard to ship software.\n" }, { "term": "Docker", "def": " The term Docker can refer to\n\n The Docker Datacenter is subscription-based service enabling enterprises to leverage a\nplatform built by Docker, for Docker. The Docker native tools are integrated to create\nan on premises CaaS platform, allowing organizations to save time and seamlessly take\napplications built in dev to production.\n" }, { "term": "Docker for Mac", "def": " Docker for Mac is an easy-to-install,\nlightweight Docker development environment designed specifically for the Mac. A\nnative Mac application, Docker for Mac uses the macOS Hypervisor framework,\nnetworking, and filesystem. It’s the best solution if you want to build, debug,\ntest, package, and ship Dockerized applications on a Mac. 
Docker for Mac\nsupersedes Docker Toolbox as state-of-the-art Docker on macOS.\n" }, { "term": "Docker for Windows", "def": " Docker for Windows is an\neasy-to-install, lightweight Docker development environment designed\nspecifically for Windows 10 systems that support Microsoft Hyper-V\n(Professional, Enterprise and Education). Docker for Windows uses Hyper-V for\nvirtualization, and runs as a native Windows app. It works with Windows Server\n2016, and gives you the ability to set up and run Windows containers as well as\nthe standard Linux containers, with an option to switch between the two. Docker\nfor Windows is the best solution if you want to build, debug, test, package, and\nship Dockerized applications from Windows machines. Docker for Windows\nsupersedes Docker Toolbox as state-of-the-art Docker on Windows.\n" }, { "term": "Docker Hub", "def": " The Docker Hub is a centralized resource for working with\nDocker and its components. It provides the following services:\n" }, { "term": "Dockerfile", "def": " A Dockerfile is a text document that contains all the commands you would\nnormally execute manually in order to build a Docker image. Docker can\nbuild images automatically by reading the instructions from a Dockerfile.\n" }, { "term": "ENTRYPOINT", "def": " In a Dockerfile, an ENTRYPOINT is an optional definition for the first part\nof the command to be run. If you want your Dockerfile to be runnable without\nspecifying additional arguments to the docker run command, you must specify\neither ENTRYPOINT, CMD, or both. If ENTRYPOINT is specified, it is set to a single command. Most official\nDocker images have an ENTRYPOINT of /bin/sh or /bin/bash. Even if you\ndo not specify ENTRYPOINT, you may inherit it from the base image that you\nspecify using the FROM keyword in your Dockerfile. To override the\n ENTRYPOINT at runtime, you can use --entrypoint. The following example\noverrides the entrypoint to be /bin/ls and sets the CMD to -l /tmp. \n\n $ docker run --entrypoint=/bin/ls ubuntu -l /tmp\n \n\n CMD is appended to the ENTRYPOINT. The CMD can be any arbitrary string\nthat is valid in terms of the ENTRYPOINT, which allows you to pass\nmultiple commands or flags at once. To override the CMD at runtime, just\nadd it after the container name or ID. In the following example, the CMD\nis overridden to be /bin/ls -l /tmp. \n\n $ docker run ubuntu /bin/ls -l /tmp\n In practice, ENTRYPOINT is not often overridden. However, specifying the\n ENTRYPOINT can make your images more flexible and easier to reuse.\n" }, { "term": "filesystem", "def": " A file system is the method an operating system uses to name files\nand assign them locations for efficient storage and retrieval.\n\n Examples :\n" }, { "term": "image", "def": " Docker images are the basis of containers. An Image is an\nordered collection of root filesystem changes and the corresponding\nexecution parameters for use within a container runtime. An image typically\ncontains a union of layered filesystems stacked on top of each other. An image\ndoes not have state and it never changes.\n" }, { "term": "Kitematic", "def": " A legacy GUI, bundled with Docker Toolbox, for managing Docker\ncontainers. We recommend upgrading to Docker for Mac or\nDocker for Windows, which have superseded Kitematic.\n" }, { "term": "layer", "def": " In an image, a layer is a modification to the image, represented by an instruction in the\nDockerfile. Layers are applied in sequence to the base image to create the final image.\nWhen an image is updated or rebuilt, only layers that change need to be updated, and\nunchanged layers are cached locally. 
This is part of why Docker images are so fast\nand lightweight. The sizes of each layer add up to equal the size of the final image.\n" }, { "term": "libcontainer", "def": " libcontainer provides a native Go implementation for creating containers with\nnamespaces, cgroups, capabilities, and filesystem access controls. It allows\nyou to manage the lifecycle of the container performing additional operations\nafter the container is created.\n" }, { "term": "libnetwork", "def": " libnetwork provides a native Go implementation for creating and managing container\nnetwork namespaces and other network resources. It manages the networking lifecycle\nof the container performing additional operations after the container is created.\n" }, { "term": "link", "def": " links provide a legacy interface to connect Docker containers running on the\nsame host to each other without exposing the hosts’ network ports. Use the\nDocker networks feature instead.\n" }, { "term": "Machine", "def": " Machine is a Docker tool which\nmakes it really easy to create Docker hosts on your computer, on\ncloud providers and inside your own data center. It creates servers,\ninstalls Docker on them, then configures the Docker client to talk to them.\n\n Also known as : docker-machine\n" }, { "term": "namespace", "def": " A Linux namespace\nis a Linux kernel feature that isolates and virtualizes system resources. Processes which are restricted to\na namespace can only interact with resources or processes that are part of the same namespace. Namespaces\nare an important part of Docker’s isolation model. Namespaces exist for each type of\nresource, including net (networking), mnt (storage), pid (processes), uts (hostname control),\nand user (UID mapping). For more information about namespaces, see Docker run reference\nand Introduction to user namespaces.\n" }, { "term": "node", "def": " A node is a physical or virtual\nmachine running an instance of the Docker Engine in swarm mode.\n\n Manager nodes perform swarm management and orchestration duties. By default\nmanager nodes are also worker nodes.\n\n Worker nodes execute tasks.\n" }, { "term": "overlay network driver", "def": " Overlay network driver provides out of the box multi-host network connectivity\nfor docker containers in a cluster.\n" }, { "term": "overlay storage driver", "def": " OverlayFS is a filesystem service for Linux which implements a\nunion mount for other file systems.\nIt is supported by the Docker daemon as a storage driver.\n" }, { "term": "registry", "def": " A Registry is a hosted service containing repositories of images\nwhich responds to the Registry API.\n\n The default registry can be accessed using a browser at Docker Hub\nor using the docker search command.\n" }, { "term": "repository", "def": " A repository is a set of Docker images. A repository can be shared by pushing it\nto a registry server. The different images in the repository can be\nlabeled using tags.\n\n Here is an example of the shared nginx repository\nand its tags.\n" }, { "term": "service", "def": " A service is the definition of how\nyou want to run your application containers in a swarm. At the most basic level\na service defines which container image to run in the swarm and which commands\nto run in the container. For orchestration purposes, the service defines the\n“desired state”, meaning how many containers to run as tasks and constraints for\ndeploying the containers.\n\n Frequently a service is a microservice within the context of some larger\napplication. 
Examples of services might include an HTTP server, a database, or\nany other type of executable program that you wish to run in a distributed\nenvironment.\n" }, { "term": "service discovery", "def": " Swarm mode service discovery is a DNS component\ninternal to the swarm that automatically assigns each service on an overlay\nnetwork in the swarm a VIP and DNS entry. Containers on the network share DNS\nmappings for the service via gossip so any container on the network can access\nthe service via its service name.\n\n You don’t need to expose service-specific ports to make the service available to\nother services on the same overlay network. The swarm’s internal load balancer\nautomatically distributes requests to the service VIP among the active tasks.\n" }, { "term": "swarm", "def": " A swarm is a cluster of one or more Docker Engines running in swarm mode.\n" }, { "term": "Docker Swarm", "def": " Do not confuse Docker Swarm with the swarm mode features in Docker Engine.\n\n Docker Swarm is the name of a standalone native clustering tool for Docker.\nDocker Swarm pools together several Docker hosts and exposes them as a single\nvirtual Docker host. It serves the standard Docker API, so any tool that already\nworks with Docker can now transparently scale up to multiple hosts.\n\n Also known as : docker-swarm\n" }, { "term": "swarm mode", "def": " Swarm mode refers to cluster management and orchestration\nfeatures embedded in Docker Engine. When you initialize a new swarm (cluster) or\njoin nodes to a swarm, the Docker Engine runs in swarm mode.\n" }, { "term": "tag", "def": " A tag is a label applied to a Docker image in a repository.\nTags are how various images in a repository are distinguished from each other.\n\n Note : This label is not related to the key=value labels set for docker daemon.\n" }, { "term": "task", "def": " A task is the\natomic unit of scheduling within a swarm. A task carries a Docker container and\nthe commands to run inside the container. Manager nodes assign tasks to worker\nnodes according to the number of replicas set in the service scale.\n\n The diagram below illustrates the relationship of services to tasks and\ncontainers.\n\n \n" }, { "term": "Toolbox", "def": " Docker Toolbox is a legacy\ninstaller for Mac and Windows users. It uses Oracle VirtualBox for\nvirtualization.\n\n For Macs running OS X El Capitan 10.11 and newer macOS releases, Docker for\nMac is the better solution.\n\n For Windows 10 systems that support Microsoft Hyper-V (Professional, Enterprise\nand Education), Docker for\nWindows is the better solution.\n" }, { "term": "Union file system", "def": " Union file systems implement a union\nmount and operate by creating\nlayers. Docker uses union file systems in conjunction with\ncopy-on-write techniques to provide the building blocks for\ncontainers, making them very lightweight and fast.\n\n For more on Docker and union file systems, see Docker and AUFS in\npractice,\nDocker and Btrfs in\npractice,\nand Docker and OverlayFS in\npractice.\n\n Example implementations of union file systems are\nUnionFS,\nAUFS, and\nBtrfs.\n" }, { "term": "virtual machine", "def": " A virtual machine is a program that emulates a complete computer and imitates dedicated hardware.\nIt shares physical hardware resources with other users but isolates the operating system. 
The\nend user has the same experience on a Virtual Machine as they would have on dedicated hardware.\n\n Compared to containers, a virtual machine is heavier to run, provides more isolation,\ngets its own set of resources and does minimal sharing.\n\n Also known as : VM\n" }, { "term": "volume", "def": " A volume is a specially-designated directory within one or more containers\nthat bypasses the Union File System. Volumes are designed to persist data,\nindependent of the container’s life cycle. Docker therefore never automatically\ndeletes volumes when you remove a container, nor will it “garbage collect”\nvolumes that are no longer referenced by a container.\nAlso known as: data volume\n\n There are three types of volumes: host, anonymous, and named:\n\n A host volume lives on the Docker host’s filesystem and can be accessed from within the container.\n A named volume is a volume which Docker manages where on disk the volume is created,\nbut it is given a name.\n An anonymous volume is similar to a named volume; however, it can be difficult to refer to\nthe same volume over time when it is an anonymous volume. Docker handles where the files are stored.\n
https://docs.docker.com/glossary.txt
CC-MAIN-2017-17
en
refinedweb
We were managing large sets of icon assets. We needed a technique that works on our supported browsers, looks crisp on HiDPI displays, and is easily maintainable. The solution I eventually came to was to create an icon font containing all these icons. Key reasons for using an icon font at Atlassian: - A single source of truth so we never have two versions of the same icon - It can be included in the Atlassian User Interface (AUI), the UI library for all Atlassian products, for easy consumption by all our product teams - It works all the way back to IE7 - Scales up and down so you don't have to manage normal resolution and @2x resolution assets - Image sprites can be removed from your code base - Fewer resources to load and a lighter page weight Our new icons Part of the icon font roll out was to overhaul the icons' visual style. The old icons were starting to look a little tired, and with the ADG moving into our products we needed an icon style that would match our design principles. Designing icons is not an easy task. You have 16 pixels to convey a clear metaphor, which can be quite challenging. There's one main topic around icon design that I want to discuss. The conceptual metaphors around icons are a huge topic, but for this post I will be focusing on the craft and production side of icon design for an icon font (optimized for 16px), step by step. Step 1: Make your icons sharp Before we get into this I wanted to point out the tools I will be using along the way. I've got Photoshop and Illustrator CS6 installed along with Glyphs for Mac, and for the coding part you can use whatever IDE you prefer. I've been using the TextMate 2 alpha version, and it is my IDE of choice right now. The first and most important thing is to make sure that your anchors snap to a pixel grid. Sometimes, snapping to pixels is not good, but generally speaking, your icons will look much sharper if they do. In Photoshop (or Illustrator) this is easy to do: go to the Preferences and turn pixel snapping on/off. I ended up recording an action for it to make the switching a simple click of the mouse. You might consider taking one of the existing icon sets out there like Pictos or Glyphicons instead of creating your own icons. Just be careful when you resize them down to 16px. Designing icons can be very time consuming. The best results will only come from being obsessive with the details. That's what makes the difference between good icons and great icons. Step 2: Moving from Photoshop/Illustrator to Glyphs I like to make my icons in Photoshop as vector shapes because I feel it gives me more control in a tool I'm familiar with, but it's just a personal preference. Once an icon is created in Photoshop, I then open up the PSD in Illustrator and then copy the shape over to a new Illustrator document that is 1024pts x 1024pts. I've left Illustrator to render values at points and not pixels; changing it didn't make a difference. In Illustrator, your 16px icon is going to look amazing at any size. Make sure you change your viewing mode in Illustrator by going to View > Pixel Preview. This will make Illustrator view your vector like a rasterised image even though it isn't. Resize the icon up to fill the 1024pts x 1024pts and centre it on the canvas with the x,y co-ordinates at 512pts (assuming you have a 16x16 icon). This shape is what you'll be copying into Glyphs so make sure that it looks awesome and all your paths are still behaving. If they aren't, Glyphs will render exactly what you have on screen at 1024pts. 
Once you're happy with how the icon looks, copy the shape and head over to Glyphs. Step 3: Setting up Glyphs If you haven't installed Glyphs yet, you can grab it from the Mac App Store or get a trial from the Glyphs website. Unfortunately it's only for Mac, but I'm sure there is a Windows equivalent out there. First, create a new font file and then remove all the alphabet characters they pre-fill the font with. We don't need alphanumeric characters because this is a symbol font. We'll be using the private use unicode character ranges. If you have a small set of icons that can fit into a normal alphabet then you can use that instead of the unicode ranges. The major advantage of the unicode ranges is that the private use range holds 6,400 possible icons, which is more than enough for all our products combined. Secondly, make sure the settings under the hood are correct. Go to File > Font Info and you'll see the settings for the font. Make note of the keyboard short-cut (cmd + i); you'll be coming back to this settings pane a lot. On the font screen, make sure you fill out: - Font family name - Designer - Date - Set the 'units per Em' to 1024 (default is 1000) I hadn't worked with type design before so I had to research a little and play with the values in Glyphs to get the icons to sit on the right baseline. The values that I have used in Glyphs don't match the diagram exactly; I think this may just be because of the way Glyphs is calculating it. It doesn't seem to matter though; the icons render correctly anyway. - Ascender: 832 - Cap height: 768 - X-height: 576 - Descender: -192 We're using the 1024pts as the direct representation of 16px in the browser. Think of 1px in the browser as 64pts in Glyphs. I found it handy to write out the corresponding pixel amounts for easy math when pushing lots of these icons into the font. In the settings pane is the 'Other Settings' tab which has the important grid spacing field. That should be set to 1 to start with. Later on you'll want to change this value to 64 so you can see the grid that will represent the 16px displayed in the browser. Remember, 1px in the browser is 64pts in Glyphs. Keep using the keyboard shortcut to get back to this window (cmd + i); it saves a lot of time clicking around. The last thing to do in the settings pane is to make sure you've checked the checkbox 'Don't use nice names'. You won't be able to enter unicode values for your icons without having this confusingly labelled box checked. Step 4: Creating icons in Glyphs Once you've followed these steps you can start turning your lovely vectors from Photoshop/Illustrator into unicode mapped icons. Move back to your 1024pt icon in Illustrator that we first resized and copy the whole shape. Move over to Glyphs and add your first character by clicking the '+' button at the bottom of the app. Once the new glyph is selected, change the value of the width in the sidebar from 600 to 1024 and make sure the padding on both sides is set to 0. Now double-click on the new glyph to edit it. Paste (cmd + v) in your copied icon shape from Illustrator. Glyphs will probably prompt you to reset the bounding box; accept the changes. Your icon should now be in Glyphs but not in the position you want. Make sure the whole icon is selected (cmd + a) and then change the x,y coordinates accordingly. This is where that 64pts = 1px post-it note comes in handy. The majority of the time I set the anchor to be in the middle, the Y axis to be 320 and the X axis to be 512.
For icons that are 16px x 14px or some other shape, you will need to use the 64pts increments to make it sit where you want. Keep in mind that the whole 16px box will be displayed in the browser. For example, if you have a 12px high icon and position it at the top of the 16px container, it will be 2px off centre when displayed in the browser. If you need to nudge the icon into position, make sure you change the grid spacing back to 1 (cmd + i to view settings). If you leave it at 64 to see the overlay grid and then nudge your icon into position, Glyphs will grab hold of the anchors and your icon will look like the one below. Unfortunately, Glyphs has some pretty bad undo (cmd + z) capabilities so this disaster will not be recoverable with a simple undo. To fix your icon, you'll have to re-paste the shape in, but before you do, set the grid spacing back to 1. Step 5: Maintaining and organising your font Atlassian has several products that have specific icons and we needed a way to organise the font accordingly. The easy way to do it was to namespace the icons and set up filters in the sidebar. The convention I'm using is to separate global, Confluence, JIRA and Dev Tools (Stash, Bitbucket, FishEye, Crucible and Bamboo) icons. For example, the help icon is named 'aui-global-help'. With the filters set up in the sidebar, viewing a specific set of icons is easy. Unicode ranges The private use unicode range runs from U+E000 – U+F8FF, which is 6,400 characters, so I don't think we're going to run out of characters any time soon. The unicode ranges are hexadecimal. The sequence always starts with the digits '0' through '9' and then the letters 'a' through 'f' (e000, e001, e002, … e009, e00a, e00b, … e00f, e010, e011, etc). In Glyphs you'll need to specify the unicode values for every icon you add. Do this by selecting the icon and then changing its value in the sidebar. You don't need to specify the 'U+' part in Glyphs, only the hexadecimal part. Now that you've completed one icon, you can repeat these same steps for all your remaining icons. 
The CSS The basic CSS for the small and large icon sizes we have: [cc lang=’css’ ] @font-face { font-family: “Atlassian Icons”; src: url(atlassian-icons.eot); src: url(atlassian-icons.eot?#iefix) format(“embedded-opentype”), url(atlassian-icons.woff) format(“woff”), url(atlassian-icons.ttf) format(“truetype”), url(atlassian-icons.svg#atlassian-icons) format(“svg”); font-weight: normal; font-style: normal; } .aui-icon-small, .aui-icon-large { line-height: 0; position: relative; vertical-align: text-top; } .aui-icon-small { height: 16px; width: 16px; } .aui-icon-large { height: 32px; width: 32px; } .aui-icon-small:before, .aui-icon-large:before { color: inherit; font-family: “Atlassian Icons”; font-weight: normal; -webkit-font-smoothing: antialiased; /* Improves the rendering of icons */ font-style: normal; left: 0; line-height: 1; position: absolute; text-indent: 0; speak: none; /* This prevents screen readers from pronouncing the pseudo element text content used to trigger the icon font */ top: 50%; } .aui-icon-small:before { font-size: 16px; margin-top: -8px; /* (font-size/2) */ } .aui-icon-large:before { font-size: 32px; margin-top: -16px; /* (font-size/2) */ }[/cc] When coding up the classes for the individual icons you’ll need to remember those unicodes from Glyphs that you entered earlier and put the values in as the content. [cc lang=’css’ ] .aui-iconfont-configure:before { content: “\e001”; }[/cc] The HTML markup is specific to Atlassian so feel free to use your own pattern. The text inside of the span is inserted for screen readers but is not displayed on the page. [cc lang=’css’ ] [/cc] You’re all done It may have seemed like a lot of work but after doing a couple of icons it gets much faster. The real time-saver will be in the developer speed when the icons are reused. I’ve already seen big improvements for the Stash team from using the icon font everywhere in our UI. We’ll be adding more icons to the Atlassian icon font as we convert the old icons over for all our products.
https://www.atlassian.com/blog/archives/how-to-make-an-icon-font-the-8-step-guide
CC-MAIN-2017-17
en
refinedweb
. (For more resources related to this topic, see here) As we've heard a lot about UIKit. We've seen it at the top of our Swift files in the form of import UIKit. We've used many of the UI elements and classes it provides for us. Now, it's time to take an isolated look at the biggest and most important framework in iOS development. Application management Unlike most other frameworks in the iOS SDK, UIKit is deeply integrated into the way your app runs. That's because UIKit is responsible for some of the most essential functionalities of an app. It also manages your application's window and view architecture, which we'll be talking about next. It also drives the main run loop, which basically means that it is executing your program. The UIDevice class In addition to these very important features, UIKit also gives you access to some other useful information about the device the app is currently running on through the UIDevice class. Using online resources and documentation: Since this article is about exploring frameworks, it is a good time to remind you that you can (and should!) always be searching online for anything and everything. For example, if you search for UIDevice, you'll end up on Apple's developer page for the UIDevice class, where you can see even more bits of information that you can pull from it. As we progress, keep in mind that searching the name of a class or framework will usually give you quick access to the full documentation. Here are some code examples of the information you can access: UIDevice.currentDevice().name UIDevice.currentDevice().model UIDevice.currentDevice().orientation UIDevice.currentDevice().batteryLevel UIDevice.currentDevice().systemVersion Some developers have a little bit of fun with this information: for example, Snapchat gives you a special filter to use for photos when your battery is fully charged.Always keep an open mind about what you can do with data you have access to! Views One of the most important responsibilities of UIKit is that it provides views and the view hierarchy architecture. We've talked before about what a view is within the MVC programming paradigm, but here we're referring to the UIView class that acts as the base for (almost) all of our visual content in iOS programming. While it wasn't too important to know about when just getting our feet wet, now is a good time to really dig in a bit and understand what UIViews are and how they work both on their own and together. Let's start from the beginning: a view (UIView) defines a rectangle on your screen that is responsible for output and input, meaning drawing to the screen and receiving touch events.It can also contain other views, known as subviews, which ultimately create a view hierarchy. As a result of this hierarchy, we have to be aware of the coordinate systems involved. Now, let's talk about each of these three functions: drawing, hierarchies, and coordinate systems. Drawing Each UIView is responsible for drawing itself to the screen. In order to optimize drawing performance, the views will usually try to render their content once and then reuse that image content when it doesn't change. It can even move and scale content around inside of it without needing to redraw, which can be an expensive operation: An overview of how UIView draws itself to the screen With the system provided views, all of this is handled automatically. However, if you ever need to create your own UIView subclass that uses custom drawing, it's important to know what goes on behind the scenes. 
To implement custom drawing in a view, you need to implement the drawRect() function in your subclass. When something changes in your view, you need to call the setNeedsDisplay() function, which acts as a marker to let the system know that your view needs to be redrawn. During the next drawing cycle, the code in your drawRect() function will be executed to refresh the content of your view, which will then be cached for performance. A code example of this custom drawing functionality is a bit beyond the scope of this article, but discussing this will hopefully give you a better understanding of how drawing works in addition to giving you a jumping off point should you need to do this in the future. Hierarchies Now, let's discuss view hierarchies. When we would use a view controller in a storyboard, we would drag UI elements onto the view controller. However, what we were actually doing is adding a subview to the base view of the view controller. And in fact, that base view was a subview of the UIWindow, which is also a UIView. So, though, we haven't really acknowledged it, we've already put view hierarchies to work many times. The easiest way to think about what happens in a view hierarchy is that you set one view's parent coordinate system relative to another view. By default, you'd be setting a view's coordinate system to be relative to the base view, which is normally just the whole screen. But you can also set the parent coordinate system to some other view so that when you move or transform the parent view, the children views are moved and transformed along with it. Example of how parenting works with a view hierarchy. It's also important to note that the view hierarchy impacts the draw order of your views. All of a view's subviews will be drawn on top of the parent view, and the subviews will be drawn in the order they were added (the last subview added will be on top). To add a subview through code, you can use the addSubview() function. Here's an example: var view1 = UIView() var view2 = UIView() view1.addSubview(view2) The top-most views will intercept a touch first, and if it doesn't respond, it will pass it down the view hierarchy until a view does respond. Coordinate systems With all of this drawing and parenting, we need to take a minute to look at how the coordinate system works in UIKit for our views.The origin (0,0 point) in UIKit is the top left of the screen, and increases along X to the right, and increases on the Y downward. Each view is placed in this upper-left positioning system relative to its parent view's origin. Be careful! Other frameworks in iOS use different coordinate systems. For example, SpriteKit uses the lower-left corner as the origin. Each view also has its own setof positioning information. This is composed of the view's frame, bounds, and center. The frame rectangle describes the origin and the size of view relative to its parent view's coordinate system. The bounds rectangle describes the origin and the size of the view from its local coordinate system. The center is just the center point of the view relative to the parent view. When dealing with so many different coordinate systems, it can seem like a nightmare to compare positions from different views. Luckily, the UIView class provides a simple convertPoint()function to convert points between systems. 
Try running this little experiment in a playground to see how the point gets converted from one view's coordinate system to the other: import UIKit let view1 = UIView(frame: CGRect(x: 0, y: 0, width: 50, height: 50)) let view2 = UIView(frame: CGRect(x: 10, y: 10, width: 30, height: 30)) view1.addSubview(view2) let pointFrom1 = CGPoint(x: 20, y: 20) let pointFromView2 = view1.convertPoint(pointFrom1, toView: view2) Hopefully, you now have a much better understanding of some of the underlying workings of the view system in UIKit. Documents, displays, printing, and more In this section, I'm going to do my best to introduce you to the many additional features of the UIKit framework. The idea is to give you a better understanding of what is possible with UIKit, and if anything sounds interesting to you, you can go off and explore these features on your own. Documents UIKit has built-in support for documents, much like you'd find on a desktop operating system. Using the UIDocument class, UIKit can help you save and load documents in the background in addition to saving them to iCloud. This could be a powerful feature for any app that allows the user to create content that they expect to save and resume working on later. Displays On most new iOS devices, you can connect external screens via HDMI. You can take advantage of these external displays by creating a new instance of the UIWindow class, and associating it with the external display screen. You can then add subviews to that window to create a second-screen experience for devices like a big-screen TV. While most consumers don't ever use HDMI-connected external displays, this is a great feature to keep in mind when working on internal applications for corporate or personal use. Printing Using the UIPrintInteractionController, you can set up and send print jobs to AirPrint-enabled printers on the user's network. Before you print, you can also create PDFs by drawing content off screen to make printing easier. And more! There are many more features of UIKit that are just waiting to be explored! To be honest, UIKit seems to be pretty much a dumping ground for any general features that were just a bit too small to deserve their own framework. If you do some digging in Apple's documentation, you'll find all kinds of interesting things you can do with UIKit, such as creating custom keyboards, creating share sheets, and custom cut-copy-paste support. Summary In this article, we looked at the biggest and most important framework in iOS development, UIKit, and learned about some of the most important system processes like the view hierarchy.
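Although the article calls a full custom-drawing example out of scope, here is a minimal sketch of the drawRect()/setNeedsDisplay() pattern discussed in the Drawing section. It uses the same Swift 2-era API style as the playground code above; the BadgeView class and its fillColor property are my own illustration:

import UIKit

class BadgeView: UIView {
    // Changing this property marks the view as needing a redraw
    var fillColor = UIColor.redColor() {
        didSet { setNeedsDisplay() }
    }

    override func drawRect(rect: CGRect) {
        // Runs on the next drawing cycle after setNeedsDisplay(); the result is cached
        let context = UIGraphicsGetCurrentContext()
        CGContextSetFillColorWithColor(context, fillColor.CGColor)
        CGContextFillEllipseInRect(context, bounds)
    }
}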
https://www.packtpub.com/books/content/understanding-uikitfundamentals
CC-MAIN-2017-17
en
refinedweb
2 1944 : Colossus 2 Used for breaking encrypted codes Not Turing Complete Vacuum Tubes to optically read paper tape & apply programmable logic function Parallel I/O! 5 processors in parallel, same program, reading different tapes: 25,000 characters/s 4 1961: IBM 7030 “Stretch” First Transistorized Supercomputer $7.78 million (in 1961!) delivered to LLNL 3-D fluid dynamics problems Gene Amdahl & John Backus amongst the architects Aggressive Uniproc Parallelism “Lookahead”: Prefetch memory instrs, line up for fast arithmetic unit Many firsts: Pipelining, Predication, Multipgming Parallel Arithmetic Unit 5 1961: IBM 7030 “Stretch” R.T. Blosk, "The Instruction Unit of the Stretch Computer,“ 1960 Amdahl Backus 6 1964: CDC 6600 Outperformed ``Stretch’’ by 3 times Seymour Cray, Father of Supercomputing, main designer Features First RISC processor ! Overlapped execution of I/O, Peripheral Procs and CPU “Anyone can build a fast CPU. The trick is to build a fast system.” – Seymour Cray 7 1964: CDC 6600 Seymour Cray 8 1974: CDC STAR-100 First supercomputer to use vector processing STAR: String and Array Operations 100 million FLOPs Vector instructions ~ statements in APL language Single instruction to add two vectors of 65535 elements High setup cost for vector insts Memory to memory vector operations Slower Memory killed performance 9 1975: Burroughs ILLIAC IV “One of most infamous supercomputers” 64 procs in parallel … SIMD operations Spurred the design of Parallel Fortran Used by NASA for CFD Controversial design at that time (MPP) Daniel Slotnick 10 1976: Cray-I One of the best known & most successful supercomputers Installed at LANL for $8.8 million Features Deep, Multiple Pipelines Vector Instructions & Vector registers Densely packaged into a microprocessor Programming Cray-1 FORTRAN Auto vectorizing compiler! "If you were plowing a field, which would you rather use? Two strong oxen or 1024 chickens?“ 11 1985: Cray-2 Denser packaging than Cray-I 3-D stacking & Liquid Cooling Higher memory capacity 256 Mword (physical memory) 12 > 1990 : Cluster Computing 13 2008: IBM Roadrunner Designed by IBM & DoE Hybrid Design Two different processor arch: AMD dual-core Opteron + IBM Cell processor Opteron for CPU computation + communication Cell : One GPE and 8 SPE for floating pt computation Total of 116,640 cores Supercomputer cluster 14 2009: Cray Jaguar World’s fastest supercomputer at ORNL 1.75 petaflops MPP with 224,256 AMD Opteron processor cores Computational Science Applications 15 Vector Processing* Vector processors have high-level operations that work on linear arrays of numbers: "vectors" + r1 r2 r3 add r3, r1, r2 SCALAR (1 operation) v1 v2 v3 + vector length add.vv v3, v1, v2 VECTOR (N operations) - Slides adapted from Prof. Patterson’s Lecture 16 Properties of Vector Processors Each result independent of previous result long pipeline, with no dependencies High clock rate Vector instructions access memory with known pattern highly interleaved memory amortize memory latency of over 64 elements no (data) caches required! 
(Do use instruction cache) Reduces branches and branch problems in pipelines Single vector instruction implies lots of work (= loop) fewer instruction fetches 17 Styles of Vector Architectures memory-memory vector processors: all vector operations are memory to memory vector-register processors: all vector operations between vector registers (except load and store) Vector equivalent of load-store architectures Includes all vector machines since late 1980s: Cray, Convex, Fujitsu, Hitachi, NEC 18 Components of Vector Processor Vector Register: fixed length bank holding a single vector has at least 2 read and 1 write ports typically 8-32 vector registers, each holding 64-128 64-bit elements Vector Functional Units (FUs): fully pipelined, start new operation every clock typically 4 to 8 FUs: FP add, FP mult, FP reciprocal (1/X), integer add, logical, shift; may have multiple of same unit Vector Load-Store Units (LSUs): fully pipelined unit to load or store a vector; may have multiple LSUs Scalar registers: single element for FP scalar or address Cross-bar to connect FUs, LSUs, registers 19 Vector Instructions Instr. / Operands / Operation / Comment: ADDV V1,V2,V3 / V1=V2+V3 / vector + vector; ADDSV V1,F0,V2 / V1=F0+V2 / scalar + vector; MULTV V1,V2,V3 / V1=V2xV3 / vector x vector; MULSV V1,F0,V2 / V1=F0xV2 / scalar x vector; LV V1,R1 / V1=M[R1..R1+63] / load, stride=1; LVWS V1,R1,R2 / V1=M[R1..R1+63*R2] / load, stride=R2; LVI V1,R1,V2 / V1=M[R1+V2i, i=0..63] / indirect ("gather"); CeqV VM,V1,V2 / VMASKi = (V1i=V2i)? / compare, set mask; MOV VLR,R1 / Vec. Len. Reg. = R1 / set vector length; MOV VM,R1 / Vec. Mask = R1 / set vector mask 20 Memory operations Load/store operations move groups of data between registers and memory Three types of addressing Unit Stride Fastest Non-unit (constant) stride Indexed (gather-scatter) Vector equivalent of register indirect Good for sparse arrays of data Increases number of programs that vectorize 21 DAXPY (Y = a * X + Y) Assuming vectors X, Y are length 64 Scalar vs. Vector LD F0,a ADDI R4,Rx,#512 ;last address to load loop: LD F2,0(Rx) ;load X(i) MULTD F2,F0,F2 ;a*X(i) LD F4,0(Ry) ;load Y(i) ADDD F4,F2,F4 ;a*X(i) + Y(i) SD F4,0(Ry) ;store into Y(i) ADDI Rx,Rx,#8 ;increment index to X ADDI Ry,Ry,#8 ;increment index to Y SUB R20,R4,Rx ;compute bound BNZ R20,loop ;check if done LD F0,a ;load scalar a LV V1,Rx ;load vector X MULTS V2,F0,V1 ;vector-scalar mult. LV V3,Ry ;load vector Y ADDV V4,V2,V3 ;add SV Ry,V4 ;store the result 578 (2+9*64) vs. 6 instructions (96X) 64 operation vectors + no loop overhead also 64X fewer pipeline hazards 22 Virtual Processor Vector Model Vector operations are SIMD (single instruction multiple data) operations Each element is computed by a virtual processor (VP) Number of VPs given by vector length vector control register 23 Vector Architectural State (diagram: each of the $vlr virtual processors VP 0 … VP $vlr-1 has general purpose registers vr0–vr31 of $vdw bits and 1-bit flag registers vf0–vf31, plus 32-bit control registers vcr0–vcr31) 24 Vector Implementation Vector register file Each register is an array of elements Size of each register determines maximum vector length Vector length register determines vector length for a particular operation Multiple parallel execution units = “lanes” (sometimes called “pipelines” or “pipes”) 25 Vector Terminology: 4 lanes, 2 vector functional units 26 Vector Execution Time Time = f(vector length, data dependencies, struct. 
hazards) Initiation rate: rate that FU consumes vector elements (= number of lanes; usually 1 or 2 on Cray T-90) Convoy: set of vector instructions that can begin execution in same clock (no struct. or data hazards) Chime: approx. time for a vector operation 1: LV V1,Rx ;load vector X 2: MULTV V2,F0,V1 LV V3,Ry ;load vector Y 3: ADDV V4,V2,V3 ;add 4: SV Ry,V4 ;store the result 4 convoys, 1 lane, VL=64 => 4 x 64 = 256 clocks (or 4 clocks per result) 27 Vector Load/Store Units & Memories Start-up overheads usually longer for LSUs Memory system must sustain (# lanes x word) /clock cycle Many Vector Procs. use banks (vs. simple interleaving): 1) support multiple loads/stores per cycle => multiple banks & address banks independently 2) support non-sequential accesses Note: No. memory banks > memory latency to avoid stalls m banks => m words per memory latency l clocks if m < l, then gap in memory pipeline: clock: 0 … l, l+1, l+2, … l+m-1, l+m, … 2l word: -- … 0, 1, 2, … m-1, --, … m may have 1024 banks in SRAM 28 Vector Length What to do when vector length is not exactly 64? vector-length register (VLR) controls the length of any vector operation, including a vector load or store. (cannot be > the length of vector registers) do 10 i = 1, n 10 Y(i) = a * X(i) + Y(i) Don't know n until runtime! n > Max. Vector Length (MVL)? 29 Strip Mining Suppose Vector Length > Max. Vector Length (MVL)? Strip mining: generation of code such that each vector operation is done for a size <= the MVL low = 1 VL = (n mod MVL) /*find the odd-size piece*/ do 1 j = 0, (n/MVL) /*outer loop*/ do 10 i = low, low+VL-1 /*runs for length VL*/ Y(i) = a*X(i) + Y(i) /*main operation*/ 10 continue low = low+VL /*start of next vector*/ VL = MVL /*reset the length to max*/ 1 continue 30 Vector Stride Suppose adjacent elements not sequential in memory do 10 i = 1,100 do 10 j = 1,100 A(i,j) = 0.0 do 10 k = 1,100 10 A(i,j) = A(i,j)+B(i,k)*C(k,j) Either B or C accesses not adjacent (800 bytes between) stride: distance separating elements that are to be merged into a single vector (caches do unit stride) => LVWS (load vector with stride) instruction Strides => can cause bank conflicts (e.g., stride = 32 and 16 banks) Think of address per vector element 31 Vector Opt #1: Chaining Suppose: MULV V1,V2,V3 ADDV V4,V1,V5 ; separate convoy? chaining: vector register (V1) is not treated as a single entity but as a group of individual registers, then pipeline forwarding can work on individual elements of a vector Flexible chaining: allow vector to chain to any other active vector operation => more read/write ports As long as enough HW, increases convoy size 32 Vector Opt #1: Chaining 33 Vector Opt #2: Conditional Execution Suppose: do 100 i = 1, 64 if (A(i).ne. 0) then A(i) = A(i) – B(i) endif 100 continue vector-mask control takes a Boolean vector: when vector-mask register is loaded from vector test, vector instructions operate only on vector elements whose corresponding entries in the vector-mask register are 1. 34 Vector Opt #3: Sparse Matrices 
30 Vector Stride
Suppose adjacent elements are not sequential in memory:
do 10 i = 1,100
  do 10 j = 1,100
    A(i,j) = 0.0
    do 10 k = 1,100
10    A(i,j) = A(i,j) + B(i,k)*C(k,j)
Either the B or the C accesses are not adjacent (800 bytes between them). Stride: the distance separating elements that are to be merged into a single vector (caches do unit stride) => the LVWS (load vector with stride) instruction. Strides can cause bank conflicts (e.g., stride = 32 and 16 banks). Think of one address per vector element.

31 Vector Opt #1: Chaining
Suppose:
MULV V1,V2,V3
ADDV V4,V1,V5   ; separate convoy?
Chaining: the vector register (V1) is treated not as a single entity but as a group of individual registers; pipeline forwarding can then work on individual elements of a vector. Flexible chaining: allow a vector to chain to any other active vector operation => more read/write ports. As long as there is enough hardware, chaining increases convoy size.

32 Vector Opt #1: Chaining
(Diagram.)

33 Vector Opt #2: Conditional Execution
Suppose:
do 100 i = 1, 64
  if (A(i) .ne. 0) then
    A(i) = A(i) - B(i)
  endif
100 continue
Vector-mask control takes a Boolean vector: when the vector-mask register is loaded from a vector test, vector instructions operate only on the vector elements whose corresponding entries in the vector-mask register are 1.

34 Vector Opt #3: Sparse Matrices
Suppose:
do i = 1,n
  A(K(i)) = A(K(i)) + C(M(i))
A gather (LVI) operation takes an index vector and fetches the vector whose elements are at the addresses given by adding a base address to the offsets in the index vector => a nonsparse vector in a vector register. After these elements are operated on in dense form, the sparse vector can be stored in expanded form by a scatter store (SVI) using the same index vector. This can't be done by the compiler, since the compiler can't know that the K(i) elements are distinct (no dependencies); it is enabled by a compiler directive. Use CVI to create the index 0, 1xm, 2xm, ..., 63xm.

35 Applications
Multimedia processing (compression, graphics, audio synthesis, image processing); standard benchmark kernels (matrix multiply, FFT, convolution, sort); lossy compression (JPEG, MPEG video and audio); lossless compression (zero removal, RLE, differencing, LZW); cryptography (RSA, DES/IDEA, SHA/MD5); speech and handwriting recognition; operating systems/networking (memcpy, memset, parity, checksum); databases (hash/join, data mining, image/video serving); language run-time support (stdlib, garbage collection); even SPECint95.

36 Intel x86 SIMD Extensions: MMX (Pentium MMX, Pentium II)
MM0 to MM7: 64-bit (packed) registers, aliased with the x87 FPU stack registers. Integer operations only. Saturation arithmetic -- great for DSP.

37 Intel x86 SIMD Extensions: SSE (Pentium III)
128-bit registers (XMM0 to XMM7) with floating-point support. Example:
C code:
vec_res.x = v1.x + v2.x;
vec_res.y = v1.y + v2.y;
vec_res.z = v1.z + v2.z;
vec_res.w = v1.w + v2.w;
SSE code:
movaps xmm0, address-of-v1
addps  xmm0, address-of-v2
movaps address-of-vec_res, xmm0

38 Intel x86 SIMD Extensions: SSE2 (Pentium 4 -- Willamette)
Extends the MMX instructions to operate on XMM registers (twice as wide as MM). Cache control instructions to prevent cache pollution while accessing an indefinite stream of data.

39 Intel x86 SIMD Extensions: SSE3 (Pentium 4 -- Prescott)
Capability to work horizontally within a register: add/multiply multiple values stored in a single register. Simplifies the implementation of DSP operations. New instructions to convert fp to int and vice versa.

40 Intel x86 SIMD Extensions: SSE4
50 new instructions, some related to multicore: dot product, maximum, minimum, conditional copy, compare strings, streaming load. Improves memory I/O throughput.

41 Vectorization: Compiler Support
Vectorization of scientific code is supported by icc and gcc. Requires code written with regular memory access, using C arrays or FORTRAN. Example, original serial loop: for (i = 0; i < ...

42 Classic Loop Vectorizer
Dependence graph: int exist_dep(ref1, ref2, Loop). Separable subscript tests: Zero Index Var, Single Index Var, Multiple Index Var (GCD, Banerjee...); coupled subscript tests (Gamma, Delta, Omega...). Find SCCs, reduce the graph, topological sort; for all nodes: cyclic -- keep a sequential loop for this nest; non-cyclic -- replace the node with vector code; loop transform to break cycles. Array dependence examples:
for i: for j: for k: A[5][i+1][j] = A[N][i][k]
for i: for j: for k: A[5][i+1][i] = A[N][i][k]
(David Naishlos, Autovectorization in GCC, IBM Labs Haifa)

43 Assignment #1
Vectorizing C code using gcc's vector extensions for Intel SSE instructions.

45 1993: Connection Machine-5
MIMD architecture; a fat-tree network of SPARC RISC processors. Supported multiple programming models and languages: shared memory vs. message passing; LISP, FORTRAN, C. Applications: intended for AI, but found greater success in computational science.

46 1993: Connection Machine-5
(Photo.)

47 2005: Blue Gene/L
A $100 million research initiative by IBM, LLNL and the US DoE. Unique features: low power; up to 65,536 nodes, each with an SoC design; 3-D torus interconnect. Goals: advance the scale of biomolecular simulations; explore novel ideas in MPP architecture and systems.

49 2002: NEC Earth Simulator
The fastest supercomputer from 2002-2004: 640 nodes with 16 GB of memory at each node. SX-6 node: 8 vector processors + 1 scalar processor on a single chip; branch prediction, speculative execution. Application: modeling global climate change.
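Looking back at slide 37, the same four-wide add can be written with SSE compiler intrinsics instead of hand assembly; vec4 and add4 below are illustrative names, not part of the slides:

#include <xmmintrin.h>

typedef struct { float x, y, z, w; } vec4;

/* One addps adds four packed floats at once, mirroring the
 * movaps/addps/movaps sequence on slide 37. */
vec4 add4(vec4 v1, vec4 v2)
{
    __m128 a = _mm_loadu_ps(&v1.x);
    __m128 b = _mm_loadu_ps(&v2.x);
    __m128 r = _mm_add_ps(a, b);

    vec4 res;
    _mm_storeu_ps(&res.x, r);
    return res;
}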
http://slideplayer.com/slide/3242803/
CC-MAIN-2017-17
en
refinedweb
How can I use a RealView symbol definitions (symdefs/.sym) file in IAR Embedded Workbench for ARM?

This is a technically advanced solution. It requires that you, for example, know what the AEABI is and what calling convention is used. Only a limited number of errors and warnings can be produced for the external symbols.

You want one image generated with IAR Embedded Workbench for ARM to know the global symbol values of another image generated with RealView Compilation Tools. You can use a symbol definitions (symdefs) file. Generate this file and copy it to your project directory, using the armlink option --symdefs:

    armlink --symdefs filename

For further information regarding the ARM linker you are referred to the ARM documentation.

Use the sym2h.bat script file in a pre-build step to generate the .h and .f files. In IAR Embedded Workbench, go to Project > Options... > Build Actions > Pre-build command line:

    $PROJ_DIR$\sym2h.bat $PROJ_DIR$\filename.sym

Usage: sym2h.bat filename. The filename input is required and should be a fully qualified path name; the output directory is the same as the input directory.

To use the generated symbol definitions file, add an extra option for ilinkarm.exe: in IAR Embedded Workbench, go to Project > Options... > Linker > Extra Options, select 'Use command line options', and enter:

    -f $PROJ_DIR$\filename.f

(-f file: read command line options from file.)

To use the generated external definitions file, add an include to your code:

    #include "filename.h"

Tested with a symdefs file generated from RealView Compilation Tools version 2.2 [Build 576] and IAR Embedded Workbench for ARM version 5.11.

Note that this bat file only generates a standard definition based on whether the symbol is A (ARM) or T (Thumb), or D (data). If A or T, the script uses:

    void <SYMBOL_NAME>();

If D, the script uses:

    extern int <SYMBOL_NAME>;

You may have to cast the use of the symbols to what you need.

If you do not already have your externally built image on your target, one way is to use:

    ilinkarm.exe --image_input filename.axf

(--image_input file[,symbol[,section[,alignment]]]: put image file in section from file.)

There is an example project on the link: Example project including sym2h.zip. IAR Systems neither sells nor supports sym2h.bat -- it is not part of our tool chain. Thus these files are provided as is, without any promise of further support or information. If you have improvement suggestions for sym2h.bat that you want to share with us and other developers, we are interested to hear them.

All product names are trademarks or registered trademarks of their respective owners.
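As a hedged illustration of how the generated declarations get consumed, consider the following C fragment; the symbol names ext_entry and ext_table are invented for the example, and the real names come from your symdefs file:

#include "filename.h"   /* generated: void ext_entry(); extern int ext_table; */

void use_external_image(void)
{
    /* A/T symbols are declared as functions, so they can be called directly;
       the linker handles ARM/Thumb interworking: */
    ext_entry();

    /* A data symbol that is really, say, a byte array must be cast from the
       generated 'extern int' declaration to the type you actually need: */
    const unsigned char *table = (const unsigned char *)&ext_table;
    unsigned char first = table[0];
    (void)first;
}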
https://www.iar.com/support/tech-notes/linker/how-can-i-use-realview-.sym-file-in-embedded-workbench-for-arm-5.x/
CC-MAIN-2017-17
en
refinedweb
ncl_gset_text_path man page

gset_text_path (Set text path) -- sets the text path, the direction in which text is to be drawn.

Synopsis
    #include <ncarg/gks.h>
    void gset_text_path(Gtext_path text_path);

Description
    text_path (Input) -- Gives the direction in which a character string is to be drawn: right, left, up, or down. The right, left, up, and down directions are relative to the character up vector. The right text path direction is perpendicular to the up vector direction. Thus, to draw a text string at a 45 degree angle, the character up vector would be (-1,1), and the text path would be right.

Access
    To use the GKS C-binding routines, load the ncarg_gks and ncarg_c libraries.

See Also
    Online: gtext(3NCARG), gset_text_align(3NCARG), gset_text_font_prec(3NCARG), gset_char_ht(3NCARG), gset_char_space(3NCARG), gset_char_up_vec(3NCARG).
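A small usage sketch follows; note that the enumerator and member names (GPATH_RIGHT, delta_x/delta_y, x_coord/y_coord) are our assumptions about the GKS C binding and should be checked against ncarg/gks.h:

#include <ncarg/gks.h>

/* Draw "hello" climbing at 45 degrees, per the description above:
 * up vector (-1,1) combined with text path "right". */
void label_at_45_degrees(void)
{
    Gvec   up;
    Gpoint pos;

    up.delta_x = -1.0;
    up.delta_y =  1.0;
    gset_char_up_vec(&up);

    gset_text_path(GPATH_RIGHT);

    pos.x_coord = 0.5;
    pos.y_coord = 0.5;
    gtext(&pos, "hello");
}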
https://www.mankier.com/3/ncl_gset_text_path
CC-MAIN-2017-17
en
refinedweb
<eb@comsec.com>

Abstract: This article provides an overview of the GNU Radio toolkit for building software radios.

Software radio is the technique of getting code as close to the antenna as possible. It turns radio hardware problems into software problems. The fundamental characteristic of software radio is that software defines the transmitted waveforms, and software demodulates the received waveforms. This is in contrast to most radios in which the processing is done with either analog circuitry or analog circuitry combined with digital chips. GNU Radio is a free software toolkit for building software radios.

Software radio is a revolution in radio design due to its ability to create radios that change on the fly, creating new choices for users. At the baseline, software radios can do pretty much anything a traditional radio can do. The exciting part is the flexibility that software provides you. Instead of a bunch of fixed-function gadgets, in the next few years we'll see a move to universal communication devices. Imagine a device that can morph into a cell phone and get you connectivity using GPRS, 802.11 Wi-Fi, 802.16 WiMax, a satellite hookup or the emerging standard of the day. You could determine your location using GPS, GLONASS or both.

Perhaps most exciting of all is the potential to build decentralized communication systems. If you look at today's systems, the vast majority are infrastructure-based. Broadcast radio and TV provide a one-way channel, are tightly regulated and the content is controlled by a handful of organizations. Cell phones are a great convenience, but the features your phone supports are determined by the operator's interests, not yours. A centralized system limits the rate of innovation. We could take some lessons from the Internet and push the smarts out to the edges. Instead of cell phones being second-class citizens, usable only if infrastructure is in place and limited to the capabilities determined worthwhile by the operator, we could build smarter devices. These user-owned devices would generate the network. They'd create a mesh among themselves, negotiate for backhaul and be free to evolve new solutions, features and applications.

Figure 1, "Typical software radio block diagram", shows a typical block diagram for a software radio. To understand the software part of the radio, we first need to understand a bit about the associated hardware. Examining the receive path in the figure, we see an antenna, a mysterious RF front end, an analog-to-digital converter (ADC) and a bunch of code. The analog-to-digital converter is the bridge between the physical world of continuous analog signals and the world of discrete digital samples manipulated by software.

Figure 1. Typical software radio block diagram

ADCs have two primary characteristics, sampling rate and dynamic range. Sampling rate is the number of times per second that the ADC measures the analog signal. Dynamic range refers to the difference between the smallest and largest signal that can be distinguished; it's a function of the number of bits in the ADC's digital output and the design of the converter. For example, an 8-bit converter at most can represent 256 (2^8) signal levels, while a 16-bit converter represents up to 65,536 levels. Generally speaking, device physics and cost impose trade-offs between the sample rate and dynamic range. Before we dive into the software, we need to talk about a bit of theory.
In 1927, a Swedish-born physicist and electrical engineer named Harry Nyquist determined that to avoid aliasing when converting from analog to digital, the ADC sampling frequency must be at least twice the bandwidth of the signal of interest. Aliasing is what makes the wagon wheels look like they're going backward in the old westerns: the sampling rate of the movie camera is not fast enough to represent the position of the spokes unambiguously. Assuming we're dealing with low-pass signals - signals where the bandwidth of interest goes from 0 to fMAX - the Nyquist criterion states that our sampling frequency needs to be at least 2 * fMAX. But if our ADC runs at 20 MHz, how can we listen to broadcast FM radio at 92.1 MHz? The answer is the RF front end. The receive RF front end translates a range of frequencies appearing at its input to a lower range at its output. For example, we could imagine an RF front end that translated the signals occurring in the 90 - 100 MHz range down to the 0 - 10 MHz range. Mostly, we can treat the RF front end as a black box with a single control, the center of the input range that's to be translated. As a concrete example, a cable modem tuner module that we've employed successfully has the following characteristics: it translates a 6 MHz chunk of the spectrum centered between about 50 MHz and 800 MHz down to an output range centered at 5.75 MHz. The center frequency of the output range is called the intermediate frequency, or IF. In the simplest-thing-that-possibly-could-work category, the RF front end may be eliminated altogether. One GNU Radio experimenter has listened to AM and shortwave broadcasts by connecting a 100-foot piece of wire directly to his 20M sample/sec ADC.

GNU Radio provides a library of signal processing blocks and the glue to tie it all together. The programmer builds a radio by creating a graph (as in graph theory) where the vertices are signal processing blocks and the edges represent the data flow between them. The signal processing blocks are implemented in C++. Conceptually, blocks process infinite streams of data flowing from their input ports to their output ports. Blocks' attributes include the number of input and output ports they have as well as the type of data that flows through each. The most frequently used types are short, float and complex. Some blocks have only output ports or input ports. These serve as data sources and sinks in the graph. There are sources that read from a file or ADC, and sinks that write to a file, digital-to-analog converter (DAC) or graphical display. About 100 blocks come with GNU Radio. Writing new blocks is not difficult. Graphs are constructed and run in Python. Example 1 is the "Hello World" of GNU Radio. It generates two sine waves and outputs them to the sound card, one on the left channel, one on the right.

Example 1. Dial Tone Output

#!/usr/bin/env python

from gnuradio import gr
from gnuradio import audio

def build_graph ():
    sampling_freq = 48000
    ampl = 0.1

    fg = gr.flow_graph ()

    src0 = gr.sig_source_f (sampling_freq, gr.GR_SIN_WAVE, 350, ampl)
    src1 = gr.sig_source_f (sampling_freq, gr.GR_SIN_WAVE, 440, ampl)
    dst = audio.sink (sampling_freq)

    fg.connect ((src0, 0), (dst, 0))
    fg.connect ((src1, 0), (dst, 1))

    return fg

if __name__ == '__main__':
    fg = build_graph ()
    fg.start ()
    raw_input ('Press Enter to quit: ')
    fg.stop ()

We start by creating a flow graph to hold the blocks and connections between them. The two sine waves are generated by the gr.sig_source_f calls.
The f suffix indicates that the source produces floats. One sine wave is at 350 Hz, and the other is at 440 Hz. Together, they sound like the US dial tone. audio.sink is a sink that writes its input to the sound card. It takes one or more streams of floats in the range -1 to +1 as its input. We connect the three blocks together using the connect method of the flow graph. connect takes two parameters, the source endpoint and the destination endpoint, and creates a connection from the source to the destination. An endpoint has two components: a signal processing block and a port number. The port number specifies which input or output port of the specified block is to be connected. In the most general form, an endpoint is represented as a python tuple like this: (block, port_number). When port_number is zero, the block may be used alone. These two expressions are equivalent:

fg.connect ((src1, 0), (dst, 1))
fg.connect (src1, (dst, 1))

Once the graph is built, we start it. Calling start forks one or more threads to run the computation described by the graph and returns control immediately to the caller. In this case, we simply wait for any keystroke.

Example 2 shows a somewhat simplified but complete broadcast FM receiver. It includes control of the RF front end and all required signal processing. This example uses an RF front end built from a cable modem tuner and a 20M sample/sec analog-to-digital converter.

Example 2. Broadcast FM Receiver

#!/usr/bin/env python

from gnuradio import gr
from gnuradio import audio
from gnuradio import mc4020
import sys

def high_speed_adc (fg, input_rate):
    # return gr.file_source (gr.sizeof_short, "dummy.dat", False)
    return mc4020.source (input_rate, mc4020.MCC_CH3_EN | mc4020.MCC_ALL_1V)

#
# return a gr.flow_graph
#
def build_graph (freq1, freq2):
    input_rate = 20e6
    cfir_decimation = 125
    audio_decimation = 5
    quad_rate = input_rate / cfir_decimation
    audio_rate = quad_rate / audio_decimation

    fg = gr.flow_graph ()

    # use high speed ADC as input source
    src = high_speed_adc (fg, input_rate)

    # compute FIR filter taps for channel selection
    channel_coeffs = \
        gr.firdes.low_pass (1.0,         # gain
                            input_rate,  # sampling rate
                            250e3,       # low pass cutoff freq
                            8*100e3,     # width of trans. band
                            gr.firdes.WIN_HAMMING)

    # input: short; output: complex
    chan_filter1 = \
        gr.freq_xlating_fir_filter_scf (cfir_decimation,
                                        channel_coeffs,
                                        freq1,       # 1st station freq
                                        input_rate)

    (head1, tail1) = build_pipeline (fg, quad_rate, audio_decimation)

    # sound card as final sink
    audio_sink = audio.sink (int (audio_rate))

    # now wire it all together
    fg.connect (src, chan_filter1)
    fg.connect (chan_filter1, head1)
    fg.connect (tail1, (audio_sink, 0))

    return fg

def build_pipeline (fg, quad_rate, audio_decimation):
    '''Given a flow_graph, fg, construct a pipeline for demodulating
    a broadcast FM signal.  The input is the downconverted complex
    baseband signal.  The output is the demodulated audio.

    build_pipeline returns a two element tuple containing the input
    and output endpoints.
    '''
    fm_demod_gain = 2200.0/32768.0
    audio_rate = quad_rate / audio_decimation
    volume = 1.0

    # input: complex; output: float
    fm_demod = gr.quadrature_demod_cf (volume*fm_demod_gain)

    # compute FIR filter taps for audio filter
    width_of_transition_band = audio_rate / 32
    audio_coeffs = gr.firdes.low_pass (1.0,        # gain
                                       quad_rate,  # sampling rate
                                       audio_rate/2 - width_of_transition_band,
                                       width_of_transition_band,
                                       gr.firdes.WIN_HAMMING)

    # input: float; output: float
    audio_filter = gr.fir_filter_fff (audio_decimation, audio_coeffs)

    fg.connect (fm_demod, audio_filter)

    return ((fm_demod, 0), (audio_filter, 0))

def main (args):
    nargs = len (args)
    if nargs == 1:
        # get station frequency from command line
        freq1 = float (args[0]) * 1e6
    else:
        sys.stderr.write ('usage: fm_demod freq\n')
        sys.exit (1)

    # connect to RF front end
    rf_front_end = gr.microtune_4937_eval_board ()
    if not rf_front_end.board_present_p ():
        raise IOError, 'RF front end not found'

    # set front end gain
    rf_front_end.set_AGC (300)

    # determine the front end's "Intermediate Frequency"
    IF_freq = rf_front_end.get_output_freq ()  # 5.75e6

    # Tell the front end to tune to freq1.
    # I.e., freq1 is translated down to the IF frequency
    rf_front_end.set_RF_freq (freq1)

    # build the flow graph
    fg = build_graph (IF_freq, None)

    fg.start ()  # fork thread(s) and return
    raw_input ('Press Enter to quit: ')
    fg.stop ()

if __name__ == '__main__':
    main (sys.argv[1:])

Like the Hello World example, we build a graph, connect the blocks together and start it. In this case, our source, mc4020.source, is an interface to the Measurement Computing PCI-DAS 4020/12 high-speed ADC. We follow it with gr.freq_xlating_fir_filter_scf, a finite impulse response (FIR) filter that selects the FM station we're looking for and translates it to baseband (0 Hz, DC). With the 20M sample/sec converter and cable modem tuner, we're really grabbing something in the neighborhood of a 6 MHz chunk of the spectrum. This single chunk may contain ten or more FM stations, and gr.freq_xlating_fir_filter_scf allows us to select the one we want. In this case, we select the one at the exact center of the IF of the RF front end (5.75 MHz). The output of gr.freq_xlating_fir_filter_scf is a stream of complex samples at 160,000 samples/second. We feed the complex baseband signal into gr.quadrature_demod_cf, the block that does the actual FM demodulation. gr.quadrature_demod_cf works by subtracting the angle of each adjacent complex sample, effectively differentiating the frequency. The output of gr.quadrature_demod_cf contains the left-plus-right FM mono audio signal, the stereo pilot tone at 19 kHz, the left-minus-right stereo information centered at 38 kHz and any other sub-carriers above that. For this simplified receiver, we finish off by low-pass filtering and decimating the stream, keeping only the left-plus-right audio information, and send that to the sound card at 32,000 samples/sec. For a more in-depth look at how the FM receiver works, please see "Listening to FM, Step by Step."

Graphical interfaces for GNU Radio applications are built in Python. Interfaces may be built using any toolkit you can access from Python; we recommend wxPython to maximize cross-platform portability. GNU Radio provides blocks that use interprocess communication to transfer chunks of data from the real-time C++ flow graph to Python-land. GNU Radio is reasonably hardware-independent.
Today's commodity multi-gigahertz, super-scalar CPUs with single-cycle floating-point units mean that serious digital signal processing is possible on the desktop. A 3 GHz Pentium or Athlon can evaluate 3 billion floating-point FIR taps/s. We now can build, virtually all in software, communication systems unthinkable only a few years ago. Your computational requirements depend on what you're trying to do, but generally speaking, a 1 or 2 GHz machine with at least 256 MB of RAM should suffice.

You also need some way to connect the analog world to your computer. Low-cost options include built-in sound cards and audiophile-quality 96 kHz, 24-bit add-in cards. With either of these options, you are limited to processing relatively narrow-band signals and need to use some kind of narrow-band RF front end. Another possible solution is an off-the-shelf, high-speed PCI analog-to-digital board. These are available in the 20M sample/sec range, but they are expensive, about the cost of a complete PC. For these high-speed boards, cable modem tuners make reasonable RF front ends. Finding none of these alternatives completely satisfactory, we designed the Universal Software Radio Peripheral, or USRP for short.

Our preferred hardware solution is the Universal Software Radio Peripheral (USRP). Figure 2, "Universal Software Radio Peripheral", shows the block diagram of the USRP. The brainchild of Matt Ettus, the USRP is an extremely flexible USB device that connects your PC to the RF world. The USRP consists of a small motherboard containing up to four 12-bit 64M sample/sec ADCs, four 14-bit 128M sample/sec DACs, a million-gate field-programmable gate array (FPGA) and a programmable USB 2.0 controller. Each fully populated USRP motherboard supports four daughterboards, two for receive and two for transmit. RF front ends are implemented on the daughterboards. A variety of daughterboards is available to handle different frequency bands. For amateur radio use, low-power daughterboards are available that receive and transmit in the 440 MHz band and the 1.24 GHz band. A receive-only daughterboard based on a cable modem tuner is available that covers the range from 50 MHz to 800 MHz. Daughterboards are designed to be easy to prototype by hand in order to facilitate experimentation.

Figure 2. Universal Software Radio Peripheral

The flexibility of the USRP comes from the two programmable components on the board and their interaction with the host-side library. To get a feel for the USRP, let's look at its boot sequence. The USRP itself contains no ROM-based firmware, merely a few bytes that specify the vendor ID (VID), product ID (PID) and revision. When the USRP is plugged in to the USB for the first time, the host-side library sees an unconfigured USRP. It can tell it's unconfigured by reading the VID, PID and revision. The first thing the library code does is download the 8051 code that defines the behavior of the USB peripheral controller. When this code boots, the USRP simulates a USB disconnect and reconnect. When it reconnects, the host sees a different device: the VID, PID and revision are different. The firmware now running defines the USB endpoints, interfaces and command handlers. One of the commands the USB controller now understands is load the FPGA. The library code, after seeing the USRP reconnect as the new device, goes to the next stage of the boot process and downloads the FPGA configuration bitstream.
FPGAs are generic hardware chips whose behavior is determined by the configuration bitstream that's loaded into them. You can think of the bitstream as object code. The bitstream is the output of compiling a high-level description of the design. In our case, the design is coded in the Verilog hardware description language. This is source code and, like the rest of the code in GNU Radio, is licensed under the GNU General Public License. An FPGA is like a small, massively parallel computer that you design to do exactly what you want. Programming the FPGA takes a bit of skill, and mistakes can fry the board permanently. That said, we provide a standard configuration that is useful for a wide variety of applications.

Using a good USB host controller, the USRP can sustain 32 MB/sec across the USB. The USB is half-duplex. Based on your needs, you partition the 32 MB/sec between the transmit and the receive directions. In the receive direction, the standard configuration allows you to select the part or parts of the digitized spectrum you're interested in, translate them to baseband and decimate as required. This is exactly equivalent to what's happening in the RF front end, only now we're doing it on digitized samples. The block of code that performs this function is called a digital down converter (Figure 3, "Digital Down Converter Block Diagram"). One advantage of performing this function in the digital domain is that we can change the center frequency instantaneously, which is handy for frequency-hopping spread spectrum systems.

Figure 3. Digital Down Converter Block Diagram

In the transmit direction, the exact inverse is performed. The FPGA contains multiple instances of the digital up and down converters. These instances can be connected to the same or different ADCs, depending on your needs. We don't have room here to cover all the theory behind them; see the GNU Radio Wiki for more information. [...] Time Division Multiple Access (TDMA) waveforms [...]

Note: Exploring GNU Radio (original source; this translation and compilation is provided for reference only).
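As a footnote to the demodulation description above ("subtracting the angle of each adjacent complex sample"), the same math in portable C99 might look like the sketch below; the function name and gain parameter are illustrative, not GNU Radio API:

#include <complex.h>
#include <math.h>

/* FM discriminator: the instantaneous frequency is the angle between
 * adjacent complex baseband samples, recovered via a conjugate product. */
void quadrature_demod(const float complex *in, float *out, int n, float gain)
{
    float complex prev = in[0];
    for (int i = 1; i < n; i++) {
        float complex d = in[i] * conjf(prev);   /* phase difference */
        out[i - 1] = gain * atan2f(cimagf(d), crealf(d));
        prev = in[i];
    }
}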
http://gnuradio.microembedded.com/exploring-gnuradio
CC-MAIN-2017-17
en
refinedweb
#include <OMX_Audio.h>

OMX_AUDIO_PARAM_QCELP13TYPE -- QCELP13 (CDMA, EIA/TIA-733, 13.3 kbps coder) stream format parameters. Frame rate.
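A hedged sketch of using this structure through the standard OpenMAX IL entry points; the field and enumerator names follow the OpenMAX IL 1.x headers as we understand them, and hComp and port 0 are assumptions about your component:

#include <string.h>
#include <OMX_Core.h>
#include <OMX_Audio.h>
#include <OMX_Index.h>

OMX_ERRORTYPE configure_qcelp13(OMX_HANDLETYPE hComp)
{
    OMX_AUDIO_PARAM_QCELP13TYPE qcelp;
    OMX_ERRORTYPE err;

    memset(&qcelp, 0, sizeof(qcelp));
    qcelp.nSize = sizeof(qcelp);               /* required by the IL spec */
    qcelp.nVersion.s.nVersionMajor = 1;
    qcelp.nPortIndex = 0;                      /* assumed audio port */

    err = OMX_GetParameter(hComp, OMX_IndexParamAudioQcelp13, &qcelp);
    if (err != OMX_ErrorNone)
        return err;

    qcelp.eCDMARate = OMX_AUDIO_CDMARateFull;  /* the frame-rate field */
    return OMX_SetParameter(hComp, OMX_IndexParamAudioQcelp13, &qcelp);
}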
http://limoa.sourceforge.net/docs/1.0/structOMX__AUDIO__PARAM__QCELP13TYPE.html
CC-MAIN-2017-17
en
refinedweb
Subproblem Tutorial - Running Multiple Optimizations Using SubProblems

In this tutorial, we want to find the global minimum of a function that has multiple local minima, and we want to search for those local minima using multiple gradient based optimizers running concurrently. How might we solve this problem in OpenMDAO?

If we didn't care about concurrency, we could just write a script that creates a single Problem containing a gradient optimizer and the function we want to optimize, and have that script iterate over a list of design inputs, set the design values into the Problem, run it, and extract the objective values. If we want to run multiple optimizations concurrently, it turns out that OpenMDAO has a number of drivers, for example CaseDriver, LatinHypercubeDriver, UniformDriver, etc., that will run multiple input cases concurrently. But how can we use multiple drivers during an OpenMDAO run? To do that, we need to have multiple Problems, because in OpenMDAO, only a Problem can have a driver. OpenMDAO has a component called SubProblem, which is a component that contains a Problem and controls which of the Problem's variables are accessible from outside. We'll use one of those to contain the Problem that performs a gradient based optimization using an SLSQP optimizer, and we'll add that to our top level Problem, which will run multiple instances of our SubProblem concurrently using a CaseDriver.

Note: There is some overhead involved in using a SubProblem, so using one is not recommended unless your approach truly requires nested drivers. Some valid uses of SubProblem would be:
- collaborative optimization
- an optimizer on top of a DOE
- a DOE on top of an optimizer, a.k.a. multistart optimization (our case)
- a genetic algorithm driving a gradient based optimizer

Let's first create a Problem to contain the optimization of our function. Later, we'll use this Problem to create our SubProblem.

import sys
from math import pi
from openmdao.api import Problem, Group, Component, IndepVarComp, ExecComp, \
                         ScipyOptimizer, SubProblem, CaseDriver

sub = Problem(root=Group())
root = sub.root

Now let's define the function we want to minimize. In this case we've chosen a simple function with only one input and one output. It's a cosine function between the bounds of +/- pi that is modified so that the rightmost "valley" is slightly lower than the valleys to the left. Between the +/- pi bounds, there are only two valleys, so we have two local minima and one of those is global. The code below defines a component that represents our function, as well as an independent variable that the optimizer can use as a design variable. We put both of those in the root Group and connect our independent variable to our component's input.

# In the range -pi <= x <= pi
# function has 2 local minima, one is global
#
# global min is: f(x) = -1.31415926 at x = pi
# local min at:  f(x) = -0.69084489952 at x = -3.041593

# define the independent variable that our optimizer will twiddle
root.add('indep', IndepVarComp('x', 0.0))

# here's the actual function we're minimizing
root.add("comp", ExecComp("fx = cos(x)-x/10."))

# connect the independent variable to the input of our function component
root.connect("indep.x", "comp.x")

Now we'll set up our SLSQP optimizer. We first declare our optimizer object, then add our independent variable indep.x to it as a design variable, then finally add the output of our component, comp.fx, as the objective that we want to minimize.
sub.driver = ScipyOptimizer()
sub.driver.options['optimizer'] = 'SLSQP'
sub.driver.add_desvar("indep.x", lower=-pi, upper=pi)
sub.driver.add_objective("comp.fx")

The lower level Problem is now completely defined. Next we'll create the top level Problem that will contain our SubProblem. Also, and this is a little confusing, we add an independent variable top_indep.x to the root of our top level Problem, even though we already have an independent variable that will feed our function inside of our lower level Problem. We need to do this because an OpenMDAO driver can only set its design values into variables belonging to an IndepVarComp, and the IndepVarComp in the SubProblem is not accessible to the driver in the top level Problem.

prob = Problem(root=Group())
prob.root.add("top_indep", IndepVarComp('x', 0.0))

Now we create our SubProblem, exposing indep.x as a parameter and comp.fx as an unknown. indep.x must be a parameter on our SubProblem in order for us to connect our top level independent variable top_indep.x to it. It's OK that indep.x is in fact an unknown inside of our SubProblem.

prob.root.add("subprob", SubProblem(sub, params=['indep.x'],
                                    unknowns=['comp.fx']))
prob.root.connect("top_indep.x", "subprob.indep.x")

Next we specify our top level driver to be a CaseDriver, which is a driver that will execute a user defined list of cases on the model. A case is just a list of (name, value) tuples, where name is the name of a design variable and value is the value that will be assigned to that variable prior to running the model. We're using a CaseDriver here for simplicity, and because we already know where the local minima are found, but we could just as easily use a LatinHyperCubeDriver that would give us some random distribution of starting points in the design space.

Because the function we're minimizing in this tutorial has only two local minima, we'll create our CaseDriver with an argument of num_par_doe=2, specifying that we want to run 2 cases concurrently. We'll also add top_indep.x as a design variable to our CaseDriver, and add subprob.indep.x and subprob.comp.fx as response variables. add_response() is telling our CaseDriver that we want it to save the specified variables each time it runs an input case. Note that add_response() is just a convenience method and results in the creation of a memory resident data recorder in the CaseDriver.

Note: If you want to run lots of cases and/or the variables you want to record are large, you may want to use some other form of data recorder, e.g., SqliteRecorder, to record results to disk rather than storing them all in memory by using add_response(). Recorders can be added to a CaseDriver in the same way as for any other driver.

prob.driver = CaseDriver(num_par_doe=2)
prob.driver.add_desvar('top_indep.x')
prob.driver.add_response(['subprob.indep.x', 'subprob.comp.fx'])

Next we'll define the cases we want to run. The top_indep.x values of -1 and 1 will end up at the local and global minima when we run the concurrent subproblem optimizers.

prob.driver.cases = [
    [('top_indep.x', -1.0)],
    [('top_indep.x', 1.0)]
]

Finally, we set up and run the top level problem. Calling run() on the problem will run the concurrent optimizations.

prob.setup(check=False)
prob.run()

After running, we can collect the responses from our CaseDriver and the response with the minimum value of subprob.comp.fx will give us our global minimum.
optvals = []
[...]
# subprob.comp.fx = %s at subprob.indep.x = %s" % (global_opt['subprob.comp.fx'], global_opt['subprob.indep.x']))

Note: If we were trying to minimize a function where we didn't know all of the local minima ahead of time, there would be no guarantee that this approach would locate all of them, and therefore no guarantee that the minimum of our local minima would be the actual global minimum.

Putting it all together, it looks like this:

import sys
from math import pi
from openmdao.api import Problem, Group, Component, IndepVarComp, ExecComp, \
                         ScipyOptimizer, SubProblem, CaseDriver

class MultiMinGroup(Group):
    """
    In the range -pi <= x <= pi
    function has 2 local minima, one is global

    global min is: f(x) = -1.31415926 at x = pi
    local min at:  f(x) = -0.69084489952 at x = -3.041593
    """
    def __init__(self):
        super(MultiMinGroup, self).__init__()

        self.add('indep', IndepVarComp('x', 0.0))
        self.add("comp", ExecComp("fx = cos(x)-x/10."))
        self.connect("indep.x", "comp.x")

if __name__ == '__main__':
    # First, define a Problem to be able to optimize our function.
    sub = Problem(root=MultiMinGroup())

    # set up our SLSQP optimizer
    sub.driver = ScipyOptimizer()
    sub.driver.options['optimizer'] = 'SLSQP'
    sub.driver.options['disp'] = False  # disable optimizer output

    # In this case, our design variable is indep.x, which happens
    # to be connected to the x parameter on our 'comp' component.
    sub.driver.add_desvar("indep.x", lower=-pi, upper=pi)

    # We are minimizing comp.fx, so that's our objective.
    sub.driver.add_objective("comp.fx")

    # Now, create our top level problem
    prob = Problem(root=Group())
    prob.root.add("top_indep", IndepVarComp('x', 0.0))

    # add our subproblem.  Note that 'indep.x' is actually an unknown
    # inside of the subproblem, but outside of the subproblem we're
    # treating it as a parameter.
    prob.root.add("subprob", SubProblem(sub, params=['indep.x'],
                                        unknowns=['comp.fx']))
    prob.root.connect("top_indep.x", "subprob.indep.x")

    # use a CaseDriver as our top level driver so we can run multiple
    # separate optimizations concurrently.  This time around we'll
    # just run 2 concurrent cases.
    prob.driver = CaseDriver(num_par_doe=2)
    prob.driver.add_desvar('top_indep.x')
    prob.driver.add_response(['subprob.indep.x', 'subprob.comp.fx'])

    # these are the two cases we're going to run.  The top_indep.x values
    # of -1 and 1 will end up at the local and global minima when we run
    # the concurrent subproblem optimizers.
    prob.driver.cases = [
        [('top_indep.x', -1.0)],
        [('top_indep.x', 1.0)]
    ]

    prob.setup(check=False)

    # run the concurrent optimizations
    prob.run()

    [...]
    # subprob.comp.fx = %s at subprob.indep.x = %s" % (global_opt['subprob.comp.fx'], global_opt['subprob.indep.x']))
http://openmdao.readthedocs.io/en/1.7.3/usr-guide/tutorials/subproblem.html
CC-MAIN-2017-17
en
refinedweb
wcsstr - find a wide-character substring

Synopsis
    #include <wchar.h>
    wchar_t *wcsstr(const wchar_t *ws1, const wchar_t *ws2);

Description
    The wcsstr() function locates the first occurrence in the wide-character string pointed to by ws1 of the sequence of wide-characters (excluding the terminating null wide-character) in the wide-character string pointed to by ws2.

Return Value
    On successful completion, wcsstr() returns a pointer to the located wide-character string, or a null pointer if the wide-character string is not found. If ws2 points to a wide-character string with zero length, the function returns ws1.

Errors
    No errors are defined.

Examples
    None.

Application Usage
    None.

Future Directions
    None.

See Also
    wcschr(), <wchar.h>.

Derivation
    Derived from the ISO/IEC 9899:1990/Amendment 1:1995 (E).
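A minimal usage example (not part of the original page):

#include <stdio.h>
#include <wchar.h>

int main(void)
{
    const wchar_t *haystack = L"wide-character strings";
    const wchar_t *needle   = L"character";

    wchar_t *hit = wcsstr(haystack, needle);
    if (hit != NULL)
        wprintf(L"found at offset %td\n", hit - haystack);  /* prints 5 */
    else
        wprintf(L"not found\n");
    return 0;
}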
http://pubs.opengroup.org/onlinepubs/007908775/xsh/wcsstr.html
CC-MAIN-2017-17
en
refinedweb
#include <VertexFormat.h>

Defines the format of a vertex layout used by a mesh. A VertexFormat is immutable and cannot be changed once created.

Member summaries:
- Defines a set of usages for vertex elements.
- Constructor. The passed-in element array is copied into the new VertexFormat.
- Destructor.
- Gets the vertex element at the specified index.
- Gets the number of elements in this VertexFormat.
- Gets the size (in bytes) of a single vertex using this format.
- Compares two vertex formats for inequality.
- Compares two vertex formats for equality.
- Returns a string representation of a Usage enumeration value.
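Since the page above gives only member briefs, here is a conceptual C sketch of what such a vertex format amounts to: an ordered, immutable list of elements whose per-vertex size is the sum of its parts. The names are ours, not the gameplay3d API:

#include <stddef.h>

enum usage { POSITION, NORMAL, TEXCOORD0 };

struct element { enum usage usage; size_t size; /* float components */ };

struct vertex_format {
    const struct element *elements;   /* copied/frozen at creation */
    size_t element_count;
};

/* "Gets the size (in bytes) of a single vertex using this format." */
size_t vertex_size(const struct vertex_format *fmt)
{
    size_t bytes = 0;
    for (size_t i = 0; i < fmt->element_count; i++)
        bytes += fmt->elements[i].size * sizeof(float);
    return bytes;
}

/* Example: position (3) + normal (3) + one texcoord (2) = 32 bytes/vertex. */
static const struct element mesh_elems[] = {
    { POSITION, 3 }, { NORMAL, 3 }, { TEXCOORD0, 2 },
};
static const struct vertex_format mesh_fmt = { mesh_elems, 3 };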
http://gameplay3d.github.io/GamePlay/api/classgameplay_1_1_vertex_format.html
CC-MAIN-2017-17
en
refinedweb
The Apache Jackrabbit community is pleased to announce the release of Apache Jackrabbit 2.0 alpha7. The release is available for download at: See the full release notes below for details about this release.

Release Notes -- Apache Jackrabbit -- Version 2.0-alpha7

43 top level JCR 2.0 implementation issues are being tracked in the Jackrabbit issue tracker. Most of them have already been partially implemented, but the issue will only be marked as resolved once no more related work is needed.

Open (5 issues)
[JCR-1588] JSR 283: Access Control
[JCR-1590] JSR 283: Locking
[JCR-1712] JSR 283: JCR Names
[JCR-2085] test case (TCK) maintenance for JCR 2.0
[JCR-2208] update tests so that both Query.XPATH and Query.SQL are ...

Resolved (38 issues)
[JCR-1564] JSR 283 namespace handling
[JCR-1565] JSR 283 lifecycle management
...

Release Contents
----------------
This release consists of a single source archive packaged as a jar
http://mail-archives.apache.org/mod_mbox/jackrabbit-users/200908.mbox/%3C510143ac0908101501h7701f414s4836fb7c300b1d35@mail.gmail.com%3E
CC-MAIN-2017-17
en
refinedweb
Please be aware that Brothersoft do not supply any crack, patches, serial numbers or keygen for Multiple Conversion Calculator, and please consult directly with program authors for any problem with Multiple Conversion Calculator.
http://www.brothersoft.com/multiple-conversion-calculator-download-179278.html
CC-MAIN-2017-17
en
refinedweb
MASM32 Downloads

10 for i = 0 to 16
20 for j = i to 16
30 print "*";
40 next j
50 print
60 next i

? "Hello world"

And Four Years of MasmBasic: I don't know any other language capable of writing down a "hello world" like this:

Code: [Select]
? "Hello world"

include \masm32\MasmBasic\MasmBasic.inc
Init
Let esi="Hello World"
Let esi=esi+", how are you?"
EndOfCode

#include <stdio.h>
#include <string.h>
int main (void) {
    char *msg1 = "Hello JJ. ", *msg2 = "How are you having today?", msg3[80];
    strcpy (msg3, msg1);
    strcat(msg3, msg2);
    printf(msg3);
    return 0;
}

The best I found in the early/middle 90s was GFA Basic for 16-bit Windows, but with the advent of WinNT4/Win95 I found PowerBASIC. (? = PRINT in Basic.)

Hello avcaballero, what's up today? Here are two little files:

;;;; head
comment * -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
WINDOWS.INC for 32 bit MASM (Version 1.6 RELEASE January 2012)..
winextra.inc is the second part of the windows.inc file
http://masm32.com/board/index.php?PHPSESSID=3fc99124e2268f13f013469a2f588f5e&topic=5416.0
CC-MAIN-2018-43
en
refinedweb
The application I'm working on right now has a search box that makes suggestions as the user types and does quick, inline searches to provide extra-fast results. Yesterday, I talked about how we improve our timing with debouncing. Today I'll dive into the technical details of how we built the autocomplete behavior using React-Redux and Apollo.

Implementation

We're working with a React-Redux front end connected to a GraphQL server via Apollo, in TypeScript. If you're not familiar with TypeScript, the below should still be fairly readable; if you're not familiar with the other technologies mentioned, it probably won't be. If all of the above sounded like a nerdy word-salad, check out my friend Drew's post on the TypeScript/GraphQL/ReactJS stack.

The application is made up of React components, which take advantage of react-apollo to drive their props from the result of a GQL call on render. React components re-render when their props change, so react-apollo connected components automatically rerun their queries every time they get new props. Our search box component needs to make a GQL query, but an input component gets new props with every key entered. As mentioned above, we don't want to run the autosuggestion query tens of times in a second, and we certainly don't want to render the autosuggestion text once for every new result that comes back. To debounce the query calls effectively, we need more fine-grained control of when the query is made. An early attempt at this involved tinkering with the shouldComponentUpdate function of the component, but because it needs to update some things on every keystroke, this got hairy quickly. So, we departed from our usual react-apollo pattern that automatically makes queries on render, and built a function that we could debounce.

The Query

To start, we have an AutoSuggestingSearchBox component that takes in its props, among other things, a handle to our ApolloClient. It creates a lambda, getAutoSuggestion, that closes over that client and makes the autosuggestion query for a term in the search box. It looks something like this:

function AutoSuggestingSearchBox(
  props: {
    client: ApolloClient;
  } & OtherProps
) {
  const getAutoSuggestion = async (
    term: string
  ): Promise<string | undefined> => {
    const query = require("./autosuggest.graphql");
    const results = await props.client.query<AutoSuggestQuery>({
      query,
      variables: { term }
    });
    return results.data.searchSuggestion;
  };

  return (
    <SearchBox getAutoSuggestion={getAutoSuggestion} {...props} />
  );
}

Okay, so we have a function that gets an autosuggestion for a given term. Now, when do we call it? You'll see above that the SearchBox component is taking that getAutoSuggestion function that we just made. Let's see what it does with that.

Actions and State

We keep the current autoSuggestion string in part of our Redux store:

export interface SearchboxState {
  enteredText: string;
  suggestion?: string;
  [...]
}

So, in Redux style, we want to dispatch an action that makes the query and updates that state when we get a keypress. The autoSuggestion query is an async function, so this action needs to be asynchronous. We use thunk for this, but there's more than one way to skin that cat.
The async action we're going to dispatch looks like this:

export function queryUpdated(
  text: string,
  getAutoSuggestion: (text: string) => Promise<string | undefined>
) {
  return async (dispatch: Dispatch<any>) => {
    let suggestion = undefined;
    try {
      if (text) {
        suggestion = await getAutoSuggestion(text);
      }
    } catch (e) {
      suggestion = undefined;
    }
    dispatch(updateSuggestion(suggestion)); //* See below
  };
}

* We're using the action builder pattern described here. Dispatch an action however you like to do so. Our suggestionUpdated action updates the state of the SearchBox with a new suggestion string.

Debouncin' It

You'll note that we still haven't gotten to the debouncing part! We'll do this inside our mapDispatchToProps function on the SearchBox component. We'll create a function that closes over our dispatch and dispatches our new thunk, and debounce /that/. Here's what it looks like:

interface ExternalProps {
  [...]
  getAutoSuggestion: (searchText: string) => Promise<string | undefined>;
}

[...]

function mapDispatchToProps(
  dispatch: Dispatch<any>,
  ownProps: ExternalProps
): DispatchProps {
  const updateAutoSuggestion = debounce((text: string) => {
    dispatch(
      Actions.queryUpdated(
        text,
        ownProps.getAutoSuggestion
      )
    );
  }, 100);

  return {
    onSearchTextChanged: updateAutoSuggestion,
    [...]
  };
}

The operative lesson here is that we're debouncing the dispatching of the action that makes the query, not the query itself. This protects us from accidentally tinkering with the state when we end up throwing out a query, and it means that the presentational component for the SearchBox now has a handle to a single function to call every time the text is updated - nice and tidy.

3 Comments

I don't usually write comments after reading a guide/tutorial post. In fact, this is my first time, and I'm doing so because this is, by far, the most technically sophisticated tutorial I could find out there. Combined with good writing and code clarity, this is great stuff (kudos Rachel!). Too many guides elsewhere don't suit my needs, but this one has perfectly captured mine. I am currently developing a React app with Redux, Apollo, and TypeScript, and Lodash (and other stacks, obviously). Thank you! *on to reading other articles*

Thanks, Ionwyn! I'm so thrilled to hear it was helpful. One of my biggest concerns writing this post was whether there was even anyone on earth who might happen to need to do something exactly like this, in this exact stack. It's good to know that there is! It'd be interesting to trade notes on what working with React/Apollo/TypeScript has been like for you - we've been using this stack extensively at Atomic and it's been working out really well for us.
At the moment, there’s absolutely no way I’m using apollo-link-state until it matures. I think the most painful part was getting AWS Lambda to work with Apollo. But then again, most documentations for AWS Services are counterintuitive :) I hope to see more interesting articles like this from you and the team at Atomic!
https://spin.atomicobject.com/2018/06/05/autocomplete-react-redux-apollo/
CC-MAIN-2018-43
en
refinedweb
Bug Description Since Kernel 3.0 I can not watch TV with my MyGica S870 USB Tuner. I use VLC. Usually I can not see any channel. Rarely I can see a channel but then stops working. This problem does not happen with 2.6.38 Kernel. All this I said I tested with Kubuntu 64bits: 10.04 Lucid, 11.04 Natty and 11.10 Oneiric. *Additional information: $ lsmod | grep -i dvb dvb_usb_dib0700 114669 0 dib7000p 39109 1 dvb_usb_dib0700 dib0090 33392 2 dvb_usb_dib0700 dib7000m 23415 1 dvb_usb_dib0700 dib0070 18434 1 dvb_usb_dib0700 dvb_usb 24444 1 dvb_usb_dib0700 dib8000 43019 2 dvb_usb_dib0700 dvb_core 110616 3 dib7000p, dib3000mc 23392 1 dvb_usb_dib0700 dibx000_common 14574 5 dvb_usb_ rc_core 26963 11 rc_dib0700_ $uname -r 3.0.0-7-generic The driver is in the package "linux- Possible firmware used: dvb-usb- dvb-usb- *Similar problems related: http:// https:/ Well, I do not understand very well the previous automated message. I do not think apport may add useful information to this bug, and so you can see on the bugzilla.kernel.org link, is a confirmed bug. This should be the same issue stated at http:// Patches are available ( http:// I just bought a Hauppauge Nova-TD, which doesn't seem to work and uses the same module, so I'll test the patches. Hmm, mine seems to work fine after all. I have installed Ubuntu 11.10 64bit and I have problem with this bug. Last kernel 3.0.0.12 and I have Winfast DTV Dongle (dib0700). In dmesg I have this: dib0700: tx buffer length is larger than 4. Not supported Could someone patch the kernel and build the packages for us to see if it works? Otherwise, Ubuntu Oneiric will be released with a kernel where this popular chip does not work. Thanks. Would it be possible for you to test the latest upstream kernel? It will allow additional upstream developers to examine the issue. Refer to https:/ Thanks in advance. the latest Oneiric kernel (3.0.0-12.20) and the mainline kernel (3.1.0- "dmesg" shows: dib0700: tx buffer length is larger than 4. Not supported. The link below mentions that the patch fixes the problem: http:// In the link I had written earlier and is now impossible to enter (https:/ 3.0.0-12.20, this issue still exists. And after applying the patches on top of 3.0.0-12.20, this issue is gone. @Jesse Can you attach the patches to this bug report? @Joseph, Patches can be pulled in by git pull http:// Three commits: http:// http:// http:// Hello, might my problem with the installation of a Cinergy T Stick Black also be a kernel related issue ? I am using 11.10 (32 bit) and get errors which don't seem to occur with 64 bit versions. Based on the installation description under : http:// I get the messages : . make -C /usr/src/ make[1]: Betrete Verzeichnis '/usr/src/ Building modules, stage 2. MODPOST 1 modules WARNING: "__udivdi3" [/home/ WARNING: "__umoddi3" [/home/ WARNING: "__divdi3" [/home/ make[1]: Verlasse Verzeichnis '/usr/src/ ubuntu@ cp dvb-usb-rtl2832u.ko /lib/modules/`uname -r`/kernel/ depmod -a . which later results via : dmesg | tail -n 30 . . [ 47.069721] EXT4-fs (sda3): re-mounted. 
Opts: errors=
[22954.109240] usb 1-4: USB disconnect, device number 3
[23156.272071] usb 1-4: new high speed USB device number 5 using ehci_hcd
[23157.163377] dvb_usb_rtl2832u: Unknown symbol __divdi3 (err 0)
[23157.163443] dvb_usb_rtl2832u: Unknown symbol __umoddi3 (err 0)
[23157.163474] dvb_usb_rtl2832u: Unknown symbol __udivdi3 (err 0)
[23157.171572] dvb_usb_rtl2832u: Unknown symbol __divdi3 (err 0)
[23157.171638] dvb_usb_rtl2832u: Unknown symbol __umoddi3 (err 0)
[23157.171669] dvb_usb_rtl2832u: Unknown symbol __udivdi3 (err 0)

Other forums suggest that this may well be a kernel issue. Please advise on how to proceed now.

Well, updating to kernel 3.0.4 didn't help either. I still get the same messages. Any ideas, anybody?

Ok, I expected a quick solution to this problem. Meanwhile, people with this problem can install new versions of the drivers with these instructions: http:// That is, install "git" and the basic build dependencies, then:

git clone git://linuxtv.
cd media_build
./build
sudo make install

Nope, didn't work for me: after sudo make install (T Stick not connected yet), dmesg:

[ 269.732128] usb 1-3.4: new high speed USB device number 6 using ehci_hcd
[ 269.923917] dvb_usb_rtl2832u: disagrees about version of symbol dvb_usb_device_init
[ 269.923931] dvb_usb_rtl2832u: Unknown symbol dvb_usb_device_init (err -22)
[ 269.923946] dvb_usb_rtl2832u: disagrees about version of symbol dvb_usb_device_exit
[ 269.923954] dvb_usb_rtl2832u: Unknown symbol dvb_usb_device_exit (err -22)
[ 269.949394] dvb_usb_rtl2832u: disagrees about version of symbol dvb_usb_device_init
[ 269.949409] dvb_usb_rtl2832u: Unknown symbol dvb_usb_device_init (err -22)
[ 269.949424] dvb_usb_rtl2832u: disagrees about version of symbol dvb_usb_device_exit
[ 269.949432] dvb_usb_rtl2832u: Unknown symbol dvb_usb_device_exit (err -22)
[ 905.727724] usb 1-3.4: USB disconnect, device number 6

@ezilg, It seems that your device has a different chip than the one mentioned in this bug. Are you sure your problem is related to this report? Installing the drivers from git solves the problem for "dvb_usb_dib0700" with the error message: dib0700: tx buffer length is larger than 4. Not supported. For your chip I have only found this: http://

Well, first of all, I translated it into German and, if requested, I can supply an English version as well. The sad part about this is: I got error messages again:

make[3]: *** [/home/
make[2]: *** [_module_
make[2]: Leaving directory `/usr/src/
make[1]: *** [default] Fehler 2
make[1]: Verlasse Verzeichnis '/home/
make: *** [all] Fehler 2
ubuntu@
ubuntu@
/home/ubuntu/
/home/ubuntu/
/lib/modules/
/lib/modules/

The problem in "dvb_usb_dib0700" has been fixed in the 3.2 kernel.

$ uname -r
3.2.0-030200rc1
http://

@ezilg: I've had the same problems ("Unknown symbol __divdi3" etc.) with my DVB dongle "Trekstor Terres". This dongle uses the RTL2832U along with the tuner FC0012. I use Ubuntu 10.10 with kernel "3.0.0-12-generic". I am using the RTL2832 driver from "https:/ Some of the source files for the RTL2832U driver (e.g. "rtl2832u_fe.c") are using 64-bit divisions with the normal operators "/" and "%" (remainder). These operators are converted by gcc to libgcc functions (e.g. "__divdi3" for signed 64-bit operands).
But kernel modules can not use / don't have access to the libgcc functions (see "http://

- Copy the attached files "div64_wrap.h" and "div64_wrap.c" into the folder "...linux/
- Add "div64_wrap.o" to "Makefile" in the same directory ("dvb-usb-
- Add "EXTRA_LDFLAGS += --wrap __udivdi3 --wrap __divdi3 --wrap __moddi3 --wrap __umoddi3" to "Makefile" in the "v4l" directory:

...
# CFLAGS configuration
ifeq ($(CONFIG_
EXTRA_CFLAGS += -I$(srctree)
endif
EXTRA_CFLAGS += -g
EXTRA_LDFLAGS += --wrap __udivdi3 --wrap __divdi3 --wrap __moddi3 --wrap __umoddi3
...

After this my USB dongle worked (I still have performance problems, but that's another issue).

"div64_wrap.h":

/******
 File: div64_wrap.h
 Description: Wrapper functions for some 64-bit division functions of libgcc (e.g. "__divdi3"). That means that the functions declared here are used instead of the original ones. The LD options "--wrap=__divdi3 --wrap __udivdi3 --wrap __moddi3 --wrap __umoddi3" have to be used!
 Reason: Building modules for e.g. the Ubuntu kernel "3.0.0-12-generic" using integer 64-bit divisions with the "/" or "%" operator fails at linker stage with e.g. 'WARNING: "__divdi3" [dvb-usb-rtl2832u.ko] undefined!'
 Problem: gcc is using the "divdi" functions defined in libgcc. But kernel modules can not use the libgcc functions!
 Author: Stefan Bosch
 Date: 2011-11-11
 ******/

#include <linux/math64.h>

unsigned long long __wrap___udivdi3(...);
long long __wrap___divdi3(...);
unsigned long long __wrap___umoddi3(...);
long long __wrap___moddi3(...);

// ******* EOF

"div64_wrap.c": see attachment.

Addition to my former Post #22: The header file "div64_wrap.h" is not necessary, therefore delete the include in "div64_wrap.c".

The result with 'make' is unfortunately still the same, despite the changes. It all starts with the following errors:
/home/ubuntu/
/home/ubuntu/
...
etc.
/home/ubuntu/
...
See my two attachments. Sorry, the previous Makefile is the wrong one.

@ezilg: I had these compiler errors with the "wrong" RTL2832U-

Hi, This is a patched amd64 kernel: http:// http:// http:// Please test it if you have time and let me know if it works or not. Thanks

Yafu, The latest kernel update (3.0.0-14.23) brings two patches:
[media] DiBcom: protect the I2C bufer access
[media] dib0700: protect the dib0700 buffer access
Could you see if that helps?

@julianw works for me now! (dib0700)

Works for me now too. In my case the new kernel has helped me to tune 2 transponders, but I'm still unable to tune all the existing ones (they worked before, of course). I get timeouts when executing the tool "scan". For example, this is the output for one transponder:
>>> tune to:
>>> tune to: 482000000:
>>> tuning status == 0x0e
>>> tuning status == 0x1e
WARNING: filter timeout pid 0x0011
WARNING: filter timeout pid 0x0000
WARNING: filter timeout pid 0x0010

Is fixed for me with the latest "3.0.0-14" kernel. I can scan and watch TV. I do not get the error with "dmesg" anymore. Regards.

Works for me. Ubuntu 11.10 in both 32-bit and 64-bit systems. Tested with a Pixelview SBTVD dongle.

This bug is missing log files that will aid in diagnosing the problem. From a terminal window please run: apport-collect 838130.
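For reference, the kernel-friendly way to express such divisions (what the wrappers above forward to) uses the helpers from linux/math64.h; this is a sketch with invented names and values:

#include <linux/kernel.h>
#include <linux/math64.h>

/* On 32-bit kernels a plain 64-bit '/' or '%' makes gcc call libgcc's
 * __divdi3/__moddi3, which modules cannot link against; div_u64 and
 * div64_u64 avoid that. */
static u32 bitrate_mbps(u64 bytes, u64 nanoseconds)
{
    u64 bits = bytes * 8;
    u64 usecs = div_u64(nanoseconds, 1000000);  /* 64/32 divide */

    if (!usecs)
        return 0;
    return (u32)div64_u64(bits, usecs);         /* 64/64 divide: bits/us = Mbit/s */
}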
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/838130
CC-MAIN-2018-43
en
refinedweb
WCF: Connecting Web services with the System.ServiceModel

In this technical article, William Tay outlines the Windows Communication Foundation and how it helps developers build full-fledged, secure Web services. The new System.ServiceModel namespace, which is a core component of Windows Vista, will change the way .NET developers look at building and securing services.

Before developers can get going with full-fledged secure Web services, they need to know how to enable one Web service to talk to another Web service, and Microsoft's new Windows Communication Foundation, or WCF, can accomplish that. It does not take much more than a quick sneak peek inside WCF and its design objectives to see the benefits of its architecture.

One of the keys to building a successful application is to build one that endures. While this sounds simple, it is a tall task to fulfill, especially when business environments are so fluid these days. To make matters worse, there is a plethora of technical architectures out there, each catering best to different scenarios. WCF is designed with the primary objective of solving some of those problems by unifying all the Microsoft technology stacks out there, such as EnterpriseServices (COM+), System.Messaging (MSMQ), interoperable basic Web services (ASMX) and the advanced WS-* stack (WSE), with a single programming model.

The entire architecture of WCF revolves around the concept of a Message type, which is nothing but an abstraction of a wire-level SOAP message, SOAP being an XML-based message format. All the developers know is that solutions they build with WCF will generate SOAP messages on the wire that will interoperate with application solutions from other technology and platform vendors out there.

The Architecture: Layers and Channels

To put it simply, WCF solutions are built in two layers. The bottom layer is the messaging infrastructure, which concerns itself with the movement of messages from one point to another. This layer is composed of classes found within the System.ServiceModel namespace. Several extensibility points can customize the behavior of those classes. The second layer, the programming model, builds on top of these extensibility points. It consists of a second set of classes within System.ServiceModel, and they are the primary classes for WCF developers to program against when building WCF applications.

The WCF programming model was designed to achieve a few fundamental objectives. One key objective was to provide a standard interface to the interconnecting networks between the services. In other words, you do not have to go through the "modify code, compile, build, test and deploy" cycle if there is a requirement to switch transports or networks, such as from an HTTP transport to a TCP one, or between a network where messages must be encrypted and one where they are not.

Another related key design objective of WCF was for applications and services built on top of it to endure. It accomplishes that by hiding constructs that may turn out to be contemporary artifacts of current Web services specifications behind programming concepts that expose real, useful business functionality. Thus, there is a clear abstract boundary between the development of business details and the deployment of technical ones. By not mixing business logic code with technical details, solutions are cleaner and more concise, and have a better chance of being agile enough to survive in the fluid service-oriented environments of today and tomorrow.
These objectives are accomplished through a layering architecture as a central design principle, which in essence means that WCF is a collection of sub-frameworks that can each be extended or even replaced by developers and partners. At the 40,000-foot view, WCF consists of two larger frameworks: the Typed Layer and the Channel Layer.

The fundamental unit of data for the Channel Layer is the Message. Channels are used to send and receive Messages. While the Message is a very flexible object, developers will require knowledge of Infosets and XML to fully utilize it. Therefore, what most developers will most likely do is depend on the Typed Layer as their best friend and interface with it to send messages across the wire. The corresponding fundamental unit of data for the Typed Layer is a Common Language Runtime class. In the Typed Layer, you create CLR objects that implement either services or typed proxies. This layer will then convert parameters and return values into Message objects, and method calls into Channel calls. In this way, the Typed Layer builds on and extends the functionality of the Channel Layer, and can transparently leverage any changes and/or improvements made to Channels. This layering architecture goes a long way in fulfilling the fundamental design objectives I mentioned above.

Show me the Contract

WCF applications consist of services and their clients (consumers). Services define endpoints for consumers to communicate with them. An endpoint comprises an address, a binding and a contract, which are fondly known as the A, B, Cs of WCF. Briefly, the address is the location of the service, while the binding specifies the protocols and transports used to communicate with the service. The contract is what the developer is primarily concerned with. The developer defines the contract, implements it and allows the service to be hosted. There are a few types of contracts in WCF, which can be grouped under structural or behavioral contracts. Let us define an Operation Contract first:

Table 1

An operation contract defines an operation and is a unit of work for the service. It specifies, by default, a request and a reply message exchange pattern (MEP). As you can see from the code sample above, WCF's concept of an operation contract maps tightly to a Web Services Description Language (WSDL) definition of an operation, which in turn corresponds roughly to a CLR method/function. Next, we look at a simple Service Contract:

Table 2

On the same scale, the concept of a Service Contract corresponds roughly to portTypes in WSDL. It is essentially a collection of operation contracts. Incidentally, what I just did above was define a service contract via a contract-first approach. This is done by declaring a .NET interface, decorating it with the ServiceContract attribute, and then defining a class that implements this contract:

Table 3

This is in contrast to a code-first approach, where one can decorate a new or existing class and its methods directly with WCF attributes. I would not recommend this, as it tightly binds an interfacing contract with its implementing class in the CLR without a façade layer. Take note that the contract-first approach I described above is slightly different from a WSDL-first approach, where you actually author a WSDL file directly instead of letting the underlying platform generate the WSDL file via attribute programming and declaration.
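The code listings labeled Tables 1-3 above did not survive in this copy of the article. As a rough reconstruction based purely on the surrounding description (the contract name, operation and implementing class are illustrative guesses, not the author's original code), they plausibly looked something like:

using System.ServiceModel;

// Tables 1 and 2 (sketch): a service contract declared contract-first on a
// .NET interface. Each method marked [OperationContract] is a unit of work
// and uses the default request/reply message exchange pattern.
[ServiceContract]
public interface ICreditCardService
{
    [OperationContract]
    bool Authorize(CreditCard card, decimal amount);
}

// Table 3 (sketch): a class implementing the contract, keeping the
// interface (the contract) separate from its CLR implementation.
public class CreditCardService : ICreditCardService
{
    public bool Authorize(CreditCard card, decimal amount)
    {
        // business logic would go here
        return amount <= 1000m;
    }
}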
Although I would advocate the style of a WSDL-first approach, where one can assert more control over the WSDL contract and schema, especially in interoperability scenarios, in the absence of better tools, and without developers having to muck around with the specifications, profiles and angle brackets directly, WCF's definition of a contract-first approach is really a great step forward.

Next up would be to define my custom CreditCard class, which is simply a .NET CLR type:

Table 4

The DataContract is a structural contract type in WCF, which is defined by adding DataContract and DataMember attributes to your .NET classes. Both attributes are defined in the System.Runtime.Serialization namespace. What a Data Contract declaration does is specify how a .NET type is represented in an XML Schema. To see it from another view, two different software entities can have different internal class definitions, yet agree on the same abstract representation of that data. In this case, both sides can translate the data to and from those internal representations into the same physical manifestation of the abstract representation, which can then facilitate communication and interoperability. The XML Schema is most commonly used to compose abstract representations of data to be exchanged between two entities, with XML being the physical manifestation of this abstract representation.

Once all that is set up, it is time to specify the hosting container and then host that service. Read Connecting Web services with the System.ServiceModel (cont.) - Playing the perfect host.
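Table 4, the CreditCard data contract, is likewise missing from this copy. A plausible reconstruction, with member names assumed for illustration, is:

using System.Runtime.Serialization;

// Table 4 (sketch): a plain CLR type whose wire representation is pinned
// down by [DataContract]/[DataMember], so both sides of an exchange can
// agree on one XML Schema while keeping their own internal definitions.
[DataContract]
public class CreditCard
{
    [DataMember]
    public string Number;

    [DataMember]
    public string HolderName;

    [DataMember]
    public int ExpiryMonth;

    [DataMember]
    public int ExpiryYear;
}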
https://searchwindevelopment.techtarget.com/news/1148900/WCF-Connecting-Web-services-with-the-SystemServiceModel
CC-MAIN-2018-43
en
refinedweb
Wibbs
Member, Community Reputation: 129 Neutral

[libgdx] Black screen when toggling from fullscreen to windowed mode
Wibbs posted a topic in For Beginners's Forum

Hey all, I have just started working with libgdx, and have run into my first problem. I have a simple program that switches a desktop application from windowed mode to fullscreen and back again. If I start the program in windowed mode, it initially displays fine. When I toggle to fullscreen it is also OK. However, when I toggle back to windowed mode the screen goes completely black. The application remains responsive though, and if I toggle again it will go back to fullscreen and display correctly. I am using 64-bit Windows 7 with the IntelliJ IDEA IDE. Any help with working out why it is doing this would be greatly appreciated. A minimal example that displays the issue is shown below:

public class MyGdxGame extends ApplicationAdapter {
    @Override
    public void create () {
        fullScreen = false;
    }

    @Override
    public void render () {
        Gdx.gl.glClearColor(1, 0, 0, 1);
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
        if (Gdx.input.isKeyPressed(Input.Keys.TAB)) {
            fullScreen = !fullScreen;
            DisplayMode currentMode = Gdx.graphics.getDesktopDisplayMode();
            Gdx.graphics.setDisplayMode(currentMode.width, currentMode.height, fullScreen);
        }
    }

    private boolean fullScreen;
}

Matrix problem
Wibbs posted a topic in Math and Physics

Hey all, I have a matrix problem that I've been trying to work out for about a week now but with no success, and I was hoping someone might be able to shed some light on it for me. I have a scene graph where individual nodes store their scale, rotation and translation relative to their parent node. I also store the total forward and reverse transformation for each node so I can convert freely between global and local coordinates. What I am struggling with is how to rotate a particular node around an arbitrary point in global coordinates. I know how to get:
- The matrix that represents the rotation about the point in global coordinates (Trans(-point) * Rotate * Trans(point))
- The global coordinates of the node I want to rotate
What I don't understand how to calculate is the change in local scale, rotation and translation for the node being rotated that gives the equivalent transformation. Any help would be greatly appreciated. Wibbs
To be honest I am completely stumped on this and would welcome any guidance. Thanks, Wibbs Question about Translation, Rotation and Scaling in 2D Wibbs posted a topic in Math and PhysicsHi all, I have a 2D matrix problem that I am struggling with, and I would be grateful for any help you can give... I have a shape made up of a number of points with a given specified centre. I then apply an arbitary number of the following operations to it in a particular order: - Translation - Rotation about an arbitary point - Scale relative to an arbitary point I know how to represent each of these by a matrix or set of matrices, and I can concatenate the operations to give a single, composite matrix. However, what I would like to be able to do is represent the combination of the various operations I have applied as: - A single translation from the origin followed by - A single scale relative to the origin followed by - A single rotation about the specified centre of the object Is this possible? If so, I would be extremely grateful for any pointers. Thanks, Wibbs Client/Server with scripting design question Wibbs replied to Wibbs's topic in General and Gameplay ProgrammingNo, not UI widgets - I have good clean seperation for these. The things I am talking about are actual 'actors' within the game world. The game is card based, and certain aspects of their placement are sometimes user configurable. What I would like to be able to do is have the overall algorithm that decides the placement of the cards on the server, but for there to be parameters within it which individual clients can have set differently (typically within certain bounds). Different clients can set these seperately for the same actors, which could and probably would lead to the position of some actors varying from client to client. Client/Server with scripting design question Wibbs replied to Wibbs's topic in General and Gameplay ProgrammingIn the example I was considering they are deciding the way elements of the game are positioned on the screen. Although the server has ownership of the overall algorithm that is used, clients have some configurability with some of the paramaters that the algorithm uses. Client/Server with scripting design question Wibbs posted a topic in General and Gameplay ProgrammingHi all, I have successfully implemented a simple client server model within my game, which also makes heavy use of Lua scripting. However, I've run into an issue I would appreciate some advice on. There are a number of cases where the server has an algorithm, which is represented by a Lua function loaded from script, but individual clients can have different values for the parameters that drive it, resulting in different algorithm outputs. What I am stuck with is how to make sure that the algorithm is run with the correct parameters for each client. The only solution I can think of is for the server to request copies of the parameters from each client and store them ready for use, but this seems very inelegent, and I was wondering if there was a solution I am overlooking. Any advice would be extremely helpful. Thanks, Phil Can This Be Done? [SFML] Wibbs replied to Jethro_T's topic in For Beginners's ForumThis is possible with the most recent version of SFML, which allows you to render directly to images. PATrainwreck is correct though - the SFML forums would be a much better place to ask this kind of thing, and the developer is very good at responding quickly. 
[C#] Detecting changes to files in folders on two machines
Wibbs replied to Wibbs's topic in General and Gameplay Programming

Thanks. That's the conclusion I'd come to, but I wanted to check whether there was a way the contents of a folder could be treated as a single entity rather than having to hash each file separately. Wibbs

[C#] Detecting changes to files in folders on two machines
Wibbs posted a topic in General and Gameplay Programming

Hey all, The game I am writing uses a client/server architecture, and I would like to add a check that the server initiates to see if some key files on the client's machine are what they should be. I would appreciate any pointers as to the most efficient way of doing this - at present I have managed to arrange the data so that all files within a couple of folders need checking, and I was also wondering if there is a more efficient way of doing this rather than checking each file individually. Thanks in advance, Wibbs

[C#] Question about where to save different types of file on a user's computer
Wibbs replied to Wibbs's topic in General and Gameplay Programming

Cool, thanks - would it be preferable to use the shared app directory or the user-specific one? [Edited by - Wibbs on November 30, 2010 5:29:17 AM]

- Hi all, I am working on a game where the user will be required to download content of different types, and I would like to ask for clarification on where the best place is to save this information. There will be both official and user-made game levels, and I am a little unsure of where to save them. At present they are going into a folder in:

C:\Documents and Settings\All Users\Application Data

which I am accessing through:

Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData)

I have a couple of queries: Are all users guaranteed to be able to access this folder for reading and writing? Would it be preferable to save to the specific user's application data folder? If so, then is it OK to expect each Windows user account to have to download content separately? I am using Windows XP. Are there any variations in practice between this, Vista and Windows 7? Thanks in advance for your help. Phil

[.net] Testing whether a given IP address is on the local network
Wibbs replied to Wibbs's topic in General and Gameplay Programming

Thanks for the pointers. This is the rough code I've ended up with...
First I get info on the local host's network interfaces:

hostIPAddressesAndSubnetMasks = new Dictionary<IPAddress, IPAddress>();

// get the network interfaces for the host machine
NetworkInterface[] hostNetworkInterfaces = NetworkInterface.GetAllNetworkInterfaces();

// for each interface, loop through and get all of the unicast IP addresses
// and their associated subnet masks
foreach (NetworkInterface interfaceToCheck in hostNetworkInterfaces)
{
    if (interfaceToCheck.NetworkInterfaceType == NetworkInterfaceType.Loopback)
    {
        continue;
    }

    UnicastIPAddressInformationCollection IPAddressInfoCollection = interfaceToCheck.GetIPProperties().UnicastAddresses;

    foreach (UnicastIPAddressInformation info in IPAddressInfoCollection)
    {
        if (!hostIPAddressesAndSubnetMasks.ContainsKey(info.Address))
        {
            hostIPAddressesAndSubnetMasks.Add(info.Address, info.IPv4Mask);
        }
    }
}

I have a utility function which returns the network address for a given IP address and subnet mask:

protected IPAddress GetNetworkAddressFromIPAddress(IPAddress address, IPAddress subnetMask)
{
    byte[] ipAddressAsBytes = address.GetAddressBytes();
    byte[] subnetMaskAsBytes = subnetMask.GetAddressBytes();

    if (ipAddressAsBytes.Length != subnetMaskAsBytes.Length)
    {
        throw new ArgumentException("address and subnet mask must be the same length");
    }

    byte[] networkAddress = new byte[ipAddressAsBytes.Length];

    // AND each byte of the address with the corresponding byte of the mask
    for (int i = 0; i < networkAddress.Length; i++)
    {
        networkAddress[i] = (byte)(ipAddressAsBytes[i] & subnetMaskAsBytes[i]);
    }

    return new IPAddress(networkAddress);
}

And finally, I have a function which checks a given address as follows:

public bool IsInternal(int ipAddress)
{
    IPAddress addressToCheck = new IPAddress(ipAddress);

    foreach (KeyValuePair<IPAddress, IPAddress> localAddress in hostIPAddressesAndSubnetMasks)
    {
        IPAddress localNetworkAddress = GetNetworkAddressFromIPAddress(localAddress.Key, localAddress.Value);
        IPAddress addressToCheckNetworkAddress = GetNetworkAddressFromIPAddress(addressToCheck, localAddress.Value);

        if (localNetworkAddress.Equals(addressToCheckNetworkAddress))
        {
            return true;
        }
    }

    return false;
}

I know there needs to be a fair bit more error checking and such like, but I would appreciate it if anyone could point out if I'm missing anything obvious. Thanks, Phil
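As a quick sanity check of the masking helper above (values are illustrative, and the call is assumed to be made from within the same class since the method is protected):

// AND-ing the address with the mask byte-by-byte should yield the network address.
var ip   = IPAddress.Parse("192.168.1.42");
var mask = IPAddress.Parse("255.255.255.0");
Console.WriteLine(GetNetworkAddressFromIPAddress(ip, mask)); // prints 192.168.1.0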
https://www.gamedev.net/profile/151673-wibbs/
CC-MAIN-2018-43
en
refinedweb
gkeswani92:

x, y, z, n = int(input()), int(input()), int(input()), int(input())
print ([[a,b,c] for a in range(0,x+1) for b in range(0,y+1) for c in range(0,z+1) if a + b + c != n ])

omalsa04: Looks good, but to avoid those repetitive input calls you could do something like:

x, y, z, n = (int(input()) for _ in range(4))

paul_schmeida: Great tip! BTW is there any reason for using '_' as the dummy variable? What's wrong with using a letter, for example 'i'?

gkeswani92: _ is used to signify that even though something is being returned, we don't plan to use that variable anywhere.

mukeshsinghbhak1: The above code gives the following error in Python 2.7.13: <generator object <genexpr> at 0x7f60c947e910>

codeharrier: It's also an old Perl trick. If you just performed the input command without assigning the data to anything, it was still available in the $_ variable by default.

vijayvijart284: It's just like a while loop (e.g. while(a<5)). If we need to store into separate variables, you can use it like this: x, y, z, a = (int(input()) for n in range(4)). The values will be stored separately.

AbhishekVermaIIT: N is the number to compare against the sum of x, y, z. The loop should actually have 4, as mentioned, for the inputs: x, y, z & n.

newmocart: In py2 it can be done without int():

x,y,z,n = [input() for i in range(4)]
print [[a,b,c] for a in range(x+1) for b in range(y+1) for c in range(z+1) if a+b+c != n]

karangandhi545: What is the reason for x+1, y+1 and so on?

alfredocambera: Doesn't work on Python 3.5.

NervousBlakedown: I'm using this one line, but my compiler reads "wrong answer". Am I missing something? What else do I need to write?

kumarharsh: I am a newbie to Python and it's not clear to me. WHY 4? And how does the variable iterate from 0 to 1?

aa1992 (HackerRank Admin): Nice solution. But instead of range(0,x+1) you could do:

print ([[a,b,c] for a in range(x+1) for b in range(y+1) for c in range(z+1) if a + b + c != n ])

SuperGogeta: Trade three +1's for a +1 and a -1 at the right place :P

x, y, z, n = (int(raw_input())+1 for _ in range(4))
print [[a,b,c] for a in range(x) for b in range(y) for c in range(z) if a+b+c!=n-1]

rachitajitsaria1: Hey, I have heard that int(raw_input()) is faster.

rahatchd: Can you tell me how Python carries out list comprehension internally? How come this generates lists in lexicographic increasing order (i.e. z increases first, then y, then x)?

Ryukerg: It is in lexicographic order due to the nature of the loops. Spaced out it looks somewhat like:

for a in range(x+1):
    for b in range(y+1):
        for c in range(z+1):
            if a + b + c != n:
                print(stuff is here)

We start at [0,0,0]. Then c will increment to get to [0,0,1]. When c hits [0,0,z], we get [0,1,0] as the next in the loop. This pattern continues and gives us the lexicographic ordering required of the output.

dangducbac_hust: Here is my solution with your suggestion:

empty_list = []
for a in range(x + 1):
    for b in range(y + 1):
        for c in range(z + 1):
            # logic code
            if (a + b + c != n):
                empty_list.append([a, b, c])
print(empty_list)

trinadhkoya: Not required to put input() for each variable. Just call the input for _ in range(number of variables); see here: a, b = (int(input()) for _ in range(2))

bennetryan: Thanks for sharing your elegant code.
I'm a newbie to Python; it's great to see such beautiful code :)

andras_lengyel: I don't like these "direct print" solutions. In my opinion the problem stated to create a list and then print it, not just print something in a list. In other words, in your solutions the list itself doesn't exist and cannot be used later; it is just collected when printed out. What do you think?

andras_lengyel: OK, I found the answer! I can put list = instead of printing out, then print the list. It wasn't clear to me that the output of the three for loops is collected in a list, so there is no need for any append tricks!

ignacio_ch: Nice answer. As an alternative, to avoid multiple for loops you can use product:

from itertools import product
combinations = list(product(range(x+1), range(y+1), range(z+1)))
print([list(a) for a in combinations if sum(a) != n])

pH03nYx: Could someone explain to me what, exactly, this code does (like, statement-by-statement)? Other than the input part, which I understand, I mostly mean the list comprehension. I sort of have a basic understanding of how it works, but I'm having some trouble wrapping my head around list comprehensions and how exactly they work. Mostly the "for a in range(0,x+1)" parts. What exactly does that do? Why is it (0,x+1)? Thanks in advance.

madaharishreddy: Just out of curiosity, why are you using range(0,x+1) in the print function?

kumar_student101:

from itertools import product
print([list(each) for each in product(range(x+1),range(y+1),range(z+1)) if sum(each)!=n])

kein_duarte2191: I'm new to Python, but how come nobody seems to be using ":" after their for loops? You don't even use one for the if at the end.

chefyunfei: Exactly what I had, except I left the 0 implicit. Is it good practice to make the range explicit?

kumar_student101: For this program it is fine. But in real applications the data range may not be static.

achintya0210: I have no idea what syntax error the compiler is showing repeatedly on (a+b+c)!=n

naveenyadav9515: Thank you very much bro..... I am new to Python... initially I didn't understand your code, but I realized after converting it into normal form:

x, y, z, n = int(input()), int(input()), int(input()), int(input())
for i in range(0, x + 1):
    for j in range(0, y + 1):
        for k in range(0, z + 1):
            if i + j + k != n:
                print([i, j, k])

gabobaby: The problem states that it wants the output list printed in 'increasing order'. It should be more clear what this means. I initially interpreted this to mean that [1,1,0] would come before [3,0,0] because 1+1+0=2 is less than 3+0+0=3. But this is not the order in which the comprehension method would generate the list, and it would likely involve some method of sorting a 3D array that many new users would not know at this point, so I'm assuming this is not what is meant. This problem is not clarified by the chosen example values of x=1, y=1, z=1, and N=2. The problem could be addressed by just using larger example values, allowing us to infer from the example output which interpretation of 'increasing order' is intended.

scintilla: What does N represent? Why should X+Y+Z not equal N? And what do the subscript i's represent? I don't understand the question.

saikiran9194 (HackerRank Admin):

ID10TERROR: You can't just say "Read the question properly". He said he doesn't understand the syntax of the question. This is a type of mathematical short-hand that not all of us understand.
I for one don't even know what this short-hand is called or I'd look it up myself.

shashank21j (HackerRank Admin): Try now, I have updated the statement :)

ID10TERROR: Thank you for the follow-up shashank21j, your revision to the statement has been helpful, but I still have some questions about the way the question is worded and why.
1. Why must i+j+k not be equal to N, and does this mean it can be greater than N?
2. Please define "lexicographic increasing order".
I for one agree that list comprehension in Python is a fairly simple subject; the crux of this problem for me, however, is bridging the communication gap and understanding the essence of the problem you're asking. Many people who use your site are self-taught in coding and do not have a formal discrete mathematics background.

shashank21j (HackerRank Admin): i + j + k != N is just an added condition in this problem so you can use one if statement inside your list comprehension syntax. It can be greater than N as long as i, j and k are in their respective limits.
- Lexicographic order is sorted order where 1, 1, 1 comes before 1, 1, 2 and 2, 1, 2 comes before 2, 2, 1, etc.

dng304: Shouldn't the statement include that X, Y, Z are integers that are possible maximums? It is a fairly important point, no? For example, dimensions of 2,2,2, given the statement as it currently is, mean there is only 1 possible cube with 8 coordinates. Whereas if we said that 2 is the max integer value for X, Y, Z, we can have more than 1 possible cuboid. For example a 1,1,2 cuboid.

Tsean: Can anybody make me understand this question? I went through a few comments about what this question is all about, but I'm still not clear. Please help.

eric2013: You're given 4 numbers. The first 3 correspond to the maximum dimensions of a cube. y is the maximum height, x is the maximum width, and z is the maximum depth. You're supposed to calculate every possible set of dimensions [x,y,z], under the condition that none exceed the input values and that x+y+z does not add up to n.

shashanksaurav41: Thanks for the explanation, Eric. I've just one more question: in this lexicographic order the last dimension is [1,1,1], but our N is given to be 2. Thus the sum x+y+z becomes 3, which is greater than N. How is this condition satisfied?? Please explain.

dng304: I agree that what you say is the intention of the challenge. But my problem with this challenge is that it never states that X, Y, Z are maximums. A sample input of 2,2,2 can only be 1 cube with 8 dimensions. I feel like the statement should be changed to express what you are saying, that X, Y, Z correspond to maximum integer dimensions.

marinskiy: Here is a Python 3 solution from my HackerrankPractice repository:

x, y, z, n = [int(input()) for _ in range(4)]
listOfAnswers = [[i, j, k] for i in range(x + 1) for j in range(y + 1) for k in range(z + 1) if i + j + k != n]
print(listOfAnswers)

Feel free to ask if you have any questions :)

jiadaman: I'm trying to teach myself Python, and reading this question honestly made me want to give it up. I had no idea what was being asked or how to go about it. If you have the time, could you give me an idea of your step-by-step process for coming to this solution, as far as you can remember? I generally understand what you've done here, but I want to know how to come to a similar conclusion when I encounter a similar problem.
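To make the expected behaviour concrete, here is a small worked sketch using the sample values discussed above (x = y = z = 1 and n = 2 are assumed from the thread):

x, y, z, n = 1, 1, 1, 2
coords = [[i, j, k]
          for i in range(x + 1)
          for j in range(y + 1)
          for k in range(z + 1)
          if i + j + k != n]
print(coords)
# [[0, 0, 0], [0, 0, 1], [0, 1, 0], [1, 0, 0], [1, 1, 1]]
# [0, 1, 1], [1, 0, 1] and [1, 1, 0] are excluded because they sum to n,
# and the triples appear in lexicographic order because k varies fastest.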
jsleirer: I feel like I am missing the point of this exercise with my solution. Can someone please comment on how to make my code more Pythonesque and utilize list comprehensions better?

X = int(raw_input())
Y = int(raw_input())
Z = int(raw_input())
N = int(raw_input())

Xi = [x for x in range(X+1)]
Yi = [y for y in range(Y+1)]
Zi = [z for z in range(Z+1)]

results = []
for x in Xi:
    for y in Yi:
        for z in Zi:
            if x + y + z != N:
                results.append([x, y, z])
print results

shashank21j (HackerRank Admin): This is fine. Xi, Yi and Zi are not of much use here. You can make your code smaller.

stcrestrada:

cuboid = []
results = [cuboid.append([x, y, z]) for x in range(X+1) for y in range(Y+1) for z in range(Z+1) if x + y + z != N]
print(cuboid)

Fewer lines, more Pythonic.

PRASHANTB1984: That's really great :)

Shriswissfed:

print ([[a,b,c] for a in range(int(input())+1) for b in range(int(input())+1) for c in range(int(input())+1) if a+b+c!=int(input())])

Why does this code not work? Can you explain?

vinayshashank: The inputs should be captured before the for loop is executed. In this case, a new input is expected for every iteration of the for loop. Try capturing the inputs first in some variables and use those variables here. That will work.

gkeswani92: You don't even need to have the cuboid list, to be honest :)

print ([[a,b,c] for a in range(0,x+1) for b in range(0,y+1) for c in range(0,z+1) if a + b + c != n ])

stcrestrada: Ha! I realized that a day later. Thanks for clearing it up though, gkeswani92.

safatmahmood:

ar = []
for i in range(x + 1):
    for j in range(y + 1):
        for k in range(z + 1):
            if ((i + j + k) != n):
                ar.append([i, j, k])
print(ar)
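A sketch of the fix vinayshashank describes for the code above: read the four inputs once, then comprehend over the captured variables, so the program no longer asks for new input on every loop iteration:

x, y, z, n = (int(input()) for _ in range(4))
print([[a, b, c]
       for a in range(x + 1)
       for b in range(y + 1)
       for c in range(z + 1)
       if a + b + c != n])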
https://www.hackerrank.com/challenges/list-comprehensions/forum
CC-MAIN-2018-43
en
refinedweb
Data binding establishes a connection between the application UI and business logic. When it works, it's a wonderful thing. You no longer have to write code that updates your UI or passes values down to your business logic. When it breaks, it can be frustrating to figure out what went wrong. In this post, I will give you some tips on how you can debug your data bindings in WPF.

1. Add Tracing to the Output Window

Here is a sample TextBlock that has a missing data context. In this situation, you will not get any errors in the Visual Studio output window. To enable tracing, I added a new XML namespace to include the System.Diagnostics namespace. You can also set the level of tracing to High, Medium, Low, or None. Now let's add some tracing to the output window to see what is wrong with the data binding. The output window will then contain the following helpful information:

System.Windows.Data Warning: 71 : BindingExpression (hash=38600745): DataContext is null

2. Attach a Value Converter to Break into the Debugger

When you don't see anything displayed in your UI, it is hard to tell whether it's data binding causing your issue or a problem with the visual layout of the control. You can eliminate data binding as the problem by adding a value converter and breaking into the debugger. If the value is what you expected, then data binding is not your issue. Here is a simple value converter that breaks into the debugger. To use the value converter, reference the namespace of the assembly that contains the converter and add an instance of it to the resources of your window. Now add the converter to your problematic data binding.

3. Know the Instant You Have a Data Binding Problem

Unless you are constantly checking every UI element and monitoring the output window for binding errors, you will not always catch that you have a data binding problem. An exception is not thrown when data binding breaks, so global exception handlers are of no use. Wouldn't it be nice if you could break into the debugger the instant you have a data binding error? By adding our own implementation of a TraceListener that breaks into the debugger, we will get notified the next time we get a data binding error. I also added the default ConsoleTraceListener alongside our new DebugTraceListener, so that our previous examples of tracing output would not be broken.

With these tips, you should have a more pleasant data binding experience. Please share any additional debugging tips you may have in the comments.

Comments

So once you have the debugger broken via step 3, all you have is the message, and a call stack that doesn't seem to be helpful at all. Any tips on how to proceed from there?

The trace listeners solution is simply fantastic.

To do method #1 when setting the binding in C# code:

// var binding = new Binding()…
PresentationTraceSources.SetTraceLevel(binding, PresentationTraceLevel.High);

To add the tracing (point 1) on VS2013: Tools -> Options -> Debugging -> Output Window -> WPF Trace Settings -> Data Binding

Tracing saved my life. Thank you
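The post's code samples did not survive in this copy. Minimal sketches of the three techniques it describes might look like the following; all names, and the exact XAML, are assumptions rather than the author's original listings:

// 1. Enable binding tracing in XAML (technique from section 1). Requires:
//    xmlns:diag="clr-namespace:System.Diagnostics;assembly=WindowsBase"
//    <TextBlock Text="{Binding Caption, diag:PresentationTraceSources.TraceLevel=High}" />

using System;
using System.Diagnostics;
using System.Globalization;
using System.Windows.Data;

// 2. A value converter that breaks into the debugger so the bound value
//    can be inspected (technique from section 2).
public class DebugConverter : IValueConverter
{
    public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
    {
        Debugger.Break();   // inspect 'value' here
        return value;
    }

    public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
    {
        Debugger.Break();
        return value;
    }
}

// 3. A TraceListener that breaks the moment a binding error is traced
//    (technique from section 3).
public class DebugTraceListener : TraceListener
{
    public override void Write(string message) { }

    public override void WriteLine(string message)
    {
        Debugger.Break();   // a data binding error was just reported
    }
}

// Wiring it up once at startup, keeping console output alongside:
// PresentationTraceSources.Refresh();
// PresentationTraceSources.DataBindingSource.Listeners.Add(new ConsoleTraceListener());
// PresentationTraceSources.DataBindingSource.Listeners.Add(new DebugTraceListener());
// PresentationTraceSources.DataBindingSource.Switch.Level = SourceLevels.Warning;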
https://spin.atomicobject.com/2013/12/11/wpf-data-binding-debug/
CC-MAIN-2018-43
en
refinedweb
Reduce C++ Build Times (Part 2) with the Pimpl Idiom

The Pimpl idiom simplifies the interface that is created, since the details can be hidden in another file.

What are the benefits of Pimpl?

Generally, whenever a header file changes, any file that includes that file will need to be recompiled. This is true even if those changes only apply to private members of the class that, by design, the users of the class cannot access. This is because of the C++ build model, and because C++ assumes that callers know two main things about a class (including its private members).

- Size and Layout: The code that is calling the class must be told the size and layout of the class (including private data members). This constraint of seeing the implementation means the callers and callees are more tightly coupled, but it is very important to the C++ object model, because having direct access to the object by default helps C++ achieve heavily-optimized efficiency.
- Functions: The code that is calling the class must be able to resolve calls to member functions of the class. This includes private functions that are generally inaccessible and that overload non-private functions. If a private function is a better match, the code will fail to compile.

With the Pimpl idiom, you remove the compilation dependencies on internal (private) class implementations. The big advantage is that it breaks compile-time dependencies. This means the system builds faster, because Pimpl can eliminate extra includes. It also localizes the build impact of code changes, because the implementation (the parts in the Pimpl) can be changed without recompiling the client code.

Example: How to implement the Pimpl Idiom

In this section, I am going to use a simple example of a Cow class to show how you can update your code to use the Pimpl idiom. Here is a simple Cow class that has private data members:

#include <string>

class Cow {
public:
    Cow();
    ~Cow();
    Cow(Cow&&);
    Cow& operator=(Cow&&);
private:
    std::string name;
    std::string color;
    double weight;
};

To implement the Pimpl idiom, I will:

- Put all private members into a struct or class declared in the header file
- In the class definition, declare a (smart) pointer to that class (struct) as the only private member variable

#include <memory>

class Cow {
public:
    Cow();
    ~Cow();
    Cow(Cow&&);
    Cow& operator=(Cow&&);
private:
    class cowIMPL;
    std::unique_ptr<cowIMPL> pimpl;
};

In the source (.cpp) file:

- Put the implementation class definition in the .cpp file
- The constructors of the class need to create the implementation object
- The destructor of the class is defaulted in the .cpp file, so that the destructor can see the complete definition of cowIMPL
- The move constructor and move assignment operator need to transfer the implementation appropriately, or else be defaulted (in this case they are defaulted)

#include "cow2.h"
#include <string>

class Cow::cowIMPL {
public:
    void do_setup() {
        name = "Betsy";
        color = "White";
        weight = 275;
    }
private:
    std::string name;
    std::string color;
    double weight;
};

Cow::Cow() : pimpl{ std::make_unique<cowIMPL>() } {
    pimpl->do_setup();
}

Cow::~Cow() = default;
Cow::Cow(Cow&&) = default;
Cow& Cow::operator=(Cow&&) = default;

Summary: The Pimpl idiom is a great way to minimize coupling and break compile-time dependencies, which leads to faster build times. If you are looking for other ways to reduce compile times, read our blog post on header dependencies. If you are looking to reduce dependencies in general, or would like to visualize your source code architecture, check out Lattix Architect.
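A short usage sketch (not from the original post) shows the payoff: client code includes only the header, so edits to cowIMPL's private members never force this file to recompile. The header name "cow2.h" follows the include used in the article's own source file:

// main.cpp -- illustrative only
#include "cow2.h"
#include <utility>

int main() {
    Cow betsy;                     // constructor allocates the hidden implementation
    Cow daisy = std::move(betsy);  // the defaulted move operations just move the unique_ptr
    return 0;
}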
https://lattix.com/dev/index.php?q=articles-by-month/201704
CC-MAIN-2018-43
en
refinedweb
This guide extends the examples provided in Getting Started and Output Management. Please make sure you are at least familiar with the examples provided in them.

There are three general approaches to code splitting available:

- Entry Points: Manually split code using entry configuration.
- Prevent Duplication: Use the SplitChunksPlugin to dedupe and split chunks.
- Dynamic Imports: Split code via inline function calls within modules.

Entry Points

This is by far the easiest, and most intuitive, way to split code. However, it is more manual and has some pitfalls we will go over. Let's take a look at how we might split another module from the main bundle:

project

webpack-demo
|- package.json
|- webpack.config.js
|- /dist
|- /src
  |- index.js
+ |- another-module.js
|- /node_modules

another-module.js

import _ from 'lodash';

console.log(
  _.join(['Another', 'module', 'loaded!'], ' ')
);

webpack.config.js

const path = require('path');

module.exports = {
  mode: 'development',
  entry: {
    index: './src/index.js',
+   another: './src/another-module.js'
  },
  output: {
    filename: '[name].bundle.js',
    path: path.resolve(__dirname, 'dist')
  }
};

This will yield the following build result:

...
            Asset     Size   Chunks             Chunk Names
another.bundle.js  550 KiB  another  [emitted]  another
  index.bundle.js  550 KiB    index  [emitted]  index
Entrypoint index = index.bundle.js
Entrypoint another = another.bundle.js
...

As mentioned, there are some pitfalls to this approach:

- If there are any duplicated modules between entry chunks, they will be included in both bundles.
- It isn't as flexible and can't be used to dynamically split code with the core application logic.

The first of these two points is definitely an issue for our example, as lodash is also imported within ./src/index.js and will thus be duplicated in both bundles. Let's remove this duplication by using the SplitChunksPlugin.

Prevent Duplication

The SplitChunksPlugin allows us to extract common dependencies into an existing entry chunk or an entirely new chunk. Let's use this to de-duplicate the lodash dependency from the previous example:

The CommonsChunkPlugin has been removed in webpack v4 legato. To learn how chunks are treated in the latest version, check out the SplitChunksPlugin.

webpack.config.js

const path = require('path');

module.exports = {
  mode: 'development',
  entry: {
    index: './src/index.js',
    another: './src/another-module.js'
  },
  output: {
    filename: '[name].bundle.js',
    path: path.resolve(__dirname, 'dist')
  },
+ optimization: {
+   splitChunks: {
+     chunks: 'all'
+   }
+ }
};

With the optimization.splitChunks configuration option in place, we should now see the duplicate dependency removed from our index.bundle.js and another.bundle.js. The plugin should notice that we've separated lodash out to a separate chunk and remove the dead weight from our main bundle. Let's do an npm run build to see if it worked:

...
                          Asset      Size                 Chunks             Chunk Names
              another.bundle.js  5.95 KiB                another  [emitted]  another
                index.bundle.js  5.89 KiB                  index  [emitted]  index
vendors~another~index.bundle.js   547 KiB  vendors~another~index  [emitted]  vendors~another~index
Entrypoint index = vendors~another~index.bundle.js index.bundle.js
Entrypoint another = vendors~another~index.bundle.js another.bundle.js
...

Here are some other useful plugins and loaders provided by the community for splitting code:

- mini-css-extract-plugin: Useful for splitting CSS out from the main application.
- bundle-loader: Used to split code and lazy load the resulting bundles.
- promise-loader: Similar to the bundle-loader but uses promises.

Dynamic Imports

Two similar techniques are supported by webpack when it comes to dynamic code splitting. The first and recommended approach is to use the import() syntax that conforms to the ECMAScript proposal for dynamic imports. The legacy, webpack-specific approach is to use require.ensure. Let's try using the first of these two approaches...

import() calls use promises internally.
If you use import() with older browsers, remember to shim Promise using a polyfill such as es6-promise or promise-polyfill.

Before we start, let's remove the extra entry and optimization.splitChunks from our config, as they won't be needed for this next demonstration:

webpack.config.js

const path = require('path');

module.exports = {
  mode: 'development',
  entry: {
+   index: './src/index.js'
-   index: './src/index.js',
-   another: './src/another-module.js'
  },
  output: {
    filename: '[name].bundle.js',
+   chunkFilename: '[name].bundle.js',
    path: path.resolve(__dirname, 'dist')
  },
- optimization: {
-   splitChunks: {
-     chunks: 'all'
-   }
- }
};

Note the use of chunkFilename, which determines the name of non-entry chunk files. For more information on chunkFilename, see the output documentation.

We'll also update our project to remove the now unused files:

project

webpack-demo
|- package.json
|- webpack.config.js
|- /dist
|- /src
  |- index.js
- |- another-module.js
|- /node_modules

Now, instead of statically importing lodash, we'll use dynamic importing to separate a chunk:

src/index.js

- import _ from 'lodash';
-
- function component() {
+ function getComponent() {
-   var element = document.createElement('div');
-
-   // Lodash, now imported by this script
-   element.innerHTML = _.join(['Hello', 'webpack'], ' ');
+   return import(/* webpackChunkName: "lodash" */ 'lodash').then(({ default: _ }) => {
+     var element = document.createElement('div');
+
+     element.innerHTML = _.join(['Hello', 'webpack'], ' ');
+
+     return element;
+
+   }).catch(error => 'An error occurred while loading the component');
  }

- document.body.appendChild(component());
+ getComponent().then(component => {
+   document.body.appendChild(component);
+ })

The reason we need default is that since webpack 4, importing a CommonJS module no longer resolves to the value of module.exports; instead, webpack creates an artificial namespace object for the CommonJS module. For more information on the reason behind this, read webpack 4: import() and CommonJS.

Note the use of webpackChunkName in the comment. This will cause our separate bundle to be named lodash.bundle.js instead of just [id].bundle.js. For more information on webpackChunkName and the other available options, see the import() documentation.

Let's run webpack to see lodash separated out to a separate bundle:

...
                   Asset      Size          Chunks             Chunk Names
         index.bundle.js  7.88 KiB           index  [emitted]  index
vendors~lodash.bundle.js   547 KiB  vendors~lodash  [emitted]  vendors~lodash
Entrypoint index = index.bundle.js
...

As import() returns a promise, it can be used with async functions. However, this requires using a pre-processor like Babel and the Syntax Dynamic Import Babel Plugin. Here's how it would simplify the code:

src/index.js

- function getComponent() {
+ async function getComponent() {
-   return import(/* webpackChunkName: "lodash" */ 'lodash').then(({ default: _ }) => {
-     var element = document.createElement('div');
-
-     element.innerHTML = _.join(['Hello', 'webpack'], ' ');
-
-     return element;
-
-   }).catch(error => 'An error occurred while loading the component');
+   var element = document.createElement('div');
+   const { default: _ } = await import(/* webpackChunkName: "lodash" */ 'lodash');
+
+   element.innerHTML = _.join(['Hello', 'webpack'], ' ');
+
+   return element;
  }

getComponent().then(component => {
  document.body.appendChild(component);
});
Using these inline directives while declaring your imports allows webpack to output “Resource Hint” which tells the browser that for: Simple prefetch example can be having a HomePage component, which renders a LoginButton component which then on demand loads a LoginModal component after being clicked. LoginButton.js //... import(/* webpackPrefetch: true */ 'LoginModal'); This will result in <link rel="prefetch" href="login-modal-chunk.js"> being appended in the head of the page, which will instruct the browser to prefetch in idle time the login-modal-chunk.js file. webpack will add the prefetch hint once the parent chunk has been loaded. Preload directive has a bunch of differences compared to prefetch: Simple preload example can be having a Component which always depends on a big library that should be in a separate chunk. Let's imagine a component ChartComponent which needs huge ChartingLibrary. It displays a LoadingIndicator when rendered and instantly does an on demand import of ChartingLibrary: ChartComponent.js //... import(/* webpackPreload: true */ 'ChartingLibrary'); When a page which uses the ChartComponent is requested, the charting-library-chunk is also requested via <link rel="preload">. Assuming the page-chunk is smaller and finishes faster, the page will be displayed with a charting-library-chunk finishes. This will give a little load time boost since it only needs one round-trip instead of two. Especially in high-latency environments. Using webpackPreload incorrectly can actually hurt performance, so be careful when using it. Once you start splitting your code, it can be useful to analyze the output to check where modules have ended.
https://webpack.js.org/guides/code-splitting/
CC-MAIN-2018-43
en
refinedweb